Antigone Davis, the company's global head of safety, claimed that Facebook was not only “responsive” with regard to taking down posts, but that it actively searched out problem content.
Ms Davis gave the answer when asked about Apple’s threat to take Facebook and Instagram off iPhones after it found human trafficking was organised on its apps. Facebook is also struggling to identify and remove trafficking cartels based in Mexico, including violent images and recruitment materials.
“Most of the things that are brought to our attention are managed within 48 hours”, Ms Davis claimed. “Our AI is not perfect, it’s something we’re always looking to improve.”
The committee is gathering evidence for online safety legislation, which would force social media companies to regulate “legal but harmful” content.
Ms Davis also said that Facebook has “no business incentive, no commercial incentive to actually provide people with a negative experience”, and said that “three million businesses in the UK use our platform to grow their business. If they aren’t safe, if they don’t feel safe, they aren’t going to use our platform”.
This statement stands in contrast to leaked audio in which Mark Zuckerberg, responding to an advertiser boycott over the amount of racist content on Facebook, said that he expected advertisers to be back on the platform “soon enough” and that he would not “change our policies or approach on anything because of a threat to a small percent of our revenue, or to any percent of our revenue”.
With regard to Facebook’s algorithm and the insurrection attempt on January 6, Ms Davis claimed that the company put in “serious measures to address those issues well before January 6”.
Those measures have, however, repeatedly been criticised for failing to fully deal with the extremist content that was available on the platform.
Reporting has for instance suggested that Facebook was alerted to a ‘Stop the Steal’ group on 3 November, the day of the US election, when it was “flagged for escalation because it contained high levels of hate and violence and incitement (VNI) in the comments.” Two days later, the group had grown to over 300,000 members.
Mr Zuckerberg, Facebook’s chief executive, later told Congress that the company “made our services inhospitable to those who might do harm”.
When asked whether Facebook changed the algorithm following the January 6 event, Ms Davis did not provide a clear answer.
Ms Davis was also asked why, when Facebook can identify harmful content, its algorithms continue to promote it. She answered that the company “tr[ies] to [remove] content that is divisive, for example, or polarising”.
In May 2020, it was reported that Facebook executives took the decision to end research that would have made the social media site less polarising, for fear that it would unfairly target right-wing users. Proposals to make the site less polarising were described as “antigrowth” and requiring “a moral stance”.
“Our algorithms exploit the human brain’s attraction to divisiveness,” a 2018 presentation warned.
Ms Davis also told MPs that Facebook was “committed to providing more transparency [and that it had] taken steps to do that”.