Facebook has again been criticized for failing to remove child exploitation imagery from its platform, following a BBC investigation into its system for reporting inappropriate content.

Last year the news organization reported that closed Facebook groups were being used by pedophiles to share images of child exploitation. At the time, Facebook's head of public policy told it he was committed to removing content that shouldn't be there, and Facebook has since told the BBC it has improved its reporting system.

However, in a follow-up article published today, the BBC again reports finding sexualized images of children being shared on Facebook, the vast majority of which the social networking giant failed to remove after the BBC initially reported them.

The BBC said it used Facebook's report button to alert the company to 100 images that appeared to break its guidelines against obscene and/or sexually suggestive content, including from pages that it said were explicitly for men with a sexual interest in children.

Of the 100 reported images, only 18 were removed by Facebook, according to the BBC. It also found five convicted pedophiles with profiles and reported them to Facebook via its own system, but says none of the accounts were taken down, despite Facebook's own rules forbidding convicted sex offenders from having accounts.

In response to the report, the chairman of the UK House of Commons media committee, Damian Collins, told the BBC he has "grave doubts" about the effectiveness of Facebook's content moderation systems.

"I think it raises the question of how can users make effective complaints to Facebook about content that is disturbing, shouldn't be on the site, and have confidence that that will be acted upon," he said.

In a further twist, the news organization was subsequently reported to the police by Facebook after sharing some of the reported images directly with the company, when Facebook asked it to send examples of reported content that had not been removed.

TechCrunch understands Facebook was following CEOP guidelines at this point, although the BBC claims it only sent images after being asked by Facebook to share examples of reported content. However, viewing or sharing child exploitation images is illegal in the UK. The BBC would have had to send Facebook links to illegal content, rather than share images directly, to avoid being reported, so it's possible this aspect of the story boils down to a miscommunication.

Facebook declined to answer our questions and declined to be interviewed on a flagship BBC news program about its content moderation problems, but in an emailed statement UK policy director Simon Milner said: "We have carefully reviewed the content referred to us and have now removed all items that were illegal or against our standards. This content is no longer on our platform. We take this matter extremely seriously and we continue to improve our reporting and take-down measures. Facebook has been recognized as one of the best platforms on the internet for child safety."

"It is against the law for anyone to distribute images of child exploitation. When the BBC sent us such images, we followed our industry's standard practice and reported them to CEOP. We also reported the child exploitation images that had been shared on our own platform. This matter is now in the hands of the authorities," he added.

The wider issue here is that Facebook's content moderation system remains, very clearly, far from perfect. And contextual content moderation is evidently a vast problem that requires far more resources than Facebook is currently devoting to it. Even if the company employs thousands of human moderators, distributed in offices around the world (such as Dublin for European content) to ensure 24/7 availability, that's still a drop in the ocean for a platform with more than a billion active users sharing multiple types of content on an ongoing basis.

Technology can be part of the solution: Microsoft's PhotoDNA cloud service, for example, can identify known child abuse images by matching them against a database of image hashes, but such systems can't help identify unknown obscene material. It's a problem that necessitates human moderation, and enough human moderators to review user reports in a timely fashion, so that problem content can be identified accurately and removed promptly; in other words, the opposite of what appears to have happened in this instance.
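To make that limitation concrete, here is a minimal, purely illustrative sketch of how known-image matching works in general: compute a fingerprint of an uploaded file and look it up in a list of fingerprints of previously identified material. This is not the PhotoDNA API (PhotoDNA is a hosted Microsoft service built on a robust perceptual hash), and the hash list and function names below are hypothetical; the point is that any such lookup can only flag content that has already been identified, which is why novel material still depends on human review.

```python
# Illustrative sketch only: generic known-image matching via hash lookup.
# Not the PhotoDNA API; PhotoDNA is a hosted Microsoft service using a
# robust perceptual hash. A plain SHA-256 digest (used here for brevity)
# only catches byte-identical copies, but the structural limitation is
# the same: the lookup can only flag images that are already known.

import hashlib
from pathlib import Path

# Hypothetical set of digests of previously identified images; in practice
# such lists are curated and supplied by child-safety organizations.
KNOWN_DIGESTS: set[str] = set()

def digest(path: Path) -> str:
    """Return the SHA-256 hex digest of the file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known(path: Path) -> bool:
    """True if this exact file has been seen before; False for anything novel."""
    return digest(path) in KNOWN_DIGESTS
```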

Facebook's leadership cannot be accused of being blind to concerns about its content moderation failures. Indeed, CEO Mark Zuckerberg recently discussed the issue in an open letter, conceding the company needs to do more. He also talked about his hope that technology will be able to take a bigger role in fixing the problem in future, arguing that artificial intelligence can help provide a better approach, and saying Facebook is working on AI-powered content flagging systems to scale to the ever-growing challenge, although he also cautioned these will take many years to fully develop.

And that's really the problem in a nutshell. Facebook is not putting in the resources needed to fix the current problem it has with moderation, even as it directs resources into trying to come up with possible future solutions where AI moderation can be deployed at scale. But if Zuckerberg wants to do more right now, the simple fix is to employ more humans to review and act on reports.

Read more: https://techcrunch.com/2017/03/07/facebooks-content-moderation-system-under-fire-for-child-safety-failures/
