Twitter put ads next to tweets sharing images of child sexual abuse

Twitter Climate Disinformation (Copyright 2021 The Associated Press. All rights reserved)

Twitter put promoted advertisements next to tweets soliciting child pornography, leading many brands to pull their marketing campaigns on the social media site.

Major brands including Disney, NBC, and Coca-Cola were among more than 30 advertisers whose promotions appeared next to accounts peddling links to the exploitative material, which was discovered by cybersecurity group Ghost Data.

These tweets included keywords related to “rape” and “teens,” with one user saying they were “trading teen/child” content. In another example, a user tweeted searching for content of “Yung girls ONLY, NO Boys,” which was immediately followed by a promoted tweet for Texas-based Scottish Rite Children’s Hospital. Scottish Rite did not return multiple requests for comment.

“We’re horrified,” David Maddocks, brand president at Cole Haan, a shoe and accessory brand whose promotions appeared next to those tweets, told Reuters, which first reported on the news.

“Either Twitter is going to fix this, or we’ll fix it by any means we can, which includes not buying Twitter ads.”

Twitter said that the company “has zero tolerance for child sexual exploitation” and is investing more resources dedicated to child safety, including hiring for new positions to write policy and implement solutions. The company added that it is working closely with its advertising clients and partners to investigate and take steps to prevent the situation from happening again.

Twitter, like all major social media platforms, bans depictions of child sexual exploitation, which are illegal in most countries. But it permits adult content generally and is home to a thriving exchange of pornographic imagery, which comprises about 13 per cent of all content on Twitter, according to an internal company document seen by Reuters. Twitter declined to comment on the document.

Ghost Data identified more than 500 accounts that openly shared or requested child sexual abuse material over a 20-day period this month. Twitter failed to remove more than 70 per cent of the accounts during the study period, according to the group, which shared the findings exclusively with Reuters.

Reuters could not independently confirm the accuracy of Ghost Data’s finding in full, but reviewed dozens of accounts that remained online and were soliciting materials for “13+” and “young looking nudes.”

The traffickers often use code words such as “cp” for child pornography and are “intentionally as vague as possible,” to avoid detection, according to the internal documents. The more that Twitter cracks down on certain keywords, the more that users are nudged to use obfuscated text, which “tend to be harder for (Twitter) to automate against,” the documents said.

For the accounts identified by Ghost Data, nearly all the traders of child sexual abuse material marketed the materials on Twitter, then instructed buyers to reach them on messaging services such as Discord and Telegram in order to complete payment and receive the files, which were stored on cloud storage services like New Zealand-based Mega and Dropbox. Discord, Telegram, Mega, and Dropbox all say they take action against users attempting to share such content.

After Reuters shared a sample of 20 accounts with Twitter last Thursday, the company removed about 300 additional accounts from the network, but more than 100 others still remained on the site the following day, according to Ghost Data and a Reuters review.

Reuters on Monday then shared the full list of more than 500 accounts furnished by Ghost Data, which Twitter reviewed and permanently suspended for violating its rules, Twitter’s Carswell said on Tuesday.

“Twitter needs to fix this problem ASAP, and until they do, we are going to cease any further paid activity on Twitter,” said a spokesperson for Forbes.

“There is no place for this type of content online,” a spokesperson for carmaker Mazda USA said in a statement to Reuters, adding that in response, the company is now prohibiting its ads from appearing on Twitter profile pages.

A Disney spokesperson called the content “reprehensible” and said they are “doubling-down on our efforts to ensure that the digital platforms on which we advertise, and the media buyers we use, strengthen their efforts to prevent such errors from recurring.”

A spokesperson for Coca-Cola, which had a promoted tweet appear on an account tracked by the researchers, said it did not condone the material being associated with its brand and said “any breach of these standards is unacceptable and taken very seriously.”

NBCUniversal said it has asked Twitter to remove the ads associated with the inappropriate content.

Twitter’s transparency reports on its website show it suspended more than 1 million accounts last year for child sexual exploitation. It made about 87,000 reports to the National Center for Missing and Exploited Children, a government-funded non-profit that facilitates information sharing with law enforcement, according to that organization’s annual report.

Additional reporting by Reuters