Twitter is reportedly mistaking photos of rockets for “intimate” content due to the platform’s increased reliance on machine learning tools for image recognition.
Several accounts, including those of journalists covering space news, were suspended from the social media platform earlier this week due to the confusion, Quartz reported, citing a former Twitter employee.
Following a recent SpaceX launch, many accounts that shared video of the rocket returning to Earth were booted off Twitter, including those of space journalist Michael Baylor and the Spaceflight Now blog.
The microblogging platform flagged Spaceflight Now’s tweet as “violating our rules against posting or sharing privately produced/distributed intimate media of someone without their express consent.”
“Our account has been locked by Twitter for violating unspecified rules while covering a [SpaceX] launch,” Spaceflight Now editor Stephen Clark tweeted.
The suspended accounts seem to have been caught by Twitter’s automated content moderation system, the use of which predates Mr Musk’s takeover of the company.
BBC reported in November last year that an Oxfordshire astronomer’s account was suspended for three months after sharing a video of a meteor that the platform’s automated moderation tool flagged as “intimate content”.
“You can imagine how a rocket might be misidentified,” the former employee said.
“Seems like our image recognition needs some work!” Mr Musk responded on Twitter to one of the suspended accounts.
Since the Tesla and SpaceX chief’s takeover of Twitter, the platform’s approach to content moderation has changed substantially.
Last month, Twitter said it would rely more on artificial intelligence to moderate content instead of banking on its staff to conduct manual checks, even as hate speech has reportedly surged on the site.
Ella Irwin, the company’s vice president of Trust and Safety Product, told Reuters in December that the platform was doing away with manual reviews.
“The biggest thing that’s changed is the team is fully empowered to move fast and be as aggressive as possible,” Ms Irwin said.
Twitter’s entire human rights and machine learning ethics teams, as well as outsourced contract workers handling safety concerns, were reduced to no staff or just a handful of people following layoffs in November that slashed the company’s workforce from 7,500 to roughly 2,000.
A key team at the company dedicated to removing child sexual abuse material across Japan and the Asia-Pacific region was also left with only one person after the layoffs, reported Wired magazine.
“If Twitter wanted to decrease reliance on human moderators while not resulting in a flood of sensitive content, an obvious way to do that is lowering the precision thresholds of machine learning models responsible for detecting sensitive content,” the former employee told Quartz.
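The trade-off the former employee describes can be illustrated with a minimal sketch (hypothetical data and function names, not Twitter’s actual system): lowering the score threshold at which a classifier flags content catches more genuinely sensitive items, but also sweeps in more benign ones, such as a rocket video.

```python
# Hypothetical sketch of the precision/recall trade-off when a
# sensitive-content classifier's decision threshold is lowered.

def precision_recall(scores, labels, threshold):
    """Flag every item whose model score meets the threshold, then
    compute precision (share of flagged items that are truly sensitive)
    and recall (share of truly sensitive items that got flagged)."""
    flagged = [label for score, label in zip(scores, labels) if score >= threshold]
    tp = sum(flagged)                  # correctly flagged sensitive items
    total_pos = sum(labels)            # all truly sensitive items
    precision = tp / len(flagged) if flagged else 1.0
    recall = tp / total_pos if total_pos else 1.0
    return precision, recall

# Made-up model scores; label 1 = truly sensitive, 0 = benign
# (e.g. a rocket landing video that merely resembles flagged media).
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30]
labels = [1,    1,    0,    1,    0,    0,    0]

# Strict threshold: only high-confidence items are flagged.
print(precision_recall(scores, labels, 0.85))  # perfect precision, misses one item

# Lowered threshold: everything sensitive is caught, but benign
# items are now flagged too, so precision drops.
print(precision_recall(scores, labels, 0.55))
```

On this toy data, dropping the threshold from 0.85 to 0.55 lifts recall from two-thirds to 100 per cent while precision falls from 1.0 to 0.6, which is the shape of failure the suspended accounts appear to have run into.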