In the spring of 2022, Twitter considered a radical change to its platform. For years the service had turned a blind eye to the abundance of adult material, and the company seriously considered monetizing this content through paid subscriptions, as OnlyFans does. The plan fell through, however, because of the platform's inability to quickly remove prohibited content, reports The Verge.
Had the project launched, Twitter would certainly have lost many advertisers, but revenue from adult content creators could have more than made up for it: this year alone, OnlyFans' revenue could reach $2.5 billion, roughly half of Twitter's 2021 revenue. Twitter's management initially liked the idea, especially since many OnlyFans creators already promote their channels on Twitter. Resources were therefore allocated, and the project was named ACM (Adult Content Monetization).
To vet the project, a Red Team of 84 employees was formed, and very quickly, as early as April 2022, the team concluded that Twitter could not safely sell adult subscriptions because the platform did not, and still does not, control the publication of illegal content. As it turned out, the company simply lacked the tools to verify that both creators and consumers of adult content were of legal age.
Red Team staff warned that launching ACM would only compound the problem: more adult material would be published on the platform, and with it the volume of illegal content would grow. Yet even after the project was shelved, management, though well aware of the problem, did nothing about it. The problem had been known for 15 months, ever since the company commissioned another team of experts to work on improving content health on Twitter. It was then discovered that the company's tools for detecting child abuse material had become outdated and ineffective.
Meanwhile, technology giants like Google and Facebook* take this fight far more seriously: in 2019, Mark Zuckerberg boasted that Facebook* alone spends more on safety than Twitter earns in a year. Technology platforms also support each other in this fight: the PhotoDNA service, created by Microsoft in 2009, is one of the main tools. However, that system only flags already-known prohibited material.
Unlike other large companies that have developed their own solutions, Twitter staff must manually hunt for content that PhotoDNA has missed, and the company's main working tool is RedPanda, a system that is now outdated and no longer supported. Machine learning systems still cannot detect prohibited material in every available format, and moderators do not always manage to do their job: in one case, a video with illegal content remained on the platform for more than 23 hours even after users had reported it. Still, according to Twitter spokesperson Katie Rosborough, the company significantly increased its investment in detecting such material back in February 2021, and even now, despite its hiring restrictions, it has four open vacancies in this area.
On August 23, Twitter announced that its team combating dangerous content was being merged with the unit fighting spam bots, a step the company surely took because Elon Musk had accused Twitter's management of providing unreliable data on spam across the platform. The businessman apparently considers this issue paramount, and it may well be that Twitter's problems with Elon Musk are only beginning.
* Included in the list of public associations and religious organizations in respect of which a court has issued a legally binding decision on liquidation or a ban on activities on the grounds provided for by Federal Law No. 114-FZ of 25 July 2002 "On Countering Extremist Activity".