Social media giants made decisions that allowed more harmful content onto people’s feeds, after internal research into their algorithms showed how outrage fuelled engagement, whistleblowers told the BBC.
More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail and terrorism as they battled for users’ attention.
An engineer at Meta, which owns Facebook and Instagram, described how he had been told by senior management to allow more “borderline” harmful content – which includes misogyny and conspiracy theories – in users’ feeds to compete with TikTok.
“They sort of told us that it’s because the stock price is down,” the engineer said.
A TikTok employee gave the BBC rare access to the company’s internal dashboards of user complaints, as well as other evidence showing that staff had been instructed to prioritise several cases involving politicians over a series of reports of harmful posts featuring children.
Decisions were being made to “maintain a strong relationship” with political figures to avoid threats of regulation or bans, not because of the risks to users, the TikTok staffer said.
The whistleblowers who spoke to the BBC documentary Inside the Rage Machine offer a close-up view of how the industry responded to the explosive growth of TikTok, whose highly engaging algorithm for recommending short-form videos upended social media, leaving rivals scrambling to catch up.
A senior Meta researcher, Matt Motyl, said the company’s TikTok competitor, Instagram Reels, was launched in 2020 without sufficient safeguards. Internal research shared with the BBC showed comments on Reels had a significantly higher prevalence of bullying and harassment, hate speech, and violence or incitement than elsewhere on Instagram.
The company assigned 700 staff to grow Reels, while safety teams were denied two specialist staff to protect children and 10 more to help ensure the integrity of elections, another former senior Meta employee said.
Motyl gave the BBC dozens of what he described as “high-level research documents showing all sorts of harms to users on these platforms”. Among them was evidence showing Facebook was aware of problems caused by its algorithm.
The algorithm offered content creators a “path that maximises profits at the expense of their audience’s wellbeing”, and the “current set of financial incentives our algorithms create does not appear to be aligned with our mission” to bring the world closer together, according to one internal study.
It said Facebook could “choose to be idle and keep feeding users fast-food, but that only works for so long”.
In response to the whistleblowers’ claims, Meta said: “Any suggestion that we deliberately amplify harmful content for financial gain is wrong.” TikTok said these were “fabricated claims” and that it invests in technology to prevent harmful content from ever being viewed.