Whistleblowers Reveal Social Media Giants Amplifying Harmful Content for Engagement
In a series of striking disclosures, whistleblowers from TikTok and Meta, the parent company of Facebook and Instagram, have claimed that internal policies allowed harmful content to flourish on their platforms. The revelations highlight how these companies prioritized user engagement, often driven by outrage, over user safety.
According to insiders, several decisions within these organizations were influenced by internal research showing that outrage can significantly increase engagement rates. More than a dozen employees have come forward, stating that both firms recognized the risks posed by violent and harmful content but opted to capitalize on the engagement it drove.
One Meta engineer said he was instructed by senior management to permit borderline harmful content, including misogyny and conspiracy theories, as a strategy to counter TikTok's growing popularity. “They sort of told us that it's because the stock price is down,” he recounted.
A TikTok employee provided rare access to internal reports showing that complaints about posts involving politicians were handled ahead of reports of harmful posts featuring children. Such revelations suggest a preference for maintaining relationships with political figures over ensuring user safety.
Matt Motyl, a whistleblower and senior researcher at Meta, asserted that after the company launched Instagram Reels to compete with TikTok, Reels saw more incidents of bullying and harassment than the main feed. Internal research showed Reels had 75% higher rates of bullying and harassment, along with increased hate speech.
Motyl detailed how the company directed substantial resources—700 employees—to bolster Reels while neglecting safety measures to protect vulnerable users, especially minors.
Moreover, engineers behind the algorithms admit they have limited control over the content their systems process. Ruofan Ding, a former machine learning engineer at TikTok, highlighted the “black box” nature of these algorithms, explaining that they often do not differentiate between harmful and harmless content.
Teenagers have reported encountering disturbing content on these platforms, in some cases leading to radicalization. One young man, Calum, openly shared that he was radicalized by the algorithm from a young age, illustrating the drastic impact such recommendation systems can have.
Despite the serious allegations, Meta and TikTok have denied any wrongdoing. Meta maintained that it does not deliberately amplify harmful content, while TikTok insisted that its moderation systems are effective and prioritize user safety.