Whistleblower accounts reveal that social media giants Meta and TikTok have made troubling decisions about the moderation of harmful content. Reports point to a shift toward maximizing engagement, influenced by internal research showing that outrage can drive user interaction.
More than a dozen insider accounts detail the risks posed by the spread of violent, sexual, and extremist content. One Meta engineer described directives from senior management to allow more 'borderline' harmful content in order to compete with TikTok's algorithm.
A TikTok employee exposed internal practices that prioritized political content over reports involving child safety, raising concerns that regulatory avoidance was overshadowing user protection. The whistleblowers assert that rapid-growth strategies put engagement ahead of legitimate safety concerns.
Evidence presented by whistleblowers highlights the prevalence of harmful content on Instagram Reels, showing sharply higher rates of bullying, hate speech, and incitement than on traditional Instagram posts. This rising tide of digital aggression raises questions about platforms' responsibility to create a safe online environment.
Former Meta staff describe the recommendation algorithms as a 'black box' that perpetuates harmful content, with safety concerns often overshadowed by commercial interests. As the platforms grapple with public backlash, calls for accountability in how they handle user safety continue to grow.



















