In 2026, the “algorithm arms race” between TikTok and Meta intensified, turning social media into a battleground for user attention. According to a BBC report, both giants prioritized competition over safety protocols, leading to the normalization of harmful content such as violence, misogyny, sexual abuse, and harassment.
A Meta engineer revealed that senior management instructed him to allow more harmful content into users’ feeds to compete with TikTok’s dominance and boost engagement. According to Meta researcher Matt Motyl, Instagram Reels launched in 2020 without sufficient safeguards, leading to high rates of violence, hate speech, and harassment.
A research paper by Motyl showed that Reels had significantly higher harm rates than the main feed. A TikTok whistleblower revealed that content reports involving politicians were prioritized over child-safety and sexual-harassment reports in order to avoid regulatory threats or bans.
Real-world examples also exist: users like 19-year-old Calum reported being radicalized by misogynistic and racist content that appeared in his feed from the age of 14. UK counter-terror police reported the normalization of far-right and antisemitic content, noting that users were becoming desensitized to real-world violence.
TikTok dismissed these claims as fabricated, stating that its robust parallel review structures do not jeopardize child safety. Meta denied the allegations, emphasizing its strict policies and its new Teen Accounts feature with built-in protections for parents.


