For over two decades, social media platforms have quietly evolved into powerful engines of emotional manipulation. Fueled by sophisticated algorithms designed by psychologists and data scientists, these platforms learn how to engage, enrage, and ultimately reshape minds. What was once a tool for connection has become, for many, an instrument of subliminal propaganda and brainwashing — and the damage is real, measurable, and rising.
What The Data Tells Us: Mental Health Crisis Among Youth
Recent surveys paint a distressing picture of adolescent mental health in the U.S.:
- In 2023, nearly 40% of U.S. high school students reported persistent feelings of sadness or hopelessness.
- About 20.4% seriously considered suicide, and 9.5% attempted suicide in the past year.
- Frequent social media use (several times daily) is strongly associated with poorer mental health outcomes, including depression, anxiety, suicidal ideation, and a greater likelihood of experiencing bullying, both online and offline.
- Youth who spend more than 3 hours per day on social media face double the risk of symptoms of depression and anxiety compared with those who use it less.
These are not small or isolated issues. They represent a broad, worsening mental health crisis that roughly tracks the rise of social media use and increasingly aggressive algorithmic targeting.
Trends in Media: The Rise of Outrage Language & Social Justice Terms
While direct quantitative surveys of word usage in LexisNexis are rarely published, a few studies offer clues:
- A global study of over 98 million news and opinion articles across 36 countries found a sharp increase, beginning in the early 2010s, in the use of terms denouncing prejudice: racism, sexism, homophobia, Islamophobia, anti-Semitism, and so on. These terms became significantly more prevalent around and after 2015.
- PolitiFact investigated a graphic showing the increase in frequency of “racist(s)/racism” between 2010 and 2020 in several major U.S. newspapers:
- The New York Times: ~712% increase
- Los Angeles Times: ~756%
- Washington Post: ~361%
- Wall Street Journal: ~468%
This suggests that the pattern visible in LexisNexis searches is indeed reflected in published media: a large increase in content featuring alarm or outrage terms related to race, identity, and injustice. (A 712% increase means the term appeared roughly eight times as often in 2020 as in 2010.)
Connecting the Dots: Algorithms, Anger, and Conditioning
Given these data, here is how the mechanism works, and why it amounts to subliminal brainwashing (a simplified sketch of the feedback loop follows the list):
- Platform incentives: Algorithms are rewarded for maximizing user engagement. Emotional arousal (especially anger, fear, and outrage) tends to generate more clicks, shares, and time spent.
- Personalization: Once the system detects what kind of content a user engages with (outrage, injustice, identity politics), it feeds that user more of the same. Over time, exposure becomes heavy and unbalanced.
- Reinforcement loop: For a young user, especially one with limited exposure to diverse viewpoints or with strong emotional sensitivity, this constant exposure can shift perception. Events, people, and ideas get filtered through frames of “us vs. them,” injustice, and threat.
- Mental health costs & radicalization risk: Repeated exposure to negative, emotionally charged content correlates with depression, anxiety, and suicidal ideation. It also fosters echo chambers and radicalization by pushing people toward ever more extreme narratives.
- Lack of transparency & accountability: Most users don’t know how these algorithms operate. There is little oversight, limited regulation, and platforms often resist revealing internal metrics or design choices.
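To make the feedback loop concrete, here is a minimal Python sketch of engagement-weighted ranking. Everything in it (the FeedRanker class, the topics, the multipliers) is hypothetical and invented for illustration, not any platform’s actual code; it shows only how a simple “reward whatever provokes a reaction” rule, left to run, drifts a feed toward emotionally charged content.

```python
import random
from collections import defaultdict

# A deliberately simplified, hypothetical model of an engagement-driven
# feed. Names, numbers, and logic are invented for illustration; real
# platform ranking systems are proprietary and far more complex.

class FeedRanker:
    def __init__(self):
        # Learned per-topic engagement weight for a single user.
        self.topic_weight = defaultdict(lambda: 1.0)

    def rank(self, posts):
        # Show posts on previously "rewarding" topics first.
        return sorted(posts,
                      key=lambda p: self.topic_weight[p["topic"]],
                      reverse=True)

    def record_reaction(self, post, engaged):
        # Reactions (clicks, shares, angry comments) boost the topic's
        # weight; being ignored lets the weight decay slightly.
        self.topic_weight[post["topic"]] *= 1.5 if engaged else 0.95

posts = [{"topic": "outrage"}, {"topic": "hobbies"}, {"topic": "news"}]
ranker = FeedRanker()

for _ in range(30):  # thirty simulated days of scrolling
    for post in ranker.rank(posts):
        # Assumption: emotionally charged content is reacted to more often.
        engaged = random.random() < (0.6 if post["topic"] == "outrage" else 0.2)
        ranker.record_reaction(post, engaged)

print(dict(ranker.topic_weight))  # the "outrage" weight dominates
```

After a simulated month, the “outrage” weight dwarfs the others even though the user never asked for more of that content; the loop finds and reinforces the emotional trigger on its own. That is the core of the argument: no editor chose this outcome, the optimization target did.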
Legal and Ethical Angle: Brainwashing or Free Speech?
Is this legal? Is it ethical?
- Subliminal persuasion and propaganda have historically been regulated in many contexts: advertising restrictions, limits on manipulative content, laws preventing the exploitation of minors.
- When platforms design systems that intentionally or recklessly exploit emotional vulnerabilities, especially in those under 18, there is an argument to be made that they violate ethical norms, and possibly legal ones (consumer protection, child protection, and mental health laws).
- Some recent litigation supports this. In France, families are suing TikTok, claiming that algorithmic exposure to harmful content contributed to their children’s suicides.
- Public health authorities are sounding the alarm: the U.S. Surgeon General’s Advisory highlights that children who spend more than three hours per day on social media are far more likely to suffer mental health harms.
The Human Cost
Imagine a 13-year-old from an ordinary family, raised with conservative Christian values, perhaps somewhat sheltered. Yet when they open YouTube, TikTok, or Facebook:
- They see dozens of videos every day about injustice, racism, systemic oppression, and fascism, with messages like “they’re oppressing you,” “you must resist,” and “they hate people like you.”
- That content stirs fear and indignation. The algorithm notices, rewards those emotional spikes, and shows more content like them.
- Over time, their worldview shifts: everything becomes about conflict. People they once trusted become suspect. Moderate voices sound naive. Nuance disappears.
- Mental health follows: anxiety, anger, despair, and perhaps suicidal thoughts, if the narrative consistently says “they’re out to get you,” “the truth is hidden,” “justice is denied.”
That pattern matches what the data shows.
What It Would Take to Push Back
- Transparency mandates: Platforms should disclose how recommendation algorithms work and which metrics they optimize, especially those tied to emotional reactions. Independent auditors should be able to see what content is being boosted and why.
- Content moderation & limits: Go beyond removing hate speech and self-harm content; regulate the design choices that amplify outrage, for example by limiting “engagement-based boosting” of emotionally inflammatory material.
- Age protections & screen time limits: Enforce stricter rules for minors. Perhaps limit how often content with high emotional intensity can be shown to young people.
- Legal/regulatory oversight: Consumer protection, mental health, and child welfare laws may need to recognize algorithmic harms as a category. Just as governments regulate tobacco and gambling for their addictive potential, so too must they regulate attention-economy tools.
- Media literacy & education: Teach young people how to recognize this manipulation. Teach critical thinking, source evaluation, and how algorithms shape what they see.
Conclusion
Since about 2010, we’ve seen two parallel trends: a surge in terms like racism, fascism, and supremacy in the media, and a rapid increase in adolescent mental health problems strongly associated with heavy social media use. This is not a coincidence. Algorithms have learned what triggers outrage, and once they detect your emotional weak point, they don’t stop; they amplify, repeat, and normalize.
This is not just “bad content” or “bad actors.” The platform design itself has become a machine for shaping minds. It is brainwashing by design. And unless there is transparency, regulation, and real protection, especially for the young, we are raising generations conditioned by algorithmic rage, suspicion, and despair rather than reason, balance, or hope.