Empathy – the Antidote to Hate

Empathy as the Antidote to Hate: What Research Shows

 

Hate speech online weakens social cohesion, restricts freedom of expression, and undermines democratic values. For young people, especially, repeated exposure to hostility can shape how they see others and how safe they feel expressing themselves.

 

The HATE-LESS project approaches this challenge with a clear goal: not only to raise awareness of hate speech, but to show what actually helps reduce it. Recent research confirms that empathy plays a decisive role.

 

Why empathy matters in countering hate

 

As part of the Stop Hate Speech project, researchers from ETH Zurich and the University of Zurich examined which forms of counterspeech are most effective when responding to hateful content online. Their findings challenge common assumptions.

 

In a 2021 field experiment involving over 1,300 Twitter users (the platform now known as X) who had posted hateful messages, different response strategies were tested. The result was consistent and clear: only counterspeech that appealed to empathy for the people targeted by hate speech led to measurable behavior change.

 

Messages such as “This is very painful for Jewish people” or “Statements like this hurt migrants” were significantly more effective than humor, sarcasm, or warnings about consequences. Appeals that encouraged authors to reflect on the human impact of their words reduced the likelihood that they would continue posting hateful content.

 

As study lead Dominik Hangartner explains, there is no single solution to online hate, but some approaches demonstrably work better than others.

 

Empathy also influences bystanders

 

A follow-up study published in Scientific Reports in 2025 expanded these findings. It showed that empathetic counterspeech not only affects the person being addressed; it also shapes how others react.

 

Users who were exposed to empathetic counterspeech, even when it was not directed at them, were less likely to like or amplify hateful comments. This reduced the reach and visibility of such content. Counterspeech that invited readers to put themselves in the position of the targeted group was particularly effective.

 

The researchers observed that messages encouraging perspective-taking helped bystanders draw parallels to their own experiences of exclusion or insult. This shift reduced passive support for hate speech and limited its spread.

 

From research to practice

 

These insights are central to the approach of the Public Discourse Foundation and its partners. Counterspeech that promotes empathy and perspective-shifting offers a constructive way to respond to hate without escalating conflict.

 

For HATE-LESS, this evidence reinforces a key principle: addressing hate is not only about naming what is wrong, but about modelling what works. Empathy is not a soft response; it is a strategic and evidence-based one.

 

By helping young people understand the emotional impact of words, and by encouraging them to consider how others experience exclusion or harm, we strengthen critical thinking and reduce pathways to radicalization.

 

Building healthier digital spaces

 

Empathy reframes the response to harmful behavior. Instead of shaming or threatening, it reconnects speech to its human consequences. Research shows that this approach can change behavior, reduce amplification, and support healthier online environments.

 

The HATE-LESS project continues to translate these findings into educational tools and communication strategies that support young people, educators, and institutions across Europe.

 

Because when empathy enters the conversation, hate loses ground.

Source: ETH Zurich & University of Zurich (2025), Confronting Hate with Empathy for Those Affected.

Disinformation Against Migrants: Why Awareness Shapes Safer Digital Spaces


 

The June–July 2024 Eurobarometer identifies disinformation as one of the key concerns for Europeans: 19% of respondents list manipulation and false information among the biggest challenges. Many of these narratives focus on migrants; they circulate widely online and influence public debate.

 

Disinformation works because it often blends emotional claims with partial truths. Stories that play on fear or uncertainty spread quickly among users who may not have the tools to verify what they see. These narratives shape attitudes toward communities that already face exclusion; they also weaken trust in institutions.

 

HATE-LESS responds by promoting media literacy and supporting tools that help young people understand how misleading content operates. Awareness encourages people to recognize patterns of distortion. It guides them toward verified information and makes it easier to discuss sensitive topics without reinforcing harmful stereotypes.

 

Safer digital spaces form when users approach content with curiosity rather than reaction and when conversation is built on facts instead of assumptions.

Source: Ipsos (2024), EU Challenges and Priorities – Summary (PDF).

From online hate to offline violence

When Online Hate Spills Offline: Why Inclusion Matters More Than Ever

During the 2024 European Parliament elections, harmful online rhetoric reached worrying levels. According to the FRA (European Union Agency for Fundamental Rights), toxic narratives circulating on social media were linked to rising tensions across several Member States. Reports documented attacks against campaign workers in Germany and Italy, over 378 criminal proceedings linked to election-related violence in Hungary, and even an assassination attempt targeting Slovakia’s Prime Minister.

These events remind us that hate speech online is not “just words.” When a digital environment becomes saturated with hostility, misinformation, or dehumanizing language, it can increase polarization and lower the threshold for real-world aggression.

But highlighting the risks is only half of the picture.

At HATE-LESS.EU, we focus on strengthening the antidote: inclusion, critical thinking, and positive digital participation. By helping educators and young people understand how harmful content spreads, we support them in building online communities centered on dialogue rather than division.

Research in digital psychology consistently shows that communities where young people feel included and represented are less vulnerable to radicalization or harmful narratives. Encouraging respectful interaction, offering accessible tools for reporting hate, and fostering a sense of belonging all contribute to healthier online spaces.

The events of the 2024 elections underline why this work matters. Preventing digital harm means investing in the social and emotional skills that keep communities strong, online and offline.

Source:
• FRA – Fundamental Rights Report 2025
 

Partner Websites:

EUROPEAN YOUTH4MEDIA NETWORK EV – Germany: https://youth4media.eu/

EESTI PEOPLE TO PEOPLE – Estonia: https://www.ptpest.ee/

MITRA FRANCE – France: https://www.facebook.com/mitrafr/

Formation et Sensibilisation de Luxembourg – Luxembourg: https://fslux.lu/

Evolutionary Archetypes Consulting SL – Spain: https://ea.consulting/

WAVES FOUNDATION FOR GLOBAL EDUCATION – Cyprus: https://www.wavesfoundation.org/

 

Join The Hate-Less Movement

Be part of the movement against hate speech and disinformation! Stay updated and engaged by following us on our official platforms:

Join the conversation and help create a more inclusive and hate-free digital world!

Online Hate Speech & Harassment of Women

Online Hate Speech and the Harassment of Women in Politics: A Growing Democratic Risk

 

Across Europe, women participating in political life continue to face high levels of online harassment. While digital spaces are essential for democratic debate, they have also become environments where women are disproportionately targeted with sexist abuse.

 

Findings by the OSCE Office for Democratic Institutions and Human Rights (ODIHR) show that during 2024 election campaigns across Europe, women politicians were subjected to frequent sexist attacks and derogatory messages. This form of harassment often intensifies the moment women speak publicly, challenge discrimination, or address issues like gender equality.

 

Women belonging to minority groups (including ethnic or religious minorities) experience even more severe online harassment. Many face the double burden of being targeted with both sexism and racism, creating barriers to political participation and discouraging them from voicing opinions online.

 

This pattern of aggression is not only harmful to individual women. It undermines democratic processes by silencing diverse perspectives, restricting inclusive debate, and reinforcing social inequalities. When women reduce their online presence due to fear or exhaustion, entire communities lose representation.

 

Why This Matters for HATE-LESS

Countering online hate speech is essential for building a safer, more democratic digital environment. Addressing gendered online harassment helps ensure that women can participate fully in public life without fear.

 

Source: OSCE Office for Democratic Institutions and Human Rights (ODIHR)

 


Harmful Rhetoric in EU Elections: What Last Year’s Trends Reveal About Online Hate


Last year’s European Parliament elections once again demonstrated how digital spaces shape political conversations and how quickly harmful rhetoric can spread. According to the OSCE Office for Democratic Institutions and Human Rights (OSCE ODIHR), the 2024 election cycle saw increased use of hostile language, including racism, misogyny, xenophobia, Islamophobia, intimidation, and even calls for violence.

 

Much of this rhetoric circulated online through algorithmic amplification, driven by the “attention economy” and the human “negativity bias”: social media algorithms prioritize emotionally charged, negative content because it generates higher engagement (outrage, anger, quick shares).

The algorithm interprets this high engagement as “interest,” pushing the toxic content to wider audiences. This feedback loop disproportionately boosts extreme voices, effectively shifting the Overton Window (the range of acceptable public opinion) toward more polarizing positions, which severely impacts the fairness and integrity of democratic processes.
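The feedback loop described above can be sketched as a toy model (illustrative only; this is not any real platform’s ranking code, and the names and rates below are invented for the example):

```python
# Toy simulation of an engagement-driven ranking loop. Each round, a post's
# reach grows in proportion to the engagement it received, and emotionally
# charged ("outrage") content attracts more engagement per view.

def simulate(rounds: int = 5) -> dict:
    # (label, engagement per view): the outrage post provokes more reactions
    posts = {"neutral_news": 0.02, "outrage_post": 0.08}
    reach = {name: 100.0 for name in posts}  # both start with equal reach

    for _ in range(rounds):
        for name, rate in posts.items():
            engagement = reach[name] * rate  # likes/shares this round
            # the ranker reads engagement as "interest" and widens reach
            reach[name] += engagement * 10
    return reach

result = simulate()
```

Even though both posts start with identical reach, compounding round by round leaves the outrage post with several times the audience of the neutral one, which is the amplification effect the paragraph describes.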

 

This pattern is consistent with research indicating that hostile narratives spread more rapidly than neutral or positive information. Studies from the EU Agency for Fundamental Rights (FRA, 2024) confirm that exposure to dehumanising or threatening language increases the likelihood of accepting or repeating hate speech, especially among young people who are active on social media platforms.

 

For the HATE-LESS.EU project, these insights strengthen the need for structured educational approaches that address both emotional and cognitive responses to online content. Understanding how harmful narratives develop and why they spread so efficiently helps young people build the resilience and critical thinking needed to navigate digital environments safely.

 

By analysing last year’s election patterns, we contribute to a broader effort to create safer digital spaces, support democratic participation, and reduce the impact of harmful rhetoric on public debate.

 

Sources:

GDI (2024), Gendered Disinformation in the European Parliamentary Elections

Creating Safer Digital Spaces

Towards Safer Digital Spaces: Why Technology Alone Isn’t Enough

 

Hate speech spreads online faster than positive content – sometimes up to ten times faster, according to multiple studies on digital communication dynamics. While algorithms and moderation tools have become more sophisticated, the real solution lies in empowering people, not just improving code.

 

A 2023 report by the EU Agency for Fundamental Rights (FRA), “Online Content Moderation – Current challenges in detecting hate speech,” highlights that many moderation systems still struggle with accuracy and transparency. Automated filters often fail to detect context or irony, while human moderators face psychological pressure and lack of clear, unified standards.

 

As the report underlines, a crucial aspect is building communities that know how to respond to harmful content. That’s where education and empathy come in.

 

At HATE-LESS.EU, we believe that fostering digital literacy, critical thinking, and social awareness can transform how young people engage online. Through workshops, creative storytelling, and peer learning, the project helps educators and youth leaders explore how bias spreads, how emotional manipulation works, and how empathy can counteract radicalization.

 

By equipping individuals with the knowledge to recognise hate speech and the tools to respond constructively, we move closer to an internet culture where respect and inclusion are not just moderated – they’re practiced. Because building safer digital spaces means building stronger human connections.

 

👉 Read the full FRA report here


Who Moderates the Moderators?

Who Moderates the Moderators? The Hidden Power of Online Oversight

 

Every second, algorithms and human moderators decide which voices are amplified, and which are silenced online. But who ensures these decisions are fair and transparent?

 

According to the FRA report “Online content moderation – Current challenges in detecting hate speech”, online moderation systems face three major issues: a lack of consistent definitions of hate speech, limited transparency of moderation processes, and the risk of discrimination when relying on artificial intelligence or opaque decision-making. 

 

The report found that among the posts analysed, more than half were still considered hateful by human coders despite initial moderation efforts. 

 

It also highlighted that women, people of African descent, Roma and Jewish communities are disproportionately targeted.

 

FRA emphasises that platforms must implement performance indicators (e.g., volume of misogynistic content, accuracy of moderation), ensure AI systems for detection are rights-compliant, and enable independent researchers to access moderation data for accountability. 

 

At the HATE-LESS.EU initiative, we believe that digital literacy, critical thinking and inclusive education are integral to true online safety. It’s not just about what gets moderated. It’s also about who decides, how, and why. By empowering young people and educators with tools to question both content and context, we aim to build digital spaces that are safe.

 

👉 Read the full FRA report here

 


HATE-LESS Toolkit Launch: A Guide to Countering Hate

Launch of the HATE-LESS.EU Toolkit: A Guide to Countering Hate

The Erasmus+ project HATE-LESS.EU has officially launched its Methodological Guidelines and Practical Toolkit: two innovative resources designed to help educators and young people deconstruct hate speech, strengthen media literacy, and promote critical thinking across Europe.

 

“The true adversary is not the hateful voice, but the uneducated ear.” – Nektar Baziotis, CTO of EA Consulting

 

Developed by six European partners and tested during an international training in Tallinn, Estonia, the HATE-LESS.EU Toolkit offers creative, hands-on methods that can be used in schools, NGOs, and youth projects. Together, the consortium aims to build a Europe where knowledge and empathy defeat hate.

 

You can read the full story in the official Press Release or visit hate-less.eu to explore the project’s activities and materials.

 


How AI Can Help Detect Hate Speech

🤖 How AI Can Help Detect Hate Speech

 

Artificial Intelligence (AI) has become an essential part of online content moderation. From detecting hate speech to filtering disinformation, algorithms help platforms process millions of posts every day, but they still have a long way to go.

 

According to the FRA (2023) Online Content Moderation report, AI systems can identify harmful language much faster than human moderators. Yet accuracy remains a challenge: automated tools often remove legitimate content or overlook hate expressed in subtle or coded ways. Both errors can harm freedom of expression and leave marginalized groups exposed to discrimination.
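Both failure modes can be illustrated with a deliberately naive keyword filter (a hypothetical sketch; real moderation systems use machine-learning classifiers, not word lists, and the example posts and blocklist term below are invented):

```python
# Naive keyword filter showing the two moderation errors described above:
# removing legitimate content (false positive) and missing coded hate
# (false negative).

BLOCKLIST = {"vermin"}  # hypothetical blocked term

def naive_filter(post: str) -> bool:
    """Return True if the post would be removed."""
    return any(word in post.lower() for word in BLOCKLIST)

# False positive: counterspeech quoting hateful language gets removed too
counterspeech = "Calling refugees 'vermin' is dehumanizing and wrong."

# False negative: coded, euphemistic hostility slips through untouched
coded_hate = "These people are a plague on our country."

removed = {post: naive_filter(post) for post in (counterspeech, coded_hate)}
```

Here the filter deletes the post that condemns hate while leaving the dehumanizing one online, which is exactly the accuracy problem the report points to.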

 

The FRA Fundamental Rights Report 2025 reinforces this message, warning that automated moderation must include transparency, accountability, and human oversight. Algorithms reflect the data they’re trained on, meaning they can unintentionally reproduce social biases already present in society. Without regular auditing, the systems designed to fight hate may end up amplifying it.

 

European policy is catching up. The EU Artificial Intelligence Act establishes clear rules for trustworthy AI, introducing a risk-based approach that requires transparency, human oversight, and safeguards when AI systems pose high risks to fundamental rights.

 

At HATE-LESS.EU, we see AI as a partner, not a replacement, in promoting inclusion. By combining technology with media literacy, education, and empathy, we can teach young people to understand how algorithms work, recognize bias, and engage responsibly online.

 

AI can help detect hate speech, but it cannot define what hate looks like in every context. That requires human judgment, ethical design, and continuous reflection: values that guide all of our work across Europe.

 

📖 Sources:
• FRA (2023), Online Content Moderation – Current Challenges in Detecting Hate Speech
• FRA (2025), Fundamental Rights Report 2025
• European Commission (2024), Digital Services Act & AI Act Summaries

Learn more: https://hate-less.eu

How AI Impacts Hate Speech

🧠 How AI Impacts Hate Speech

 

Artificial Intelligence (AI) shapes nearly every aspect of our lives, from how we search online to how we communicate. It makes daily tasks easier, but when applied to online moderation, it brings new challenges.

 

Algorithms designed to detect hate speech can sometimes go too far, removing legitimate posts and silencing voices, or not far enough, letting harmful content slip through. Both scenarios can lead to discrimination, often against groups already at risk.

 

As the FRA Fundamental Rights Report 2025 reminds us, algorithms are not neutral. They reflect the data and values we feed into them, which means they can also reproduce existing biases.

 

HATE-LESS.EU believes that the solution lies in balance: combining AI’s efficiency with human judgment and ethical oversight. Automation can help identify risks faster, but people bring the empathy, understanding, and context that machines lack.

 

⇒ By teaching critical thinking and algorithmic awareness, we empower young people to understand how technology shapes the information they see and how to challenge bias when they spot it.

 

AI should be a tool for inclusion, not exclusion. 

📖 Source: FRA Fundamental Rights Report 2025


Learn more: https://hate-less.eu

Contact

Project: 2024-1-DE04-KA220-YOU-000244181

Disclaimer: Co-financed by the European Union. The opinions and points of view expressed are solely those of the author(s) and do not necessarily reflect those of the European Union or of JUGEND für Europa (German National Agency for Erasmus+ Youth, Erasmus+ Sport and the European Solidarity Corps).
Neither the European Union nor the Granting Authority can be held responsible for them.
