Understanding AI: A Journey from Hate Speech to Bots and Sectarian Wars

by sailforchange | September 16, 2025 | UA at SAIL Blogs

Zeina Hammoud

Imagine this: you are scrolling through Twitter on a slow evening. Your feed suddenly fills with users from different religious groups trading abuse. Some call others "devils," while others allege conspiracies. Then you notice a handle posting updates every few seconds, sounding vaguely robotic. You keep scrolling past hostile remarks about an ongoing war, but beneath them sit calm, reasoned replies refuting the propaganda. You wonder: who wrote these remarks? Human beings or AI? And how did they know what to say?

This is not just social media drama. It reveals deep-seated problems in our world today – hate, manipulation, disinformation, and ontological uncertainty about what is real and who or what is on the other end of the screen. Following this trail of problems leads us to Artificial Intelligence (AI), not as a distant technical field, but as a force actively shaping our lives, minds, and societies.

Sectarian Hate Speech on the Internet: The Case of Sunni-Shia Twitter Wars

In her study, Sectarian Twitter Wars, Alexandra Siegel analyzed over 7 million Arabic tweets posted between February and August 2015 (Siegel, 2015, p. 4), uncovering a disturbing link between violent events and surges in sectarian hate speech.

Figure 2. Anti-Shia tweet volume in 2015. Note. Adapted from Sectarian Twitter Wars: Sunni-Shia Conflict and Cooperation in the Digital Age (p. 10), by A. Siegel, 2015, Carnegie Endowment for International Peace. © 2015 Carnegie Endowment.

The sharpest spike in anti-Shia tweets, peaking at over 350,000 tweets in a single day, occurred immediately after the Saudi-led military campaign known as Operation Decisive Storm (Event 1). Subsequent violent acts, such as mosque bombings and ISIS attacks, produced smaller but still noticeable spikes in online hate speech. This pattern confirms that offline violence can precipitate waves of sectarian discourse on the Internet, especially when a conflict is framed in religious rather than political terms.
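
To make the method concrete, here is a minimal sketch of how daily volume spikes like these could be surfaced from raw tweet timestamps. It is not Siegel's actual procedure: the CSV layout, column names, and the rolling three-sigma threshold are all illustrative assumptions.

```python
# Minimal sketch: surface days on which sectarian-keyword tweet volume spikes.
# Assumes a CSV with "timestamp" and "is_sectarian" columns (illustrative names);
# the rolling three-sigma rule is a common heuristic, not Siegel's actual method.
import pandas as pd

def find_spike_days(csv_path: str, window: int = 14, sigma: float = 3.0) -> pd.Series:
    df = pd.read_csv(csv_path, parse_dates=["timestamp"])
    # Daily count of tweets flagged as sectarian.
    daily = df[df["is_sectarian"]].set_index("timestamp").resample("D").size()
    # Baseline: mean and spread over the preceding `window` days (shifted so a
    # spike day does not inflate its own baseline).
    mean = daily.rolling(window, min_periods=window).mean().shift(1)
    std = daily.rolling(window, min_periods=window).std().shift(1)
    return daily[daily > mean + sigma * std]  # days far above the recent baseline

if __name__ == "__main__":
    print(find_spike_days("arabic_tweets_2015.csv"))  # hypothetical file
```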

Figure 5. Network of anti-Shia Twitter accounts. Note. Adapted from Sectarian Twitter Wars: Sunni-Shia Conflict and Cooperation in the Digital Age, by A. Siegel, 2015, Carnegie Endowment for International Peace. © 2015 Carnegie Endowment.

Figure 6. Network of anti-Sunni (blue) and anti-ISIS (red) Twitter accounts. Note. Adapted from Sectarian Twitter Wars: Sunni-Shia Conflict and Cooperation in the Digital Age, by A. Siegel, 2015, Carnegie Endowment for International Peace. © 2015 Carnegie Endowment.

Unlike the dense, tightly connected anti-Shia network, the second figure shows that anti-Sunni (blue) and anti-ISIS (red) tweets are less concentrated and lack centralized hub accounts. The green nodes – anti-Shia accounts using anti-Sunni language – are scattered and few. This weaker network captures the asymmetry of sectarian discourse online: anti-Sunni language exists, but it is less centralized and less widely diffused.
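
The contrast between a "tightly connected" network and a "scattered" one can be made concrete with standard graph metrics. The sketch below, using the networkx library, compares two toy retweet networks by density and by the centrality of their most-retweeted account; the edge lists and metric choices are illustrative, not Siegel's exact measures.

```python
# Sketch: quantify "tightly connected and centralized" vs. "scattered" with
# standard graph metrics. Edge lists are toy placeholders, and density plus
# maximum in-degree centrality are simple proxies, not Siegel's exact measures.
import networkx as nx

def describe_network(name: str, retweet_edges: list[tuple[str, str]]) -> None:
    g = nx.DiGraph(retweet_edges)  # edge (u, v) means: account u retweeted v
    density = nx.density(g)
    top_hub = max(nx.in_degree_centrality(g).values(), default=0.0)
    print(f"{name}: density={density:.3f}, top hub centrality={top_hub:.3f}")

# A hub-dominated network (everyone retweets one account) vs. a scattered one.
describe_network("hub-dominated (toy)", [("u1", "hub"), ("u2", "hub"), ("u3", "hub"), ("u4", "hub")])
describe_network("scattered (toy)", [("a", "b"), ("c", "d"), ("e", "f")])
```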

Why does this matter? Because words online define realities offline. Portraying these wars as religious rather than political mobilized popular support and silenced dissent. When ISIS uploaded Hollywood-style videos laced with sectarian venom, it converted latent bias into recruitment material. Such extremist propaganda does not merely mirror reality; it constructs it.

The Rise of Bots and AI-Generated Content

Yet not all of the accounts generating these stories are human. Another report, Social Bot Detection in the Age of ChatGPT, explains how social bots – automated accounts masquerading as humans – spread propaganda, misinformation, and conspiracy theories (Ferrara, 2023). Bots were once simple scripts that posted repetitive messages. Now, with AI systems like ChatGPT, bots can produce fluent, human-sounding dialogue. Some merely share weather updates or news, but others are built to cause harm: manipulating public opinion before elections or propagating hate speech.

For example, a bot can retweet radical content repeatedly to make it appear popular, or quietly seed conspiracy theories to polarize a country. Experts warn that such bots are now extremely difficult to identify: AI-generated content tends to bypass traditional detection methods, and the better the available AI becomes, the harder it is to distinguish bots from real users.
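
To see why detection has become harder, consider the kind of behavioral signal that caught early bots: machine-regular posting cadence. The toy heuristic below flags accounts whose posting intervals barely vary; the threshold and function names are invented for illustration, and real detectors combine many behavioral, network, and content features rather than one rule.

```python
# Toy heuristic: accounts posting at near-constant intervals look automated.
# The threshold is invented for illustration; real detectors (e.g., Botometer)
# combine many behavioral, network, and content features, not one rule.
from datetime import datetime, timedelta
from statistics import pstdev

def looks_automated(post_times: list[datetime], max_jitter_s: float = 5.0) -> bool:
    """Flag an account whose gaps between consecutive posts barely vary."""
    if len(post_times) < 5:
        return False  # too little history to judge
    times = sorted(post_times)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    return pstdev(gaps) < max_jitter_s  # near-clockwork cadence

# Example: an account posting exactly every 30 seconds all evening.
start = datetime(2025, 9, 16, 20, 0)
print(looks_automated([start + timedelta(seconds=30 * i) for i in range(20)]))  # True
```

An LLM-written bot defeats content-only checks precisely because its text reads as human, which is why behavioral and network signals like these remain central to detection.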

Counter Speech: Combating Hate with AI

Yet AI is not the sole villain here. In a study of the Russia-Ukraine conflict, researchers developed an AI pipeline to reduce hate speech on Twitter (Leekha, Simek, & Dagli, 2024). Using Large Language Models (LLMs) like ChatGPT together with Retrieval-Augmented Generation (RAG), their system detected hateful tweets, analyzed their claims, and generated counter-narratives to refute lies and defuse hate.
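
In outline, such a pipeline has three stages: classify a tweet as hateful, retrieve relevant facts, and generate a grounded reply. The sketch below shows only how the stages hand off to one another; the keyword-matching function bodies are deliberate stand-ins for the LLM classifier, retriever, and generator the paper describes, so everything here is illustrative rather than the authors' implementation.

```python
# Schematic sketch of a classify -> retrieve -> generate counter-speech flow.
# The keyword-matching bodies below are deliberate stand-ins for the LLM
# classifier, dense retriever, and LLM generator described in the paper;
# only the hand-off between stages is meant to be faithful.

FACT_CORPUS = {  # illustrative snippets a real retriever would index
    "bioweapons": "Independent investigations have found no evidence of bioweapon production.",
}

def classify_hate(tweet: str) -> bool:
    # Stand-in for an LLM-based hate-speech classifier.
    return "bioweapons" in tweet.lower()

def retrieve_facts(tweet: str) -> list[str]:
    # Stand-in for retrieval-augmented generation: fetch facts matching the claim.
    return [fact for key, fact in FACT_CORPUS.items() if key in tweet.lower()]

def generate_reply(tweet: str, facts: list[str]) -> str:
    # Stand-in for prompting an LLM with the tweet plus retrieved evidence.
    evidence = " ".join(facts) or "No credible evidence supports this claim."
    return f"This claim is not supported by the facts. {evidence}"

def counter_speech_pipeline(tweet: str) -> str | None:
    if not classify_hate(tweet):
        return None  # not hateful: no reply needed
    return generate_reply(tweet, retrieve_facts(tweet))

print(counter_speech_pipeline("They are secretly producing bioweapons!"))
```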

For instance, when responding to a hateful post claiming that a group was producing bioweapons, the AI's reply drew on factual evidence to refute the claim in a calm, concise manner. Their system achieved 97% accuracy and an F1-score of 0.965 on hate-speech detection, and its counter-speech responses were rated highly for relevance and grammaticality. This application illustrates AI's capacity to decrease tensions rather than inflame them.
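
For readers unfamiliar with these metrics, accuracy and F1 are both derived from a model's confusion matrix; the snippet below shows the arithmetic with invented counts, not the paper's data.

```python
# How accuracy and F1 are derived from a confusion matrix. The counts are
# invented for illustration; they are not the paper's reported data.
tp, fp, fn, tn = 480, 10, 20, 490  # true/false positives and negatives

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)  # of flagged tweets, how many were truly hateful
recall = tp / (tp + fn)     # of hateful tweets, how many were caught
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f}, F1={f1:.3f}")  # accuracy=0.970, F1=0.970
```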


Existential Fear and Why We Need to Know About AI

Every one of these developments raises fundamental questions for human beings. As AI-generated content increasingly envelops us, an existential fear takes hold. Who is shaping what we think – humans or machines? Are we being manipulated by powers we can neither see nor comprehend? Are bots merely reflecting our own biases back at us, or manufacturing new ones?

This is not mere philosophizing. If we do not make sense of the technology, we risk becoming passive consumers of narratives crafted by the powerful – from groups wielding hate speech, to governments deploying bots as weapons, to businesses developing AI for profit rather than truth. Real-world examples – from sectarian bloodshed echoed on Twitter to bot networks swaying elections – illustrate why technological literacy matters for all citizens, not just engineers.

Overcoming These Challenges

The research reviewed above suggests three main ways to overcome these challenges:

1. Develop advanced detection systems: Combine deep learning, graph analysis, and explainable AI to identify bots across platforms in real time.

2. Develop counter-narratives: Use AI not only for detection but also for creating fact-based, respectful responses to hate speech.

3. Encourage media literacy and critical thinking: Even with AI support, the first line of defense against deception is human users. Teaching how propaganda spreads – e.g., sectarian invective posing as religious fact – equips individuals to push back against polarizing narratives.

AI is neither good nor evil. Like language itself, it amplifies the intent of its users. To navigate our future with AI ethically, we need to:

  • Mandate accountability for AI systems used in social media curation and counter-speech.
  • Hold accountable those who deploy bots or propagate disinformation, including technology firms.
  • Educate the public about AI-generated content: how it works, how it can mislead us, and how to verify facts.
  • Endorse policies that balance the ethical development of AI with safeguarding free speech and privacy.

Finally, we must recognize that our existential dilemma – the fear of losing control to machines – is resolved not by avoiding technology but by understanding it deeply. Once we understand how AI operates, we can use it wisely: to detect the bots that propagate hate, to build counter-narratives of peace, and to ensure that our societies remain humane in this age of intelligent machines.


References

Ferrara, E. (2023). Social bot detection in the age of ChatGPT: Challenges and opportunities. First Monday, 28(6).

Leekha, R., Simek, O., & Dagli, C. (2024). War of words: Harnessing the potential of large language models and retrieval augmented generation to classify, counter and diffuse hate speech. MIT Lincoln Laboratory. https://doi.org/10.32473/flairs.37.1.135484

Siegel, A. (2015). Sectarian Twitter wars: Sunni-Shia conflict and cooperation in the digital age. Carnegie Endowment for International Peace.

About the Author: 

Zeina Hammoud is a CMPS student at AUB.