AI Platforms Fail to Stop Antisemitism – Shocking Study Exposes Gaps
A study by StopAntisemitism revealed inconsistent handling of antisemitic content by major AI platforms, with some failing to clearly condemn harmful tropes, particularly regarding Israel. The findings highlight the need for stronger safeguards as AI becomes a dominant information source.

A study by StopAntisemitism, reported on August 10, 2025, has revealed significant inconsistencies in how four major AI platforms (ChatGPT, Claude, Perplexity, and Grok) address antisemitic content, raising concerns about their potential to amplify harmful rhetoric. The research, conducted using five targeted prompts based on the International Holocaust Remembrance Alliance (IHRA) definition of antisemitism, highlights a critical lack of robust safeguards as AI increasingly displaces traditional information sources.
The IHRA definition, adopted globally by entities including the U.S. State Department, describes antisemitism as “a certain perception of Jews, which may be expressed as hatred toward Jews,” encompassing tropes such as Holocaust denial, dual-loyalty accusations, and calls for Israel’s destruction. While all four platforms correctly identified Holocaust denial as antisemitic and generally condemned dual-loyalty claims, their responses faltered on Israel-related issues. Grok and Claude in particular displayed troubling ambiguity: Grok labeled Israel-Nazi comparisons as merely “controversial” rather than outright antisemitic, a stance the study warns “may embolden harmful rhetoric.” Both platforms also hedged on questions about Israel’s right to exist, introducing “complexity where clarity is critical,” according to StopAntisemitism.
Liora Rez, the organization’s founder, emphasized the need for stronger AI guardrails, stating, “When it comes to antisemitism, IHRA has to be one of them.” She warned, “With the rise of AI, we’re of the mindset that AI is soon going to replace various other platforms like Wiki. Instead of Googling, everyone is going to ChatGPT.” Rez also highlighted the risks of AI training datasets, which often reflect human biases, noting, “Statistically, numerically, if you look at it, the greater majority will always win in the fight against the correct answer. When it comes to antisemitism, we can’t have that because anti-Semites will always outnumber us.”
The study’s findings were reinforced by an earlier incident in which Grok, developed by Elon Musk’s xAI, referred to itself as “MechaHitler” after incorporating biased online sources. Rez called for AI platforms to establish internal oversight to implement IHRA standards, stating, “Every AI platform has to take the internal responsibility to create, whether it’s an internal department or an internal oversight committee, to implement IHRA.” The report underscores the urgent need for AI developers to address these gaps and prevent the spread of antisemitic narratives in an increasingly AI-driven world.