Star Kashman on Antisemitism and Bad Information
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): The Good Men Project
Publication Date (yyyy/mm/dd): 2025/03/12
Star Kashman is a legal expert specializing in cybersecurity, privacy, and social media law. She authored the first scholarly article on Search Engine Hacking (Google Dorking) in the Washington Journal of Law, Technology & Arts and expanded on this research in Law360. She has led discussions on cybersecurity law, including an event at Brooklyn Law School’s Incubator & Policy Clinic (BLIP) with top legal and security experts. Recognized by the Office of the Director of National Intelligence (ODNI) for her research, she now works at C.A. Goldberg PLLC, tackling cyberstalking, doxing, deepfakes, and tech-based personal injury litigation.

Social media platforms amplify antisemitism by prioritizing high-engagement content, which is often driven by outrage and emotional responses. Algorithms inadvertently promote harmful narratives, as seen when Osama bin Laden’s letter circulated on TikTok. Paid promotion further enables the spread of antisemitic content. Legal challenges exist both domestically and internationally, with the U.S. prioritizing free speech under Section 230. Misinformation distorts history and current events, fueling conspiracy theories that reframe antisemitic rhetoric in “woke” narratives. AI tools can help detect antisemitism but require human oversight. Global jurisdictions complicate accountability, and while networks exist to combat online hate, platforms often prioritize profits over regulation.
Scott Douglas Jacobsen: How do social media amplify antisemitic content?
Star Kashman: Social media amplifies antisemitic content by blindly rewarding posts that receive high engagement. This often prioritizes content that provokes a strong emotional response, like outrage, shock, or disbelief. Algorithms push out high-engagement content because it leads to more comments, debates, shares, and longer watch times from the platform’s users. For example, when Osama bin Laden’s letter circulated on TikTok, it gained traction not just among those who agreed with it but also among those who were outraged, which inadvertently boosted its visibility. This amplification is algorithm-driven and often leads to harmful narratives spreading faster and further. Social media platforms also tend to accept funding from nearly anyone, allowing users to pay to push out their content and accounts or to pay for advertisements. For example, Meta admitted that Russia had paid around $150,000 to push out misinformation during the 2016 U.S. election. It will likely come out in a few years that anti-Israel and anti-Jewish organizations and terrorist groups funded content, content creators, and accounts. Additionally, funding can be put behind creating artificial engagement to push out antisemitic content, something the public has already recognized. Bots on numerous social media applications automatically leave boilerplate hate comments or arguments on Jewish creators’ content, or template supportive comments on hateful antisemitic posts to boost them.
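A minimal sketch can make this mechanism concrete. The Python example below uses invented weights and field names; nothing here reflects any real platform’s formula. It shows how a ranking function that rewards raw engagement treats an outraged comment exactly like a supportive one:

```python
# Minimal, hypothetical sketch of engagement-weighted ranking.
# Weights and fields are invented; no platform's real formula is shown.
from dataclasses import dataclass

@dataclass
class Post:
    comments: int       # outraged debate counts the same as praise
    shares: int
    watch_seconds: float

def engagement_score(post: Post) -> float:
    # Every signal adds visibility regardless of sentiment, so
    # rage-bait that draws arguments outranks calmer content.
    return 2.0 * post.comments + 3.0 * post.shares + 0.01 * post.watch_seconds

feed = [
    Post(comments=450, shares=120, watch_seconds=90_000),  # outrage-driven post
    Post(comments=12, shares=30, watch_seconds=8_000),     # neutral post
]
feed.sort(key=engagement_score, reverse=True)  # the outrage-driven post ranks first
```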
Jacobsen: Does this pose greater legal challenges domestically or internationally?
Kashman: Legal challenges exist in both domestic and international contexts, but they are more complex internationally due to vastly different regulations and goals. In the U.S., Section 230 protects platforms from liability for third-party content. We have laws in place to protect tech companies because U.S. courts have historically prioritized innovation, sometimes even over safety. Meanwhile, countries like China and Iran have stricter regulations on platform accountability. Globally, balancing free speech with combating harmful content is challenging, especially when each country has different standards for what constitutes hate speech or propaganda.
Jacobsen: How do mis- and dis-information play roles in the spread of both antisemitism and conspiracy theories?
Kashman: Mis- and disinformation play a big role in spreading antisemitism by rewriting history and distorting current events to dehumanize Jews, persuading the public that antisemitic or hateful attitudes toward Jews and Israel are justified or even noble. For instance, after the October 7 attacks in Israel, fabricated stories and AI-generated images were used to discredit verified accounts of violence against Israelis. In one case, an AI-generated image of a dog on a bed was circulated in place of visual evidence of burned Israeli babies and children, and used to claim that Israel lies and pushes out fake information. The goal was to shut down any narrative that would cause people to empathize with Israelis or Jewish people, and to create distrust of Jews and Israelis so that unfavorable media coverage would not be believed. So much misinformation and disinformation was pushed out regarding evidence of crimes that people can now believe almost only what they want to, unable to distinguish truth from fiction. Similarly, misinformation about the Jewish community’s origins and historical presence in the Middle East is frequently shared to justify harmful actions and to make Jews appear to be unrelated settlers who randomly chose Israel to invade and “steal” from Palestinians, in order to justify the actions taken on October 7. Such disinformation dehumanizes Jewish individuals and feeds into broader conspiracy theories that further spread hate.
Jacobsen: How do conspiracy theories, e.g., ‘globalists,’ ‘international banksters,’ etc., merge with antisemitic language for virulent narratives online?
Kashman: Conspiracy theories have evolved into a more “woke” narrative that blends into current cultural discussions while maintaining their antisemitic roots. Instead of using overtly racist language, modern narratives frame Jews as oppressors, calling them “globalists” or falsely accusing them of controlling media and finance. Terms like “Zio-Nazi” and references to “apartheid” and “genocide” are new iterations of old hate, cloaked in political activism to make them more palatable and less obvious to casual observers, and, more insidiously, turning language that is deeply sensitive to Jewish people against them. Jews are now often compared directly to the very oppressors who attempted genocide against them, in order to dehumanize Jews and cast them as oppressors.
Jacobsen: How can AI tools detect and mitigate antisemitism?
Kashman: AI tools can play a significant role in mitigating antisemitism by flagging patterns of harmful language for human review. Certain keywords and phrases, such as references to “genocide,” “apartheid,” or “Zio-Nazi,” should trigger content moderation systems. However, human oversight is crucial to avoid false positives. For example, while a Palestinian flag itself isn’t harmful, spamming it on unrelated Jewish creators’ content as a form of intimidation could be detected by AI systems and flagged for review; human oversight then ensures such content is not suppressed in contexts where it is not hateful or antisemitic.
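A minimal sketch, assuming a hypothetical trigger list and a simple review queue, shows how a flag-for-human-review pipeline of this kind might look; real moderation systems would combine trained classifiers with context signals rather than bare keyword matching:

```python
# Minimal sketch of keyword-triggered flagging for human review.
# The trigger list is hypothetical; matches are queued, never auto-removed.
import re

TRIGGER_PATTERNS = [r"\bzio[- ]?nazi\b", r"\bapartheid\b", r"\bgenocide\b"]
COMPILED = [re.compile(p, re.IGNORECASE) for p in TRIGGER_PATTERNS]

def flag_for_review(comment: str, review_queue: list) -> None:
    """Route matching comments to human moderators instead of deleting them,
    since the same words can appear in legitimate political speech."""
    if any(pattern.search(comment) for pattern in COMPILED):
        review_queue.append(comment)

queue = []
flag_for_review("Get off this app, you zio-nazi", queue)     # flagged for a human
flag_for_review("Great recipe, thanks for sharing!", queue)  # passes through
print(len(queue))  # 1 -- a human moderator makes the final call
```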
Jacobsen: What barriers do global jurisdictions present to online platform accountability?
Kashman: Global jurisdictions present major challenges due to differing legal standards for speech and platform regulation. The U.S. prioritizes free speech and innovation, often under-regulating platforms, while countries like China and Iran impose heavy restrictions. This disparity makes it difficult to enforce consistent accountability, as platforms must comply with local laws that may conflict with each other.
Jacobsen: Are there networks of legal experts, tech companies, and policymakers, to combat antisemitism? If so, who? (If not, why not?)
Kashman: While networks to combat antisemitism exist in traditional advocacy spaces, similar networks for the digital space are still underdeveloped. Tech companies often avoid forming such coalitions or hosting such discussions to minimize liability and costs, preferring to focus on maximizing engagement and profitability. Some organizations have taken steps to engage with tech companies, but a broader, more organized coalition is needed to address online antisemitism. Because reducing hateful content often directly conflicts with platforms’ interest in profiting from high engagement and rage-bait, a more serious effort is needed to push platforms to act against profitability for moral reasons.
Jacobsen: What legal precedents have been set to address antisemitic harm?
Kashman: Legal precedent in the U.S. for addressing online antisemitism is limited due to Section 230 protections and free speech doctrine, which have in the past been applied broadly, even in cases where they were a poor fit. Another issue is that the new wave of antisemitism often looks very different from the older, more readily recognized forms that are backed by existing case law and statutory language. For example, it may now be antisemitic to call a Jew a “Zio-Nazi,” compare them to Hitler, or claim that Israel is committing genocide while ignoring the harms that occurred on October 7. These are very different from the statements that courts and statutes have already deemed antisemitic. New cases will therefore struggle, because we have broad free speech rights and would have to prove new speech is hate speech without past precedent or statutory language to rely on. As for the legal protections for tech companies, things may be looking more positive in recent years. Recent cases have begun to chip away at broad protections like Section 230. The TikTok ban bill and emerging product liability cases involving social media suggest a shift toward holding platforms more accountable. Courts have also started recognizing that algorithmic promotion of harmful content may constitute the platform’s speech, potentially opening the door for new legal carveouts.
Jacobsen: Thank you for the opportunity and your time, Star.
