Screenshot from an AI-generated deepfake video published by the Israeli organization, presented as a testimony from an alleged Iranian victim.

Exposed: How Israeli AI and Social Media Networks Fuel a Global Deepfake Misinformation Campaign

Meet the Israelis behind AI-generated “sexual assault testimonies” of entirely imagined women. A misleading post amplified by Donald Trump has exposed an Israeli network using deepfakes, emotional storytelling, and social media to spread unverified claims and shape global narratives.

A misleading post amplified by US President Donald Trump has exposed more than just false information; it has revealed Israeli efforts to build a digital ecosystem where artificial intelligence, social media, and political messaging intersect to shape global narratives based on misinformation.

The controversy began when Trump reshared a post from Etal Yaoby, an Israeli figure known for publishing misleading and fake content. The post featured images of eight women and claimed they were Iranian prisoners on the verge of execution. Trump publicly urged Iranian authorities to release them, framing the claim as a humanitarian appeal.

But the narrative quickly collapsed under scrutiny.

Iranian opposition activists abroad were among the first to respond. They rejected the claim and provided evidence contradicting it. According to opposition figures, at least one of the women—Dr. Golnarch Naraghi—has been free for months. Others also raised doubts about the authenticity of the images and the identities presented.

Iran’s judiciary later issued a formal statement denying the allegations. It confirmed that none of the women faced execution. Some had already been released, while others were dealing with legal charges that could lead to prison sentences if confirmed in court. The judiciary also pointed to a pattern of fabricated reports, including previous false claims about individuals allegedly sentenced to death who were later found to be free.

This rare overlap between state denial and opposition rebuttal raised deeper concerns. If both sides agree the story is false, where did it, and other similar stories, originate?

A Manufactured Narrative

The answer may lie in a broader, coordinated influence strategy that leverages emerging AI technologies.


Jordan Jesin, a consulting intern at ‘Generative AI for Good’, posing with Israeli diplomats.


An Israeli initiative known as “Generative AI for Good” has drawn increasing attention in this context. Even if it was not directly behind the story of the eight women, the organization has been linked to more concerning forms of misinformation. It presents itself as a platform that uses artificial intelligence to drive positive social impact. However, the organization's methods suggest a more complex role in shaping political narratives.

The group produces highly emotional video content featuring AI-generated characters presented as real individuals. These digital figures deliver testimonies about alleged human rights abuses that align with pro-Israel narratives. The videos frequently combine authentic footage with synthetic elements, creating hybrid narratives designed to appear credible and to mislead audiences. Crucially, many of these testimonies lack independently verifiable evidence and cannot be fact-checked.

Speaking to Ynet, the organization’s founder, Shiran Mlamdovsky Somech, proudly said: “We experienced firsthand how technology can be used to create empathy, to really generate emotion in very broad audiences and lead to a widespread viral response.”

This technique allows content to appear credible while remaining largely untraceable. It relies on emotional engagement rather than factual transparency, increasing the likelihood of viral spread across platforms.


Inside the Network

The leadership behind the initiative provides further insight into its strategic direction.

Founder Shiran Mlamdovsky Somech has openly discussed the power of AI to influence public perception. In interviews with Israeli media, she emphasized the ability of technology to “generate emotion” and create large-scale engagement. She described these efforts as part of a broader struggle over global narratives.


Shiran Mlamdovsky Somech, the founder of ‘Generative AI for Good’: “Israel has the opportunity not only to survive, but also to lead the way in using advanced technologies to shape positive narratives and protect its future.”


Her work is closely linked to major Israeli and international institutions. Collaborations have included organizations such as the Anti-Defamation League, Israel’s Ministry of Diaspora Affairs, and networks directly associated with Unit 8200, Israel’s elite military intelligence division.

“He who controls the past controls the future; he who controls the present controls the past”—a line from 1984 by George Orwell—“sounds more realistic today than ever,” Shiran Mlamdovsky Somech wrote in a Hebrew op-ed published in Calcalist. “What if history could be rewritten at the touch of a button? What if any historical event, fact, or figure could change in an instant?” she added.

She explicitly states that, “used wisely,” the technology can be used to “combat disinformation and incitement directed at Israel,” create “positive narratives,” and “increase monitoring capabilities for hostile content distributed on social media,” adding that “Israel has the opportunity not only to survive, but also to lead the way in using advanced technologies to shape positive narratives and protect its future.”

Other members of the team bring similar backgrounds. Noa Rosenberg, a senior marketing figure in the organization, previously served in a psychotechnical unit within the Israeli military, according to her LinkedIn profile. Another member, Tal Harary, works as a creative director and storyteller. On her LinkedIn profile, she describes her role as “turning human stories into action” and crafting narratives that “move people” at the intersection of storytelling, social change, and AI innovation.


Noa Rosenberg, a senior marketing figure in the organization, previously served in a psychotechnical unit within the Israeli military.


Her emphasis on emotionally driven storytelling raises questions about how such narratives are constructed and presented, particularly when they appear as personal testimonies that cannot be verified independently. The broader team consists largely of Israeli professionals trained in analytics, finance, and strategic communication—fields often linked to influence and information campaigns.


Tal Harary works as a creative director and storyteller.

From Innovation to Influence

The organization’s own statements reinforce the strategic dimension of its work.

In published materials, its leadership argues that artificial intelligence can be used to “shape positive narratives” and strengthen national security. It frames misinformation not as a threat but as a space where states must actively compete.

This aligns with the growing concept of cognitive warfare, where influence over perception becomes as important as control over territory.

Notably, one of the group’s projects has been featured in NATO discussions on cognitive warfare and described as a “good deepfake.” The label itself has sparked concern among analysts, who warn that legitimizing any form of deepfake, especially in the arena of international diplomacy, risks normalizing the technology as a whole.


Once audiences accept manipulated content in one context, distinguishing truth from fabrication becomes increasingly difficult.