Following Israel's military strikes on Iran, a flood of AI-driven disinformation has emerged online, misleading viewers about military capabilities and public sentiment. Expert analysis highlights the difficulty of verifying information in this digital battleground.
Disinformation Surge Amid Israel-Iran Strikes: The Rise of AI in Conflict Narratives

The recent Israel-Iran conflict has triggered a massive wave of disinformation online, primarily fueled by AI-generated content, complicating the narrative surrounding the military exchanges.
The ongoing conflict between Israel and Iran has unleashed a tidal wave of disinformation across social media platforms, with numerous posts amplifying Tehran's military narrative. Since Israel's strikes began on June 13, a multitude of videos, many of them generated with artificial intelligence, have circulated online, making exaggerated claims about Iran's military power and depicting fabricated aftermath of supposed attacks on Israeli locations.
BBC Verify's examination identified several AI-generated videos, amassing over 100 million views collectively, aimed at misleading audiences. Among them, clips that falsely claim to depict the aftermath of Israeli strikes have gained traction, while pro-Israeli accounts have contributed to misinformation with outdated footage misrepresenting Iranian protests as signs of weakening government support.
As the conflict escalated, Iranian missile and drone strikes targeted Israel, triggering a barrage of online videos meant to enhance perceptions of Iran’s military dominance. Analysts noted that one account, Daily Iran Military, saw its follower count surge dramatically from 700,000 to 1.4 million in mere days, a reflection of the viral propagation of disinformation.
Commentators have described the volume of disinformation as "astonishing," with groups like Geoconfirmed revealing that unrelated videos and AI-created content are being shared under the guise of real events. Emmanuelle Saliba of Get Real emphasized that this represents an unprecedented use of generative AI during a conflict, noting that videos depicting missile attacks often exploit nighttime settings that complicate verification efforts.
Certain misleading posts have gained considerable attention, including an AI-created depiction of missiles striking Tel Aviv that alone drew 27 million views. Many alleged incidents, however, such as claims of downed Israeli F-35 aircraft, rested on imagery with obvious signs of AI fabrication. Analysts also noted recurring themes in the disinformation linked to Russian influence operations, suggesting a strategic intent to sow doubt about Western military equipment.
While pro-Iranian accounts ramp up disinformation, narratives suggesting internal dissent in Iran have also emerged, illustrated by fabricated clips of Iranian citizens allegedly voicing support for Israel. Meanwhile, speculation about US military involvement has prompted the circulation of AI-generated images showing B-2 bombers in Iranian airspace, heightening tensions and further obscuring the truth.
Major platforms including X, TikTok, and Instagram are grappling with the circulation of misleading content, with X's AI chatbot Grok at times misidentifying manipulated visuals as authentic. TikTok says it maintains strict guidelines against misinformation, yet reports indicate widespread sharing of doctored videos across platforms.
Research from the University of Notre Dame suggests that in politically charged environments, disinformation spreads rapidly when framed as stark binary choices, such as the us-versus-them narratives typical of war, prompting users to share sensational content that aligns with their identities. As the digital landscape increasingly intersects with real-world conflicts, the challenge of discerning fact from fiction has never been more crucial.