Trolling, memes and deepfakes: How AI is thickening the fog of war

The US and Iran are using technology to obstruct coverage of the current conflict. Five experts discuss how AI is reshaping war reporting and the news ecosystem
Iranian people attend a ceremony marking 40 days since schoolchildren were killed in a strike on a girls' primary school in Minab, amid the U.S.-Israeli conflict with Iran, in Tehran, Iran, April 7, 2026. Majid Asgaripour/WANA (West Asia News Agency)

War has never been fought only on the ground. Clausewitz’s concept of the “fog of war” once described the uncertainty and confusion that cloud battlefield decision-making. Errol Morris’ 2003 documentary made the phrase a shorthand for the moral and informational ambiguities of modern conflict. But in the digital age, where war is also filmed, edited and promoted online, the fog is getting thicker and wars more difficult to cover.

The conflict between the United States and Iran makes this point clearer than ever. As images, videos, and narratives flood social media, it is becoming ever harder to tell what’s real and what’s not, with the rise of AI and changes to digital platforms reshaping how war is seen and understood.

This is not the first conflict since the launch of ChatGPT. But it may be the first in which generative AI has played a key role in the information war.

In 2026, AI-generated content has surged across social media, both in volume and visibility. Fake drone footage, fabricated satellite images, edited clips, and synthetic statements are spreading widely, often reaching millions of viewers. 

In earlier conflicts, such as the early Israel-Hamas war, misinformation still relied more on recycled or miscaptioned real footage. Now, even official accounts are openly sharing false content. To understand how these narratives spread (and how to cut through them), I spoke to five investigators, researchers, and journalists working on the front lines of this treacherous information environment.

1. A war of memes

The US and Iran are waging a parallel struggle over narrative, image, and public perception online. In doing so, both camps have adopted a distinctive style of communication that speaks fluently the language of the internet: trolling. 

What might once have been dismissed as online provocation has increasingly become part of the grammar of geopolitical messaging, where irony, mockery, and spectacle are used to project power, ridicule opponents, and shape how audiences interpret the conflict.

On the US side, official White House accounts have posted videos featuring drone footage of bombings and strikes intercut with clips from films such as Top Gun and Braveheart, as well as references to video games like Wii Sports.

The Iranian camp, by contrast, has fully embraced AI-generated media, producing Lego-style animated videos, obviously fabricated deepfakes, stylized music videos, and even old, unrelated funny clips as in this tweet from Iran’s embassy in Spain. 

The US projects a message of military dominance and authority. Iran mocks Trump and US foreign policy, using humour and parody to undercut American authority and adapting its messaging to different audiences around the world.

While deepfakes and other types of disinformation aim to deceive audiences, AI and non-AI slop pursue a different goal: despite its obvious fakeness, this content is used to push a specific narrative and worldview.

Emerson T. Brooking is director of strategy at the Atlantic Council’s Digital Forensic Research Lab and the author of the book LikeWar: The Weaponization of Social Media. He says Iranian and US propaganda tap into two different sides of internet culture, but he thinks both represent a new generation of war propaganda which started to take shape after Hamas’s attacks on 7 October 2023.

Research supports the notion that this is not an entirely new phenomenon: in Ukraine, memes rallied around battlefield defiance and Russian embarrassment. But after 7 October, memes were indeed used to justify retaliation, challenge sympathy for Gaza, or accuse opponents of selective outrage, making the attacks a recurring engine of propaganda.

This time, both Iranians and Americans have been leaning into different internet aesthetics while pushing different narratives. For example, Iran has presented itself as a global defender against US aggression, painting Trump as a ‘puppet’ of Israeli Prime Minister Benjamin Netanyahu while taunting him about the Jeffrey Epstein files.

“The White House is leaning into a different internet cultural aesthetic, with sizzle reels and supercuts that incorporate video game imagery and periodic uses of AI,” said Brooking. “There is no story in that content. It is a series of explosions or acts of destruction. The subtext is: if you do not do what we say, we will do more of this to you.”

Brooking points to Iran’s famous Lego videos as the real innovation of the war. “They are so novel that people keep watching them,” he told me. “They are actually quite long, running several minutes. But the point is that they are telling a story, and you only really get it if you sit through them.”

This flood of memes, parody and slopaganda represents a real shift in information warfare. As combat is presented through playful or entertaining lenses, it can dehumanise and desensitise the public to civilian harm. That’s the view of Sam Dubberley, director of the Technology, Rights & Investigations Division at Human Rights Watch. Dubberley and his team use open-source investigation tools (often known as OSINT) to verify, expose and document human rights abuses.

I recently met Dubberley at a neighbourhood coffee shop in Oxford. During our conversation, he shared his concerns about this trend. The rise of war memes, he said, is not a problem for verifying the events his team investigates, but it does raise a human rights issue.

“Our great fear is a ramping up of the rhetoric of war as a game,” he said. “For us, the important thing in conflict is to minimise civilian harm as much as possible. But if you are using memes, videogames or Lego videos, war doesn’t seem real. If you’re having this kind of memeification of war, the rhetoric then ramps up, which could lead to more conflict and more civilian harm.”

Alexios Mantzarlis is a true pioneer of fact-checking. He co-founded the Italian fact-checker Pagella Politica and was the founding director of the International Fact-Checking Network, which he led from 2015 to 2018. He recently founded Indicator Media, an outlet that specialises in open-source reporting and online investigations, as well as studying and exposing digital manipulation.

Mantzarlis told me that AI-generated content is most effective and most misleading when it appears alongside real material, because people are less able to scrutinise it in the quick, distracted way in which they usually consume information online. Even when viewers could recognise something as fake if they paused to examine it, in practice people just scroll down and take in a quick impression of what they see.

“Unfortunately, for most people, even the Iran war doesn’t really matter that much,” he said. “So, as they scroll down, [AI-generated content] just sticks around in the back of their mind whether it is realistic or not.”

2. The liar’s dividend    

Governments are not using AI just for trolling. More notably, and perhaps more insidiously, they are also deploying it to spread false narratives and manipulate global audiences.

Propaganda has always been part of war. But according to the sources I spoke with, AI and social media have amplified it to an almost unfathomable scale. It’s not just AI slop. Bad-faith actors are using these tools to fabricate military footage and even deepfake images of the war’s victims.

The US attack that killed more than a hundred Iranian girls in a school in Minab became perhaps the clearest example of this. After the strike, false and AI-generated images circulated alongside authentic images of the victims and their graves, making even real evidence easier to doubt.

Mahsa Alimardani, associate director of the Technology Threats and Opportunities programme at WITNESS, has seen this in her own investigations. Her team has received an unprecedented number of requests for AI forensic examinations in recent weeks. She says that the problem we are seeing in Iran is the structural collapse of trust in authentic content and documentation. 

“What we're seeing in Iran is a textbook case: opposition media and diaspora accounts dismissed verified images of civilian casualties from the Minab school strike as AI-generated or recycled, based on nothing more than aesthetic judgments,” she explained. “‘The lighting is too good,’ ‘it looks like a performance.’ No forensic methodology, just vibes. And those claims spread widely before fact-checkers and other investigators confirmed the images were authentic.”

Alimardani is Iranian herself, so she has also been receiving requests from friends, asking her to verify what they see online amid the fog of war.

“Iran is this laboratory for the worst types of pollution that can exist in an information space,” she told me. “This is a laboratory for us to see worst-case scenarios, especially the kind my team at WITNESS has been talking about for a long time: how AI is really going to affect what we trust and what we believe.”

Manisha Ganguly operates in a similar space. She is the visual forensics lead at the Guardian and a pioneer in using OSINT to investigate war crimes. While she doesn’t rely on single artifacts or sources in her own work, AI-generated images remain a concern because they can make false or misleading information appear credible.

“The influx of AI-generated satellite imagery is allowing state-aligned actors to cosmetically validate official accounts, or knowingly, incorrectly apply OSINT to align with ideological narratives,” said Ganguly.  

She offered the example of the Tehran Times, a state-linked English-language newspaper that shared a post on X with satellite images of “an American radar in Qatar” supposedly destroyed in an Iranian drone strike. The images were found to be AI-generated, yet the post was viewed almost a million times under the banner of what seems to be a legitimate news outlet.

That example points to a broader problem. Once AI-generated images circulate under the authority of seemingly legitimate outlets, they do not just spread falsehoods but also erode trust in authentic evidence. 

Alimardani argues that AI has introduced a new kind of fog to war, one in which real photos can be dismissed as fake and fake images can be used to depict real events. Once even a single fabricated image is exposed, it can be used to undermine trust in genuine evidence as well, making reality itself harder to verify and even real evidence of war crimes easier to deny.

“The lies spread much faster and we haven’t quite tackled how to deal with this. This has created just a lack of trust for everyone,” she told me.

Dubberley from Human Rights Watch said the growing volume of deepfakes has not fundamentally changed how investigators examine potential war crimes, but it has made the work slower and more difficult for them. Investigators can quickly identify obvious fakes. But the wider spread of false content sows broader public doubt, creates more noise to sort through, and makes it harder to establish the facts of what actually happened.

“While it doesn’t affect our investigations, it slows us down,” Dubberley said. “It makes people question everything and it takes us longer to pierce through this fog of the noise online and in social media. That’s what’s challenging.”

The slowdown matters beyond public debunking. Organisations like Human Rights Watch and Bellingcat are also preserving digital evidence for possible future legal proceedings. AI adds another layer of authentication: investigators must now show not only where and when an image was taken, but whether it was synthetic or manipulated.
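
To make that chain of custody concrete, here is a minimal sketch of the general technique in Python: fingerprinting a media file with a cryptographic hash at the moment of collection, so that any later alteration can be detected. The file names and fields are hypothetical, and this illustrates the underlying idea rather than the actual workflow of Human Rights Watch or Bellingcat.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def preserve_evidence(path: str, source_url: str, collector: str) -> dict:
    """Fingerprint a media file at collection time.

    The SHA-256 digest fixes the file's exact bytes: if even one pixel
    is later edited, the hash no longer matches, which is the basis of
    a verifiable chain of custody.
    """
    data = Path(path).read_bytes()
    record = {
        "file": Path(path).name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "size_bytes": len(data),
        "source_url": source_url,  # where the item was found online
        "collected_by": collector,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
    # Append-only log: entries are added, never rewritten, so the log
    # itself can later be checked for tampering.
    with open("custody_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage: any downloaded video or image works the same way.
print(preserve_evidence("strike_footage.mp4",
                        "https://example.com/post/123", "analyst-01"))
```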

3. How to fight propaganda 

Most of these narratives spread online. But under which conditions do they become mainstream? Online platforms are not just channels for distribution. They are also part of the machinery that helps create, shape, and amplify this kind of content. 

Brooking, the investigator from DFRLab at the Atlantic Council, said algorithms are central to this process. “No social media platform today allows you to discover content solely based on who you follow or what your stated interests are,” he said. “Algorithms have been instrumental. You would not have this kind of information conflict without them.”

That makes provenance – the ability to verify where an image came from and how it has been altered – increasingly important. But Alimardani said standards such as C2PA are not yet widely deployed enough to help most people during fast-moving conflicts. “Most phones don’t sign images at capture, most models do not embed provenance when they create content, and most platforms don’t display provenance signals to users,” she said. “So in terms of the immediate information environment around, say, the Iran conflict, provenance isn’t yet a factor.”
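
The core idea behind provenance standards like C2PA can be sketched in a few lines: a capture device signs a hash of the image with a private key, and anyone holding the matching public key can later confirm the bytes are unchanged. The sketch below, in Python with the `cryptography` library, is a simplified illustration of that idea, not the actual C2PA format, which embeds a signed manifest, including edit history, inside the media file itself.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At capture: the device's private key signs a digest of the image bytes.
device_key = Ed25519PrivateKey.generate()
image_bytes = b"...raw image data straight from the sensor..."
signature = device_key.sign(hashlib.sha256(image_bytes).digest())

# Later: a platform or fact-checker verifies with the device's public key.
public_key = device_key.public_key()

def is_unaltered(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

print(is_unaltered(image_bytes, signature))         # True: bytes untouched
print(is_unaltered(image_bytes + b"x", signature))  # False: any edit breaks it
```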

If adopted widely, provenance would not stop misinformation from spreading, Alimardani said, but it could give authentic content a verifiable chain of custody. The bigger issue, she said, is the collapse of trust in authentic documentation, as happened in the case of the Minab school. “This is what we call the liar’s dividend. The mere existence of synthetic media gives people – and especially bad actors – a rhetorical tool to dismiss real evidence,” she said.

Mantzarlis from Indicator Media stressed this is an especially difficult moment for information integrity. Platforms have rolled back some of their interventions and pulled back support for journalism at the worst possible time. Even so, he argued, platforms still have a responsibility to reduce harm, and this starts with recognising that not all AI content poses the same kind of threat.

Mantzarlis drew a distinction between AI content that is deliberately deceptive and AI content that is merely low-quality or spammy, and argued each requires a different response. 

“We need takedowns, labels, and explicit interventions for the truly fake material,” he told me. “But we also need some broader agreement to contain the spaces in which visibly fake but still harmful slop exists – either because it is hateful, or because it is propaganda pushed by state media or authoritarian regimes. It may not violate platform policies, but that does not mean it should be force-fed and available to everyone at all times.”

Platforms are not the only actors making verification more difficult. The companies that provide satellite imagery of the Middle East are currently restricting or limiting access to it. Many of the investigators I spoke to found this deeply concerning: without reliable imagery, it is far harder to verify events on the ground, which creates more room for deception.

“With the new imagery restrictions being imposed by these commercial satellite providers, this process [of verification] is significantly delayed and is harming public interest reporting,” Ganguly said. 

Despite these challenges, journalists, investigators and fact-checkers still serve as an evidentiary authority tasked with piercing through this AI-powered fog of war and providing a counterweight to state-sponsored propaganda.

Many of the experts I spoke to said that the best defence against AI-driven disinformation is still basic reporting: being on the ground, talking to trusted sources, and understanding what is credible.

Mantzarlis argued that, for journalists who cannot access a place directly, that means being transparent about uncertainty, sharing what is known and unknown before full verification is complete, and using available AI and other technological tools. 

“It is a continuous kind of escalation between defenders and their opponents. Fact-checkers, journalists, and platforms are always going to be playing catch-up. But that does not mean they have to be miles behind,” he said. 
