War of Deception: Iran’s Digital Invasion

A foreign terror-linked regime is reportedly flooding Americans’ feeds with fake “locals” to shape what you think about war, Israel, and the United States.

Story Snapshot

  • A Clemson University research team identified 62 social media accounts it linked to Iran’s Islamic Revolutionary Guard Corps (IRGC), many posing as Western users.
  • The network operated across X, Instagram, and Bluesky, pushing anti-Israel, anti-U.S., and pro-Tehran narratives during the Iran–U.S.–Israel conflict.
  • Researchers reported a sharp pivot after the Feb. 28 U.S.-Israel airstrikes, including greater use of AI-generated imagery and video.
  • Bluesky removed the listed accounts, while Meta said some accounts were inactive and others had low follower counts; X’s direct response was not cited.
  • Iran’s internet restrictions and blackouts have compounded the information fight by limiting what ordinary Iranians can share outward.

Clemson Report Maps a Coordinated Influence Network

Clemson University’s Media Forensics Hub reported that 62 accounts tied to Iran’s IRGC operated as a coordinated inauthentic behavior network across major platforms. The accounts allegedly presented themselves as users from the Americas and the British Isles, then pushed anti-Israel and anti-American content designed to exploit political and social divisions. Researchers said X hosted most of the activity, including tens of thousands of posts with potentially massive reach.

Researchers traced the earliest accounts to December 2023 and described a pre-war phase focused on stirring domestic division in the U.S. The report’s key finding is not simply “misinformation,” but an organized effort attributed to a foreign military organization operating under false identities. That matters for Americans who value transparent debate, because manipulated engagement can distort what looks like authentic public opinion.

War-Time Pivot: From Divisive Culture Content to Pro-Tehran Propaganda

The network’s messaging reportedly changed after Feb. 28, when the U.S. and Israel launched surprise airstrikes on Iranian nuclear sites, military assets, and leadership targets. Clemson’s researchers said the accounts pivoted toward war propaganda, including pro-Tehran narratives and material meant to exaggerate Iranian battlefield success. This shift is consistent with influence operations that “go loud” during crises, when fear and uncertainty make audiences more vulnerable.

Separately reported analysis of the conflict’s information environment described a surge of generative AI and recycled visuals misrepresented as current events. Examples cited in research coverage include old footage, simulations, and mislabeled clips amplified to claim dramatic strikes or destruction that fact-checkers later disputed. The practical takeaway for readers is simple: viral war clips can be engineered, and the speed of sharing often outruns verification—especially when platforms reward engagement.

Platform Responses Differ, Raising Enforcement Questions

Platforms reacted unevenly. Bluesky said it removed all the listed accounts connected to the reported network. Meta said it took action against violators and emphasized that roughly a third were inactive while others had limited followers. The research summaries did not cite a direct X statement about the Clemson findings, although X reportedly implemented a March 4 policy change to suspend monetization for unlabeled AI war content.

That unevenness matters because coordinated networks tend to “platform-hop” when pressure rises, reappearing under new handles or migrating to whichever platform enforces least aggressively. The Clemson findings also highlight a structural weakness: users often can’t tell whether an account is a real neighbor or a foreign operator, and moderation systems frequently act after narratives have already spread. The report’s documentation provides a starting point, but the full scale may be larger.

Iran’s Internet Blackouts Add Another Layer to the Information War

Iran’s domestic internet restrictions have also shaped the battlefield for narratives. Research sources described prolonged shutdown conditions and severely reduced connectivity, limiting the ability of everyday Iranians to communicate with the outside world while pro-regime voices receive preferential access. That asymmetry can help state-linked messaging dominate outward-facing channels, because independent footage and firsthand accounts are harder to obtain and verify in real time.

For Americans following the conflict, this combination—foreign-linked fake personas abroad and censorship at home—creates a fog where strong claims travel faster than evidence. Clemson’s warning to monitor at-risk communities during crises speaks to a broader reality: adversarial propaganda targets social fault lines, not just foreign policy opinions. When Americans argue using manufactured “facts,” the winners are the operators who seeded them.

What Viewers Can Do Without Falling for Censorship or Spin

The research does not claim every misleading post is Iranian-made; other actors, including Russia-aligned operations reported elsewhere, may be using similar tactics. Still, the documented IRGC-linked network reinforces why Americans should demand transparency rather than more speech policing. Practical steps include pausing before sharing war clips, checking whether footage is recycled, and looking for corroboration across multiple outlets. A free society depends on informed citizens, not algorithmic manipulation.

Congress, platforms, and the public can debate policy responses, but the constitutional priority should remain clear: protect Americans’ right to speak while exposing covert foreign influence that masquerades as domestic consensus. The Clemson report offers concrete, testable signals—fake personas, coordinated posting, and narrative alignment around key events—that can guide enforcement without turning political disagreement into a pretext for censorship.

Sources:

Iranian regime spreading anti-Israel propaganda across dozens of social media accounts: report

The Use of Generative AI and Disinformation in the 2026 US-Israel Conflict with Iran

Offline by decree: Iran’s war on the internet

State actors use visual misinformation in Iran war

Iran Update, Morning Special Report, March 1, 2026