The vast majority of viral misinformation about the Israel-Hamas war being posted on X (formerly Twitter) is being pushed by verified users, according to a recent study by NewsGuard — a for-profit organization that rates the trustworthiness of news sites. After analyzing the 250 most-engaged X posts between October 7th and October 14th that promoted incorrect or unverified information relating to the war, researchers at NewsGuard found that verified X accounts were behind 74 percent of those posts.
The 250 posts analyzed in the study promoted one of 10 false or unsubstantiated war narratives identified by NewsGuard, including claims that CNN had staged footage of its news crew under attack in Israel, and videos claiming to show Israeli or Palestinian children in cages. In one week, the 250 posts collectively received 1,349,979 engagements (including likes, reposts, replies, and bookmarks) and were viewed over 100 million times globally. 186 of these top 250 posts came from verified, blue-checked X accounts.
NewsGuard’s analysis suggests that the algorithm boosting verified X accounts is “crucial” to false claims going viral
In the study, NewsGuard criticizes X for handing out blue-check verifications to anyone willing to pay $8 per month. Before Elon Musk purchased Twitter, those same blue checks were useful in reliably identifying the likes of celebrities, politicians, and journalists. “In addition to the appearance of credibility afforded to premium users by a blue badge, they are algorithmically boosted by the platform,” said NewsGuard. “While the exact details of how X boosts and downranks (lowering a post’s position in users’ feeds) is undisclosed and therefore unclear, NewsGuard’s analysis suggests that the boost is significant, if not crucial, to claims going viral.” The service began removing blue checkmarks from legacy verified accounts that refused to pay for premium subscriptions back in April.
In recent weeks, Musk has promoted X as a platform for ‘citizen journalists’ and praised its Community Notes feature for “improving the accuracy of information.” However, NewsGuard found that just 79 of the 250 posts were flagged for misinformation via X’s Community Notes feature. In other words, Community Notes failed to correct or identify misinformation almost 70 percent of the time. NewsGuard’s findings echo an NBC News report from October 10th that showed how volunteers behind the community fact-checking feature struggled to keep up with the flood of misinformation that followed the Hamas attack on Israel, meaning notes could take hours or days to be approved and some posts were never labeled at all.
Misinformation was also found across Facebook, Instagram, and TikTok, but would go viral on X before spreading to other platforms
The deluge of misinformation isn’t limited to X. NewsGuard says it also identified false or unsubstantiated information about the Israel-Hamas war on Facebook, Instagram, TikTok, Telegram, and more, but the study focused on X because it has publicly reduced its moderation efforts. NewsGuard’s investigation also found that misinformation about the war in Israel would go viral on X before spreading to other platforms like TikTok and Instagram.
These issues haven’t gone unnoticed by global regulators — last week, the European Union opened an investigation into X to ensure the platform is complying with rules under the Digital Services Act (DSA) amid the “alleged spreading of illegal content and disinformation.” The EU has also launched similar investigations into Meta and TikTok.