Sophisticated Tools to Disarm Disinformation: A Proactive Approach to Stopping Digital Deception
In this article, Maryam Saeedi, Assistant Professor of Economics at the Tepper School, explores innovative tools and strategies to proactively identify and disarm disinformation before it spreads, offering a new approach to maintaining the integrity of digital information.
In today’s digital age, it can be hard to determine if the articles we read accurately tell the full story. While critical evaluation skills help detect misinformation, the time spent fact-checking and analyzing content often results in the material being spread widely before it is corrected.
When you think of disinformation, you might recall propaganda from 50 years ago or more: fake radio broadcasts or misleading pamphlets designed to spread false information. In 2024, disinformation still spreads, but much of it now lives online, especially on social media. Unlike older forms of propaganda, the goal of today's disinformation is not to control the narrative but to sow confusion, discredit opposition, and disrupt the flow of legitimate information.
Today, online disinformation is the modern version of propaganda.
Sophisticated advances in technology complicate the situation further. Some bad actors have used the unfiltered nature of the internet to their advantage, attempting to influence elections overseas or launching disinformation campaigns to benefit their causes.
Current methods focus on either live moderation or ex-post rebuttal. Live moderation is costly and time-consuming, as a recent New York Times article notes, and it can prove ineffective if content goes viral before moderators verify it. A further concern is that the rise of generative AI tools can impede content moderation, since current tools rely heavily on text analysis. Ex-post rebuttal, meanwhile, has shown limited impact in previous research.
By the time the disinformation is fact-checked and corrected, it’s often too late: the lies have gone viral and achieved their purpose.
In response to these challenges, I have been part of a team researching ways to proactively spot disinformation accounts so that we can prevent them from distributing false narratives in the first place, rather than doing damage control after disinformation has spread.
Our method, which has an 85% accuracy rate, identifies malicious accounts by focusing on their network structure. We have found that disinformation ecosystems are highly interconnected, and this interconnection forms the backbone of their support system. To achieve high penetration in a social network, the few accounts that eventually start a disinformation campaign need a massive support system to echo their message.
Our methodology uses past disinformation events to learn the strategies of malicious actors and identify their support systems, something current methods do not do. Identifying the ecosystem lets us spot future disinformation campaigns promptly. This network-based approach can help keep the next attack from multiplying out of control while social media teams strive to catch up.
Our method shifts the focus from analyzing individual content to identifying the accounts likely to initiate disinformation campaigns. This way, we can proactively flag potential disinformation before it gains traction, preventing the problem rather than responding after the fact.
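To make the idea concrete, here is a minimal sketch of network-based flagging in the spirit described above. It is not the team's actual model: the account names, edges, labels, and the neighborhood-overlap score are all illustrative assumptions. The sketch scores each account by how much of its network consists of accounts labeled in past disinformation events, then flags tightly connected candidates before they post anything.

```python
from collections import defaultdict

# Hypothetical follower/retweet ties between accounts (undirected for
# simplicity). All names and edges are made up for illustration.
edges = [
    ("amp_01", "amp_02"), ("amp_01", "amp_03"), ("amp_02", "amp_03"),
    ("amp_01", "seed_x"), ("amp_02", "seed_x"), ("amp_03", "seed_x"),
    ("news_a", "news_b"), ("news_a", "reader_1"), ("news_b", "reader_2"),
]

# Accounts labeled from past disinformation events (the training signal).
known_bad = {"amp_01", "amp_02"}

# Build an adjacency map from the edge list.
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def suspicion_score(account):
    """Fraction of an account's neighbors active in past campaigns --
    a crude proxy for membership in an interconnected support ecosystem."""
    neighbors = graph[account]
    if not neighbors:
        return 0.0
    return len(neighbors & known_bad) / len(neighbors)

# Flag unlabeled accounts whose neighborhoods overlap heavily with
# known actors, before they distribute any content themselves.
flagged = sorted(a for a in graph
                 if a not in known_bad and suspicion_score(a) >= 0.5)
print(flagged)  # ['amp_03', 'seed_x']
```

A real system would use richer network features and a trained classifier, but the design choice is the same: the signal comes from who an account is wired to, not from what it has posted.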
This methodology is cost- and time-effective; it allows social media companies to protect their reputations, and it promotes strong business ethics. Since U.S. government regulations require social media providers to do their best to guard against and proactively prevent misinformation from being published, this approach helps achieve those goals.
As disinformation continues to evolve, so must our methods of combating it. By shifting focus from content analysis to network examination, we can stay ahead of those who seek to manipulate public opinion and undermine truth. Our research offers a promising new tool in the ongoing battle against digital disinformation, one that could significantly impact how we protect the integrity of our information ecosystem in the years to come.