The Risk Of Digital Disinformation
Disinformation, misinformation, deception, and traditional propaganda all use digital channels and social networks to spread their messages to the public. The challenge, and the risk, are complex, but can technology help? Can it reduce the risk of disinformation?
The intersection of digital disinformation with digital communication tools and platforms creates a perfect storm. On one side, attackers seek valuable targets: a voter choosing a candidate, an investor evaluating a stock, or a parent trying to understand a new medical treatment for their child. On the other side, social media platforms offer the data, analytics, and targeting tools needed to deliver these messages effectively.
It’s important to clarify that when I refer to an “attacker,” I’m not talking about you, me, or anyone who simply holds a differing opinion. Instead, I mean a coordinated network that employs deceptive tools such as bots, avatars, spoofed websites, algorithmic exploitation techniques, and various psychological manipulation methods to manipulate perception, narratives, and information with the objective of eroding trust, increasing polarization, or conducting illegal activities.
It’s easy to blame Facebook for everything related to disinformation, but the reality is more complex. Manipulated content follows users, and wherever users go, attackers follow. While we often encounter narrative manipulation on platforms like Facebook, X, and TikTok, it’s becoming increasingly prevalent on more private platforms such as WhatsApp, where smaller, more intimate groups can be targeted. This manipulation also occurs in product review feeds, on Discord, Instagram, and many other communication channels. Essentially, any platform where people discuss and communicate becomes a point of interest for attackers.
Can We Fight Digital With Digital To Reduce Risk?
Since these attacks occur in the digital realm, it makes sense to counter them with digital tools. For instance, we could monitor conversations and interactions across various platforms and channels to determine whether a discussion undermining a company’s stock is an isolated incident or part of a coordinated attack. We could, for example, identify chatter suggesting a planned violent attack against a minority group following an election event. Once we detect these patterns, we can report our findings to the social networks where the content is shared, prompting its removal, or notify the police. In theory, this approach seems viable; in practice, the situation is often much more complex.
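To make pattern detection concrete, here is a minimal sketch of one signal such tools often look for: many distinct accounts posting near-identical text within a short time window. Everything here, from the record format to the thresholds and helper names, is an illustrative assumption rather than a description of any real monitoring product.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post record: (account_id, timestamp, text).
# Thresholds are illustrative, not values from any real product.
MIN_ACCOUNTS = 5               # distinct accounts needed to suspect coordination
WINDOW = timedelta(hours=1)    # posts must land within this time window

def normalize(text: str) -> str:
    """Crude normalization so near-identical copy/paste posts collide."""
    return " ".join(text.lower().split())

def find_coordinated_clusters(posts):
    """Group posts by normalized text; flag clusters where many distinct
    accounts published the same message inside a short time window."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[normalize(text)].append((account, ts))

    flagged = []
    for text, items in by_text.items():
        items.sort(key=lambda item: item[1])
        accounts = {account for account, _ in items}
        span = items[-1][1] - items[0][1]
        if len(accounts) >= MIN_ACCOUNTS and span <= WINDOW:
            flagged.append((text, sorted(accounts), span))
    return flagged

if __name__ == "__main__":
    base = datetime(2025, 1, 15, 9, 0)
    # Six accounts push the same message minutes apart; one organic post differs.
    posts = [(f"user{i}", base + timedelta(minutes=i),
              "Sell $ACME now - fraud exposed!") for i in range(6)]
    posts.append(("organic_user", base, "I think $ACME had a weak quarter."))
    for text, accounts, span in find_coordinated_clusters(posts):
        print(f"Possible coordination: {len(accounts)} accounts, span {span}: {text!r}")
```

Real systems layer far more on top of this, such as fuzzy text matching, account-age signals, and network analysis, but the core idea is the same: coordination leaves statistical fingerprints that isolated opinions do not.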
An increasing number of companies are developing tools designed to detect content, monitor conversations, understand their context, and identify the sources of those conversations. These companies vary in their capabilities, including the extent, depth, and range of their detection; their ability to navigate different languages and topic-specific vocabulary; and the breadth of topics they can monitor for potential threats.
There are notable differences among the various solutions, so expertise is required to identify the right platform for a specific challenge, such as criminal activity, national security, or business threats. Most solutions available in the market are not fully automated; analysts with domain-specific expertise are involved in interpreting the collected data.
This means that these countermeasures are not simple mobile apps that can be installed on our phones to “solve the problem.” Instead, they are sophisticated professional tools intended for use by governments, large businesses, or organizations.
Technological Challenges When Combating Disinformation
Cost is an important factor. Cutting-edge solutions and dedicated experts are expensive, so organizations need to be strategic about the content they search for, focusing on the most vulnerable topics that could serve as openings for an attack. No one can afford to scan the entire internet at all times, and budget constraints can limit the technology’s effectiveness. The key lies with expert analysts who have domain expertise: they can guide the search and uncover disinformation amid the vast array of posts, videos, shares, likes, audio clips, memes, and the overall richness of the online social experience, as sketched below.
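To illustrate the budget point, here is a hypothetical sketch of analyst-guided scoping: instead of collecting everything, an analyst-curated watchlist of vulnerable topics narrows what gets pulled in for triage. The topic names and terms are invented for the example.

```python
# Illustrative analyst-curated watchlist: only posts touching vulnerable
# topics are collected for review, keeping monitoring costs bounded.
WATCHLIST = {
    "stock_manipulation": ["short squeeze", "$ACME", "accounting fraud"],
    "election_integrity": ["ballot", "rigged", "polling station"],
}

def route_post(text: str) -> list[str]:
    """Return the watchlist topics a post matches, for analyst triage."""
    lowered = text.lower()
    return [topic for topic, terms in WATCHLIST.items()
            if any(term.lower() in lowered for term in terms)]

print(route_post("Heard $ACME is hiding accounting fraud"))
# -> ['stock_manipulation']
```

The scoping itself is the expert judgment: the tool only finds what analysts have decided is worth looking for.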
Detecting offensive content is only the first step; removing it poses a significant challenge. Some companies have better access to and relationships with social networks, but there are instances where these networks are uncooperative or uninterested in removing the identified content. Some content clearly falls under criminal activity or national security concerns; other cases are more nuanced and may be political in nature or represent what some consider valid opinions, even if those opinions are toxic and harmful. Even when there is evidence that the content is part of an inauthentic, coordinated campaign, it may still not be removed.
Scale is also a challenge. A recent post claimed that Politico, the online news site, had received $8 million from USAID. The claim is false, yet the post garnered 15,000 shares and reached 80 million views within two days, and this kind of misleading content is likely to remain online indefinitely.
One of the biggest challenges hindering the advancement of powerful and effective technologies aimed at mitigating content manipulation is the lack of industry recognition. There is a significant gap between the scale of the problem—validated by numerous case studies—and the acknowledgment it receives from decision-makers. For instance, a study suggests that nearly 50% of all S&P 500 companies are targeted by fake news attacks. Other research indicates that AI-generated content can effectively deceive investors, and there is considerable evidence showing the impact of these attack campaigns on societal activities, contributing to polarization within communities. Since 2022, the World Economic Forum has identified disinformation as a top risk, consistently ranking it among the top five risks each year.
This raises the question: Will technology save us from disinformation? It may not save us entirely, but it can certainly help a lot.
Reducing The Risk Of Disinformation
Let’s start with the most important point: states and businesses must recognize the risks they face. These risks are real, and there are numerous examples of how disinformation affects societies and organizations. To counter these threats, organizations need a robust strategy that enhances their resilience and defenses against narrative attacks.
Education and upskilling can help bridge the gaps where technology falls short. Just as we have learned to identify high-sugar foods in the supermarket or to be skeptical of emails asking us to reset our passwords, we can also learn to recognize the signs of authentic online information versus misleading content.
Additionally, the right technological solutions must be implemented. Operating a business without a firewall or two-factor authentication to protect digital assets is simply unwise. The same logic applies to safeguarding brands and organizations from manipulative content. It’s time to establish firewalls, procedures, and mitigations to protect ourselves from these sophisticated attacks and their harmful impact.