Perry Carpenter is Chief Human Risk Management Strategist for KnowBe4, developer of Security Awareness Training & Phishing Simulation tools.

Some experts have raised concerns that public figures could exploit the now widely known ability of artificial intelligence to generate fake images, voices and other synthetic content.

It’s a concept referred to as the “liar’s dividend,” a term coined by Bobby Chesney and Danielle Citron in 2019. If fake information can be passed off as real, it stands to reason that real information can also be dismissed as fake. That’s the premise behind the liar’s dividend. Politicians, influencers, defendants and celebrities caught on video or audio doing or saying something that could land them in hot water could simply claim the evidence was fabricated.

The liar’s dividend holds significant implications for business leaders.

Synthetic Media And The Liar’s Dividend

“Is it live, or is it Memorex?” was a 1970s advertising slogan promoting Memorex cassette tapes. The claim? The sound quality was so good that listeners could not discern whether they were hearing a recording or a live performance. Today, with the rise of synthetic media, that ability to discern real from manipulated content is raising new concerns.

If you’re unfamiliar with the term, “synthetic media” refers to content created or modified using AI and other advanced technologies like deep learning algorithms and generative adversarial networks. In a world where it’s hard to tell what’s real from what’s fake, business leaders face risks they might not have previously considered.

Implications For Business Leaders

Synthetic media holds both promise and peril for business leaders. It can be used to generate education and training materials that can be quickly localized for multiple languages and regions, using not real people but their AI-generated likenesses and voices. It can be used to create realistic media for communicating with employees and customers at a fraction of the previous cost and time.

But I believe the rise of synthetic media also brings risk: it enables misinformation and fraud, letting bad actors circulate false images and audio that misrepresent businesses and their leaders. The liar’s dividend flips this script: it also allows real content to be dismissed as fake. “No, that wasn’t me using disparaging language to refer to a client.” “That wasn’t me harassing an employee.” “That wasn’t what I really said.”

The implications of the liar’s dividend are concerning. Someone accused of misdeeds could simply claim that the evidence presented against them is fake. As the public witnesses the rapid advance of synthetic media, it’s becoming increasingly difficult to prove that what’s fake is fake and what’s real is real.

Countering The Potential Peril

Combating the challenge of synthetic media and avoiding the erosion of public trust will require robust authentication methods, digital literacy and a commitment to truth-seeking.

The rise of generative AI represents a pivotal moment in the history of technology and society. Our ability to weather this transition will have major implications for the future of media, communication and how we perceive what is “real.” For now, businesses should:

1. Be transparent about content creation processes. Additionally, provide clear attribution for sources. You can create transparency in your content-generation processes by sharing details about your use of generative AI on your company’s website and by including disclaimers on copy created with generative AI. For instance: “This content was created with the assistance of generative AI tools.”

2. Train on digital literacy and how to identify misinformation. This training should extend not only to employees but also to customers and other key stakeholders.

3. Develop a response plan. Establish clear procedures for quickly responding to false claims about content or communications.

4. Share information in multiple formats. Businesses communicate via multiple channels, both traditional and digital. Sharing information in multiple formats creates multiple verification points. For example, make an announcement via a social media post, blog post, video, etc.

5. Protect digital assets to prevent unauthorized access that could lead to manipulation. This might include using antivirus software, data encryption, password protection, two-factor authentication, security audits and ongoing communication and security awareness training (see the first sketch after this list for a simple encryption example). Data protection requires a combination of both technological and human efforts.

6. Implement robust content authentication methods. These might include digital watermarking, blockchain-based verification and other ways of creating a verifiable chain of custody for digital content (see the second sketch after this list). Companies can also consider using AI-detection tools that can help identify manipulated content.
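To make the fifth recommendation concrete, here is a minimal Python sketch of encrypting sensitive data at rest using the cryptography library’s Fernet recipe. The data and key handling are illustrative assumptions, not a prescribed setup; in practice the key would live in a secrets manager, never in code.

# Minimal sketch of encrypting data at rest (assumes: pip install cryptography).
from cryptography.fernet import Fernet

# Illustrative only: a real deployment would load this key from a secrets
# manager or hardware security module, never generate and hold it inline.
key = Fernet.generate_key()
cipher = Fernet(key)

original = b"Q3 financials: draft, not for release"
token = cipher.encrypt(original)   # ciphertext that is safe to store on disk
restored = cipher.decrypt(token)   # only holders of the key can recover this

assert restored == original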

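For the sixth recommendation, here is a minimal sketch of one authentication approach: tagging published content with an HMAC-SHA256 signature so anyone holding the shared key can later verify it has not been altered. It uses only Python’s standard library; the key and press-release text are hypothetical, and production systems would more likely use asymmetric signatures or C2PA-style provenance metadata.

import hashlib
import hmac

# Hypothetical shared secret; in practice, use asymmetric keys so that
# verification does not require distributing the signing secret.
SIGNING_KEY = b"replace-with-a-securely-stored-key"

def sign_content(content: bytes) -> str:
    """Return a hex HMAC-SHA256 tag published alongside the content."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time to spot tampering."""
    return hmac.compare_digest(sign_content(content), tag)

press_release = b"Acme Corp announces record Q3 results."
tag = sign_content(press_release)

print(verify_content(press_release, tag))                      # True
print(verify_content(b"Acme Corp announces Q3 losses.", tag))  # False

A tag that fails to verify signals that the content was changed after signing, giving a company a concrete basis for rebutting claims that real content is fake or that fake content is real.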
Even with these defenses in place, this isn’t a task that businesses can tackle on their own. It’s a societal challenge that will require collaboration with industry partners and regulators. It cannot be addressed quickly, brushed aside or fully solved. An ongoing, multifaceted approach is needed to help mitigate the risks presented by synthetic media and the implications of the liar’s dividend.

