Khurram Akhtar – Cofounder of ProgrammersForce.
A chilling video surfaces, showing a renowned tech CEO confessing to sabotaging global energy grids.
Within hours, the video spreads like wildfire. News outlets report it, social media explodes with outrage and the company's stock plummets. Employees panic, governments launch investigations, and the world braces for an energy collapse.
Then, a startling twist: The video is a fake. Not just any fake. It is a deepfake so seamless it fooled experts, newsrooms and millions online. By the time the truth emerges, the damage is done. Lives are disrupted, and trust is shattered.
A scenario of this magnitude has yet to materialize, but several scams and misinformation campaigns involving deepfakes have already made headlines. How did we get here? And, more importantly, how do we, as business leaders, fight back?
The Technology Behind Deepfake Creations
Deepfakes are typically created with a generative adversarial network (GAN), which pits two artificial intelligence agents against each other in a competitive framework. One network, the generator, synthesizes artificial images. The other, the discriminator, assesses those images and flags any inconsistencies that give them away as fake. Each round of this contest forces the generator to improve, and the loop repeats many times over until the discriminator can no longer reliably tell the fakes apart, at which point a realistic deepfake has been created.
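To make the adversarial loop above concrete, here is a minimal numerical sketch of a GAN, reduced to a toy one-dimensional problem instead of images: the generator (a simple affine map) learns to imitate samples from a target distribution, while a logistic discriminator tries to tell real from fake. All names, learning rates and network sizes are illustrative, not any production deepfake architecture.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real data" the generator must learn to imitate: samples from N(4, 1.25).
def real_batch(n):
    return rng.normal(4.0, 1.25, n)

# Generator G(z) = a*z + c maps random noise to fake samples.
a, c = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + b) scores how "real" a sample looks.
w, b = 0.1, 0.0

lr, batch = 0.03, 64
for step in range(3000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    xr = real_batch(batch)
    z = rng.standard_normal(batch)
    xf = a * z + c
    dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
    w -= lr * np.mean(-(1 - dr) * xr + df * xf)   # grad of -log D(xr) - log(1 - D(xf))
    b -= lr * np.mean(-(1 - dr) + df)

    # --- Generator update: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.standard_normal(batch)
    xf = a * z + c
    df = sigmoid(w * xf + b)
    gx = -(1 - df) * w                            # grad of -log D(xf) w.r.t. xf
    a -= lr * np.mean(gx * z)
    c -= lr * np.mean(gx)

fakes = a * rng.standard_normal(5000) + c
print(round(float(np.mean(fakes)), 2))  # should land near the real mean of 4
```

After training, the generator's output distribution drifts toward the real data, for the same reason image-based GANs converge on faces the discriminator can no longer reject.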
Top Ways Deepfakes Are Being Misused
From political manipulation to personal exploitation and corporate fraud, deepfakes are causing real harm. They are being weaponized in politics to spread disinformation, as in the deepfaked videos of Donald Trump promoting fraudulent cryptocurrency schemes. The top ways I've seen deepfakes misused include political manipulation, celebrity exploitation, corporate fraud and general news misinformation.
From Spoofing Attacks To Offsite Deepfakes
With the advancement of technology, hyper-realistic deepfakes, especially, are coming to the forefront and spreading disinformation. According to a recent survey of leaders attending the World Economic Forum, almost half said that they are concerned about "advances in adversarial capabilities," which include phishing, malware and deepfakes.
Presentation attacks are very common and challenge the anti-spoofing measures of facial biometric systems. They generally fall into two categories: physical and digital. Physical attacks use things like silicone 3D masks to mimic facial features, but they are often detectable through texture analysis, liveness detection and depth sensing.
I see digital attacks, leveraging AI-generated deepfakes via screens or pre-recorded videos, as posing a greater challenge. Poorly designed facial recognition systems lacking advanced liveness detection or AI-driven anomaly detection can be easily fooled by these high-fidelity digital forgeries.
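One of the simplest liveness cues mentioned above can be sketched in a few lines. This toy heuristic, which is purely illustrative and far weaker than the AI-driven anomaly detection real systems use, exploits the fact that a live face exhibits constant micro-motion between video frames, while a replayed still image shows almost none. The function names and threshold are assumptions, not any vendor's API.

```python
import numpy as np

def liveness_score(frames: np.ndarray) -> float:
    """Toy liveness cue: mean absolute pixel change between consecutive frames.
    Live faces show micro-motion; a replayed static image shows almost none."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return float(diffs.mean())

def is_live(frames: np.ndarray, threshold: float = 1.0) -> bool:
    return liveness_score(frames) > threshold

# Synthetic demo: a "replayed photo" (identical frames) vs. a "live" capture
# (frames with small random motion added each step).
rng = np.random.default_rng(0)
base = rng.integers(0, 256, (64, 64)).astype(float)
still = np.stack([base] * 10)
frames = [base]
for _ in range(9):
    frames.append(frames[-1] + rng.normal(0, 3, base.shape))
moving = np.stack(frames)

print(is_live(still), is_live(moving))
```

A high-fidelity deepfake played on a screen defeats exactly this kind of naive check, which is why I consider advanced, multi-signal liveness detection non-negotiable.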
On the other hand, injection attacks, a more sophisticated form of biometric spoofing, involve directly feeding deepfake-generated facial data into an authentication pipeline and bypassing camera-based liveness checks entirely.
Unlike conventional presentation attacks, which rely on displaying deepfake videos on screens, injection attacks manipulate raw data at the software level. This circumvents hardware-based defenses, making detection significantly harder unless robust cryptographic signing, device authentication or advanced AI-driven anomaly detection mechanisms are in place.
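The cryptographic signing defense mentioned above can be sketched with Python's standard library: the capture device tags each raw frame with a keyed MAC, and the authentication pipeline rejects any frame whose tag does not verify, which is exactly what injected deepfake data would fail. `DEVICE_KEY`, `sign_frame` and `verify_frame` are illustrative names; a real deployment would provision the key in secure hardware rather than generate it in process.

```python
import hmac
import hashlib
import os

# Hypothetical per-device key; in practice this lives in secure hardware (TPM/TEE).
DEVICE_KEY = os.urandom(32)

def sign_frame(frame_bytes: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Camera side: tag each captured frame so downstream stages can verify origin."""
    return hmac.new(key, frame_bytes, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Pipeline side: reject frames whose tag fails, e.g. injected deepfake data."""
    expected = hmac.new(key, frame_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

frame = b"\x00" * 1024            # stand-in for raw sensor data
tag = sign_frame(frame)
print(verify_frame(frame, tag))           # genuine frame passes
print(verify_frame(b"injected", tag))     # injected data fails
```

The constant-time comparison (`hmac.compare_digest`) matters here: a naive `==` check can leak timing information an attacker could exploit.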
I think real-time deepfake impersonation attacks are probably the most harmful because they allow AI software to manipulate live feeds, changing facial expressions, identity and speech in real time and thus tricking some of the best authentication systems we currently have.
What’s At Stake And What Needs To Be Done?
Spoofing attacks are a growing concern across business sectors. Cybercriminals can trick authentication systems, approve fraudulent transactions or even manipulate corporate communications.
For business leaders, the stakes couldn’t be higher. A successful deepfake attack doesn’t just compromise an individual’s identity; it can wreak havoc on an entire organization. Fraudsters can impersonate executives to authorize fraudulent transactions, manipulate stock prices or even leak sensitive corporate data. Compliance challenges add another layer of risk, as companies operating in regulated industries must ensure stringent identity verification standards.
As outlined in some of my earlier examples, a single failure can lead to hefty fines, legal battles and loss of customer trust. Beyond financial damage, I think reputational harm can be the most devastating because once a company is linked to an identity fraud incident, regaining credibility becomes an uphill battle. In an era where trust is everything, businesses that don’t proactively strengthen their biometric defenses risk losing not just money but their very legitimacy.
Combating Deepfakes: Tips For Business Leaders
Having had extensive experience in the development of systems that detect deepfakes at my company, I have seen firsthand how the following measures can help businesses detect presentation and injection attacks.
1. First, businesses can ramp up their defenses by evaluating the level of threat they may encounter. This means becoming informed about the specific risks they face based on their size and sector.
2. Next, I urge enterprises to adopt strong multi-factor authentication and overhaul their identity and access management solutions.
3. Financial institutions relying on remote identity proofing can deploy AI-powered biometric defenses built to thwart deepfake attacks. These can also pick up facial minutiae at the micro level that a deepfake cannot always replicate perfectly.
4. Lastly, businesses relying on information from public media should be aware of deepfake threats and must have internal mechanisms to verify the authenticity of such content.
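The multi-factor authentication advice above can be made concrete with a standard building block: time-based one-time passwords (TOTP, RFC 6238), the mechanism behind most authenticator apps. This is a minimal sketch using only Python's standard library; a production system would layer it with device binding and biometric checks rather than rely on it alone.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, t=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: HMAC-SHA1 over the 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s, 8 digits.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, t=59, digits=8))  # prints "94287082"
```

Because codes rotate every 30 seconds, a stolen deepfake-assisted credential alone is not enough to complete a login.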
A Pre-Emptive Strategy
As deepfake attacks grow more capable, standard authentication measures are no longer enough. To protect your organization, I believe adopting advanced biometric defenses that can detect AI-driven identity fraud is essential.
As attacks on facial biometric authentication systems grow more sophisticated, new standards like ISO/IEC NP 25456 for biometric data injection attack detection are being developed to strengthen the anti-spoofing capabilities of authentication systems.
Failing to take this seriously can lead to regulatory penalties and lasting reputational harm. As businesses move toward passwordless authentication, I believe ensuring these systems are resilient to deepfake attacks is essential to safeguarding both operations and credibility.
Forbes Business Council is the foremost growth and networking organization for business owners and leaders. Do I qualify?