Jochen Schwenk is CEO of Crisis Control Solutions LLC & Schwenk AG, an expert in risk and crisis management for the automotive industry.

Imagine a man wearing a high-visibility vest and carrying a clipboard walking straight into a restricted area of a Fortune 500 building, blending in and eventually accessing highly sensitive information, all without forcing a single lock or triggering an alarm. This is what can happen when organizations invest heavily in digital security but overlook the most common point of failure: human behavior.

This scenario is drawn from the world of red teaming—a practice originally developed by military and intelligence agencies, now adopted by forward-thinking companies. A red team is an independent group tasked with challenging assumptions, testing defenses and simulating how a real adversary might attack an organization. Unlike traditional audits or security reviews, red teams operate like intruders: deceptive, adaptive and intent on exposing weaknesses that policies alone can’t prevent.

The Comfort Of Checklists

Most companies pride themselves on having robust risk assessments. They perform regular audits, maintain compliance and deploy the latest cybersecurity tools. On paper, everything looks secure. But when you assess these same systems through a red team lens—simulating the perspective of someone intent on bypassing them—a different picture can emerge.

Executives often assume that their people, processes and physical spaces are more secure than they actually are. Traditional assessments tend to focus on digital infrastructure, documented policies and known threats. But real-world vulnerabilities are often found in behavior: trust-based access, routine assumptions, untrained frontline staff and untested emergency procedures.

Sharking The Target

“Sharking” is a term that can describe the deliberate process of circling a target, starting from the periphery and gradually closing in to understand rhythms, weaknesses and opportunities for access. In one real-world red team engagement, surveillance revealed that the building’s elevators were monitored via CCTV—a common and expected measure. But, like many Western facilities, the building also had multiple emergency stairwells as required by fire safety regulations. These staircases were unmonitored, infrequently used and offered direct vertical access to most floors.

Using a plausible cover story, the red team passed front-desk security, loitered in stairwells, and accessed upper levels with lower foot traffic. As the day progressed and staff began to leave, the environment grew more permissive. By the time the cleaning crew arrived and alarm systems were partially disabled, access was unrestricted.

The most sensitive materials weren’t behind digital firewalls. They were in the office of the board members’ assistant: unsecured printouts, passwords taped beneath keyboards, access codes in drawers and login credentials stored in a clearly labeled binder.

No breach required. No alert triggered. Just unnoticed presence—and avoidable human oversights.

The Fallacy Of ‘Secure Enough’

Most executives rely on protocol and policy, assuming that what’s written down will be followed under pressure. But policies don’t stop breaches; people do. And people, when untrained, fatigued or simply polite, can become the weakest link in even the most sophisticated risk environment.

This is well-understood in intelligence operations. Systems are stress-tested regularly. Plans are scrutinized through adversarial simulation. Red teams don’t ask if the policy exists—they ask whether it works in real life, under deception, distraction or stress.

Red Teams Vs. Risk Assessments

Risk assessments tend to confirm what organizations believe they’re doing right. Red teams, on the other hand, expose what they’re doing wrong, in practice rather than in theory.

Where audits review procedures, red teams simulate adversaries. Where compliance assumes rational actors, red teams simulate malicious insiders, confused contractors or opportunistic intruders. And where leadership believes its infrastructure is impenetrable, red teams often walk through the front door with nothing more than confidence and a cover story.

A New Standard: Intelligence-Led Risk Strategy

To evolve beyond these vulnerabilities, organizations must adopt a more dynamic and intelligence-driven approach to resilience. That means they must:

1. Integrate human factors into risk models. Include behavioral vulnerabilities, social engineering susceptibility and physical access controls in every audit and scenario.

2. Run regular red team exercises. Physical infiltration, insider threat simulation and deception-based drills should become part of routine risk management, not a one-off test.

3. Train the frontline. Executives often protect C-suite data but forget that assistants, receptionists and cleaning staff have everyday access to the most sensitive spaces. These personnel must be part of the security equation.

4. Measure culture, not just compliance. A strong security culture empowers employees to question, escalate and resist suspicious behavior—even when it’s uncomfortable.

Final Thought

Today’s risk environment is no longer defined solely by digital threats or regulatory checklists. It’s shaped by the human element, the part that’s hardest to quantify and easiest to exploit.

You don’t defend your company by thinking like an auditor. You defend it by thinking like an adversary. Because the next breach may not come from malware; it may come from someone holding the door, wearing a vest and walking with purpose.

Forbes Business Council is the foremost growth and networking organization for business owners and leaders.
