Perry Carpenter is Chief Human Risk Management Strategist for KnowBe4, a cybersecurity platform that addresses human risk management.
I’m sure you’ve been hearing a ton about artificial intelligence lately. Most of that buzz has been fueled by what’s known as “Generative AI” (GenAI). But have you heard of agentic AI? It’s a relatively new term for the next major evolution in AI technology, and it’s poised to affect businesses in ways that are both positive and potentially perilous.
Before we go deeper, it helps to understand the relationship between three terms and how they represent an evolution in the technology:
• AI: The broadest term, covering rule-based computer systems, statistical models and machine learning focused on specific capabilities such as classification, prediction or optimization.
• GenAI: A subset of AI focused on creating content in response to prompts from users.
• Agentic AI: A form of AI that builds on the capabilities of generative AI through autonomy and independent decision-making.
Agentic AI can maintain and work toward specific objectives over time, making decisions about what actions to take without receiving explicit instruction or prompts from humans.
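To make that distinction concrete, here is a minimal, purely illustrative sketch in Python. It is not any vendor’s implementation; the objective, “environment” and actions are invented stand-ins that simply contrast a prompt-driven GenAI call with a goal-driven agentic loop.

```python
# Illustrative toy only: contrasts a prompt-driven model call with a goal-driven agent.

def generative_model(prompt: str) -> str:
    """Stands in for a GenAI call: one prompt in, one piece of content out."""
    return f"Generated content for: {prompt}"


class Agent:
    """A minimal agentic loop: observe, decide, act -- no human prompt required."""

    def __init__(self, objective: str, environment: list[str]):
        self.objective = objective        # the goal the agent works toward over time
        self.environment = environment    # simulated systems that still need attention

    def objective_met(self) -> bool:
        return not self.environment

    def decide_next_action(self) -> str:
        # A real agent might plan or call a model here; the toy just picks the next item.
        return self.environment[0]

    def act(self, target: str) -> None:
        print(f"Remediating {target} in pursuit of: {self.objective}")
        self.environment.remove(target)

    def run(self) -> None:
        while not self.objective_met():
            self.act(self.decide_next_action())


if __name__ == "__main__":
    print(generative_model("Summarize this week's security alerts"))     # waits for a prompt
    Agent("patch all known weak points", ["server-a", "laptop-7"]).run()  # pursues a goal
```

The point of the contrast is the loop: the generative call ends when it returns content, while the agent keeps deciding and acting until its objective is met.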
Agentic AI’s Role In Cybersecurity Efforts
Applied to organizations and their cybersecurity efforts, this progression works something like this:
• Traditional AI focused on specific tasks governed by predefined rules or models.
• Generative AI added the ability to create new content based on learned patterns, such as generating security reports or code analyses.
• Agentic AI adds autonomy and goal pursuit: it can independently hunt for and identify threats, then create and deploy defensive responses to them.
The promise of agentic systems lies in their ability to adapt to changing conditions and learn from past experience. They can also coordinate multiple tasks and subtasks, as the sketch below illustrates. These capabilities can benefit a wide range of occupations, roles and tasks.
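As a rough sketch of what that coordination might look like in a security setting, consider the toy example below. The event feed, detection rule and responses are invented for illustration; a real deployment would plug into actual telemetry and response tooling.

```python
# Hypothetical sketch of an agentic threat-hunting cycle. The event feed, the
# detection rule and the response playbook are all invented for illustration.
from dataclasses import dataclass


@dataclass
class Event:
    host: str
    failed_logins: int


def hunt(events: list[Event]) -> list[Event]:
    """Subtask 1: flag suspicious activity against a simple threshold rule."""
    return [e for e in events if e.failed_logins > 10]


def respond(threat: Event) -> str:
    """Subtask 2: choose and record a defensive response for each finding."""
    return f"Isolated {threat.host} and opened a ticket ({threat.failed_logins} failed logins)"


def agent_cycle(events: list[Event]) -> list[str]:
    """The agent coordinates both subtasks end to end, without a human prompt."""
    return [respond(threat) for threat in hunt(events)]


if __name__ == "__main__":
    feed = [Event("hr-laptop-3", 2), Event("vpn-gw-1", 27)]
    for action in agent_cycle(feed):
        print(action)
```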
Here, though, we’re focused on the impact and improvements that agentic AI can bring to an organization’s security awareness training and human risk management efforts. Agentic AI can play an important role in boosting these efforts to drive a strong security culture. It can address limitations that are common to most standard training methods, such as:
• Limited Training Resources: Many training efforts are generic, offering the same content to everyone, regardless of their role, prior knowledge, interest or needs.
• Outdated Content: The world of cybersecurity and the cybercriminals who inhabit it move quickly; too quickly for most corporate training efforts to keep up.
• Lack Of Individualized Context: A one-size-fits-all approach doesn’t address the differing needs of, say, a customer service rep and an HR professional, or of IT staff and senior leaders. Learners can’t clearly see how their role and day-to-day activities shape the risks they face.
• Lack Of Individualized Feedback: In standard training models, learners tend to receive only general feedback, and typically only during or shortly after training. That feedback doesn’t extend into the work environment, and ongoing, individualized follow-up training and real-time coaching are usually missing.
A New Role For Agentic AI
Agentic AI is poised to strengthen efforts to build and sustain a strong security culture while ensuring employees have the information, understanding and resources they need to play a role in risk management. It can help both in the general work environment and in the delivery of training and real-time coaching. For instance:
• Through continuous security monitoring and response, agentic AI can proactively identify and address vulnerabilities across multiple systems.
• By analyzing individual employees’ behaviors, it can pinpoint each person’s security blind spots and risks and recommend targeted improvements (see the sketch after this list).
• It can translate high-level security policies into tactical, day-to-day procedures.
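To illustrate the second point, here is a deliberately simplified, hypothetical sketch. The behavior signals, thresholds and module names are made up; a real system would draw on far richer behavioral data.

```python
# Hypothetical sketch: mapping observed behavior signals to targeted training.
# The signals, thresholds and module names below are invented for illustration.
EMPLOYEE_BEHAVIOR = {
    "customer_service_rep": {"phishing_clicks": 3, "password_reuse": 0},
    "hr_professional":      {"phishing_clicks": 0, "password_reuse": 2},
}


def recommend_training(behavior: dict[str, int]) -> list[str]:
    """Turn individual blind spots into targeted follow-up modules."""
    recommendations = []
    if behavior.get("phishing_clicks", 0) > 1:
        recommendations.append("Spotting targeted phishing")
    if behavior.get("password_reuse", 0) > 0:
        recommendations.append("Password manager basics")
    return recommendations or ["No targeted training needed right now"]


if __name__ == "__main__":
    for role, behavior in EMPLOYEE_BEHAVIOR.items():
        print(role, "->", recommend_training(behavior))
```

The value in practice would come from an agent running this kind of analysis continuously and adjusting its recommendations as behavior changes.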
Addressing Potential Challenges
Agentic AI could revolutionize cybersecurity, but the journey toward using AI agents successfully comes with limitations that leaders must keep in mind:
• Lack Of Transparency: Black box AI is a significant issue. AI agents, particularly deep learning model-based agents, process data to make decisions and predictions without explaining their decision-making process, which can affect user trust and hinder troubleshooting. The inability to explain and audit AI-driven results can be especially challenging in regulated industries that must comply with transparency requirements.
• Implementation And Maintenance Costs: Adopting agentic AI may be costly for some organizations. AI systems need high-performance computing hardware to train and run advanced models, and the more AI agents in a system, the greater the compute load. Finding talented AI engineers and skilled data scientists to train and maintain these systems can also be difficult. Initial infrastructure development, continuous model training, maintenance and upgrades add further costs.
• Ethical Challenges: AI systems capable of self-learning may behave in unexpected ways or simply make poor decisions. Without vigilant monitoring, AI agents may perpetuate biases or make morally questionable choices. As they are exposed to new datasets to perform complex tasks, their objectives might shift, leading to reliability issues if they deviate from their intended purpose.
• Accountability Challenges: In situations where AI systems make a mistake, assigning blame becomes complicated, especially in interconnected systems where both human users and AI agents share responsibility for decision-making.
Conclusion
Organizations are already beginning to explore and evaluate how agentic AI systems can support and enhance their cybersecurity and related training efforts. What they’re finding is that agentic AI can help ensure employees have the continually updated skills and knowledge needed to combat cyber threats. It can also play a role in identifying vulnerabilities and potential attacks, responding with recommended remediation steps and communicating updates to close the gaps.
From AI to generative AI, and now to agentic AI, organizations are learning that despite warnings about the power these tools have in the hands of threat actors, the promise of this technology is significant, especially when it comes to building, strengthening and continually supporting a resilient cybersecurity culture.