The Department of Justice (DOJ) yesterday released revised guidance on corporate compliance programs that incorporate artificial intelligence (AI), designed to help federal prosecutors assess these systems more effectively. This updated framework — the DOJ’s Evaluation of Corporate Compliance Programs — stems from a directive by Deputy Attorney General Lisa Monaco, who in early 2024 emphasized the importance of stricter penalties for criminals exploiting generative AI for misconduct. Part of a broader DOJ initiative to curb AI misuse, the guidance sets higher accountability standards, requiring companies to ensure their AI systems are ethical, effective, and capable of mitigating risks. More than a mere regulatory update, it ushers in a new era of corporate governance, in which AI must be monitored, tested, and continuously improved to avoid harm.

Key Elements of the DOJ’s AI Guidance

The guidance revolves around three fundamental questions: Is the company’s AI-driven compliance program well designed? Is it applied earnestly and in good faith? And most critically, does it work in practice? Prosecutors are instructed to assess whether companies’ AI systems are equipped to detect and prevent misconduct and whether they are regularly updated to address emerging risks. AI can offer tremendous advantages for compliance, such as automated risk detection and real-time monitoring, but the DOJ appears to believe that companies cannot rely on these systems without proper oversight.

AI must be trustworthy and ethically aligned with both the law and internal governance policies. Companies are therefore expected to ensure that their AI systems function transparently, and that decisions influenced by AI are subject to human review where necessary. The guidance emphasizes that the “black box” nature of some AI systems, and the fact that they might require more third-party management, cannot be an excuse for failing to meet legal and ethical standards.

How This Aligns with Emerging Compliance Models

In my forthcoming article, “Emerging Compliance in the Generative Decentralized Era,” I explore many of these themes, including the necessity of evolving compliance frameworks in the age of generative AI and decentralized technologies. As AI increasingly powers compliance functions—from fraud detection to regulatory reporting—companies must develop systems that go beyond traditional, static compliance models. AI is capable of dynamically adapting to risks in real time, but for this to work, companies must design AI systems that are continually learning and improving. The DOJ’s guidance fits squarely within this research, which advocates for generative compliance—a proactive, forward-thinking approach in which compliance programs are built to evolve alongside emerging risks.

It’s impossible to plan for risks we can’t predict, as the COVID-19 pandemic clearly illustrated: unanticipated global disruptions can expose weaknesses that even robust compliance systems overlook. That is precisely why companies need to revisit and revise their risk assessments regularly, particularly where technologies like AI are involved. This proactive approach helps businesses stay prepared for evolving threats. The demand for skilled compliance professionals is growing as well, with the Bureau of Labor Statistics projecting a 5% increase in compliance officer employment from 2023 to 2033, a reflection of increasingly complex regulatory landscapes.

Accountability and Data Transparency

One of the more pressing issues covered in both the DOJ guidance and the mentioned research is data transparency. AI systems are only as good as the data they are trained on, and companies should demonstrate that their AI tools are designed to monitor compliance risks without introducing unintended consequences. The DOJ stresses that prosecutors will evaluate whether companies are effectively using their data to prevent misconduct and if these tools can provide real-time insights into potential compliance failures. In this new era, businesses that fail to leverage their data effectively—or worse, those that hide behind the complexity of AI—may find themselves under intense regulatory scrutiny.

Continuous Improvement Is Essential

The DOJ’s guidance emphasizes the need for continuous improvement in AI-driven compliance systems. Just a year ago, few surveyed businesses reported being ready to manage the risks posed by AI. Although that number has since risen, with more executives actively identifying and preparing for emerging technology risks, many organizations have yet to make these challenges a priority. A stronger focus on proactively managing AI risks will therefore be essential as these technologies continue to evolve and integrate more deeply into business operations. Prosecutors will look closely at whether a company’s AI systems are periodically tested, updated, and refined based on lessons learned from past compliance issues or emerging trends. Indeed, AI isn’t a one-time solution but an ongoing commitment requiring continuous adaptation to new risks and regulatory changes.

Businesses should be asking themselves: How often are we revisiting our risk assessments? Are we incorporating new data streams and adapting to evolving legal standards? In a world where AI systems can rapidly adapt to detect new forms of misconduct, companies that fail to keep pace may fall behind, not only in compliance but also in consumer trust and market reputation.

A Call to Action for Businesses

The DOJ’s AI guidance is a clear call to action for companies to take AI compliance seriously. Those that integrate ethical AI into their compliance strategies will position themselves for long-term success, avoiding legal penalties and building stronger relationships with regulators. However, businesses that view AI as a simple plug-and-play solution may find themselves facing significant regulatory challenges.

The key takeaway from the DOJ guidance is, therefore, that AI in compliance must be designed for accountability, transparency, and continuous evolution. The future of compliance is dynamic, and businesses that embrace this generative approach will lead the way in an increasingly complex and AI-driven world.

