Max Cheng is the Chief Executive Officer of VicOne, a leader in future-ready cybersecurity for the automotive industry.
The global automotive industry is in the midst of dizzying change as transformational innovations roll out. Software-defined vehicles (SDVs), advanced driver-assistance systems (ADAS), and—perhaps the most dramatic innovation of all—integration of artificial intelligence (AI) are reshaping the collective imagination of the industry. These developments offer original equipment manufacturers (OEMs) and their layers of suppliers attractive opportunities for differentiated enhancements in functionality and cost efficiencies.
But these innovations are also bringing brand-new cybersecurity challenges into the automotive world. The systems and components within the vehicle itself—the onboard domain of electronic control units (ECUs), infotainment systems, ADAS technologies and other in-vehicle software and hardware architecture—accounted for the majority (83%) of published automotive-related vulnerabilities from 2014 to 2024. This points to the increasing complexity of such in-vehicle systems, and the introduction of AI could add further complications and security challenges in this domain.
Significant strategic risks are being generated across multiple fronts. Take the seismic shifts in governance that lie ahead for the industry. The regulatory landscape for AI in the automotive space is very fluid, and compliance challenges and opportunities are evolving quickly. The standards landscape for the industry was already layered and complex, and now automakers must account for new sources and types of regulations. Questions around AI ethics and bias mitigation, especially, are likely to arise from standards organizations with which the automotive industry has never interacted.
The time is now for decision makers throughout the automotive ecosystem to position their organizations not merely to adapt to their industry's accelerated adoption of AI but to thrive in it.
Revolutionary Impact
Generative AI promises tremendous strategic value in multiple areas across automotive sectors—automation, predictive analytics, autonomous systems, etc.—and stands to reshape the end-to-end lifecycle of automobiles across design, testing, production and consumer use.
The next generations of autonomous driving could leverage end-to-end AI modeling to inform perception, recognition and control functions. These advances stand to improve vehicle safety by enabling better, swifter responses to fast-changing road and traffic conditions than human drivers could possibly manage. Similarly, planners could rely on large language models (LLMs) to orchestrate commands such as acceleration and deceleration, braking and steering—again boosting automobile, operator and passenger safety.
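To make that architecture concrete, here is a minimal, purely illustrative Python sketch of how an LLM-based planner might sit between perception output and low-level control. Every name here—the PerceptionSummary and ControlCommand structures and the stubbed plan_with_llm function—is a hypothetical stand-in rather than any automaker's or vendor's actual interface, and a production system would add extensive validation, redundancy and safety monitoring.

```
from dataclasses import dataclass


@dataclass
class PerceptionSummary:
    # Hypothetical, simplified view of what onboard perception might report.
    lead_vehicle_distance_m: float
    lead_vehicle_speed_mps: float
    ego_speed_mps: float
    obstacle_ahead: bool


@dataclass
class ControlCommand:
    # Simplified longitudinal/lateral commands; real vehicles use far richer interfaces.
    acceleration_mps2: float
    steering_angle_deg: float


def plan_with_llm(summary: PerceptionSummary) -> ControlCommand:
    """Stand-in for an LLM-backed planner.

    In a real SDV stack this might serialize the perception summary into a
    structured prompt, call a locally hosted or cloud model, and parse a
    structured response. Here it returns a trivial rule-based answer so the
    sketch runs without any external service.
    """
    if summary.obstacle_ahead or summary.lead_vehicle_distance_m < 20.0:
        return ControlCommand(acceleration_mps2=-3.0, steering_angle_deg=0.0)
    return ControlCommand(acceleration_mps2=0.5, steering_angle_deg=0.0)


def clamp_command(cmd: ControlCommand) -> ControlCommand:
    """Deterministic safety envelope applied after the planner.

    Bounding model output before it reaches actuators is one way to contain
    the error-proneness of LLMs noted in this article.
    """
    accel = max(-5.0, min(2.0, cmd.acceleration_mps2))
    steer = max(-30.0, min(30.0, cmd.steering_angle_deg))
    return ControlCommand(acceleration_mps2=accel, steering_angle_deg=steer)


if __name__ == "__main__":
    summary = PerceptionSummary(
        lead_vehicle_distance_m=15.0,
        lead_vehicle_speed_mps=10.0,
        ego_speed_mps=14.0,
        obstacle_ahead=False,
    )
    command = clamp_command(plan_with_llm(summary))
    print(command)  # ControlCommand(acceleration_mps2=-3.0, steering_angle_deg=0.0)
```

The point of the sketch is the placement of the deterministic clamp between the model and the actuators: wherever an LLM influences driving commands, its output becomes part of the attack surface and needs hard, non-AI guardrails around it.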
Automakers clearly see the breakthrough capabilities of generative AI as central to their plans to innovate products, accelerate time to market and enhance customer experience. At the same time, generative AI is rapidly expanding the potential attack surface for automotive cyberattacks, and the stakes are very high because LLMs leverage critical enterprise data, drive hard-to-control self-learning across a manufacturer's organization and have exhibited a propensity for errors. The strategic risks are far-reaching, and automakers must be proactive about getting in front of them.
An Expanding Governance Landscape
The regulatory pressure on automakers is ratcheting up, and the cybersecurity standards landscape is growing more complex.
ISO/SAE 21434 Road vehicles—Cybersecurity engineering, for example, was developed by the International Organization for Standardization (ISO) and SAE International (formerly the Society of Automotive Engineers). It recommends that OEMs and other entities in the automotive supply chain consider security not only during a vehicle's conceptualization but also through its decommissioning. ISO/SAE 21434 is not compulsory for automakers, but it is closely aligned with, and complementary to, UN Regulation No. 155 (UN R155), which is mandatory.
The standard presents key cybersecurity principles for vendors throughout the automotive ecosystem:
• Ensure that road-vehicle systems released to the market are secure.
• See that automakers and suppliers perform due diligence.
• Root cybersecurity engineering in current technologies and methodologies.
• Adopt a risk-oriented approach.
• Base management of cybersecurity activities on ISO/SAE 21434.
• Identify cybersecurity guidelines throughout a vehicle’s lifecycle.
How generative AI will factor into questions around these points will be defined over the years of adoption ahead, and some other AI regulatory frameworks that will have significant strategic impact for OEMs and suppliers are still to be finalized.
The U.S. National Institute of Standards and Technology (NIST), for example, has developed a program addressing system performance and measurement methods for cybersecurity around AI and sensors, perception and communications for automated vehicles.
Automotive companies may need to collaborate with regulators in areas such as data privacy, cybersecurity and AI ethics that historically have not been on the automotive industry's regulatory radar. If automakers are to successfully manage the considerable strategic risks (and potentially high costs) of regulatory and standards compliance relative to AI in the years ahead, they will have to get, and stay, in alignment with a wider swath of agencies around the world than they ever have before.
Emerging Strategic Questions
Beyond cybersecurity and regulation, AI introduces a range of other strategic risks that OEMs and automotive suppliers of every tier will need to consider:
• Overreliance on AI—Could excessive dependence on generative AI hinder a manufacturer’s ability to anticipate and adapt to market shifts?
• Resource Misallocation—Could improperly managed AI initiatives lead to wasted R&D and other resources?
• Resource Constraints—How will shortages of semiconductors and talent impact automakers’ planning, business models and product delivery?
• Ethical Considerations—What constitutes ethical AI practices in the automotive context, and how might these standards vary across different regions and evolve over time?
• Supply-chain Vulnerabilities—How can cybersecurity extend beyond a given OEM, supplier or software vendor to address the entire supply chain, given AI’s reliance on vast data sets and substantial third-party integrations?
Even at this early stage of adoption across the automotive industry, AI has already established itself as a vast, complicated threat vector to be accounted for. OEMs and their suppliers must commit sufficient attention and investment to the task. Companies may need to create an officer position such as a “chief of AI security” today in order to properly grasp and address the new landscape of strategic cybersecurity risks being introduced.