Cybersecurity Challenges in AI Applications for Autonomous Driving
News Center · 2021-02-19
A report published by the European Union Agency for Cybersecurity (ENISA) and the Joint Research Centre (JRC) explores the cybersecurity risks associated with artificial intelligence (AI) in autonomous vehicles and proposes recommendations to mitigate those risks.
By removing the most common cause of road accidents, the human driver, autonomous vehicles promise to reduce accidents and fatalities. However, they may also expose drivers, passengers, and pedestrians to entirely new kinds of risk.
Autonomous vehicles rely on AI systems that use machine learning technologies to collect, analyze, and transmit data to make decisions that would normally be made by humans in conventional vehicles. Like all IT systems, these AI systems are vulnerable to attacks that may compromise the safe operation of the vehicle.
The new ENISA–JRC report reveals cybersecurity risks associated with the adoption of AI in autonomous vehicles and provides recommendations for mitigating them.
“When an insecure autonomous vehicle crosses the border of an EU Member State, so do its vulnerabilities. Security must not be an afterthought but a prerequisite for the trustworthy deployment of autonomous vehicles on European roads,” said Juhan Lepassaar, Executive Director of ENISA.
Stephen Quest, Director-General of the JRC, added: “European legislation must ensure that the benefits of autonomous driving are not offset by security risks. Our report aims to support EU-level decision-making by deepening the understanding of AI technologies used in autonomous driving and their related cybersecurity risks, to ensure the safe deployment of AI in autonomous systems.”
AI Vulnerabilities in Autonomous Vehicles
AI systems in autonomous vehicles operate continuously, identifying traffic signs and road markings, detecting vehicles, estimating speed, and planning the route ahead. In addition to unexpected failures, these systems are vulnerable to deliberate attacks designed to disrupt AI functions and compromise key safety features.
Examples include painting misleading lane markings on the road or placing stickers on stop signs so they go unrecognized. Such small physical alterations can cause the AI system to misclassify objects and trigger hazardous vehicle behavior, as the sketch below illustrates.
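The report does not prescribe a specific attack technique, but the stop-sign example corresponds to the well-studied class of adversarial perturbations. As a minimal illustration, the following Python sketch applies the fast gradient sign method (FGSM); the pretrained traffic-sign classifier `sign_model` and the parameter values are hypothetical, chosen only to show the mechanism.

```python
# Minimal FGSM sketch, assuming a hypothetical pretrained
# traffic-sign classifier `sign_model` (a torch.nn.Module).
import torch
import torch.nn.functional as F

def fgsm_perturb(sign_model, image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    A small, bounded change in pixel values can be enough to flip the
    classifier's prediction, mirroring the sticker-on-a-stop-sign attack.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = sign_model(image)
    loss = F.cross_entropy(logits, true_label)
    loss.backward()  # gradient of the loss with respect to the pixels
    # Step in the direction that maximizes the loss, bounded by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The point of the sketch is that the perturbation is bounded by a small `epsilon`, so the altered image can remain nearly indistinguishable to a human while still defeating the model.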
Recommendations to Enhance AI Security in Autonomous Vehicles
To enhance AI security in autonomous vehicles, the report offers several recommendations. One is to conduct regular security assessments throughout the AI components' lifecycle. Systematic verification of AI models and data is crucial to ensure the vehicle can operate safely even when facing unexpected events or malicious attacks.
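The report stays at the recommendation level, so as one hedged example of what such verification could look like in practice, the sketch below turns robustness into a recurring, automated check. It reuses the hypothetical `sign_model` and `fgsm_perturb` helper from the previous example; the threshold value is illustrative, not a figure from the report.

```python
# Hedged sketch of a recurring robustness assessment, reusing the
# hypothetical `sign_model` and `fgsm_perturb` defined above.
import torch

def robustness_check(sign_model, loader, epsilon=0.03, min_robust_acc=0.80):
    """Fail the assessment if adversarial accuracy drops below a threshold."""
    clean_correct, robust_correct, total = 0, 0, 0
    for images, labels in loader:
        # Accuracy on unmodified inputs.
        with torch.no_grad():
            clean_correct += (sign_model(images).argmax(1) == labels).sum().item()
        # Accuracy on adversarially perturbed inputs.
        adv = fgsm_perturb(sign_model, images, labels, epsilon)
        with torch.no_grad():
            robust_correct += (sign_model(adv).argmax(1) == labels).sum().item()
        total += labels.size(0)
    clean_acc, robust_acc = clean_correct / total, robust_correct / total
    print(f"clean={clean_acc:.2%}  adversarial={robust_acc:.2%}")
    return robust_acc >= min_robust_acc
```

Run as part of each release cycle, a check like this gives the "systematic verification of AI models and data" a concrete, repeatable form: a model update that regresses under attack is caught before deployment.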
Another recommendation is to implement continuous risk assessment processes supported by threat intelligence. This helps identify potential AI-related risks and emerging threats in the context of autonomous driving. Appropriate AI security policies and a strong security culture should be embedded throughout the automotive supply chain.
The automotive industry should adopt a "security-by-design" approach during the development and deployment of AI functions, ensuring cybersecurity is a core component from the start. Finally, the sector must increase resilience and improve incident response capabilities to address the evolving cybersecurity landscape associated with AI.