Written by: NavInfo Europe
NavInfo Europe is responsible for developing the automotive case study within the SAFEXPLAIN project. As part of this international collaboration, we have made substantial progress on the case study and reached several milestones.
To create a realistic test bed for our autonomous driving agent, we chose CARLA as the simulation platform (Figure 1). CARLA provides a flexible and high-fidelity environment for designing and running various driving scenarios and road layouts. We have created multiple safety scenarios that align with functional safety standards, allowing us to rigorously test the agent’s capabilities.
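As a rough illustration of how such a scenario can be scripted with the CARLA Python API, the sketch below spawns an ego vehicle and a pedestrian near a crossing point. The town, blueprints and spawn locations shown here are placeholders, not the exact configuration used in our scenarios.

```python
import carla

# Connect to a running CARLA server (host/port are the CARLA defaults).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.load_world("Town01")  # placeholder town

blueprints = world.get_blueprint_library()

# Spawn the ego vehicle at one of the map's predefined spawn points.
vehicle_bp = blueprints.filter("vehicle.tesla.model3")[0]
spawn_point = world.get_map().get_spawn_points()[0]
ego_vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Spawn a pedestrian a short distance ahead, near the edge of the lane,
# so it can step into the ego vehicle's path during the scenario.
walker_bp = blueprints.filter("walker.pedestrian.0001")[0]
walker_transform = carla.Transform(
    spawn_point.location + carla.Location(x=30.0, y=3.0),
    carla.Rotation(yaw=spawn_point.rotation.yaw - 90.0),
)
pedestrian = world.spawn_actor(walker_bp, walker_transform)

# A walker AI controller makes the pedestrian cross the road when triggered.
controller_bp = blueprints.find("controller.ai.walker")
controller = world.spawn_actor(controller_bp, carla.Transform(), attach_to=pedestrian)
controller.start()
controller.go_to_location(walker_transform.location + carla.Location(y=-6.0))
```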
The autonomous driving agent itself has been developed with a modular architecture, facilitating the integration of explainability features and a supervision module. These components, developed within the framework of other activities in the SAFEXPLAIN project, will enhance the safety and interpretability of the agent’s decision-making process. Currently, the agent demonstrates competence in perception, planning, and vehicle control within the simulated environment.
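To give a flavour of what this modularity looks like in practice, the following sketch separates perception, planning and control behind small interfaces, with a hook where a supervision or explainability component can be attached. The class and method names are illustrative only and do not reflect the actual project code.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Detection:
    label: str          # e.g. "pedestrian"
    distance_m: float   # distance along the ego lane

class Perception(Protocol):
    def detect(self, sensor_frame) -> list[Detection]: ...

class Planner(Protocol):
    def plan(self, detections: list[Detection]) -> float:
        """Return a target speed in m/s."""
        ...

class Controller(Protocol):
    def actuate(self, target_speed: float) -> None: ...

class DrivingAgent:
    """Glue code: each stage is replaceable, and a supervision module
    can observe and adjust the intermediate outputs of every stage."""

    def __init__(self, perception: Perception, planner: Planner,
                 controller: Controller, supervisor=None):
        self.perception = perception
        self.planner = planner
        self.controller = controller
        self.supervisor = supervisor

    def step(self, sensor_frame) -> None:
        detections = self.perception.detect(sensor_frame)
        target_speed = self.planner.plan(detections)
        if self.supervisor is not None:
            # The supervisor can veto or reduce the planned speed.
            target_speed = self.supervisor.check(detections, target_speed)
        self.controller.actuate(target_speed)
```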
To validate the real-world applicability of our work, we have deployed the autonomous driving system on an embedded compute platform using the Robot Operating System 2 (ROS2) middleware. Porting the software to the embedded system posed several challenges, such as resource constraints and real-time performance requirements, which we have successfully overcome.
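As a simplified illustration of the ROS2 integration, the minimal rclpy node below subscribes to a detection topic and publishes a target speed. The topic names, message types and logic are placeholders rather than the actual interfaces running on our embedded platform.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32, String

class PlannerNode(Node):
    """Minimal ROS2 node: receives detections, publishes a target speed."""

    def __init__(self):
        super().__init__("planner_node")
        self.subscription = self.create_subscription(
            String, "/perception/detections", self.on_detections, 10)
        self.speed_pub = self.create_publisher(Float32, "/control/target_speed", 10)

    def on_detections(self, msg: String) -> None:
        # Placeholder logic: slow down whenever a pedestrian is reported.
        target = Float32()
        target.data = 2.0 if "pedestrian" in msg.data else 8.0
        self.speed_pub.publish(target)

def main():
    rclpy.init()
    node = PlannerNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```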
Showcasing performance in safety scenarios
To illustrate the capabilities of our autonomous driving agent, we have included two videos that showcase its performance in relevant safety scenarios. In the first video (Figure 2), a pedestrian suddenly crosses the road, requiring the agent to detect the pedestrian’s presence within its lane and react appropriately. The agent demonstrates its ability to slow down the vehicle within the allowed safety margins, preventing a potential collision.
Figure 2: Safety scenario video – Pedestrian crossing the road
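As an intuition for the kind of check involved in reacting within such margins, a simple time-to-stop rule can decide whether the current speed still allows the vehicle to halt before the pedestrian. The deceleration and margin values below are purely illustrative and do not reflect the actual safety margins used in the case study.

```python
def required_braking_distance(speed_mps: float, decel_mps2: float = 6.0) -> float:
    """Distance needed to come to a full stop at a constant deceleration."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

def safe_target_speed(distance_to_pedestrian_m: float,
                      current_speed_mps: float,
                      decel_mps2: float = 6.0,
                      margin_m: float = 5.0) -> float:
    """Reduce speed if the stopping distance plus a margin exceeds the gap."""
    if required_braking_distance(current_speed_mps, decel_mps2) + margin_m > distance_to_pedestrian_m:
        # Highest speed from which the vehicle can still stop within the gap.
        usable_gap = max(distance_to_pedestrian_m - margin_m, 0.0)
        return (2.0 * decel_mps2 * usable_gap) ** 0.5
    return current_speed_mps
```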
The second video (Figure 3) presents a similar safety scenario, but under more challenging weather conditions. The scene takes place in heavy rain, which can adversely affect the perception capabilities of autonomous systems. However, our agent, equipped with robust perception components, maintains reliable performance even in these adverse conditions. This highlights the importance of developing autonomous driving systems that can operate safely across a wide range of environments and weather conditions.
Figure 3: Safety scenario – Challenging weather conditions
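Such weather variations are straightforward to reproduce in simulation. A brief example of switching the scenario to heavy rain with the CARLA Python API is shown below; the specific parameter values are illustrative, not the ones used in our test runs.

```python
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Use one of CARLA's built-in heavy-rain presets...
world.set_weather(carla.WeatherParameters.HardRainNoon)

# ...or define custom conditions to stress the perception stack further.
world.set_weather(carla.WeatherParameters(
    cloudiness=90.0,
    precipitation=90.0,
    precipitation_deposits=60.0,
    wetness=80.0,
    fog_density=20.0,
    sun_altitude_angle=45.0,
))
```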
Moving forward, we will continue to enhance the autonomous agent by integrating further technical innovations from the SAFEXPLAIN project. We have planned a series of tests and evaluations to assess both the functional and non-functional aspects of the system. The case study development roadmap includes several upcoming milestones that will bring us closer to our goal of enabling safe and trustworthy AI for safety-critical systems.