Written by: NAVINFO
The SAFEXPLAIN consortium is advancing the frontiers of explainable AI for autonomous driving, with a strong emphasis on safety, transparency, and reliability. This collaborative project delivers innovations that make AI-driven decisions easier to understand, validate, and trust.
At the heart of the system are advanced safety components like the Decision and Supervision Nodes. These continuously verify vehicle behavior, ensuring every decision aligns with DS3.1 functional safety protocols.
Anomaly detection is handled by a Variational Autoencoder (VAE), which enables the system to spot unusual or unexpected scenarios in real time. This adds a critical layer of protection in dynamic traffic situations.
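The consortium has not published the VAE's architecture, but the general pattern is to flag inputs the model reconstructs poorly. A minimal sketch of that reconstruction-error check, with `reconstruct` standing in for a trained VAE's encode/decode pass and an illustrative threshold:

```python
# Minimal sketch of VAE-style anomaly flagging via reconstruction error.
# The threshold value and feature vectors are illustrative only.

def reconstruction_error(x, x_hat):
    """Mean squared error between an input and its reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def is_anomalous(x, x_hat, threshold=0.1):
    """Flag the sample when its reconstruction error exceeds the threshold."""
    return reconstruction_error(x, x_hat) > threshold

# A familiar scenario reconstructs closely; an unexpected one does not.
nominal = is_anomalous([0.2, 0.4, 0.6], [0.21, 0.39, 0.61])  # False
unusual = is_anomalous([0.2, 0.4, 0.6], [0.9, 0.1, 0.8])     # True
```

In practice the threshold would be calibrated on nominal driving data so that routine scenes stay below it while out-of-distribution scenes exceed it.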
To illustrate the system’s ability to respond to typical urban hazards, Figure 1 shows a well-lit scenario where a pedestrian is detected, triggering emergency braking. Meanwhile, enhancements to the autonomous emergency braking (AEB) system—through Safety of the Intended Functionality (SOTIF) triggers—enable faster and more accurate reactions to sudden risks.
In adverse weather, the robustness of object detection remains crucial. Figure 2 presents a rainy scenario where the system correctly identifies a pedestrian and engages emergency braking.
The perception system fuses camera input and vehicle telemetry to detect hazards, while real-time overrides like braking and acceleration maintain stability in dynamic traffic environments.
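The exact fusion logic is internal to the project, but the idea of combining a camera detection with vehicle telemetry into a braking override can be sketched as follows. The field names, TTC limit, and obstacle classes here are assumptions for illustration, not the project's actual parameters:

```python
# Hedged sketch: fuse a camera detection with ego-vehicle telemetry
# into a time-to-collision (TTC) estimate and a brake-override decision.

def time_to_collision(distance_m, ego_speed_mps, obstacle_speed_mps=0.0):
    """TTC in seconds; infinite when the gap is not closing."""
    closing_speed = ego_speed_mps - obstacle_speed_mps
    return distance_m / closing_speed if closing_speed > 0 else float("inf")

def brake_override(detection, ego_speed_mps, ttc_limit_s=1.5):
    """Request emergency braking when a pedestrian is inside the TTC limit."""
    if detection["class"] != "pedestrian":
        return False
    ttc = time_to_collision(detection["distance_m"], ego_speed_mps)
    return ttc < ttc_limit_s

# Pedestrian 12 m ahead, ego at 10 m/s: TTC = 1.2 s, so braking is requested.
decision = brake_override({"class": "pedestrian", "distance_m": 12.0},
                          ego_speed_mps=10.0)  # True
```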
Figure 3 demonstrates a scenario where a vehicle is identified, but the system classifies it as low risk, requiring no immediate intervention.
Development and testing are performed using a ROS2 Humble-enabled platform integrated with the CARLA simulator. This virtual testing ground allows the team to simulate a wide range of traffic scenarios and edge cases. The next step will involve porting the system to an embedded NVIDIA Orin AGX platform, leading up to a live demonstration in September 2025.
Validation efforts use pre-recorded ROSBAG driving data to replay real-world situations and analyze system behavior. A user-friendly interface and detailed logs support full traceability of each AI-driven decision—ensuring transparency for both developers and reviewers.
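A traceability log of this kind typically serializes each decision with its inputs and outputs so it can be replayed and audited later. A minimal sketch using structured JSON entries; the field names are assumptions, not the project's actual log schema:

```python
import json
import time

# Illustrative decision-log entry for traceability: each AI-driven action
# is recorded together with the detection and anomaly state that caused it.

def log_decision(detection, action, anomaly_flag, timestamp=None):
    """Serialize one decision so developers and reviewers can audit it."""
    entry = {
        "timestamp": timestamp if timestamp is not None else time.time(),
        "detection": detection,
        "action": action,
        "anomaly": anomaly_flag,
    }
    return json.dumps(entry)

record = log_decision({"class": "pedestrian", "distance_m": 8.5},
                      action="emergency_brake", anomaly_flag=False)
```

Replaying pre-recorded data against such a log lets reviewers check that the same inputs always yield the same recorded decision.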
Validation Tooling: Overlay Interface and Foxglove Dashboard
To support functional safety validation and demonstration of DS3.1 compliance, two complementary interfaces have been developed:
Figure 5: Overlay Dashboard Interface
Overlay Dashboard Interface:
- Provides real-time visualization of detections, braking, and system states.
- Designed to convey critical information to a driver-like perspective, enhancing situational awareness.
- Visual elements include TTC bars, system health status (green/yellow/red), and anomaly flags.
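The overlay's actual thresholds are not published, but a TTC bar with green/yellow/red status can be sketched as a simple classification; the warning and critical limits below are hypothetical:

```python
# Hypothetical mapping from time-to-collision (seconds) to the overlay's
# green/yellow/red status colors; thresholds are illustrative only.

def ttc_status(ttc_s, warn_s=3.0, critical_s=1.5):
    """Classify a TTC value into one of the three overlay status colors."""
    if ttc_s < critical_s:
        return "red"
    if ttc_s < warn_s:
        return "yellow"
    return "green"

ttc_status(5.0)  # "green"
ttc_status(2.0)  # "yellow"
ttc_status(1.0)  # "red"
```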
Figure 6: Foxglove Studio Dashboard
Foxglove Studio Dashboard:
- Enables in-depth analysis of pre-recorded scenarios (MCAP format).
- Offers multi-panel visualization of ROS topics, including object detection, brake events, and diagnostic logs.
- Used by developers and auditors to inspect system performance at a granular level.
Figure 7 shows the DS3.1 demonstrator from a bird's-eye perspective of the road and the ego vehicle. In the video, the brake lights can be seen activating, and in an emergency the hazard warning lights switch on as well.
This update showcases the collective achievements of all SAFEXPLAIN partners for the automotive use case. More scenarios will be released as part of the project. The consortium remains committed to building an autonomous driving solution that is not only capable—but also transparent, auditable, and trusted by design.
See more demos here.
For more information on the automotive use case, visit https://safexplain.eu/automotive-case-study/.