
News

Final Automotive Demonstrator Results
The final phase of the automotive case study focused on building and validating a ROS2-based demonstrator that implements safe and explainable pedestrian emergency braking. Our architecture integrates AI perception modules (YOLOS-Tiny pedestrian detector, lane...

Trustworthy AI in Safety Critical Systems Event Signals End of SAFEXPLAIN Project but Results Live On
On 23 September 2025, members of the SAFEXPLAIN consortium joined forces with fellow European projects ULTIMATE and EdgeAI-Trust for the event “Trustworthy AI in Safety-Critical Systems: Overcoming Adoption Barriers.” The gathering brought together experts from...

Strong representation of SAFEXPLAIN at DSD-SEAA
SAFEXPLAIN partners shared key project results at the 28th Euromicro Conference on Digital System Design (DSD 2025). Two papers were accepted to the conference proceedings and presented on 10 and 11 September 2025 by Francisco J. Cazorla from the Barcelona...

SAFEXPLAIN: Outstanding scientific solutions and practical application
SAFEXPLAIN’s success stems from a combination of outstanding scientific results and the vision to bring them together to solve the fundamental industrial challenges of making AI-based systems trustworthy. The project's results and network of interested parties...

Safety for AI-Based Systems
As part of SAFEXPLAIN, Exida has contributed a methodology for the verification and validation (V&V) of AI-based components in safety-critical systems. The approach combines two standards, ISO 21448 (also known as SOTIF) and ISO 26262, to address...

PRESS RELEASE: SAFEXPLAIN Unveils Core Demo: A Step Further Toward Safe and Explainable AI in Critical Systems
Barcelona, 3 July 2025. The SAFEXPLAIN project has just publicly unveiled its Core Demo, offering a concrete look at how its open software platform can bring safe, explainable and certifiable AI to critical domains like space, automotive and rail. Showcased in a...

Core Demo Webinar: Making certifiable AI a reality for critical systems
Now available!
The project’s Core Demo is a small-scale, modular demonstrator that highlights the platform’s key technologies and showcases how AI/ML components can be safely integrated into critical systems.

Showing SAFEXPLAIN Results in Action at ASPIN 2025
The 23rd Workshop on Automotive Software & Systems, hosted by Automotive SPIN Italia on 29 May 2025, proved to be a very successful forum for sharing SAFEXPLAIN results. Carlo Donzella from exida development and Enrico Mezzetti from the Barcelona Supercomputing...

Tackling Uncertainty in AI for Safer Autonomous Systems
Within the SAFEXPLAIN project, members of the Research Institutes of Sweden (RISE) team have been evaluating and implementing components and architectures for making AI dependable when utilised within safety-critical autonomous systems. To contribute to dependability...

SAFEXPLAIN shares its safety-critical solutions with aerospace industry representatives
On 12 May 2025, the SAFEXPLAIN consortium presented its latest results to representatives of several aerospace and embedded system industries, including Airbus DS, BrainChip, the European Space Agency (ESA), Gaisler, and Klepsydra, showcasing major strides in making AI...