
News
SAFEXPLAIN partners shared key project results at the 28th Euromicro Conference on Digital System Design (DSD) 2025. Two papers were accepted to the conference proceedings and presented on 10 and 11 September 2025 by Francisco J. Cazorla from the Barcelona...

SAFEXPLAIN: Outstanding scientific solutions and practical application
SAFEXPLAIN’s success can be understood as a combination of outstanding scientific results and the vision to put them together to solve fundamental industrial challenges in making AI-based systems trustworthy. The project's results and network of interested parties...

Safety for AI-Based Systems
As part of SAFEXPLAIN, Exida has contributed a methodology related to a verification and validation (V&V) strategy of AI-based components in safety-critical systems. The approach combines the two standards ISO 21448 (also known as SOTIF) and ISO 26262 to address...

PRESS RELEASE: SAFEXPLAIN Unveils Core Demo: A Step Further Toward Safe and Explainable AI in Critical Systems
Barcelona, 3 July 2025. The SAFEXPLAIN project has just publicly unveiled its Core Demo, offering a concrete look at how its open software platform can bring safe, explainable and certifiable AI to critical domains like space, automotive and rail. Showcased in a...

Core Demo Webinar: Making certifiable AI a reality for critical systems
Now available!
The project’s Core Demo is a small-scale, modular demonstrator that highlights the platform’s key technologies and showcases how AI/ML components can be safely integrated into critical systems.

Showing SAFEXPLAIN Results in Action at ASPIN 2025
The 23rd Workshop on Automotive Software & Systems, hosted by Automotive SPIN Italia on 29 May 2025, proved to be a very successful forum for sharing SAFEXPLAIN results. Carlo Donzella from exida development and Enrico Mezzetti from the Barcelona Supercomputing...

Tackling Uncertainty in AI for Safer Autonomous Systems
Within the SAFEXPLAIN project, members of the Research Institutes of Sweden (RISE) team have been evaluating and implementing components and architectures for making AI dependable when utilised within safety-critical autonomous systems. To contribute to dependability...

SAFEXPLAIN shares its safety critical solutions with aerospace industry representatives
On 12 May 2025, the SAFEXPLAIN consortium presented its latest results to representatives of several aerospace and embedded system industries, including Airbus DS, BrainChip, the European Space Agency (ESA), Gaisler, and Klepsydra, showcasing major strides in making AI...

SAFEXPLAIN Update: Building Trustworthy AI for Safer Roads

Enhancing Railway Safety: Implementing Closed-Loop Validation with Unreal Engine 5 and ROS 2 Integration
For enhanced safety in AI-based systems in the railway domain, the SAFEXPLAIN project has been working to integrate cutting-edge simulation technologies with robust communication frameworks. Learn more about how we’re integrating Unreal Engine (UE) 5 with Robot Operating System 2 (ROS 2) to generate dynamic, interactive simulations that accurately replicate real-world railway scenarios.