


Safety for AI-Based Systems
As part of SAFEXPLAIN, Exida has contributed a methodology for the verification and validation (V&V) of AI-based components in safety-critical systems. The approach combines two standards, ISO 21448 (also known as SOTIF) and ISO 26262, to address...
PRESS RELEASE: SAFEXPLAIN Unveils Core Demo: A Step Further Toward Safe and Explainable AI in Critical Systems
Barcelona, 3 July 2025. The SAFEXPLAIN project has just publicly unveiled its Core Demo, offering a concrete look at how its open software platform can bring safe, explainable and certifiable AI to critical domains like space, automotive and rail. Showcased in a...
Core Demo Webinar: Making Certifiable AI a Reality for Critical Systems
Now available!
The project’s Core Demo is a small-scale, modular demonstrator that highlights the platform’s key technologies and showcases how AI/ML components can be safely integrated into critical systems.

Showing SAFEXPLAIN Results in Action at ASPIN 2025
The 23rd Workshop on Automotive Software & Systems, hosted by Automotive SPIN Italia on 29 May 2025, proved to be a very successful forum for sharing SAFEXPLAIN results. Carlo Donzella from exida development and Enrico Mezzetti from the Barcelona Supercomputing...