
SAFEXPLAIN: Outstanding scientific solutions and practical application
SAFEXPLAIN’s success can be understood as the combination of outstanding scientific results and the vision to bring them together to solve the fundamental industrial challenge of making AI-based systems trustworthy. The project’s results and network of interested parties...
Safety for AI-Based Systems
As part of SAFEXPLAIN, Exida has contributed a verification and validation (V&V) methodology for AI-based components in safety-critical systems. The approach combines two standards, ISO 21448 (also known as SOTIF) and ISO 26262, to address...
PRESS RELEASE: SAFEXPLAIN Unveils Core Demo: A Step Further Toward Safe and Explainable AI in Critical Systems
Barcelona, 03 July 2025. The SAFEXPLAIN project has publicly unveiled its Core Demo, offering a concrete look at how its open software platform can bring safe, explainable and certifiable AI to critical domains such as space, automotive and rail. Showcased in a...
Core Demo Webinar: Making certifiable AI a reality for critical systems
Now available!
The project’s Core Demo is a small-scale, modular demonstrator that highlights the platform’s key technologies and showcases how AI/ML components can be safely integrated into critical systems.