Trustworthy AI in Safety-Critical Systems Event Signals End of SAFEXPLAIN Project but Results Live On

Date: October 08, 2025

On 23 September 2025, members of the SAFEXPLAIN consortium joined forces with fellow European projects ULTIMATE and EdgeAI-Trust for the event “Trustworthy AI in Safety-Critical Systems: Overcoming Adoption Barriers.” The gathering brought together experts from academia, industry, and regulatory bodies to explore the key challenges and opportunities in deploying trustworthy AI across safety-critical domains such as automotive, rail, and space.

The event opened with a welcome by Irune Agirre, Dependability and Cybersecurity Methods Team Leader at IKERLAN, who introduced the project coordinators organizing the event and recalled their shared mission: to reconcile AI techniques with safety and ethical requirements, improving explainability, traceability and runtime assurance for critical embedded systems.

Project coordinators Jaume Abella (SAFEXPLAIN), Michel Barreteau (ULTIMATE) and Mohammed Abuteir (EdgeAI-Trust) briefly presented their projects before joining a panel discussion moderated by Agirre.

The panel reflected on the broader AI landscape, including rising regulatory pressure for trustworthy AI, the gap between certification practices and AI-based systems, and the challenges of trust, ethics, transparency and robustness in high-stakes applications.

The audience then broke into two tracks, one dedicated to SAFEXPLAIN and EdgeAI-Trust and another to ULTIMATE, allowing for a deeper look into each project’s results and tools.

EdgeAI-Trust representatives Francisco J. Cazorla (BSC) and Carlo Donzella (exida development) shared insights into the work carried out in the project.

As part of its morning track, ULTIMATE shared its End-to-end Trustworthy AI (including ethics) approach and the trustworthy AI activities carried out along the hybrid AI lifecycle, illustrated through a Satellite Use Case (see the agenda for speaker details).

In the afternoon session, ULTIMATE also presented its remaining use cases, namely the “Robotic workshop” and the “Industrial mobile manipulators” cases. The related trustworthy AI activities, covering the Design & Development, Algorithm Evaluation and Execution under Operational Conditions steps, also showcased innovative hybrid AI solutions addressing the accuracy, reliability, explainability, robustness and ethics criteria of trustworthy AI.

The SAFEXPLAIN afternoon session focused on demos of AI-based safety-critical systems, with presentations by consortium members (see the agenda for speaker details).

The session closed on a positive note, with the project coordinators offering a forward look at AI in safety-critical systems, including the need for explainability, ethics and human-in-the-loop options, as well as the new challenges and opportunities posed by GenAI.

While the “Trustworthy AI in Safety-Critical Systems” event marked the conclusion of the SAFEXPLAIN and ULTIMATE projects’ official timelines, it also underscored how the results of all three projects will live on through community building, adoption and continued technological development. With the momentum and visibility gained, SAFEXPLAIN and ULTIMATE have laid a strong foundation for the next generation of trustworthy AI in critical systems while demonstrating their complementary strengths.