
News

Enhancing Railway Safety: Implementing Closed-Loop Validation with Unreal Engine 5 and ROS 2 Integration
For enhanced safety in AI-based systems in the railway domain, the SAFEXPLAIN project has been working to integrate cutting-edge simulation technologies with robust communication frameworks. Learn more about how we’re integrating Unreal Engine (UE) 5 with Robot Operating System 2 (ROS 2) to generate dynamic, interactive simulations that accurately replicate real-world railway scenarios.

Case Studies Update: Integrating XAI, Safety Patterns and Platform Development
The SAFEXPLAIN project has reached an exciting point in its lifetime: integrating the outcomes from its different partners.
The work related to the case studies began with the preparation of AI algorithms, as well as the datasets required for training and for simulating the operational scenarios. In parallel, the case studies have been supported by the partners focused on explainable AI (XAI), safety patterns and platform development.

BoF workshop at Future Ready Solutions Event — Trustworthy AI: main innovations and future challenges
Project coordinator Jaume Abella from the Barcelona Supercomputing Center represented the project at the Birds of a Feather session, “TrustworthyAI Cluster: Main innovations and future challenges”, together with cluster siblings EVENFLOW, TALON and ULTIMATE.

BSC Webinar Demonstrates Interoperability of SAFEXPLAIN Platform Tech & Tools
The third webinar in the SAFEXPLAIN webinar series will share the innovative infrastructure behind the AI-FSM and XAI methodologies. Participants will gain insights into the integration of the proposed solutions and how they are designed to enhance the safety, portability and adaptability of AI systems.

Showcasing Project Results in 2 HiPEAC’25 Workshops
The 2025 HiPEAC conference successfully brought together more than 750 experts in computer architecture, programming models, compilers and operating systems for general-purpose, embedded and cyber-physical systems. This annual event is a premier forum for...

Safety patterns for AI-based systems
Through the SAFEXPLAIN project, Ikerlan has analyzed strategies and developed specific solutions to implement safety patterns in systems that incorporate AI components. Learn more about the 4 key safety mechanisms used in the reference safety architecture pattern.

Coming Soon! SAFEXPLAIN technologies converge in an open demo
As the project enters its 27th month, releases for most technologies are already available and are undergoing the final steps towards completing their integration. To demonstrate the SAFEXPLAIN approach, project partners are working towards creating an open source demo...

Developing Scenario Catalogues for SAFEXPLAIN Case Studies: the Railway Case
Part of Exida Development SRL’s work in the SAFEXPLAIN project includes developing a Catalogue of Scenarios and Test Cases for each case study. These scenarios are performed in either a real or simulated testing environment.

RISE Webinar Highlights XAI for Systems with Functional Safety Requirements
Dr Robert Lowe, Senior Researcher in AI and Driver Monitoring Systems at the Research Institutes of Sweden, discussed the integration of explainable AI (XAI) algorithms into the machine learning (ML) lifecycles of safety-critical systems, i.e., systems with...