SAFEXPLAIN: From Vision to Reality

AI Robustness & Safety

Explainable AI

Compliance & Standards

Safety-Critical Applications
THE CHALLENGE: SAFE AI-BASED CRITICAL SYSTEMS
- Today’s AI allows advanced functions to run on high-performance machines, but its “black-box” decision-making remains a challenge for automotive, rail, space and other safety-critical applications, where failure or malfunction may result in severe harm.
- Machine- and deep-learning solutions running on high-performance hardware enable true autonomy, but until they become explainable, traceable and verifiable, they cannot be trusted in safety-critical systems.
- Each sector enforces its own rigorous safety standards to ensure that the technology used is safe (space: ECSS; automotive: ISO 26262 / ISO 21448 / ISO 8800; rail: EN 50126/8), and AI must also meet these functional safety requirements.
MAKING CERTIFIABLE AI A REALITY
Our next-generation open software platform is designed to make AI explainable and to make systems that integrate AI compliant with safety standards. This technology bridges the gap between cutting-edge AI capabilities and the rigorous demands of safety-critical environments. By bringing together experts in AI robustness, explainable AI, functional safety and system design, and by testing their solutions in safety-critical applications in the space, automotive and rail domains, we are making sure we contribute to trustworthy and reliable AI.
Key activities:
SAFEXPLAIN is enabling the use of AI in safety-critical systems by closing the gap between AI capabilities and functional safety requirements.
See SAFEXPLAIN technology in action
CORE DEMO
The Core Demo is built on a flexible skeleton of replaceable building blocks for Inference, Supervision and Diagnostic components, which allows it to be adapted to different scenarios. Full domain-specific demos are available on the Technologies page.
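To illustrate the idea of replaceable building blocks, here is a minimal Python sketch. It is only an illustration under our own assumptions: the DemoComponent interface, the class names and the process() method are hypothetical, not the actual SAFEXPLAIN codebase.

# Illustrative sketch only: this interface and these class names are
# hypothetical, not the actual SAFEXPLAIN API.
from abc import ABC, abstractmethod
from typing import Any

class DemoComponent(ABC):
    """A replaceable building block in the demo skeleton."""

    @abstractmethod
    def process(self, data: Any) -> Any:
        """Consume upstream data and produce output for the next block."""

class InferenceComponent(DemoComponent):
    def process(self, data: Any) -> Any:
        # Run the AI model on incoming sensor data (stubbed here).
        return {"input": data, "detections": []}

class SupervisionComponent(DemoComponent):
    def process(self, data: Any) -> Any:
        # Check the inference output against a safety envelope (stubbed).
        data["within_safety_envelope"] = True
        return data

class DiagnosticComponent(DemoComponent):
    def process(self, data: Any) -> Any:
        # Record health information for later analysis (stubbed).
        data["diagnostics"] = {"status": "ok"}
        return data

def run_pipeline(components, sample):
    # Swapping any element of `components` adapts the demo to a new scenario.
    for component in components:
        sample = component.process(sample)
    return sample

if __name__ == "__main__":
    pipeline = [InferenceComponent(), SupervisionComponent(), DiagnosticComponent()]
    print(run_pipeline(pipeline, {"frame": "camera_frame_0"}))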
SPACE
Mission autonomy and AI to enable fully autonomous operations during space missions
Specific activities: Identify the target, estimate its pose, and monitor the agent's position to signal potential drifts, sensor faults, etc.
Use of AI: Decision ensemble
AUTOMOTIVE
Advanced methods and procedures to enable self-driving cars to accurately detect road users and predict their trajectories
Specific activities: Validate the system’s capacity to detect pedestrians, issue warnings, and perform emergency braking
Use of AI: Decision Function (mainly visualization-oriented)
Enhancing Railway Safety: Implementing Closed-Loop Validation with Unreal Engine 5 and ROS 2 Integration
To enhance the safety of AI-based systems in the railway domain, the SAFEXPLAIN project has been integrating cutting-edge simulation technologies with robust communication frameworks. Learn more about how we are integrating Unreal Engine 5 (UE5) with Robot Operating System 2 (ROS 2) to generate dynamic, interactive simulations that accurately replicate real-world railway scenarios.
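As a rough sketch of what closing the loop over ROS 2 can look like, the following Python node (using rclpy) subscribes to simulated sensor data, such as frames streamed out of a UE5 scene through a ROS 2 bridge, and publishes a command back to the simulator. The topic names, message type and decision logic are illustrative assumptions, not the project's actual interfaces.

# Minimal rclpy sketch of a closed-loop node. Topic names and the use of
# String messages are illustrative assumptions only.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class ClosedLoopNode(Node):
    def __init__(self):
        super().__init__('closed_loop_validator')
        # Subscribe to sensor data streamed out of the simulator.
        self.subscription = self.create_subscription(
            String, 'sim/sensor_data', self.on_sensor_data, 10)
        # Publish commands that the simulator applies to the train agent.
        self.publisher = self.create_publisher(String, 'sim/control_cmd', 10)

    def on_sensor_data(self, msg: String) -> None:
        # Placeholder decision logic: a real node would run the AI model
        # plus its supervision layer before commanding the simulator.
        command = 'brake' if 'obstacle' in msg.data else 'proceed'
        out = String()
        out.data = command
        self.publisher.publish(out)

def main():
    rclpy.init()
    node = ClosedLoopNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()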
Case Studies Update: Integrating XAI, Safety Patterns and Platform Development
The SAFEXPLAIN project has reached an exciting point in its lifecycle: the integration of the different partners' outcomes.
The work on the case studies began with the preparation of the AI algorithms, as well as the datasets required for training and for simulating the operational scenarios. In parallel, the case studies have been supported by the partners focused on explainable AI (XAI), safety patterns and platform development.
BoF workshop at Future Ready Solutions Event — Trustworthy AI: main innovations and future challenges
Project coordinator Jaume Abella from the Barcelona Supercomputing Center represented the project at the Birds of a Feather session "TrustworthyAI Cluster: Main innovations and future challenges", together with cluster siblings EVENFLOW, TALON and ULTIMATE.
Webinar: AI-FSM – Towards Functional Safety Management for Artificial Intelligence-Based Critical Systems
Javier Fernandez from partner IKERLAN will share the SAFEXPLAIN project's approach to integrating AI into Functional Safety Management in a safe, trustworthy and transparent way. In this 1.5-hour webinar hosted by HiPEAC, Javier will introduce an AI-FSM lifecycle that...
European Convergence Summit: Digital Booth at the ADR Exhibition
SAFEXPLAIN will have a digital booth as part of the ADR Digital Exhibition, co-located within the European Convergence Summit 2024. This digital booth will showcase the work conducted as part of the SAFEXPLAIN project, including videos, publications, and presentations...
TrustworthyAI Cluster Webinar hosted by ADRA-e
SAFEXPLAIN partner Enric Mezzetti from the Barcelona Supercomputing Center will join the ADRA-e hosted webinar "Trustworthy AI: Landscaping verifiable robustness and transparency" on 29 May 2024, 10:00-12:00. The TrustworthyAI Cluster, nine EU projects under call Horizon...








