SAFEXPLAIN: From Vision to Reality

AI Robustness & Safety

Explainable AI

Compliance & Standards

Safety-Critical Applications
THE CHALLENGE: SAFE AI-BASED CRITICAL SYSTEMS
- Today’s AI allows advanced functions to run on high-performance machines, but its “black-box” decision-making remains a challenge for automotive, rail, space and other safety-critical applications, where failure or malfunction may result in severe harm.
- Machine- and deep‑learning solutions running on high‑performance hardware enable true autonomy, but until they become explainable, traceable and verifiable, they can’t be trusted in safety-critical systems.
- Each sector enforces its own rigorous safety standards to ensure the technology used is safe (space: ECSS; automotive: ISO 26262, ISO 21448, ISO 8800; rail: EN 50126/8), and AI must also meet these functional safety requirements.
MAKING CERTIFIABLE AI A REALITY
Our next-generation open software platform is designed to make AI explainable and to make systems that integrate AI compliant with safety standards. This technology bridges the gap between cutting-edge AI capabilities and the rigorous demands of safety-critical environments. By bringing together experts in AI robustness, explainable AI, functional safety and system design, and testing their solutions in safety-critical applications in the space, automotive and rail domains, we are making sure we contribute to trustworthy and reliable AI.
Key activities:
SAFEXPLAIN is enabling the use of AI in safety-critical systems by closing the gap between AI capabilities and functional safety requirements.
See SAFEXPLAIN technology in action
CORE DEMO
The Core Demo is built on a flexible skeleton of replaceable building blocks for Inference, Supervision or Diagnostic components that allow it to be adapted to different scenarios. Full domain-specific demos will be available at our final event.
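As a minimal sketch only (the class and method names below are illustrative assumptions, not the actual Core Demo API), a skeleton of replaceable building blocks could be expressed as a pipeline of swappable components:

```python
# Hypothetical sketch of a "replaceable building blocks" skeleton; names are
# illustrative assumptions, not the SAFEXPLAIN Core Demo's actual interfaces.
from abc import ABC, abstractmethod
from typing import Any, Dict


class Component(ABC):
    """A swappable block (e.g. Inference, Supervision, Diagnostics)."""

    @abstractmethod
    def process(self, data: Dict[str, Any]) -> Dict[str, Any]:
        """Consume the shared data record and return it, possibly enriched."""


class DummyInference(Component):
    def process(self, data):
        # A real block would run a DL model here; we just add a placeholder result.
        data["detections"] = []
        return data


class DummySupervision(Component):
    def process(self, data):
        # A real block would check the inference output against safety criteria.
        data["safe"] = isinstance(data.get("detections"), list)
        return data


class CoreDemoPipeline:
    """Chains whatever blocks are plugged in, so scenarios can swap components."""

    def __init__(self, *components: Component):
        self.components = components

    def run(self, data: Dict[str, Any]) -> Dict[str, Any]:
        for component in self.components:
            data = component.process(data)
        return data


if __name__ == "__main__":
    pipeline = CoreDemoPipeline(DummyInference(), DummySupervision())
    print(pipeline.run({"frame": None}))
```

Because each scenario only swaps which components are passed to the pipeline, the same skeleton can serve the space, automotive and rail demos.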

SPACE
Mission autonomy and AI to enable fully autonomous operations during space missions
Specific activities: Identify the target, estimate its pose, and monitor the agent’s position to signal potential drifts, sensor faults, etc.
Use of AI: Decision ensemble
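As an illustration of what a decision ensemble can mean in this setting (the fusion rule and threshold below are assumptions, not the project’s actual design), several redundant estimators can vote and the ensemble can flag a result when they disagree too much:

```python
# Illustrative decision-ensemble sketch: median-fuse redundant pose estimates
# and distrust the result when the members diverge. Threshold is hypothetical.
from statistics import median, pstdev
from typing import Dict, List


def ensemble_decision(pose_estimates: List[float],
                      max_spread: float = 0.5) -> Dict[str, float]:
    """Fuse redundant pose estimates and flag the result if they diverge."""
    fused = median(pose_estimates)    # robust fusion of the member outputs
    spread = pstdev(pose_estimates)   # how much the members disagree
    return {"pose": fused, "trusted": spread <= max_spread, "spread": spread}


if __name__ == "__main__":
    print(ensemble_decision([1.02, 0.98, 1.05]))  # members agree -> trusted
    print(ensemble_decision([1.02, 0.98, 3.40]))  # one outlier -> not trusted
```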

AUTOMOTIVE
Advanced methods and procedures to enable self-driving cars to accurately detect road users and predict their trajectories
Specific activities: Validate the system’s capacity to detect pedestrians, issue warnings, and perform emergency braking
Use of AI: Decision Function (mainly visualization oriented)

RAIL
Review of the viability of a safety architectural pattern for the completely autonomous operation of trains
Specific activities: Validate the system’s capacity to detect obstacles, issue warnings, and perform service braking
Use of AI: Data-quality and temporal consistency scores
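As a minimal sketch of one plausible temporal-consistency score (the distance threshold and scoring rule are assumptions, not the project’s actual metric), consecutive obstacle-position estimates can be checked for physically implausible jumps:

```python
# Illustrative temporal-consistency score for a track of obstacle positions.
# The 2 m jump threshold and the scoring rule are hypothetical assumptions.
from math import hypot
from typing import List, Tuple


def temporal_consistency(track: List[Tuple[float, float]],
                         max_jump_m: float = 2.0) -> float:
    """Fraction of consecutive position pairs whose jump stays below max_jump_m."""
    if len(track) < 2:
        return 1.0  # nothing to contradict itself yet
    pairs = list(zip(track[:-1], track[1:]))
    consistent = sum(
        1 for (x0, y0), (x1, y1) in pairs if hypot(x1 - x0, y1 - y0) <= max_jump_m
    )
    return consistent / len(pairs)


if __name__ == "__main__":
    # A smooth track scores 1.0; a sudden 10 m jump lowers the score.
    print(temporal_consistency([(0.0, 0.0), (0.5, 0.0), (1.0, 0.1), (11.0, 0.1)]))
```

A low score would indicate that the perception output cannot be trusted for service braking decisions at that moment.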
BSC’s Francisco J. Cazorla Delivers Keynote at prestigious 36th ECRTS Conference
BSC research on real-time embedded systems takes center stage at a premier European conference. Francisco J. Cazorla from BSC delivers the keynote at the 36th ECRTS. The 36th Euromicro Conference on Real-Time Systems is a major international conference showcasing the latest...
IKERLAN Webinar Provides Key Insights into AI-Functional Safety Management
Dr Javier Fernández Muñoz from IKERLAN contextualizes the current state of AI and Functional Safety Management. On 4 July 2024, speakers from IKERLAN shared an in-depth look into the SAFEXPLAIN project developments in AI-Functional Safety Management (FSM) methodology...
SAFEXPLAIN joins EU AI Community with Digital Booth @ ADR Exhibition
The 2024 European Convergence Summit, hosted by the AI, Data and Robotics Association ecosystem, was held online on 19 June 2024 and brought together influential players from AI, Data and Robotics to discuss the impact of these technologies on society. The summit...
Presentation at Smart City Expo World Congress: Safe and Trustworthy AI in critical systems (automotive and rail)
On 9 November 2023, SAFEXPLAIN coordinator Jaume Abella from the Barcelona Supercomputing Center will present at the 2023 Smart City Expo World Congress in Barcelona, Spain. The presentation, Safe and Trustworthy AI in critical systems (automotive and rail)...
SAFEXPLAIN SILVER SPONSOR of the AI, Data and Robotics Forum
On 8-9 November 2023, the SAFEXPLAIN project participated in the 2023 AI, Data and Robotics Forum as a silver sponsor. BSC partners Francisco J. Cazorla and Axel Brando presented the SAFEXPLAIN project during the poster session and gave a brief talk to AI experts...
European Research Night – European project corner
More than 300 cities in 30 European countries celebrated European Research Night this year. This EU-funded initiative seeks to share research, innovation and results with a variety of audiences. SAFEXPLAIN participated in the Barcelona online edition of this event. As...