Introducing SAFEXPLAIN:
Safe and Explainable Critical Embedded Systems based on AI
Objectives
To improve the explainability and traceability of DL components
To provide clear safety patterns for the incremental adoption of DL software in Critical Autonomous AI-based Systems (CAIS)
To integrate the SAFEXPLAIN libraries with an industrial system-testing toolset
To create DL component architectures with quantifiable and controllable confidence that can identify when a prediction should not be released because the input falls outside the scope of applicability or raises security concerns (a minimal sketch of such gating follows this list)
To design, implement, or update selected representative DL software libraries according to safety patterns and safety lifecycle considerations, meeting specific performance requirements on relevant platforms
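As a minimal illustration of the confidence-gating objective above, the sketch below shows one way a DL component could withhold predictions. It assumes a classifier exposing softmax probabilities plus a separate out-of-distribution score; the thresholds, the names `gated_predict` and `GatedPrediction`, and the gating rules are hypothetical examples and not the API of the SAFEXPLAIN libraries.

```python
# Illustrative sketch only: thresholds, names, and gating rules are
# assumptions for this example, not SAFEXPLAIN's actual API.
from dataclasses import dataclass
from typing import Optional

import numpy as np

CONFIDENCE_THRESHOLD = 0.90  # hypothetical: minimum top-class probability to release
OOD_THRESHOLD = 0.50         # hypothetical: maximum out-of-distribution score to accept


@dataclass
class GatedPrediction:
    label: Optional[int]  # None means the prediction was withheld
    confidence: float
    reason: str


def gated_predict(probs: np.ndarray, ood_score: float) -> GatedPrediction:
    """Release a class prediction only when confidence is high enough and the
    input lies inside the model's scope of applicability."""
    label = int(np.argmax(probs))
    confidence = float(probs[label])
    if ood_score > OOD_THRESHOLD:
        # The input looks unlike the training data: outside the applicability scope.
        return GatedPrediction(None, confidence, "withheld: out of applicability scope")
    if confidence < CONFIDENCE_THRESHOLD:
        # The model is not confident enough for a safety-relevant decision.
        return GatedPrediction(None, confidence, "withheld: confidence below threshold")
    return GatedPrediction(label, confidence, "released")


print(gated_predict(np.array([0.05, 0.92, 0.03]), ood_score=0.20))  # released, label 1
```

The design point is that withholding a prediction becomes an explicit, traceable outcome rather than a silent failure, which is what makes confidence quantifiable and controllable downstream.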
Deep Learning (DL) techniques are key to most future advanced software functions in Critical Autonomous AI-based Systems (CAIS) in cars, trains and satellites. Hence, these industries depend on their ability to design, implement, qualify, and certify DL-based software products under bounded effort and cost.
Case studies
Railway: This case study examines the viability of a safety architectural pattern for fully autonomous train operation (Automatic Train Operation, ATO) built on intelligent Deep Learning (DL)-based solutions; a sketch of one such pattern follows the case studies.
Space: This case study applies state-of-the-art mission autonomy and artificial intelligence technologies to enable fully autonomous operations during space missions. These technologies are developed and validated on highly safety-critical scenarios.
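To make the railway case's safety-pattern idea concrete, here is a minimal doer/checker sketch in Python: the DL channel proposes a command and a simple, verifiable monitor bounds it. The function names, the placeholder model, and the braking-distance rule v = sqrt(2 * a * d) are illustrative assumptions, not the project's actual ATO architecture.

```python
# Illustrative doer/checker sketch only; names and the braking-distance rule
# are assumptions, not SAFEXPLAIN's actual ATO architecture.
import math


def dl_propose_speed(sensor_frame) -> float:
    """The 'doer': a stand-in for the DL channel that proposes a target speed (m/s)."""
    # Placeholder: a trained perception/planning model would run here.
    return 30.0


def safe_speed_bound(distance_to_obstacle_m: float, max_deceleration: float = 1.0) -> float:
    """The 'checker': maximum speed from which the train can stop within the
    known free distance, v = sqrt(2 * a * d). Simple enough to verify exhaustively."""
    return math.sqrt(2.0 * max_deceleration * distance_to_obstacle_m)


def supervised_speed(sensor_frame, distance_to_obstacle_m: float) -> float:
    """Combine both channels: the checker may only reduce the DL command,
    never raise it, so a DL fault cannot yield an unsafe speed."""
    return min(dl_propose_speed(sensor_frame), safe_speed_bound(distance_to_obstacle_m))


print(supervised_speed(sensor_frame=None, distance_to_obstacle_m=200.0))  # -> 20.0
```

Because the checker can only lower the commanded speed, the safety argument can rest on the simple, verifiable monitor rather than on the DL model itself.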
EV community at Innovex 24 welcomes presentation by SAFEXPLAIN
SAFEXPLAIN partner Carlo Donzella, from exida development, opened the EV session with a keynote on "Enabling the Future of EV with Trustworthy AI". June 6, 2024 marked an important opportunity for the SAFEXPLAIN project to share project results with key audiences from the...
Successful showcase of SAFEXPLAIN use cases at Trustworthy AI webinar
SAFEXPLAIN partner Enrico Mezzetti from the Barcelona Supercomputing Center joined 8 other Horizon Europe-funded projects under call HORIZON-CL4-2021-HUMAN-01-01 to present the project's work on Trustworthy AI and its implications for the project's use cases. The nine projects,...
A Tale of Machine Learning Process Models at Automotive SPIN Italia
SAFEXPLAIN partner Carlo Donzella, from exida development, presented at the Automotive SPIN Italia 22º Workshop on Automotive Software &...
European Research Night – European project corner
More than 300 cities in 30 European countries celebrated European Research Night this year. This EU-funded initiative seeks to share research, innovation and results with a variety of audiences. SAFEXPLAIN participated in the Barcelona online edition of this event. As...
TÜV Rheinland International Symposium 2023
[Image from the TÜV Rheinland International Symposium website]
The TÜV Rheinland International Symposium is a specialist event intended as a platform for intensive experience exchange for those involved in Functional Safety and Cybersecurity of different industrial...
EXIDA Automotive Symposium 2023
The 2023 Exida-hosted Automotive Symposium will be held from 18 to 20 October 2023 in the alpine town of Spitzingsee, Germany. This two-day event will encourage the exchange of information and contacts in the automotive industry.