
News

Celebrating Women and Girls in Science Day with advice for young scientists
We're celebrating the 9th anniversary of the #FEBRUARY11 Global Movement with a look at the women in science and technology in the project.
The SAFEXPLAIN project benefits from the participation of many women in science who are driving the project's success. See what advice they have for young scientists.

Integrating AI into Functional Safety Management
SAFEXPLAIN is developing an AI-Functional Safety Management (AI-FSM) methodology that guides the development process, maps the traditional lifecycle of safety-critical systems to the AI lifecycle, and addresses their interactions. AI-FSM extends widely adopted FSM methodologies that stem from functional safety standards to the specific needs of Deep Learning architecture specifications, data, learning, and inference management, as well as appropriate testing steps. The SAFEXPLAIN-developed AI-FSM considers recommendations from IEC 61508 [5], EASA [6], ISO/IEC 5460 [3], AMLAS [7] and ASPICE 4.0 [8], among others.
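As a purely illustrative sketch, the fragment below shows one hypothetical way a lifecycle mapping of this kind could be written down: classical safety lifecycle phases paired with the DL-specific activities named above (architecture specification, data, learning and inference management, testing). The phase names, fields and evidence items are assumptions for illustration only and are not taken from the project's methodology.

```python
# Hypothetical sketch of an AI-FSM-style phase mapping.
# All names below are illustrative assumptions, not SAFEXPLAIN deliverables.
from dataclasses import dataclass, field


@dataclass
class PhaseMapping:
    safety_phase: str                 # phase from a classical FSM lifecycle
    dl_activities: list[str]          # DL-specific activities mapped onto it
    evidence: list[str] = field(default_factory=list)  # artefacts expected for assessment


AI_FSM_MAPPING = [
    PhaseMapping("Requirements specification",
                 ["DL architecture specification"],
                 ["safety requirements allocated to the DL component"]),
    PhaseMapping("Design and development",
                 ["data management", "learning management"],
                 ["dataset documentation", "training and validation records"]),
    PhaseMapping("Integration and verification",
                 ["inference management", "DL-specific testing"],
                 ["test reports", "robustness and performance evidence"]),
]

if __name__ == "__main__":
    # Print the mapping as a quick traceability overview.
    for m in AI_FSM_MAPPING:
        print(f"{m.safety_phase} -> {', '.join(m.dl_activities)}")
```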

Certification bodies weigh-in on SAFEXPLAIN functional safety management methodologies integrating AI
SAFEXPLAIN partners from IKERLAN and the Barcelona Supercomputing Center met with TÜV Rheinland experts on 22 January 2024 to share the project's AI-Functional Safety Management (AI-FSM) methodology. The meeting gave the project a valuable opportunity to present its work to a key player in safety certification.

Mixed Critical Systems Workshop at HiPEAC 2024
Irune Agirre, from partner IKERLAN, discusses functional safety approaches for AI-based critical systems at HiPEAC 2024. On 19 January 2024, members of the SAFEXPLAIN consortium participated in the 12th Workshop on "MCS: Mixed Critical Systems – Safe and Secure...

Sustained performance and segregation through hardware-level support
Exploiting the computational power of complex hardware platforms is opening the door to more extensive and accurate Artificial Intelligence (AI) and Deep Learning (DL) solutions. Performance-hungry AI-based solutions are a common enabler of increasingly complex and...

SAFEXPLAIN consortium meets with industrial advisory board to ensure alignment with industry needs
In an important checkpoint for the SAFEXPLAIN project, the consortium met with an Industrial Advisory Board comprising eight influential industry actors on 24 November 2023. Just over one year into the project, this meeting sought to present the project's...

EU projects collaborate for Trustworthy AI Across Europe
Horizon Europe supports nine initiatives to boost solid and trustworthy AI across Europe. Nine projects funded under Horizon Europe call HORIZON-CL4-2021-HUMAN-01-01 will pave the way for the widespread acceptance of Artificial Intelligence (AI) across Europe...

EXIDA presents ASPICE MLE in context of SAFEXPLAIN
EXIDA partner, Carlo Donzella, presents an initial comparison of the ASPICE and SAFEXPLAIN models at the 2023 EXIDA Automotive Symposium. SAFEXPLAIN partner, exida, presented the SAFEXPLAIN project at the exida-hosted Automotive Symposium on 18 October 2023 in...

IKERLAN presents AI for Safety-Critical Systems @ 2023 TÜV Rheinland International Symposium
IKERLAN partner, Jon Perez-Cerrolazo, presents the AI and Safety survey prepared by the consortium at the 2023 TÜV Rheinland International Symposium. The 2023 TÜV Rheinland International Symposium was held in Boston, USA from 16-17 October. This two-day event brought...

Press Release: Introducing SAFEXPLAIN: Trustworthy AI through Deep Learning solutions that meet European Safety Standards for Industry
Barcelona, 18 October 2023—The newly released video from the EU-funded SAFEXPLAIN project explains how it tackles safety and explainability challenges associated with the use of deep learning (DL) solutions in critical autonomous systems, like cars, trains and satellites.