
News

RISE Webinar Highlights XAI for Systems with Functional Safety Requirements
Dr Robert Lowe, Senior Researcher in AI and Driver Monitoring Systems at the Research Institutes of Sweden (RISE), discussed the integration of explainable AI (XAI) algorithms into the machine learning (ML) lifecycle for safety-critical systems, i.e., systems with...

BSC receives visit from delegate from the Taiwanese Institute for Information Industry
Figure 1: Photo by Francisco J. Cazorla, BSC representative also attending this meeting

The SAFEXPLAIN project was thrilled to receive a visit from Stanley Wang, Director of the Digital Transformation Research Institute, part of the Institute for Information Industry...

Contributing to EU Sovereignty in AI, Data and Robotics at the ADRF24
SAFEXPLAIN participated as a Silver Sponsor of the 2024 AI, Data and Robotics Forum, which took place in Eindhoven, Netherlands from 4-5 November 2024. This two-day event gathered leading experts, innovators, policymakers and enthusiasts from the AI, Data and Robotics...

Consortium sets course for last year at Barcelona F2F
Members of the SAFEXPLAIN consortium met in Barcelona, Spain on 29-30 October 2024 to discuss the project's progress at the end of its second year. With one year to go, project partners used this in-person meeting to tie up loose ends and ensure that...

Second IAB Meeting Confirms SAFEXPLAIN Advancements at Start of Year 3
The SAFEXPLAIN project met with members of its industrial advisory board on 03 October 2024 to present project advancements at the beginning of the project’s third and final year. This meeting was important for ensuring the project’s research outcomes align with real-world industry needs.

RISE explains XAI for systems with Functional Safety Requirements
The SAFEXPLAIN project is analysing how deep learning (DL) can be made dependable, i.e., functionally assured in critical systems like cars, trains and satellites. Together with other consortium members, RISE has been working on establishing principles for ensuring that DL components, together with the required explainable AI support, comply with the guidelines set forth by AI-FSM and the safety pattern(s).

High interest in SAFEXPLAIN tech @ Gate4SPICE INTACS event
The SAFEXPLAIN keynote at the intacs® event “Optimal Performance of Modern Development: Automotive SPICE® Fusion with Intelligent Systems and Agile Frameworks”, hosted by SEITech Solutions GmbH as part of the Gate4SPICE initiative, was extremely well received by attendees. The...

SAFEXPLAIN deliverables now available!
Twelve deliverables reporting on the work undertaken by the project have been published in the results section of the website. The SAFEXPLAIN deliverables provide key details about the project and how it is progressing. The following deliverables have been created for...

SAFEXPLAIN takes part in 1st intacs® certified ML for Automotive SPICE® (pilot) training
SAFEXPLAIN partner exida development provided invaluable contributions to the two days of pilot training for the intacs® certified machine learning (ML) for Automotive SPICE® training.

Integrating the Railway Case Study into the Reference Safety Architecture Pattern
Within the SAFEXPLAIN project, partner Ikerlan leads the railway case study (CS), which is specifically centred on Automatic Train Operation (ATO). This article highlights how this CS is integrated into the reference safety architecture, building on the...