


Status at month 6: The SAFEXPLAIN consortium meets at IKERLAN
Figure 1: Members of the SAFEXPLAIN consortium meet at IKERLAN's headquarters

SAFEXPLAIN partners spent two days at IKERLAN presenting the work carried out in the first six months of the project. Although project partners have been meeting frequently online to...
Towards the Safety of Critical Embedded Systems Based on Artificial Intelligence: the Standardization Landscape
AI safety means ensuring that the operation of an AI system does not pose any unacceptable risks. It is essential to ensure that the AI system operates reliably, that unintended behavior is mitigated, and that it is possible to explain how the AI system arrived at a particular decision.

Press Release: SAFEXPLAIN facilitates the safety certification of critical autonomous AI-based systems for a more competitive EU industry
Barcelona, 13 February 2023. – The EU-funded SAFEXPLAIN (Safe and Explainable Critical Embedded Systems based on AI) project, launched on 1 October 2022, seeks to lay the foundation for Critical Autonomous AI-based Systems (CAIS) applications that are smarter and...