


SAFEXPLAIN Presentation on Safe and Explainable Critical Embedded Systems Based on AI at DATE Conference in Antwerp
SAFEXPLAIN presented its latest research on Safe and Explainable Critical Embedded Systems Based on AI at the Design, Automation and Test in Europe Conference (DATE) in Antwerp, Belgium. The DATE conference is a leading event for electronic system design and...
Status at month 6: The SAFEXPLAIN consortium meets at IKERLAN
SAFEXPLAIN partners spent two days at IKERLAN presenting the work carried out in the first six months of the project. Although project partners have been meeting frequently online to discuss the work in their own spheres, this Face-to-Face (F2F) meeting allowed them to share...
Towards the Safety of Critical Embedded Systems Based on Artificial Intelligence: the Standardization Landscape
AI safety means ensuring that the operation of an AI system does not entail any unacceptable risks. It is essential to ensure that the AI system operates reliably, that unintended behavior is mitigated, and that it is possible to explain how the AI system arrived at a particular decision.