ETSI publishes technical report on testing AI and ML systems
Machine learning (ML) and artificial intelligence (AI) are increasingly used in critical systems. In its technical report TR 103910, ETSI describes a transparent and effective approach to testing ML-based systems.

Before AI and ML models can be used in safety-critical systems, they must be reliable, secure, and trustworthy. ETSI TR 103910 describes how ML systems and various ML methods can be tested. In addition, the report provides metrics for the correctness, robustness, freedom from bias, and security of such systems, and it describes test methods, such as risk-based testing, that are particularly well suited to ML-based systems.
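To make the idea of a robustness metric concrete, the sketch below measures how stable a model's predictions are under small random input perturbations. This is an illustrative example only: the toy `predict` function and the perturbation-based score are assumptions chosen for this sketch, not definitions taken from ETSI TR 103910.

```python
import random

def predict(x):
    # Toy stand-in for an ML classifier: labels a point by a fixed
    # threshold on the sum of its features. (Hypothetical model --
    # any callable returning a label could be plugged in here.)
    return 1 if sum(x) > 1.0 else 0

def robustness_score(model, inputs, epsilon=0.01, n_perturbations=20, seed=0):
    """Fraction of inputs whose predicted label stays unchanged under
    small random perturbations of size at most epsilon per feature.
    One simple robustness metric in the spirit of the report, not its
    normative definition."""
    rng = random.Random(seed)
    stable = 0
    for x in inputs:
        base = model(x)
        if all(
            model([xi + rng.uniform(-epsilon, epsilon) for xi in x]) == base
            for _ in range(n_perturbations)
        ):
            stable += 1
    return stable / len(inputs)

# Two points far from the decision boundary and one very close to it:
inputs = [[0.2, 0.3], [0.9, 0.8], [0.5, 0.505]]
print(robustness_score(predict, inputs))
```

Inputs far from the decision boundary keep their label under perturbation, while the near-boundary point may flip, lowering the score. A fuller test suite would combine such a metric with correctness, bias, and security measurements, as the report recommends.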
The report was developed by Fraunhofer FOKUS together with the University of Göttingen and the German Bundesnetzagentur within the ETSI MTS (Methods for Testing and Specification) working group.