BSI PD ISO/IEC TR 5469:2024
$215.11
Artificial intelligence. Functional safety and AI systems
Published By | Publication Date | Number of Pages |
---|---|---|
BSI | 2024 | 84 |
PDF Catalog
PDF Pages | PDF Title |
---|---|
7 | Foreword |
8 | Introduction |
11 | 1 Scope; 2 Normative references; 3 Terms and definitions |
14 | 4 Abbreviated terms; 5 Overview of functional safety; 5.1 General |
15 | 5.2 Functional safety |
16 | 6 Use of AI technology in E/E/PE safety-related systems; 6.1 Problem description; 6.2 AI technology in E/E/PE safety-related systems |
20 | 7 AI technology elements and the three-stage realization principle; 7.1 Technology elements for AI model creation and execution |
22 | 7.2 The three-stage realization principle of an AI system; 7.3 Deriving acceptance criteria for the three stages of the realization principle |
23 | 8 Properties and related risk factors of AI systems; 8.1 Overview; 8.1.1 General; 8.1.2 Algorithms and models |
24 | 8.2 Level of automation and control |
25 | 8.3 Degree of transparency and explainability |
27 | 8.4 Issues related to environments; 8.4.1 Complexity of the environment and vague specifications; 8.4.2 Issues related to environmental changes |
28 | 8.4.3 Issues related to learning from environment |
29 | 8.5 Resilience to adversarial and intentional malicious inputs; 8.5.1 Overview; 8.5.2 General mitigations; 8.5.3 AI model attacks: adversarial machine learning |
30 | 8.6 AI hardware issues |
31 | 8.7 Maturity of the technology; 9 Verification and validation techniques; 9.1 Overview |
32 | 9.2 Problems related to verification and validation; 9.2.1 Non-existence of an a priori specification; 9.2.2 Non-separability of particular system behaviour; 9.2.3 Limitation of test coverage; 9.2.4 Non-predictable nature; 9.2.5 Drifts and long-term risk mitigations |
33 | 9.3 Possible solutions; 9.3.1 General; 9.3.2 Relationship between data distributions and HARA |
34 | 9.3.3 Data preparation and model-level validation and verification |
35 | 9.3.4 Choice of AI metrics; 9.3.5 System-level testing |
36 | 9.3.6 Mitigating techniques for data-size limitation; 9.3.7 Notes and additional resources; 9.4 Virtual and physical testing; 9.4.1 General; 9.4.2 Considerations on virtual testing |
38 | 9.4.3 Considerations on physical testing |
39 | 9.4.4 Evaluation of vulnerability to hardware random failures; 9.5 Monitoring and incident feedback; 9.6 A note on explainable AI |
40 | 10 Control and mitigation measures; 10.1 Overview; 10.2 AI subsystem architectural considerations; 10.2.1 Overview; 10.2.2 Detection mechanisms for switching |
43 | 10.2.3 Use of a supervision function with constraints to control the behaviour of a system to within safe limits |
44 | 10.2.4 Redundancy, ensemble concepts and diversity |
45 | 10.2.5 AI system design with statistical evaluation; 10.3 Increase the reliability of components containing AI technology; 10.3.1 Overview of AI component methods; 10.3.2 Use of robust learning |
46 | 10.3.3 Optimization and compression technologies |
47 | 10.3.4 Attention mechanisms; 10.3.5 Protection of the data and parameters |
48 | 11 Processes and methodologies; 11.1 General; 11.2 Relationship between AI life cycle and functional safety life cycle |
49 | 11.3 AI phases; 11.4 Documentation and functional safety artefacts; 11.5 Methodologies; 11.5.1 Overview; 11.5.2 Fault models |
50 | 11.5.3 PFMEA for offline training of AI technology |
51 | Annex A (informative) Applicability of IEC 61508-3 to AI technology elements |
64 | Annex B (informative) Examples of applying the three-stage realization principle |
69 | Annex C (informative) Possible process and useful technology for verification and validation |
72 | Annex D (informative) Mapping between ISO/IEC 5338 and the IEC 61508 series |
75 | Bibliography |