A requirements model for AI algorithms in functional safety-critical systems with an explainable self-enforcing network from a developer perspective  


Authors: Christina Klüver, Anneliesa Greisbach, Michael Kindermann, Bernd Püttmann

Affiliations: [1] CoBASC Research Group, Essen 45130, Germany; [2] Pepperl+Fuchs Group, Mannheim 68307, Germany; [3] TÜV Nord Group, Essen 45307, Germany

Published in: Security and Safety, 2024, Issue 4, pp. 61–85 (25 pages)

Abstract: The requirements for ensuring functional safety have always been very high. Modern safety-related systems are becoming increasingly complex, which also makes the safety integrity assessment more complex and time-consuming. This trend is further intensified by the fact that AI-based algorithms are finding their way into safety-related systems, or will do so in the future. However, existing and expected standards and regulations for the use of AI methods pose significant challenges for the development of embedded AI software in functional safety-related systems. Considering the essential requirements from various perspectives necessitates an intensive examination of the subject matter, especially as different standards have to be taken into account depending on the final application. There are also different targets for the "safe behavior" of a system depending on the target application. While stopping all movements of a machine in an industrial production plant is likely to be considered a "safe state", the same condition might not be considered safe in a flying aircraft, a driving car, or medical equipment such as a heart pacemaker. This overall complexity is operationalized in our approach in such a way that it is straightforward to monitor conformity with the requirements. To support safety integrity assessments and reduce the required effort, a Self-Enforcing Network (SEN) model is presented in which developers or safety experts can indicate the degree of fulfillment of certain requirements with possible impact on the safety integrity of a safety-related system. The result evaluated by the SEN model indicates the achievable safety integrity level of the assessed system, which is additionally provided by an explanatory component.
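The evaluation idea summarized in the abstract — an expert rates the degree of fulfillment of each requirement, and the network indicates the achievable safety integrity level together with an explanation — can be caricatured as a nearest-reference-profile lookup. The sketch below is purely illustrative and is not the paper's actual SEN model: the requirement names, the fulfillment ratings, and the per-SIL reference profiles are all invented for the example, and a real SEN applies its own learning rule and explanatory component.

```python
# Illustrative sketch only: nearest-reference evaluation in the spirit of the
# SEN-based assessment described in the abstract. All requirement names and
# reference profiles below are hypothetical.
from math import sqrt

# Degree of fulfillment (0.0-1.0) of some hypothetical requirements,
# as a developer or safety expert might rate them.
assessment = {"fault_detection": 0.9, "determinism": 0.7, "traceability": 0.8}

# Hypothetical reference profiles describing what each safety integrity
# level would demand on the same 0.0-1.0 scale.
references = {
    "SIL 1": {"fault_detection": 0.5, "determinism": 0.4, "traceability": 0.5},
    "SIL 2": {"fault_detection": 0.7, "determinism": 0.6, "traceability": 0.7},
    "SIL 3": {"fault_detection": 0.9, "determinism": 0.9, "traceability": 0.9},
}

def distance(a, b):
    """Euclidean distance between two rating vectors over shared keys."""
    return sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def closest_sil(assessment, references):
    """Return the reference level nearest to the assessment, plus the
    per-level distances, which can serve as a simple explanation."""
    dists = {sil: distance(assessment, ref) for sil, ref in references.items()}
    return min(dists, key=dists.get), dists

level, dists = closest_sil(assessment, references)
print(level, dists)
```

Returning the full distance dictionary alongside the chosen level mirrors, in a very rough way, the role of the explanatory component: the expert can see how close the assessed system is to each level, not just the final verdict.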

Keywords: Functional safety; Safety-critical systems; Requirements for AI methods; Explainable self-enforcing networks (SEN)

Classification codes: TP18 [Automation and Computer Technology — Control Theory and Control Engineering]; TP309 [Automation and Computer Technology — Control Science and Engineering]
