
Hazard Contribution Modes of Machine Learning Components

by Colin A Smith, Ewen Denney, Ganesh Pai
Publication Type
Conference Paper
Book Title
Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020)
Publication Date
Page Numbers
14–22
Volume
2560
Publisher Location
New York, New York, United States of America
Conference Name
SafeAI 2020 [The AAAI Workshop on Artificial Intelligence Safety]
Conference Location
New York City, New York, United States of America
Conference Sponsor
Centre for the Study of Existential Risk; Universitat Politecnica de Valencia; Future of Life Institute; CEA Tech List; University of Liverpool.
Conference Date
-

Amongst the essential steps towards developing and deploying safe systems with embedded learning-enabled components (LECs)—i.e., software components that use machine learning (ML)—are analyzing and understanding the contribution of the constituent LECs to safety, and assuring that those contributions have been appropriately managed. This paper addresses both steps by, first, introducing the notion of hazard contribution modes (HCMs)—a categorization of the ways in which the ML elements of LECs can contribute to hazardous system states—and, second, describing how argumentation patterns can capture the reasoning used to assure HCM mitigation. Our framework is generic in the sense that the categories of HCMs developed i) can admit different learning schemes, i.e., supervised, unsupervised, and reinforcement learning, and ii) are not dependent on the type of system in which the LECs are embedded, i.e., both cyber and cyber-physical systems. One of the goals of this work is to serve as a starting point for systematizing LEC safety analysis towards eventually automating it in a tool.