Seminar

Integrating machine learning into safety-critical systems

15/Jul/2024

Speaker:

Thomas G. Dietterich

Institution:

Oregon State University

Language:

EN

Type:

Hybrid

Description:

The impressive new capabilities of systems created using deep learning are encouraging engineers to apply these techniques in safety-critical applications such as medicine, aeronautics, and self-driving cars. This talk will discuss the ways that machine learning methodologies are changing to operate in safety-critical systems. These changes include (a) building high-fidelity simulators for the domain, (b) adversarial collection of training data to ensure coverage of the so-called Operational Design Domain (ODD) and, specifically, the hazardous regions within the ODD, (c) methods for verifying that the fitted models generalize well, and (d) methods for estimating the probability of harms in normal operation. There are many research challenges to achieving these goals. But we must do more, because traditional safety engineering addresses only the known hazards; we must design our systems to detect novel hazards as well. We adopt Leveson's view of safety as an ongoing hierarchical control problem in which controls are put in place to stabilize the system against disturbances. Disturbances include novel hazards but also management changes such as budget cuts, staff turnover, novel regulations, and so on. Traditionally, it has been the human operators and managers who have provided these stabilizing controls. Are there ways that AI methods, such as novelty detection, near-miss detection, and diagnosis and repair, can be applied to help the human organization manage these disturbances and maintain system safety?
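To make the "novelty detection" idea from the closing question concrete, the following is a minimal sketch (not taken from the talk; all data and names are illustrative): a Mahalanobis-distance detector flags inputs that fall far from the training distribution, with the alarm threshold calibrated on held-out in-distribution data to a fixed false-alarm budget.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical in-distribution training features (e.g., sensor embeddings).
    train = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))

    # Fit a simple Gaussian model of the training distribution.
    mu = train.mean(axis=0)
    cov = np.cov(train, rowvar=False) + 1e-6 * np.eye(train.shape[1])
    cov_inv = np.linalg.inv(cov)

    def mahalanobis(x):
        """Squared Mahalanobis distance to the training distribution."""
        d = x - mu
        return np.einsum("...i,ij,...j->...", d, cov_inv, d)

    # Calibrate the threshold on held-out in-distribution data so that
    # roughly 1% of normal inputs are flagged (the false-alarm budget).
    held_out = rng.normal(size=(500, 8))
    threshold = np.quantile(mahalanobis(held_out), 0.99)

    def is_novel(x):
        """Flag inputs that fall outside the modeled training distribution."""
        return mahalanobis(x) > threshold

    # A distribution-shifted input should be flagged far more often.
    shifted = rng.normal(loc=3.0, size=(500, 8))
    print("flagged normal:", is_novel(held_out).mean())   # ~0.01
    print("flagged shifted:", is_novel(shifted).mean())   # near 1.0

In a deployed system, the features would presumably come from the model's learned representation rather than synthetic noise, and flagged inputs would be routed to human review or a safe fallback rather than merely printed.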

Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. Dietterich is one of the pioneers of the field of machine learning and has authored more than 220 refereed publications and two books. He is the recipient of the 2024 IJCAI Award for Research Excellence. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability.

Dietterich has devoted many years of service to the research community and recently received the ACML Distinguished Contribution Award and the AAAI Distinguished Service Award. He is a former President of the Association for the Advancement of Artificial Intelligence and the founding president of the International Machine Learning Society. Other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal of Machine Learning Research, and program chair of AAAI 1990 and NIPS 2000. He currently oversees the Computer Science categories at arXiv.

Dietterich spent a sabbatical year at the IIIA in 1998-99.
