Confirmed Speakers & Talks


Ryan Yee, Staff Operational Safety Manager at Zoox

“Operational Safety - Extending lessons learned to service operations”

Ryan Yee is a Staff Operational Safety Manager at Zoox, where he is responsible for Operational and Field Safety. His team handles vehicle incident response, triages potential vehicle safety concerns that arise through testing and service operations, and ensures that Zoox maximizes its learning opportunities from continuous operations. Prior to Zoox, Ryan held safety roles at Bird and Lyft. Ryan holds a PhD in Nuclear Engineering from UC Berkeley.


Simon Burton, Chair of Systems Safety at the University of York

“AI-based autonomy for automotive - creating convincing safety arguments”

Professor Simon Burton, PhD, holds the Chair of Systems Safety at the University of York, UK, and is Business Director of the Centre for Assuring Autonomy. He graduated in computer science from the University of York in 1996, where he also earned his PhD in 2001 on the verification of safety-critical software. Professor Burton has worked in various safety-critical industries, including 20 years as a manager in automotive companies. His personal research interests include the safety assurance of complex, autonomous systems and the safety of machine learning. He has published numerous academic articles covering a wide variety of perspectives within these fields, such as the application of formal methods to software testing, the joint consideration of safety and security in system analysis and design, as well as regulatory considerations and addressing gaps in the moral and legal responsibility of artificial intelligence (AI)-based systems. He is also an active member of the program committees of international safety conferences and workshops. Professor Burton is convener of the ISO working group ISO TC22/SC32/WG14 “Road Vehicles—Safety and AI” and led the development of the standard ISO/PAS 8800 “Safety and AI”.


Philip Koopman, Associate Professor at Carnegie Mellon University

“Understanding Self-Driving Vehicle Safety”

Abstract: Removing the human driver from a road vehicle fundamentally changes the assurance argument approach needed for safety. We must start with a deeper inquiry into what we actually mean by acceptable safety. A simplistic "safer than a human driver" positive risk balance approach must be augmented with additional considerations regarding risk transfer, negligent driving behavior, standards conformance, absence of unreasonable fine-grained risk, ethics, and equity concerns. Once we have a more robust understanding of what it means to be acceptably safe, we will find that current standards frameworks and accompanying definitions are likely to be inadequate to assure safety due to implicit assumptions that are violated when the human driver is removed. We propose a new framework for relating risk to acceptable safety based on satisfying multiple stakeholder constraints rather than taking a risk optimization point of view. This broadens safety approaches in a way that is needed to deal with autonomous vehicles.
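To make the contrast concrete, below is a minimal Python sketch of the constraint-satisfaction idea described in the abstract. It is an illustration only, not Prof. Koopman’s actual framework; the constraint names, metrics, and threshold values are all hypothetical.

from dataclasses import dataclass

@dataclass
class CandidateSystem:
    name: str
    fatality_rate: float            # fatalities per 100M miles (hypothetical metric)
    risk_transfer: float            # share of residual risk shifted onto pedestrians/cyclists
    negligent_maneuver_rate: float  # negligent driving behaviors per 1M miles

# Each stakeholder contributes a hard constraint rather than a term in a
# single objective function; all thresholds below are invented for illustration.
CONSTRAINTS = {
    "positive risk balance": lambda s: s.fatality_rate < 1.10,
    "no risk transfer to vulnerable road users": lambda s: s.risk_transfer <= 0.20,
    "non-negligent driving behavior": lambda s: s.negligent_maneuver_rate <= 0.5,
}

def acceptably_safe(s: CandidateSystem) -> bool:
    """Acceptable only if EVERY stakeholder constraint is satisfied."""
    return all(check(s) for check in CONSTRAINTS.values())

a = CandidateSystem("A", fatality_rate=0.9, risk_transfer=0.35, negligent_maneuver_rate=0.2)
b = CandidateSystem("B", fatality_rate=1.0, risk_transfer=0.10, negligent_maneuver_rate=0.3)

# Pure risk optimization picks A (lowest aggregate fatality rate) ...
best_by_risk = min([a, b], key=lambda s: s.fatality_rate)
# ... but A violates the risk-transfer constraint, so only B is acceptable.
acceptable = [s.name for s in (a, b) if acceptably_safe(s)]
print(best_by_risk.name, acceptable)  # -> A ['B']

The point of the contrast is that the candidate with the lowest aggregate risk is not automatically acceptable if any single stakeholder constraint is violated.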

Prof. Philip Koopman is an internationally recognized expert on Autonomous Vehicle (AV) safety whose work in that area spans over 25 years. He is also actively involved with AV policy and standards as well as more general embedded system design and software quality. His pioneering research work includes software robustness testing and run-time monitoring of autonomous systems to identify how they break and how to fix them. He has extensive experience in software safety and software quality across numerous transportation, industrial, and defense application domains including conventional automotive software and hardware systems. He is a faculty member of the Carnegie Mellon University ECE department where he teaches software skills for mission-critical systems. In 2018 he was awarded the highly selective IEEE-SSIT Carl Barus Award for outstanding service in the public interest for his work in promoting automotive computer-based system safety. He originated the UL 4600 standard for autonomous system safety issued in 2020. In 2022 he was named to the National Safety Council's Mobility Safety Advisory Group. In 2023 he was named the International System Safety Society's Educator of the Year. He is the author of the books Understanding Checksums & Cyclic Redundancy Codes (2024), How Safe is Safe Enough: Measuring and Predicting Autonomous Vehicle Safety (2022), The UL 4600 Guidebook (2022), and Better Embedded System Software (2010).


Michael Woon, CEO & Founder at Retrospect

“Reasonable Safety in the World of Autonomous Systems”

Abstract: The autonomous vehicle industry is heading in the wrong direction when it comes to ensuring safety and defending itself from liability. Breaking the definition of safety down into manageable behaviors with probabilistic targets helps AV developers identify, document, and develop their defense of why they are not putting the public at risk. Furthermore, this same framework can be used to effectively manage internal risks and losses so that AV companies can anticipate losses due to poor performance and allocate resources appropriately. Academic and industry research is needed to help build consensus on an appropriate subset of careful behaviors and probabilistic targets.
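As a rough illustration of the decomposition the abstract proposes, the Python sketch below checks observed field data against per-behavior probabilistic targets. The behavior names, target rates, and mileage figures are hypothetical assumptions for illustration, not Retrospect’s actual framework or numbers.

# Maximum acceptable violations per million miles, per careful behavior
# (all names and targets invented for illustration).
BEHAVIOR_TARGETS = {
    "failure to yield to pedestrian": 0.1,
    "unsafe following distance":      2.0,
    "hard braking without cause":     5.0,
}

def assess(observed_counts: dict[str, int], miles: float) -> dict[str, bool]:
    """Return, per behavior, whether the observed rate meets its target."""
    millions = miles / 1_000_000
    return {
        behavior: observed_counts.get(behavior, 0) / millions <= target
        for behavior, target in BEHAVIOR_TARGETS.items()
    }

# Example: 4.2M miles of service data (hypothetical counts)
report = assess(
    {"failure to yield to pedestrian": 1,
     "unsafe following distance": 6,
     "hard braking without cause": 30},
    miles=4_200_000,
)
for behavior, ok in report.items():
    print(f"{behavior}: {'meets target' if ok else 'OVER TARGET'}")

Documenting which targets are met, and by what margin, is what turns a vague claim of “safety” into evidence a developer can use to manage internal risk and support a liability defense.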

Michael Woon is a co-founder of Retrospect, an autonomous vehicle safety validation company in Ann Arbor, Michigan. Since its founding six years ago, Michael has led Retrospect’s core autonomous safety solutions, consulted for leading autonomous vehicle companies on safety frameworks, and contributed to developing AV safety standards and policies. Prior to founding Retrospect, Michael consulted at one of North America’s leading functional safety companies, kVA (now part of UL Solutions), and before that he was an algorithm development engineer in GM’s powertrain group. He holds a Mechanical Engineering degree from Michigan Technological University and a Master’s in Mechanical Engineering from the University of Michigan.

Michael has spoken at several industry conferences regarding Retrospect’s work, including AutoSens Detroit (2022), IEEE ICCVE 2022, AutoSens Brussels (2020), IQPC ISO 26262 (2019), SAE WCX (2019), and IQPC SOTIF and Testing ADAS & Self Driving Cars (2019).

Michael has also been invited to present at ITU-T FG-AI4AD (2020) and the EDCC DREAMS Workshop (2020), has moderated roundtables at IQPC ISO 26262 (2019), and has led workshops at IQPC Testing ADAS & Self Driving Cars (2019).


Leila Meshkat, Associate Division Chief at NASA Goddard Space Flight Center

“A Framework for Safety of Autonomous Systems in Space”

Abstract: The rapid advancement of Artificial Intelligence and the acceleration of new technology infusion in existing applications are undeniable trends. While these technologies offer many benefits and are paradigm shifting, their rapid rate of adoption poses risks that should be acknowledged and managed. Space exploration is becoming increasingly sophisticated due to advances in technology. Autonomous spacecraft have been exploring deep space and even interstellar space for decades. As their level of autonomy increases, so do the decision-making abilities of the spacecraft. These decisions pertain to fault management, autonomous navigation and path planning, and science observations and activity planning. Furthermore, humans are planning to return to the Moon and then travel onwards to Mars in the foreseeable future. As humans inhabit the solar system, they will benefit from autonomous systems that work alongside them to accomplish tasks. There is still much to be learned about operating in outer-world environments; therefore, autonomous systems should be equipped with the ability to make safe decisions under uncertainty. NASA defines safety as the prevention of injury, illness, loss of life, property damage, and environmental harm arising from its activities. In this talk, we present a classification of autonomous spacecraft functions, the type of autonomy applicable to each function, and the uncertainties associated with each function and type of autonomy that may lead to unsafe states, and we ultimately suggest methods to incorporate NASA’s safety principles within the design and implementation of autonomous systems.

Leila Meshkat is a seasoned systems engineer, researcher, technologist, and teacher with more than twenty years of experience in cutting-edge, multi-disciplinary engineering, research, and technology development. She is currently the Associate Chief of the Assurance Systems Division at NASA Goddard Space Flight Center (GSFC), where she is actively involved in the digital transformation and technological advancement of existing processes. Prior to joining GSFC, Leila was a Senior Engineer and the Deputy AMMOS Technologist at the Jet Propulsion Laboratory (JPL) and a Lecturer in the University of Southern California (USC) Astronautics Department.

At JPL, which she joined in 2002, she held systems and software systems engineering roles across multiple phases of space mission development and operations and multiple flight projects (Mars Curiosity Rover, Europa Orbiter, Mars Perseverance Rover, Juno), as well as technology development projects for the Deep Space Network (Multiple Uplinks Per Antenna). Dr. Meshkat supported program-level decision making by building reliability models for the Mars orbiters (Mars Program) and conducting systems analysis studies (Multi-Mission Ground Systems and Services Program). She held lead systems engineering roles (PREFIRE proposal, JSC-based FPP re-engineering project) and was the Principal Investigator on multiple research and development efforts in which she developed and infused new technologies into existing teams and processes (TeamX, DAWN, Juno, MRO). Leila was a postdoctoral researcher at the USC Information Sciences Institute (ISI) before joining JPL and holds a Ph.D. in Systems Engineering from the University of Virginia, an M.S. in Operations Research from The George Washington University, and a B.S. in Applied Mathematics from Sharif University of Technology.


Ronald Boring, Human Factors Manager at Idaho National Laboratory

“A Testbed for Levels of Automation”

Abstract: In this talk, I will briefly overview the state of new nuclear power plants. With societal demand for electricity growing, driven in part by data centers, nuclear energy is widely seen as a solution for supplying large amounts of power. One of the main cost factors for new nuclear plants is staffing, and automation is the key to operating new reactors safely with fewer people. Because the current fleet of U.S. nuclear power plants features largely analog instrumentation and controls, the nuclear industry must confront the simultaneous pull toward digitalization and automation. This leapfrogging of technologies requires empirical work to ensure that levels of safety are maintained. Given this problem space, I will close my talk with a discussion of how we are combining features of our simulator testbed with virtual operator modeling to create an automation testbed.

Ronald L. Boring, Ph.D., is a Distinguished Scientist and Department Manager at Idaho National Laboratory, specializing in human factors, human-computer interaction, and human performance. He previously worked as a human reliability researcher at Sandia National Laboratories, a guest researcher at the Halden Reactor Project, a usability engineer for Microsoft Corporation and Expedia Corporation, and a guest researcher in human-computer interaction at the National Research Council of Canada. Dr. Boring has a BA in Psychology and German from the University of Montana, an MA in Human Factors and Experimental Psychology from New Mexico State University, and a PhD in Cognitive Science from Carleton University. He was a Fulbright Academic Scholar to the University of Heidelberg, Germany. He has published over 200 research articles in a wide variety of human reliability, human factors, and human-computer interaction forums. He has served on the organizing committees of international conferences held by the Human Factors and Ergonomics Society, Applied Human Factors and Ergonomics, IEEE, and the Association for Computing Machinery.


Mohammad Pourgol, Associate Professor at the University of Maryland & Director of Design for Safety and Reliability at Schneider Electric

"Advancing Safety and Risk Analysis for Autonomous Vehicles:
Insights and Outcomes from the UMD-ASME Workshop”

Abstract: In this presentation, I will share the key achievements and outcomes of the UMD-CRR and ASME-SERAD joint workshop on "Risk Analysis for Autonomous Vehicles: Issues and Future Directions," held on April 26, 2019. The workshop brought together experts from academia, industry, and government to address the critical safety, risk, and reliability challenges facing autonomous vehicle technology. Key topics included the development of new safety case studies, the importance of modern safety and risk-based analysis methods, and the need for clear top-down safety requirements. The workshop also highlighted the necessity of avoiding outdated safety analysis methods and emphasized the role of human behavior in the safety of fully autonomous vehicles. I will present the collaborative efforts and recommendations from the workshop, which aim to enhance safety standards, regulatory frameworks, and industry practices. The presentation offers valuable insights into the state of autonomous vehicle safety and proposes future directions for research and policy development to ensure the safe integration of autonomous vehicles into our transportation systems. The workshop was followed by a congressional visit to Capitol Hill to brief members of Congress on the status of safety technologies for autonomous vehicles.

Dr. Mohammad Pourgol is the Director of Design for Safety and Reliability at Schneider Electric and an adjunct Associate Professor of Mechanical Engineering at the University of Maryland. Previously, he was an Associate Professor of Reliability Engineering at Sahand University of Technology (SUT). With over 20 years of experience, Dr. Pourgol’s career encompasses industrial practice, research, and teaching in safety and reliability engineering at organizations including Schneider Electric, Teradyne, Johnson Controls, FM Global, Daikin Comfort, and the Massachusetts Institute of Technology (MIT). Dr. Pourgol is an IEEE Senior Member and has been elected an ASQ Fellow and an ASME Fellow. He served as Chair of the ASME Safety Engineering and Risk/Reliability Analysis Division (SERAD) from 2017 to 2022. Additionally, he is a registered Professional Engineer (PE) in the State of Massachusetts and holds multiple certifications, including Certified Reliability Engineer (ASQ CRE), Six Sigma Black Belt (CSSBB), and Manager of Quality/Organizational Excellence (ASQ CMQ/OE). An accomplished researcher, Dr. Pourgol has authored over 160 papers in archival journals and peer-reviewed conferences. He has also been an invited keynote speaker at numerous conferences and webinars and has filed one US patent. Currently, he serves as an Associate Editor for the ASME-ASCE Journal of Risk and Uncertainty in Engineering Systems, overseeing both Part A: Civil Engineering and Part B: Mechanical Engineering. His contributions to the field have been recognized with several prestigious awards.



Previous IWASS Editions