SARAS is an interdisciplinary organization focused on the safety and reliability of autonomous systems. We aim to help all stakeholders design, build, operate, and certify safe and reliable autonomous systems, and we work with governments, their agencies, manufacturers, and fellow researchers to create frameworks under which reliable and safe autonomous systems are designed and operated.

The incorporation of autonomous systems into everyday life will, from the perspective of safety and reliability, be either painful, planned, or lucky.

We aim to help it be planned.


Our vision is necessarily futuristic. We see a world where many of the tasks and activities we undertake today are more effectively and efficiently undertaken by autonomous systems. We see global productivity increasing as a result. We see an environment where emerging ideas about how autonomy can help society more broadly quickly become reality.

Our vision is that the decision about whether an autonomous system is safe and reliable is not a barrier to its successful implementation.


Machines Learning to be Reliable

Machine learning is not a new concept: machines are very good at quickly working through data and can fairly easily identify patterns. If a machine identifies a pattern in failure data, it may have found a causal relationship; that is, it may have identified the cause of a particular form of failure. But in the field of reliability engineering we already have many models of failure mechanisms, and these are better than mere "patterns" because they are grounded in science. Can we combine the benefits of machine learning with our "human" understanding of why things fail? Can we motivate a machine to work out how to better 'operate itself' and so become more reliable? These are questions we are attempting to answer.
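As a minimal illustration of how learning from failure data can connect to a science-based failure model (not a description of SARAS's actual methods), the sketch below estimates the shape parameter of a Weibull distribution directly from failure times. In reliability engineering the shape parameter carries physical meaning: a value above 1 suggests a wear-out mechanism, below 1 suggests infant mortality. All numbers here are synthetic.

```python
import math
import random

def weibull_shape_from_data(failure_times):
    """Estimate the Weibull shape parameter (beta) by median-rank
    regression: a least-squares fit of ln(-ln(1 - F)) against ln(t).
    beta > 1 indicates wear-out; beta < 1 indicates infant mortality."""
    times = sorted(failure_times)
    n = len(times)
    xs, ys = [], []
    for i, t in enumerate(times, start=1):
        f = (i - 0.3) / (n + 0.4)  # Bernard's median-rank approximation
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1.0 - f)))
    # Ordinary least-squares slope is the shape-parameter estimate.
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope

# Synthetic wear-out data: Weibull with shape 2.5, scale 1000 hours.
random.seed(0)
samples = [1000.0 * (-math.log(random.random())) ** (1 / 2.5)
           for _ in range(200)]
beta = weibull_shape_from_data(samples)
```

A purely pattern-hunting learner could find the same trend, but the Weibull fit ties it to an interpretable failure mechanism, which is the kind of complementarity the paragraph above describes.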

Read More →

Autonomous Systems For Space Exploration

Resilience is the ability of a system, across an open range of adverse scenarios, to maintain normal operating conditions or to recover from degraded or failed states so that it can provide the anticipated functions or services and achieve mission success. The key objective of the project is to develop a model-based resilience engineering methodology and toolkit that closes the gap in resilience capabilities between autonomous and human-operated systems. There are many examples of autonomous or semi-autonomous systems in space applications, such as the Mars Science Laboratory Curiosity Rover. The technical approach envisioned is based on the theory of Hybrid Causal Logic (HCL) and the Hybrid Causal Logic Analyzer (HCLA) software platform developed at the institute.
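To make the recovery aspect of that resilience definition concrete, here is a toy discrete-time Markov model, not the HCL/HCLA approach itself, in which an autonomous system can be nominal, degraded, or failed, and onboard recovery can return it from degraded to nominal. The transition probabilities are invented for illustration only.

```python
# States: 0 = nominal, 1 = degraded, 2 = failed (absorbing).
# Hypothetical per-step transition probabilities for an autonomous system.
P = [
    [0.97, 0.02, 0.01],  # nominal: mostly stays nominal
    [0.60, 0.30, 0.10],  # degraded: onboard recovery succeeds 60% of the time
    [0.00, 0.00, 1.00],  # failed: absorbing, no recovery
]

def mission_success_probability(steps, p0=(1.0, 0.0, 0.0)):
    """Probability the system is NOT in the failed state after `steps`
    transitions, starting from the state distribution p0."""
    p = list(p0)
    for _ in range(steps):
        p = [sum(p[i] * P[i][j] for i in range(3)) for j in range(3)]
    return 1.0 - p[2]
```

Raising the recovery probability in the degraded row directly raises mission success over long horizons, which is one way to see "closing the resilience gap" as a quantitative modeling problem.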

Read More →

Autonomous Systems Control Software

How do we assure that an autonomous system does what it is supposed to do? Autonomous systems are controlled by software, which is expected to always make the "right" decisions. Probabilistic risk assessment is one way to assess and verify that a system is safe enough to operate. Software, however, behaves unlike physical system components, so its assessment requires adapted methods. The aim of the ongoing research is to develop a method for assessing the impact of software control systems on the operational risk level of autonomous systems.
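As a minimal sketch of how a software contribution can enter a probabilistic risk assessment (the structure and every probability below are hypothetical, not results from this research), consider a two-cut-set fault tree in which a hazardous outcome occurs either from bad sensor data, or from a wrong software decision that an independent fallback fails to catch:

```python
# Hypothetical per-demand probabilities (illustrative values only).
p_sensor = 1e-4    # sensor delivers bad data
p_software = 1e-3  # control software makes a wrong decision on good data
p_fallback = 1e-2  # independent fallback fails to catch the wrong decision

# Cut set 2 is an AND-gate: both the software and the fallback must fail.
p_cutset2 = p_software * p_fallback

# Top event is an OR-gate over the two independent cut sets.
p_top = 1 - (1 - p_sensor) * (1 - p_cutset2)
```

Assigning a defensible number to `p_software` is exactly where software differs from physical components: it has no wear-out physics, and its "failures" are systematic design faults triggered by particular inputs, which is why adapted assessment methods are needed.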

Read More →

Humans in the Loop

Several challenges have been identified in human-machine interaction with autonomous systems: i) failures of an autonomous machine may be especially challenging for the operator compared to failures of traditional, non-autonomous systems; ii) trust is significantly reduced following a system failure; and iii) anxiety and frustration can exceed those that result from working with merely fallible machines, deriving in part from the greater complexity and unpredictability of machine intelligence. These challenges affect the strategies and solutions a crew may use to run a mission safely, as well as the mechanisms for recovery or fallback. Ongoing research focuses on using the Hybrid Causal Logic methodology to model human-technology interaction in autonomous systems, taking these particularities into account.