The Safety of the Intended Functionality (SOTIF) process defines and conveys how Advanced Driver Assistance Systems (ADAS) should be verified and validated as functionally safe. These definitions are organized into scenarios. Anything that has ever happened, or might happen, can be defined in a SOTIF scenario. But first, to better understand what SOTIF scenarios are and how they aid the overall process, let’s review the basics.
As we discussed in the previous two articles in this series, Why SOTIF Is Key for Safety in Autonomous Driving and Key Pieces of the SOTIF Puzzle, today’s modern vehicles are highly complex mechatronic machines that tightly interweave mechanical systems, software, computational power, sensing, data and bandwidth capacity, and physical actuation, all implemented by electrical and electronic systems. These systems enable the mechatronic vehicle, as a unified whole, to make decisions and take actions based on what it perceives and what it has been designed to do.
The design of these systems is heavily influenced by concerns for safety, and ADAS are the product of this design process. They are the hardware, software, and communication systems designed to increase safety and reduce risk by using the human-machine interface to aid the human driving the vehicle.
SOTIF helps mitigate risks and hazards that may come to light when driving conditions exceed the limits of the technology for one or more of the system components. It also addresses human considerations, such as foreseeable system misuse or confusion about how the human should be operating the vehicle.
Both ADAS and SOTIF are defined and managed through adherence to international standards, and they work together to address both the intended functionality of the system and any unintended functionality that may arise. The scope of SOTIF is defined, tested, and refined through deliberate intent shaped by the risks that the engineers are trying to mitigate. Therefore, the risks ultimately drive the work of defining what the safe state is for a given system and scenario. The process of clearly identifying, quantifying, and categorizing those risks is what we will examine next.
What are the SOTIF Scenarios?
Before any system design work can be initiated, one must first define the risks for each scenario that the system may find itself in. After all, the purpose of all this effort is to minimize risk. So, the risks that you are trying to mitigate must be quantified before systems can be designed and built to address them. Once ADAS are developed that have the intelligence to properly address all of the SOTIF considerations, the systems will have achieved situational awareness.
For our purposes, a scenario is defined as a detailed plan for a projected set of conditions and sequence of events. When a real or potential risk is identified, it is categorized as one of four SOTIF scenario types. The classification indicates the nature of the risk within that scenario and provides a starting point for dealing with it.
Each SOTIF scenario has two either/or variables that address fundamental questions. First: is the scenario inherently safe, or is it inherently unsafe? And within either of those two answers lies another binary parameter: is the risk known, or is it unknown? The result is four categories of SOTIF scenarios for each relevant use case: known safe, known unsafe, unknown safe, and unknown unsafe.
Because each type of risk carries its own idiosyncrasies, each category is prioritized and dealt with in a slightly different way.
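The two binary variables map cleanly onto the four categories. As a minimal sketch (the type and function names here are our own illustration, not terms from the standard):

```python
from enum import Enum

class SotifCategory(Enum):
    KNOWN_SAFE = "known safe"
    KNOWN_UNSAFE = "known unsafe"
    UNKNOWN_SAFE = "unknown safe"
    UNKNOWN_UNSAFE = "unknown unsafe"

def classify(known: bool, safe: bool) -> SotifCategory:
    """Map the two binary scenario attributes onto the four SOTIF areas."""
    if known:
        return SotifCategory.KNOWN_SAFE if safe else SotifCategory.KNOWN_UNSAFE
    return SotifCategory.UNKNOWN_SAFE if safe else SotifCategory.UNKNOWN_UNSAFE

# Example: a hazard we have identified but not yet mitigated
print(classify(known=True, safe=False).value)  # known unsafe
```

The point of the sketch is simply that classification is mechanical once the two questions are answered; the hard engineering work lies in answering them.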
The first step is to try to cover the known, for the most fundamental of reasons: we need to do a good job with what we already know. The known things fall under the categories of known safe and known unsafe. Due diligence must be performed in trying to cover these considerations the best we can.
Known Safe: Safe, conventional operation. The normal intended functionality can be deployed and verified.
Known Unsafe: The vehicle can be operated with system- or sensor-level restrictions. Redundancy can be deployed at the system level, and manual drive mode alerts can be carefully utilized. The independent system behavior (with its intended functionality), random trigger events associated with the road scenario, driver negligence regarding system alerts, and possible driver misuse of system functions can all be simulated and validated at the system level. Depending on the functionality of the intelligent system, an algorithm could also be deployed and the end-to-end response time verified to identify performance limitations and guide possible improvements.
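The response-time verification mentioned above can be sketched as a simple timing harness. This is an illustration only: the 200 ms budget, the function names, and the stand-in system are our assumptions, not values or interfaces from any standard.

```python
import time

RESPONSE_BUDGET_S = 0.2  # illustrative end-to-end budget, not a standard value

def end_to_end_response(stimulus, system_under_test):
    """Time one perceive-decide-actuate cycle of the system under test."""
    start = time.perf_counter()
    system_under_test(stimulus)
    return time.perf_counter() - start

def verify_response_time(stimuli, system_under_test):
    """Return the stimuli whose end-to-end response exceeds the budget."""
    violations = []
    for s in stimuli:
        elapsed = end_to_end_response(s, system_under_test)
        if elapsed > RESPONSE_BUDGET_S:
            violations.append((s, elapsed))
    return violations

# A trivial stand-in system that responds effectively instantly
violations = verify_response_time(range(5), lambda s: None)
print(len(violations))  # 0
```

In practice the system under test would be exercised through a HIL rig rather than a Python callable, but the pass/fail logic against a latency budget is the same.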
In the known part of the process, we try to identify weaknesses within the system to find vulnerabilities. That search usually starts at the sensor level but may expand to other components. We must identify the weaknesses of individual sensors because if the data coming from the sensors is not accurate or reliable, it can corrupt everything downstream in the system.
Testing the known is a straightforward process. But how do we test the unknown? That's where things get a little more abstract.
Unknown Unsafe: The nightmare scenario, in which the system’s reaction is difficult to predict. Common or recurring scenarios and trigger events in the road environment can be generated synthetically, such as various traffic conditions, highway driving, city driving, hospital zones, school zones, road construction zones, service lanes, and restricted lanes. These are prime candidates for evaluating both their qualitative and quantitative characteristics and appraising the behavior of the system, by leveraging technologies such as black-box testing in a Hardware-in-the-Loop (HIL) configuration.
Unknown Safe: The normal intended functionality can be deployed and verified, and the vehicle can be operated if it possesses a secure and rugged system design.
The general idea is that we must create unknown scenarios and then see how our system responds. The highest-risk category is the unknown unsafe, where we don’t know what we don’t know. What we are trying to do is make that unknown, that seemingly infinite pie, smaller. We are trying to get smarter about the system and stress it in ways that increase our confidence that we will be successful in the unknown unsafe category.
How do you create unknown scenarios?
Industry-wide, the concrete math for unknown scenarios is not yet fully defined. So, we need a method to randomly generate scenarios that will represent the unknown. Some guidance is in order here: If you are not careful, you could end up putting in a significant amount of time and effort to create lots of scenarios covering things that, in retrospect:
you realize are not impactful enough to warrant additional attention, or
are not truly unknown unsafe, or
may turn out to be safe through nothing more than sheer randomness.
So, it is imperative to focus your efforts. One input to that focus is what you learned from the known weaknesses, so that you emphasize those areas. You can also apply statistical analysis to some of the scenario inputs.
Let's say that one scenario environment is a city street, another is a country road, another is a freeway, and yet another is a two-lane highway. They are all a little different and carry different levels of risk and different challenges. So, do we test them all identically? No. There is another way.
Creating a practical statistical distribution
If we conducted some research and were able to show how often vehicles operated in each of these scenarios, we would come up with a distribution, a percentage of operation in each one. So, as an input to our random scenario generation, we should end up with scenarios that have the same type of distribution, because statistically, that is how we expect the vehicle to be operated. And we would do that for every type of input, including the road type, weather conditions, and other considerations.
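That distribution-matched generation can be sketched as weighted random sampling. The road types come from the example above, but the percentages are hypothetical placeholders, not research data:

```python
import random

# Hypothetical operating distribution from field research (illustrative numbers)
ROAD_TYPE_DISTRIBUTION = {
    "city street": 0.40,
    "country road": 0.15,
    "freeway": 0.30,
    "two-lane highway": 0.15,
}

def generate_road_types(n, rng=random):
    """Draw n random road-type inputs matching the observed distribution."""
    road_types = list(ROAD_TYPE_DISTRIBUTION)
    weights = list(ROAD_TYPE_DISTRIBUTION.values())
    return rng.choices(road_types, weights=weights, k=n)

sample = generate_road_types(10_000)
# With 10,000 draws, sample frequencies approximate the target distribution
print(sample.count("city street") / len(sample))  # roughly 0.40
```

Each additional input (weather, time of day, traffic density) would get its own weighted draw, so that a generated scenario is a combination of independently sampled conditions.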
Accounting for weather requires a slightly different approach. After conducting the research, it might be found that, compared to clear sunny days, the vehicle is not operated in snow or rain very often. It makes sense that the best chances for easy success are in the “bright sunny day at noon” kind of scenario, but that does not reflect the big-picture reality. The solution is to de-emphasize the sunny days, even if they end up being the majority. Tweak your inputs to err toward less optimal weather and build that into your test plan.
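One simple way to err toward bad weather is to scale the observed frequencies by a stress factor and renormalize. The frequencies and factors below are illustrative assumptions, not field data:

```python
# Observed weather frequencies (illustrative) and stress factors that
# deliberately over-represent adverse weather in the test plan.
OBSERVED_WEATHER = {"clear": 0.70, "rain": 0.20, "snow": 0.10}
ADVERSE_STRESS = {"clear": 1.0, "rain": 2.0, "snow": 3.0}

def stressed_weights(observed, stress):
    """Scale each weather weight by its stress factor and renormalize to 1."""
    raw = {w: observed[w] * stress[w] for w in observed}
    total = sum(raw.values())
    return {w: raw[w] / total for w in raw}

weights = stressed_weights(OBSERVED_WEATHER, ADVERSE_STRESS)
# Clear days drop from 70% of tests to 50%; rain and snow both rise.
print(round(weights["clear"], 2))  # 0.5
```

The stressed weights then feed the same weighted random draw used for the other scenario inputs, so sunny-day scenarios are still generated, just less often than the field data alone would suggest.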
Random generation in SOTIF scenarios
How is randomness generated within SOTIF scenarios? Industry-wide, that is still an area under experimentation and review. What is the right way of doing it? Philosophically, we at LHP have a method and some guidance to offer, and there may even be some training out there. But how specifically do you do it? How do you implement it? This is a discussion many people are interested in right now, but despite that interest, the industry has not yet achieved a concrete step-by-step methodology.
Random generation is the new frontier that is only now being explored in earnest. We have found lots of tools out there that are suited for this type of work, and we think we are close to coming up with a path that may work. But today, right now, the experiment continues.
Anything that has ever happened, or might happen, can be defined in a SOTIF scenario. Each scenario can be sorted into one of four SOTIF classifications: known safe, known unsafe, unknown unsafe, and unknown safe. The classification indicates the nature and risk within that scenario and provides a starting point approach for dealing with it.
The first step is to cover the known, what we already know: the known safe and the known unsafe. Those are fairly straightforward. In comparison, testing the unknown becomes a more abstract exercise. The unknown unsafe is the nightmare scenario, where you don’t know what you don’t know and have no visibility into how much risk there is. It is quantified in part using testing simulations such as HIL systems, which allow a significant variety and volume of scenarios to be tested in rapid order, safely and cost-effectively. The unknown safe can offer surprises, but a rugged system design can absorb most of the impacts of an unknown yet safe scenario.
Driving conditions vary depending on road conditions and the environment. Statistical distributions are employed to reflect how we expect the vehicle to be operated under these different types of conditions, and within them we err toward testing more on the less safe side. This helps compensate for the reality that the industry does not yet have a concrete plan for systematically injecting randomness into the work. We as an industry have made significant progress in systematically managing these challenges, with SOTIF playing a pivotal role, but there is still much to be done to create testing that realistically reflects the chaotic randomness of real life.
Interested in learning more about SOTIF for your organization? Contact our team today!