What are the key considerations for putting technical safety requirements to work?

 

It is one thing to properly write a requirement. It is quite another to make sure that it is interpreted and applied as intended. In this article, we examine some of the more nuanced considerations to watch for along the journey as a requirement goes from being written to being refined and applied in the real world to accomplish useful work.


 

Why is it important to gather feedback and verify clear understanding?

 

Accurate interpretation

When you are writing technical safety requirements, you might think you have provided a clear explanation. But then someone else reads it, and it doesn't make sense to them. This isn’t a matter of who is right and who is wrong, but rather, it is a matter of whether the requirement communicates what it needs to convey in a manner that is interpreted accurately and completely by the end user.

There are many factors that can impact the successful interpretation and implementation of a requirement. Some are well within the ability of the author to anticipate, while others surface only after the requirement is shared with others. For example, depending on the user’s native language, the order of logic in a sentence might get rearranged in the reader’s mind, despite what is written. The result can be a sentence that is grammatically correct English, yet it might be prone to misinterpretation because it is too complex or verbose. Feedback brings these issues to light and provides us with the opportunity to revisit that sentence and make it more universal.

The act of authoring requirements is no place for pride of authorship to become the priority. Rather, accuracy and completeness are paramount. You need to be proactive about seeking out and receiving feedback to make sure that what is being communicated, consumed, and acted upon is what was actually intended.

Human considerations

Our work is technology-rich and immersed in procedure. Do not be discouraged if most of the feedback you receive appears negative. Right or wrong, reviewers rarely take time to affirm correct content, simply because correct content requires no further input; corrections and refinements are the only things that call for additional communication.

It is not easy to be the author when you keep receiving negative comments, but take heart because what you are doing is important. Every item that achieves approval is a victory that quite literally makes the world a better and safer place, and you can’t gain those approvals without your work going through the review crucible. Feedback improves requirements, and the best possible requirements create functionally safe vehicles. In the end, that is all that matters.

How are iterations of requirements developed and approved?

You might author multiple iterations of a requirement in concert with the other stakeholders, as open requirements are reviewed and refined with other teams. For example, if I am writing the systems requirements, I am going to ask my customer’s systems engineers to weigh in on those requirements, since they are the people who are going to be implementing them. It is a basic yet important question: “Can you implement this requirement?” Or in another scenario, if I am a systems engineer, I will ask the hardware and software teams, “Can the software implement this?” The same process is followed for the hardware. This back-and-forth creates iterations. This process adds a lot of time to the development of the product, but it is time well spent. It is critical that these questions get answered.

This iterative process drives the allocation of requirements. If we have multiple technologies, allocation could be spread across mechanical requirements, software, and hardware. So, it is important that the people who are directly involved with those areas have plenty of input into helping shape those requirements.

How are requirements allocated within the system architecture?

You will not have the resources to support an infinite number of requirements, so they must be allotted judiciously. The allocation of requirements between hardware and software varies, based on the nature and complexity of the system. Generally, allocation is first based on higher-level abstract considerations. Then, the periods of time that impact the scenario are defined, such as reaction times. After the key time intervals are identified, safety mechanisms are addressed.
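The timing considerations mentioned above can be made concrete with a small sketch. In ISO 26262 terms, a safety mechanism's fault detection time plus its fault reaction time must fit within the Fault Tolerant Time Interval (FTTI), the time from fault occurrence to a possible hazard. The class, names, and numbers below are illustrative assumptions, not a real system's budget:

```python
# Hypothetical timing-budget check for a safety mechanism, based on the
# ISO 26262 notion of a Fault Tolerant Time Interval (FTTI): detection
# plus reaction must complete before the FTTI elapses.
from dataclasses import dataclass

@dataclass
class SafetyMechanism:
    name: str
    detection_time_ms: float   # time to detect the fault
    reaction_time_ms: float    # time to transition to the safe state

    def fits_within(self, ftti_ms: float) -> bool:
        """True if detection + reaction completes inside the FTTI."""
        return self.detection_time_ms + self.reaction_time_ms <= ftti_ms

# Example: a hypothetical brake-torque monitor against a 100 ms FTTI.
monitor = SafetyMechanism("brake_torque_plausibility",
                          detection_time_ms=40, reaction_time_ms=50)
print(monitor.fits_within(100.0))  # True: 40 + 50 <= 100
```

A check like this is one reason the key time intervals must be identified before safety mechanisms are addressed: the FTTI constrains which detection and reaction strategies are even feasible.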

Another aspect that influences the allocation of requirements is the system’s architecture itself. Once the system architecture is defined, you can define how your system is going to be designed at the higher level. Then, you can identify the sensors, and you can clarify what functions are in the hardware versus the software. With that information, you can finally start allocating requirements to software and to hardware.

How does system complexity impact the allocation of requirements?

An important point is reached when the focus of the definition process progresses from what requirements need to be implemented on a higher level, to how they are going to be implemented. To illustrate this point, let's compare the rearview mirror system and the sideview mirror system of a vehicle, two different systems that both serve the same higher-level purpose. They have different designs, but at that higher abstract level, they pretty much do the same thing; namely, they provide the driver with a rearward view in a safe manner. After establishing that basic higher-level requirement, consideration can then shift to how each system is going to be used.

There have been many iterations of these systems over the years, and a substantial number of innovations have added functionality. In addition to providing the driver with a safe rearward view, today’s rearview mirror systems also provide automatic dimming. Likewise, modern sideview mirrors are often equipped with proximity sensors and additional turn signals.

The simple mirror systems of old had simple requirements: provide a rearward-facing view, be adjustable, and don’t fall off the vehicle. They were so simple that one system could also cover for the other. For example, in most states, it is still legal to drive a car with the rearview mirror obstructed so long as both sideview mirrors are functional and unobstructed. Likewise, a car didn’t even have to possess a passenger-side mirror if it was equipped with a driver’s-side mirror and a rearview mirror. Even into the late 1960s, a passenger-side mirror was typically considered an option, if it was offered at all.

But in modern mirror systems, the functional safety requirements become much more complex. Safety can be greatly impacted if the automatic dimming mechanism of a modern rearview mirror ceases to function, especially if the rearview “mirror” is a camera-and-display system prone to washout, rather than a reflective surface that can be tweaked with a flick of the wrist. Likewise, if the proximity sensors and turn signals built into the sideview mirrors stop functioning properly, blind spot detection could fail, along with the communication of intent to other drivers on either side of the vehicle. Meanwhile, the driver may have lost the habit of turning their head to check their blind spots manually; they have essentially been trained to believe the system is performing that check for them and fully communicating their intent to turn. If the system fails and gives no warning, the driver could steer right into another vehicle in their blind spot.

So, complexity helps determine the allocation of requirements. With added capability comes added complexity. The more complex the system, the more requirements are needed to define all of its considerations and impacts.


 

 

The constraints of Reliability Engineering and Safety Integrity Levels

When defining requirements, there are constraints. For example, there are reliability aspects, which factor in the ability of a system or component to function properly under defined conditions for a specified period of time. And depending on the level of criticality, the Safety Integrity Level (SIL) must be taken into consideration.

Reliability engineering deals with the prediction, prevention, and management of high levels of uncertainty and risk of failure over the lifetime of an engineered system. Likewise, the SIL is defined as a measurement of performance that reflects the relative level of risk reduction provided by a safety function. Various industries define SIL requirements differently, but the consistent aspect is that the higher the SIL, the less likely it is that a systems failure will produce a safety hazard.

There is so much to reliability engineering that it stands as an entire sub-discipline of systems engineering, and SILs are an in-depth topic in and of themselves. A detailed examination of either realm would be beyond the scope of this article. However, the two are closely intertwined, and a high-level understanding of them helps illustrate the constraints they bring to the process of writing requirements.

For the sake of this discussion, reliability describes the ability of a system’s hardware to function without failure, with a focus on the costs of failure: system downtime, personnel, spares, repair equipment, and warranty claims. And SILs are an exercise in risk analysis and probability for a given safety device, using requirements grouped into two broad categories: hardware safety integrity and systematic safety integrity. (To achieve a given SIL, a device or system must meet the requirements for both of those categories.) The higher the integrity, the less likely it is that the system will experience a failure.
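To make the hardware-integrity side of that relationship concrete, here is a small sketch using the IEC 61508 low-demand bands for average Probability of Failure on Demand (PFDavg). It covers only the probabilistic, hardware side; the systematic-integrity requirements must still be met separately, and other standards define their own bands:

```python
# Sketch: map a PFDavg value to its IEC 61508 low-demand SIL band.
# Higher SIL means a lower probability that the safety function fails
# when demanded, i.e., a greater risk reduction factor (RRF = 1/PFDavg).
def sil_from_pfd_avg(pfd_avg: float):
    """Return the SIL (1-4) whose PFDavg band contains the value, else None."""
    bands = {4: (1e-5, 1e-4), 3: (1e-4, 1e-3), 2: (1e-3, 1e-2), 1: (1e-2, 1e-1)}
    for sil, (low, high) in bands.items():
        if low <= pfd_avg < high:
            return sil
    return None  # outside the SIL 1-4 range

print(sil_from_pfd_avg(5e-4))  # 3; risk reduction factor = 1/5e-4 = 2000
```

The inverse relationship between PFDavg and risk reduction is why the article can say that marrying SIL data with reliability data yields a comprehensive picture: both are statements about the same failure probabilities, seen from the safety side and the cost side.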

Reliability, costs, prediction, prevention, uncertainty, the level of risk reduction, safety integrity levels… these all encompass significant constraints. When SIL data is married with reliability data, what emerges is a comprehensive picture of the safety of a system and the cost of its reliability. Requirements must support these constraints and fit within them.

How do you build requirements when there is no prior work?

The process for building requirements is quite varied, based on where you are in the development chain. If the OEM is providing the requirements to their supplier, the OEM doesn't have anyone else to go to because they are starting with the bigger picture. “These are the requirements that the supplier is going to implement.” That is important. What the OEM provides to the suppliers is going to be the reference for the suppliers on what is expected for the system. Basically, they will become parent requirements to a lot of child requirements.

The OEM starts by defining a lot of what to do at a higher level. Then, definitions, complexity, and details are added. The project starts to look like a live product, and the team becomes more familiar with it. But having the ability to refer to the OEM’s initial requirements for clarification helps to make sure that all aspects are addressed.

As the author of these requirements, you need to go back frequently to ask questions during the writing process. Sometimes you might find a hazard that hadn’t been accounted for initially. Or you are trying to find a certain reference, but you can’t find everything you need. If you find new hazards and uncover gaps in the requirements, resolving them is part of the objective of the system development process.

What assistance do iterations provide when solving problems?

When you review something, you might discover unintended functionality that is being implemented. You really want to avoid having unintended functionality. For example, let's say there is a situation where you are driving and suddenly, without warning, your right brakes are working, but the left brakes no longer work. When you try to apply the brakes, the result would be some sort of unintended behavior, such as the vehicle being pulled off the road. Or, you are driving, and suddenly your airbag deploys, causing an unintended accident. Having iterations to build upon and review aids understanding when unexpected functionality occurs. You can go back through your documentation, trace incremental changes, and identify the root cause, which might be far removed from the problem and not an obvious cause-and-effect.

If you find problems like gaps in the requirements or the architecture, it might not be immediately obvious where the root problem is. But if you find a problem, you must let the appropriate people know. This is very important!

The company should have a process for documenting problems. It is typically called a problem reporting process or a change control process. Bring the problem to your change control experts, your CCB, or your safety manager. The company must have a process to document problems and their solutions, and everyone must follow it.

Requirements and regression testing

Usually, software is debugged until the problem is found and fixed. However, I've seen a few cases where the same condition is simulated in a subsequent software build, and the problem is simply no longer there. What should you do when this happens?

In such cases, you should add that problem to your list of tests to be done for regression testing. Add a few tests on every subsequent design to make sure that the problem does not return. But ideally, you should try to find the root cause of the problem.

How do you define the boundaries of a requirement?

To define the boundaries of a requirement, define the functions that are related to what is being intentionally modified or repaired. For example, if a given sensor is being changed, whatever is related to the functionality of that sensor should also be retested and, if necessary, changed. It is imperative that you address every other system that is using the sensor. Investigate all the software modules and components that use that sensor. Have they been affected? Run tests on that specific portion, just to be sure.

You might also add to your regression testing any problems that you’ve seen in previous deliveries to your customer. When defining the boundaries of your test, I would add some functionalities that are very important to your customer, the ones you absolutely do not want to break. So, let’s say you change only 5% of your code; you might end up testing 20% of it, just to be sure that critical functionalities are not negatively impacted.

What tools and processes are used to write requirements?

At LHP we typically use Jama Connect®, a requirements management tool for writing requirements. However, we can also work with other tools, depending on the customer’s preference. IBM offers a powerful tool, but it might be too expensive for some customers. We can offer suggestions for modifications that can best maximize the use of whatever tool they choose.

Requirements can be written in multiple ways, but the most common and practical is to use written natural language. We can also combine natural language with specific code and languages used in the various engineering systems. In a complex system, written requirements and graphical elements can be combined to define and maintain the requirement.

As requirements are created, they are assigned unique ID numbers; each number is a global reference to that unique item in the requirements management tool. Some tools will allow you to link to other requirements, for example, when a product is created in multiple colors but all other elements remain the same.

Do requirements have hierarchical relationships?

Yes. This process of documenting and testing these parent-child relationships is called “suspecting.” Utilizing ID numbers to link requirements makes it easier to document the cause-and-effect scope and impact of changing a given requirement.

In these cases, requirements have references to other requirements. For example, a parent requirement may have 10 child requirements. When the parent requirement is changed, all of the child requirements will then flag as being suspect, because there is now a chance that these child requirements might have been affected by that modification. Each child requirement downstream must then be tested anew.
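The suspect-flagging behavior described above can be sketched with a minimal data structure. This is not the schema of Jama Connect® or any particular tool; the IDs and fields are illustrative:

```python
# Minimal sketch of suspect-link flagging: changing a parent requirement
# marks every downstream child as suspect until it is re-reviewed and
# re-tested against the modified parent.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    children: list = field(default_factory=list)
    suspect: bool = False

    def update_text(self, new_text: str) -> None:
        """Change this requirement and flag all downstream children."""
        self.text = new_text
        self._mark_children_suspect()

    def _mark_children_suspect(self) -> None:
        for child in self.children:
            child.suspect = True
            child._mark_children_suspect()  # propagate to grandchildren

parent = Requirement("SYS-001", "Provide the driver a rearward view.")
child = Requirement("SW-014", "Render the rear camera feed within 500 ms.")
parent.children.append(child)

parent.update_text("Provide the driver an unobstructed rearward view.")
print(child.suspect)  # True: the change may have invalidated the child
```

Because the flag propagates recursively, a change to one high-level requirement immediately exposes the full downstream scope of re-testing, which is the cause-and-effect visibility the ID linking exists to provide.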


Summary

Putting technical safety requirements to work involves many considerations, and some of them can be quite complex. It is imperative to gather feedback from other team members and verify their clear understanding of the requirements. The back-and-forth communication will create iterations which add a lot of time to the development of the product, but it is time well spent.

This iterative process drives the allocation of requirements across multiple technologies, so it is important that the people directly involved with those areas have plenty of visibility and input. The allocation of requirements will vary based on the nature and complexity of the system. An important point is reached when the focus progresses from what needs to be implemented to how the requirements are going to be implemented. The more complex the system, the more requirements will be needed.

Constraints help shape requirements and drive their effectiveness. Reliability and SIL data provide a comprehensive picture of the safety of a system and the cost of its reliability. Requirements must be written to both support these constraints and fit within them.

Finding new hazards and uncovering gaps in the requirements, through rigorous regression testing, is an important part of the system development process. Test after every design change, resolve issues at their root cause, and avoid unintended functionality. Communicate all issues and refinements through the proper reporting process.

The requirements that the OEM provides to the suppliers will become the reference that defines what is expected from the system, so attention to detail is paramount. Requirements have complex parent-child relationships; managing them in a practical and cost-effective manner requires the use of a proven requirements management tool, such as Jama Connect®, and the practical insight that comes from experience. LHP can provide time-tested guidance to help you get the most from your resources, elevating your processes and requirements to produce functionally safe products that truly make the world a better place.

 

Interested in learning more about Functional Safety for your organization? Contact our team today!


 

 

Further reading and references

 

How to Write Requirements for Functional Safety

 

 

 


Written by Gustavo Melo

Before coming to LHP, Gustavo spent 10 years developing, managing, and certifying safety-critical aircraft systems, including ice protection, cabin pressurization, oxygen, and bleed air systems. He is well versed in systems engineering tasks such as Functional Hazard Analysis (FHA), System Safety Assessment, Fault Tree Analysis (FTA), failure propagation analysis, system requirements specification, interface control documents, validation and verification of requirements, and testing from the component to the integrated levels. Gustavo also has experience in electronics and digital controls. He led an electronic controller project for an aircraft, from the detailed hardware design up to component qualification testing using the DO-160 standard. While working at EMBRAER, Gustavo worked for two years as a Designated Engineering Representative (DER) for the Brazilian Civil Aviation Agency (ANAC). Gustavo led his team through innovative initiatives that resulted in patent applications. At LHP, Gustavo manages the LSS engineering team in Michigan and supports training and implementation of Functional Safety for our customers spread across the country.