Effective Test Cases and How Inputs Influence Decisions

This is Part 2 in a series on creating high-quality test scripts. In “Writing High-Quality Test Scripts: Taming the Chaos,” we touched on the complexity of the testing environment, defined what a test script is, compared test scripts to test cases, and outlined some of the considerations for designing and developing test scripts. In this installment, we compare and contrast some of the inputs and outputs, examine the programming languages used, explore the impact of peer reviews, and look at the role of test cases now and in the future.

Relationships Between the Key Test Elements

To better understand the relationships between the various elements that go into creating and maintaining the testing process, a few definitions are in order. Some of these terms have similar or overlapping meanings, depending on the industry using them. For the scope of this article, we are citing the most common usage in the automotive realm:

Use Case: A use case describes how a given system must perform a task under a specific set of conditions. Use cases are outlined with various software or business requirements—which describe how end users will engage with the system—and the various outputs that the end users should receive. Use cases describe how a product is supposed to work, whereas test cases describe how a product is supposed to be tested.

Test Case: A test case is a detailed document that contains a set of actions or conditions that are performed on a software application to verify that the features within the application all function as expected. Derived directly from use cases, test cases ensure that a product is thoroughly tested.

There are many different types of test cases, including formal and informal test cases, and those that test functionality, user interfaces, integration, performance, security, usability, database storage and retrieval, and user acceptance. There are also exploratory test cases. Defining all these types in detail is beyond the scope of this article. However, there are two common truths that apply to all the various types of test cases:

  1. All test cases are performed for a specific test object at a defined test level.
  2. Although the objectives of these different types of test cases may vary, the results of most test cases fall into one of these four categories:
    1. Pass: The system has accomplished what it is supposed to.
    2. Fail: The system has failed in the attempt.
    3. Not executed: Tests that have not yet been run or will not be run in this particular round of testing.
    4. Blocked: Tests that will not be run due to an external circumstance or precondition.
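The four result categories above can be sketched in code. This is a minimal illustration, assuming nothing beyond the list itself; the `TestResult` and `summarize` names are invented for this sketch:

```python
from enum import Enum

class TestResult(Enum):
    """The four outcome categories described above."""
    PASS = "pass"                   # the system accomplished what it is supposed to
    FAIL = "fail"                   # the system failed in the attempt
    NOT_EXECUTED = "not_executed"   # not yet run, or skipped this round of testing
    BLOCKED = "blocked"             # cannot run due to an external circumstance

def summarize(results):
    """Count how many test cases fall into each category."""
    counts = {category: 0 for category in TestResult}
    for result in results:
        counts[result] += 1
    return counts
```

A summary like this is what a test report typically rolls up when presenting coverage to stakeholders.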

Test cases are a source of truth that ensure proper test coverage, help reduce the cost of software maintenance and support, and improve quality. They enable testers to think things through and approach the tests from many different vectors, helping to verify that the software meets requirements. Test cases are also reusable, empowering future testers to utilize them to perform the tests again independently.

Test Plan: Where a test case is scoped to a particular testing situation or a specific aspect of a product’s functionality, a test plan is a significantly more comprehensive and overarching document designed to capture the information required to cover all aspects of testing the software. That information includes: the test strategy, the scope and objectives of the test, the test schedule (including start and end dates), relevant estimations, applicable deadlines, and the resources that will be required to complete the work. It is a plan, controlled by test managers, that aligns organization-wide expectations with what actually happens as testing is performed, in order to validate that the software is functioning as intended.

Test Script: In some circles, the terms test script and test case are virtually interchangeable, because they both describe the actions that test a software element’s functionality. However, in the automotive arena, the term test script is typically used in the context of automated testing, in which a machine does the testing. In other words, developers write test scripts to be machine-readable, as opposed to test cases which are written to be interpreted by the humans who are performing manual testing.

 


Criteria for Good Requirements, Test Cases, and Test Scripts

 

Cause and Effect

It is helpful to keep in mind the cause-and-effect order in which the various test-related elements are created. In the automotive industry, requirements shape the test cases. The way the test cases are written drives how the code will be programmed and how the scripts will be written. In turn, the scripts and programming constitute the test. In other words, the requirements define everything that takes place downstream, and everything downstream should fall within the scope of the requirements—no more and no less.

Quality Characteristics and Best Practices

The goal of testing is to run the test subjects through a scenario in an accurate and realistic manner within an automated computer-based simulation environment, testing to the requirements. Therefore, the requirements themselves must be specific, unambiguous, and measurable. The cleaner the requirements, the clearer and more concise the test cases, the more efficient the test scripts, and the more economical and trustworthy the tests.

Optimally, test cases should be written early in the software development lifecycle—specifically, during the phase when requirements are gathered. Test cases are defined by these requirements. Therefore, while writing test cases, testers should refer often to the requirements and use case documentation, as well as to the overall test plan.

Test cases should be written in a clear and concise manner and should take into consideration any relevant application flows. They should also be kept economical and easy to execute on a high level. This precautionary effort will reduce the maintenance burden when the application inevitably evolves.

There are certain characteristics and best practices that are particularly important in regard to all test scripts:

  • Test scripts should be small and completely independent of any other test script. A tester should never have to run one script before another; if a dependent test script is run before the script it depends on, errors and other problems will result.
  • The test script must be fully traceable back to whatever requirement it was written for. There can be no ambiguity in traceability, and tracing must be possible in both directions. Bi-directional traceability ensures the completeness of the testing, illustrates test coverage to key stakeholders, and provides the basis for future tests.
  • The test script must cite the correct units of measure. It is imperative to set the test subject’s unit—the quantity and scale in which it operates—to the correct state and to configure a specific calibration. These preconditions clearly define the inputs and outputs that follow.
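The independence, traceability, and units-of-measure points above can be combined into one minimal sketch. Everything here is hypothetical: the requirement ID (REQ-OIL-042), the pressure limits, and the helper name are invented for illustration, not taken from any real requirements document:

```python
# A small, self-contained test script: it depends on no other script,
# states its unit conversion up front as a precondition, and names the
# (hypothetical) requirement it traces back to.

KPA_PER_PSI = 6.894757  # unit conversion fixed before any check runs

def check_oil_pressure(pressure_psi, low_kpa=150.0, high_kpa=550.0):
    """Traceable to REQ-OIL-042 (hypothetical requirement ID).

    Converts the measured value to the unit cited by the requirement
    (kPa) before comparing, so the test never mixes units.
    """
    pressure_kpa = pressure_psi * KPA_PER_PSI
    return low_kpa <= pressure_kpa <= high_kpa
```

Because the requirement ID lives in the script itself, tracing works in both directions: from requirement to script via search, and from script to requirement via the docstring.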

 

Inputs and Outputs of a Test Case

A test case is a set of instructions on how to validate a particular test objective. It comprises components that define an input, an action, and an output (the expected result), to determine whether a given feature in the application is working correctly.

When building inputs and outputs, developers must define and document several fundamental components:

  • Test case ID
  • Unit to test (what specifically is being verified)
  • Assumptions
  • Test data (defining the variables and their values)
  • The steps that are to be executed
  • The expected result
  • The actual result
  • The definition of pass/fail
  • Comments

Additionally, certain inputs and outputs may require that a unit fall within a range of values—for example, +/-5—and that the test case be able to accurately capture the measured value and record whether it falls within those limits.
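As an illustration only, the components listed above can be gathered into a single record, with the pass/fail definition expressed as a tolerance check. All field names and values here are assumptions invented for the sketch, not taken from any real project:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One test case record holding the fundamental components."""
    case_id: str          # test case ID
    unit_under_test: str  # what specifically is being verified
    assumptions: str
    test_data: dict       # the variables and their values
    steps: list           # the steps that are to be executed
    expected: float       # the expected result
    tolerance: float      # e.g. +/-5 around the expected value
    actual: float = None  # the actual result, filled in after execution
    comments: str = ""

    def verdict(self):
        """Pass/fail definition: actual must fall within expected +/- tolerance."""
        if self.actual is None:
            return "not_executed"
        low = self.expected - self.tolerance
        high = self.expected + self.tolerance
        return "pass" if low <= self.actual <= high else "fail"
```

Until a tester records an actual result, the case reports `not_executed`, matching the result categories discussed earlier.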

 

Reusability

Developers should keep in mind a test case’s potential reusability, which is usually higher when inputs and outputs do not have very strict limits. To help leverage this, developers typically write target values as variables. These variables can then be reused for test cases that are repetitive. Clients will often have projects that involve their own library of test cases and scripts for subjects closely related within their own product lines. Sometimes those projects overlap, with a significant portion of the components in related products being interchangeable.

For example, imagine a team of developers working on a project revolving around an engine oil pressure system, with one engine intended for desert use and the other for arctic conditions. Most of the components of these two closely related engines are the same; only the parameters differ, depending on the application. The parameters would be rewritten on a project-by-project basis, allowing room for adjustment. Employing variables rather than hard-coding parameters into the testing balances reuse with customization.
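The desert/arctic scenario above can be sketched as one reusable test case body with swappable parameter sets. The limit values are invented purely for illustration:

```python
# Hypothetical sketch: the same test logic reused across two engine
# variants by swapping a parameter set instead of rewriting the case.
# All limit values below are invented for illustration.

DESERT_PARAMS = {"min_pressure_kpa": 180.0, "max_pressure_kpa": 520.0}
ARCTIC_PARAMS = {"min_pressure_kpa": 140.0, "max_pressure_kpa": 480.0}

def oil_pressure_in_range(measured_kpa, params):
    """One test case body; only the project-level parameters vary."""
    return params["min_pressure_kpa"] <= measured_kpa <= params["max_pressure_kpa"]
```

When the next project overlaps with this one, only the parameter dictionary changes; the test case itself is reused as-is.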

 

Programming the Code

After the test is planned out, the code can be programmed. Developers use an internal tool to create automated test requirement parameters, cases, expected outcomes, and plans, but test engineers are still closely involved in overseeing each step and writing the test script itself. Some aspects of the script can be automated—defining variables, inputs/outputs, etc.—but writing typically is still done manually.

Leveraging Automation

When initiating test runs at the end, an automated system is utilized. LHP frequently uses National Instruments (NI) software, so coding is primarily done in TestStand and LabVIEW; .NET, Python, and C are good alternatives as well. TestStand is efficient because it is a complete test automation engine designed to run test scripts, with features that let developers communicate directly with the test hardware. It is also versatile enough to call code written in other languages and environments—such as Python, C, and LabVIEW—making it a valuable and consistent option for test script execution.

The Importance of Peer Review

Well-rounded peer review is critical during every step of the process. A key factor in what sets LHP apart from the competition is the different ranges of experience that LHP brings to their partnerships. LHP has embedded engineers, test engineers, functional safety engineers, hardware engineers, and ALM-focused engineers, all supplying a great mixture to the work that is done within these larger projects. Validating code, for example, usually is not a single-person process, and having peer code reviews is an effective way to ensure that validation work is thorough. These teams divide the work into chunks until everything is reviewed, and then they start the testing itself.


How Outputs Impact Test Cases

 

Reducing Variation

Outside of expected failures, the other challenges faced during the test creation process can range from insignificant to extreme. Unless a failure stems from one of the assumed outcomes, it is examined as an immediate problem. Inconsistencies in the code are typically caught early: the coding team uses peer reviews to analyze the code, then adjusts it until the issues are resolved.

Managing Model Years

The automotive industry is historically built around model years. However, test script projects generally are not affected by this because most test cases span multiple model years. Cases are usually more program-driven in the sense that these programs have their own multi-year lifespans, and the test cases follow suit. Occasionally though, changes in the program will be caused by a model year, and if these model changes are substantial enough, developers will need to construct a new program. It really depends on the degree of overlap. Some model-year changes are more superficial, affecting mostly body and trim. Others, however, can be the result of a complete redesign. This is where reusability can really pay off.

 

How LHP Supports Stakeholders

The test case projects that LHP supports can range from implementing test requirements and placing them in clients’ libraries for reuse, to building test cases from several pages of documentation. Either way, the work LHP performs reduces the workload on their clients, saving them valuable time and resources.

Some projects, of course, involve global clients. In such cases, there is usually a U.S. location these companies use to manage most of their communication, bridging any time and language gaps with their counterparts around the world.

 

The Future of Test Scripts

Within the automotive industry, testing is a valuable service that can be leveraged by companies that want to use external sources for testing instead of carrying the burden of supporting in-house testing all on their own. Companies that take advantage of outsourced testing develop a more streamlined and automated process overall, saving them tremendous time and effort.

It should be noted that writing test scripts and performing testing efficiently can require significant growth on the part of the client, both in learning and adhering to proper internal processes and in maintaining timely and robust communication with its partners. If utilizing external sources becomes a more frequent option for a company, the test script writing from these outside sources must meet and maintain certain standards. Not every service provider will be able to keep up, so it is imperative that companies utilize proven partners with a verifiable track record of success, like LHP.

It will become more common for companies to migrate from manual testing to automated testing. The timing, however, will vary with each company. Though it is straightforward to gauge and predict the levels of progress that a company can make with the test script writing process, there are several factors that will impact the pace and effectiveness of this transition, including:

  • Company budget
  • Engineering team size
  • Coding team size
  • Other unknown elements

 

Summary

For automotive manufacturers to move forward into a safer and more efficient testing environment, there must be growth and maturation within the scope of testing, the coding process itself, and the act of validation. Automated testing has already proven itself to be essential. It is inevitable that the automotive industry will continue to adopt and embrace these critical methodologies.

 

Interested in learning more about test cases for your organization? Contact our team today!

Written by Nathan Haynes

Nathan Haynes has served as a senior software engineer and application software technology grouping leader for LHP Engineering Solutions since 2014. He develops enterprise-scale web, mobile, and desktop-based applications for multiple OEMs and tier-one organizations, and he has worked with customer IT organizations to architect applications within their security and infrastructure requirements.

In the past several years, Nathan has moved into the areas of application lifecycle management (ALM) and automated test systems. Using his experience with systems integration and communication protocols, he has developed unique approaches for integrating these systems to help bridge the gap required for compliance with ISO 26262. Nathan also has experience in security auditing of web applications with regard to cross-site scripting, SQL injection, and broken authentication and session management. He is a full-stack developer with experience in software and database development as well as IT and cloud infrastructures.

Nathan began his career as a web and database developer in the healthcare IT industry, focusing on software that interfaced with a hospital’s Health Information System. His move to the automotive industry found him connecting factory-floor PLCs to database-driven applications, which enabled him to develop business intelligence infrastructures providing strategic decision-making information used by management. Nathan holds a Bachelor’s in Computer Science from the University of Southern Indiana and lives in Brownstown, Indiana.