System-level, or "black box", testing verifies that software correctly implements the system-level requirements and specifications. It requires no knowledge of the software design or structural implementation. In contrast, unit-level testing is based on detailed knowledge of the architectural and logical design, data structures, performance and timing specifications, and interface requirements. The result is software that does what it is supposed to do and that is far less likely to fail, to require costly rework, or to result in harm to patients.
Regulators recognize this. The FDA, in its General Principles of Software Validation guidance document, recommends that device software "should be challenged with test cases based on its internal structure, and with test cases based on its external specification."
WHAT IS UNIT TESTING?
White-box testing is based on internal structure. Unit testing is a form of white-box testing that is done at or near the code level to ensure that the implementation matches the intended design. The design must be documented in sufficient detail so that tests can be developed to verify the accuracy of the software implementation. The idea is to test that small sections of code (typically functions) work as intended and are reasonably robust. Once these individual code blocks are tested, larger groups of those building blocks will hold up better during integration.
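The idea of checking one small function against its documented design can be sketched with a hypothetical `clamp()` routine (not from the article) and plain assertions:

```python
# Hypothetical unit under test: a clamp() function whose documented
# design is "output limited to the range [lo, hi]".

def clamp(value, lo, hi):
    """Return value limited to [lo, hi]; assumes lo <= hi."""
    if value < lo:
        return lo
    if value > hi:
        return hi
    return value

# Unit tests: in-range, out-of-range, and boundary inputs.
assert clamp(5, 0, 10) == 5     # in range: unchanged
assert clamp(-3, 0, 10) == 0    # below minimum: clipped to lo
assert clamp(42, 0, 10) == 10   # above maximum: clipped to hi
assert clamp(0, 0, 10) == 0     # lower boundary passes through
assert clamp(10, 0, 10) == 10   # upper boundary passes through
```

Each assertion encodes a strict pass/fail criterion taken directly from the design statement, leaving nothing open to interpretation.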
The execution of a unit test should be as fully automated as possible and should be given strict pass/fail criteria based on the intended design. Unit test configurations often allow batching or scripting of tests to be run. The script allows automated loading of the units to be tested, feeding of test inputs, capture of test outputs, and logging of pass/fail results.
This allows the unit tests to be run quickly and the results to be tabulated with great efficiency. There should be little left open to interpretation by the tester. Unit testing that requires tester interaction (such as pressing buttons or stepping through the code with a debugger) should be used only as a last resort, but may be necessary for some areas (e.g. critical startup routines that may be coded in assembly). A high level of automation allows the developer to exercise the code after a build, but prior to release, for integration or systems-level testing. This would not be practical if the unit testing were done manually. Frequently re-running the tests provides some confidence that the most recent changes have not "broken" the code in some major way. Stopping errors at this early stage can save time as the code goes through the integration process.
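A batch runner of the kind described above can be sketched in a few lines (a hypothetical illustration, not a specific vendor tool): each test is a named function with strict pass/fail criteria, and the runner captures and logs results with no tester interpretation required.

```python
# Sketch of an automated batch test runner: feed inputs, capture
# outcomes, and log pass/fail for each test. Names are hypothetical.

def add(a, b):          # trivial unit under test
    return a + b

def test_add_positive():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-2, -3) == -5

def run_batch(tests):
    """Run each test, capture pass/fail, and return a result log."""
    log = []
    for test in tests:
        try:
            test()
            log.append((test.__name__, "PASS"))
        except AssertionError:
            log.append((test.__name__, "FAIL"))
    return log

results = run_batch([test_add_positive, test_add_negative])
for name, verdict in results:
    print(f"{name}: {verdict}")
```

Because the whole batch runs from a single entry point, it can be re-run after every build to catch regressions cheaply.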
WHAT IS A UNIT?
From a regulatory perspective, the device manufacturer can define a unit to be any grouping of source code. However, the definition of a unit for unit testing generally should be done at the smallest, most practical code level.
[Figure 1: Unit test hardware configuration]
TOOLS, TECHNIQUES AND METHODOLOGIES
Unit tests should be written to verify that the code is a thorough and accurate implementation of the design that is described in the design documentation. The assumption that the design elements of the documentation have been traced back to the requirements is implicit. This assures that the design being tested is actually one that the user and other product stakeholders need. The core algorithms, computations and path coverage should be tested for compliance with the design documentation as much as possible.
The unit test strategy should be planned as part of the development phase. Creating code that contains a framework for unit testing is easier than trying to shoehorn unit testing in later. Finding errors during unit testing is also more cost-effective for the project than waiting for problems to be found in systems tests, which take longer to develop, debug, run and process.
The unit test framework is usually straightforward and requires the ability to isolate code, drive testing and report results. The framework code handles communicating with the PC to download test input parameters and to upload test results. Output is often captured to a file for rapid analysis. An expected-results file is critical for objectively determining if the unit has behaved as expected.
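The expected-results comparison can be sketched as a simple line-by-line diff between captured output and the expected-results file (a hypothetical illustration; the file contents and names are assumptions):

```python
# Sketch: captured test output is compared line-by-line against an
# expected-results file, so pass/fail is determined objectively.

def compare_results(actual_lines, expected_lines):
    """Return a list of (line_number, actual, expected) mismatches."""
    mismatches = []
    for i, (a, e) in enumerate(zip(actual_lines, expected_lines), start=1):
        if a != e:
            mismatches.append((i, a, e))
    if len(actual_lines) != len(expected_lines):
        # Length difference is itself a failure; record both counts.
        mismatches.append((None, len(actual_lines), len(expected_lines)))
    return mismatches

expected = ["result=4", "result=9"]          # from expected-results file
captured = ["result=4", "result=9"]          # from the test run
assert compare_results(captured, expected) == []   # objective pass

captured_bad = ["result=4", "result=8"]
assert compare_results(captured_bad, expected) == [(2, "result=8", "result=9")]
```

An empty mismatch list is the pass criterion; anything else pinpoints exactly which output line diverged.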
The results should include the output of the test invocation, as well as a log of each sub-unit that was invoked. For object-oriented systems, it is possible to test one object by using mock objects to intelligently fake the various interfaces in the system, leaving the tested object free to run under controlled circumstances. Sets of mock-object interfaces can be interchanged with real objects as part of integration testing.
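The mock-object idea can be sketched with a fake sensor standing in for a hardware interface (all names here are hypothetical, not from the article): the unit under test runs under controlled circumstances, and the mock also logs how many times it was invoked.

```python
# Hypothetical mock object: a fake sensor returns scripted readings
# and records each call, so the unit under test (an alarm checker)
# can be verified without real hardware.

class MockSensor:
    """Fake sensor interface: scripted readings plus a call log."""
    def __init__(self, readings):
        self.readings = list(readings)
        self.calls = 0
    def read(self):
        self.calls += 1
        return self.readings.pop(0)

def alarm_triggered(sensor, limit):
    """Unit under test: alarm if any of three readings exceeds limit."""
    return any(sensor.read() > limit for _ in range(3))

# Controlled circumstances: scripted readings instead of hardware.
safe = MockSensor([10, 12, 11])
assert alarm_triggered(safe, limit=20) is False
assert safe.calls == 3            # every sub-unit invocation was logged

spike = MockSensor([10, 25, 11])
assert alarm_triggered(spike, limit=20) is True
```

Swapping `MockSensor` for the real sensor object is then a natural step during integration testing.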
By controlling the inputs and checking the resulting sequence of function calls and outputs, a unit can be verified to work under a wide range of conditions. Both in-range and out-of-range inputs should be tried, including minimum values, maximum values, boundary values and any corner cases that may occur as a result of errant software. Programmers often assume that the calling functions will never pass in incorrect parameters. However, software systems are complex, and ensuring that a function is robust and defensive often helps when other, more complicated areas of software cause invalid values to be propagated within the system.
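As a sketch of this defensive style, consider a hypothetical percent-to-duty-cycle converter (not from the article) that validates its parameter rather than trusting the caller, together with tests covering minimum, maximum, boundary and out-of-range inputs:

```python
# Hypothetical defensive unit: rather than trusting callers, the
# function rejects out-of-range input, so errant values propagated
# from elsewhere in the system are trapped at this boundary.

def duty_cycle(percent):
    """Map 0-100 percent onto an 8-bit duty value; reject anything else."""
    if not 0 <= percent <= 100:
        raise ValueError(f"percent out of range: {percent}")
    return round(percent * 255 / 100)

# Minimum, maximum and a mid-range value.
assert duty_cycle(0) == 0
assert duty_cycle(100) == 255
assert duty_cycle(50) == 128

# Out-of-range inputs must be trapped, not silently propagated.
for bad in (-1, 101):
    try:
        duty_cycle(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

The out-of-range checks are exactly the corner cases a systems-level test is unlikely to exercise directly.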
Several tool suites (JUnit, VBUnit, CPPUnit, Cantata++) can be used as the test framework. It is even possible to create a custom framework for testing as part of code development. The unit being tested must be compiled using the same compiler and toolset as the actual product software.
Additionally, the testing should run on the actual hardware and processor to validate the compiled code. If leaving unit test hooks in place is a problem, or if code space is tight, compilation switches can be used to switch the tests off for actual releases.
HOW MUCH SHOULD BE TESTED?
Generally, 80% of the errors are found in 20% of the code. Spend the most time on functions that contain key algorithms, affect safety, are complex or are likely to be defective. How do you know what is likely to be defective? Large functions, complex functions, code with multiple branches, complex calculations or deep layers of nesting are good targets. Functions written by less experienced or less skilled engineers should also be considered for more focused testing. Straight-line code is unlikely to turn up much in the way of problems and is not likely to be broken by changes to the code. Spend the most time on computations, looping constructs and areas with data-structure access.
Unit testing typically finds problems with corner cases, lack of input checking and computational problems—flaws that might not be easily detected at a systems level. Both expected and unexpected inputs should be tried, focusing especially on the boundary conditions that are most likely to occur in the case of errant software. Core algorithms that drive the software are easily stressed when controlling the inputs as part of unit testing.
UNIT TESTING AND DEFENSIVE PROGRAMMING
Unit testing helps ensure that code is robust. It indirectly strengthens segments of code by exposing improper handling of unexpected failures that crop up in the software. Resolution of these findings results in software in which unexpected failures are properly trapped and corrected before the device fails and harms the patient.
Consequently, unit tests generally make individual units more defensive and less reliant on checks in other areas of software. If bugs in other areas of software are introduced during later development, this defensive posture provides an excellent safeguard. It tends to aid the debugging process as well, since robust code is more likely to trap the fault with a meaningful error rather than, say, hang and trigger a watchdog reset.
Unit testing, however, will generally not catch integration errors. The focus is on the smaller building blocks in the system, ensuring that each does its job properly. How these smaller pieces interact is not likely to be found with unit testing, which is why integration testing and systems-level testing are so critical to the entire validation process.
SUMMARY
Unit testing is best thought about during development, not afterwards. Creating a unit-test strategy will help the developers structure the code to be more easily testable. This leads to more robust code that is generally less tightly coupled (i.e., units are more independent and easier to unit test) than code written without unit testing in mind. Unit testing should be automated as much as possible so that regression is as simple as building and running the code.
This will save time in later phases of the development effort as the software iteration process occurs. For best results, an independent person should review the tests and update them for proper coverage. The FDA wants independent review and testing of the device software as much as possible. Finally, don't be leery of the unit-test process. Proper unit testing done early will make the code far more reliable as the software enters the remaining phases of testing. A little more time up front will pay dividends down the road.
Explore the March 2007 Issue