Finding defects in safety-critical code

Posted: 31 Mar 2009

Keywords: safety-critical code, static analysis, rigorous testing

A second class of check is for inconsistencies or redundancies. These are not bugs per se, but are very often indicators that a programmer has misunderstood something. For example, if you check the return value of a function 99 percent of the time, that 1 percent where you don't check it may indicate such a problem. This class includes redundant conditions, useless assignments, and checking if a pointer is null after it has already been dereferenced.
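As a minimal sketch (hypothetical code, not taken from any particular application), the function below packs several of these patterns together:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the inconsistency/redundancy patterns described
 * above. The code compiles and may even appear to work, but each marked
 * line is the kind of pattern an advanced static analyzer reports.        */
static void log_user(const char *name)
{
    int len = 0;                        /* useless assignment: this value is
                                           overwritten before it is ever read */
    len = (int)strlen(name);

    FILE *fp = fopen("user.log", "a"); /* return value not checked here, even
                                          though other call sites check it    */
    fprintf(fp, "user (%d chars): %s\n", len, name);

    if (fp != NULL)                     /* null test after fp has already been
                                           dereferenced above: redundant at
                                           best, a misplaced check at worst   */
        fclose(fp);
}

int main(void)
{
    log_user("alice");
    return 0;
}
```

An analyzer would typically flag the useless assignment, the unchecked fopen() result, and the check-after-dereference as separate warnings, even though none of them necessarily causes a failure on its own.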

Finally, some tools allow you to author your own checks.

Note that static analysis can't check all properties of your program. If your code contains a logic error that causes your program to produce the wrong result, then static analysis will usually be of little help finding that bug. That is what testing is for.
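For instance, a hypothetical unit-conversion bug like the one below is invisible to static analysis: nothing about the code is unsafe, it simply computes the wrong value, and only a test against known inputs and outputs will catch it.

```c
#include <stdio.h>

/* Hypothetical logic error: memory-safe, type-correct, and warning-free,
 * but wrong. The Celsius-to-Fahrenheit formula should be c * 9 / 5 + 32. */
static double celsius_to_fahrenheit(double c)
{
    return c * 5.0 / 9.0 + 32.0;   /* factors swapped: a test asserting
                                      celsius_to_fahrenheit(100.0) == 212.0
                                      would fail immediately                */
}

int main(void)
{
    printf("100 C = %.1f F\n", celsius_to_fahrenheit(100.0));  /* prints 87.6 */
    return 0;
}
```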

Open source example
To illustrate the kinds of flaws that advanced static analysis can detect, consider the example in Figure 2. It is a real flaw in an open-source application, detected by CodeSonar.

Figure 2. A screenshot from CodeSonar showing a warning that there is a buffer overrun. In this case, the error is on line 2200, where the programmer has a misplaced parenthesis.
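The screenshot itself is not reproduced here; the following hypothetical sketch (not the actual code from the figure) shows how a misplaced parenthesis of this general kind turns into a buffer overrun:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch only; the real flaw is the one shown in Figure 2.
 * The intent is malloc(strlen(src) + 1), but the misplaced parenthesis
 * yields strlen(src + 1), i.e. strlen(src) - 1 for a non-empty string, so
 * the buffer is two bytes too small and strcpy() writes past its end.     */
char *copy_name(const char *src)
{
    char *dst = malloc(strlen(src + 1));    /* should be strlen(src) + 1 */
    if (dst == NULL)
        return NULL;
    strcpy(dst, src);                       /* overruns dst by two bytes */
    return dst;
}
```

An interprocedural analyzer can connect the undersized allocation to the later strcpy() and report the overrun at the point of the out-of-bounds write.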


Management of results
Because static analysis works on an abstraction of the code, false positives are unavoidable in practice. Static analysis tools can also have false negatives: they do not guarantee to find all the bugs in your code. Too many false positives mean that you may spend too much time sifting through the chaff looking for the real bugs, robbing your development effort of resources that might be better spent finding bugs through other methods. A high false positive rate has a subtle psychological effect, too: as it increases, users are less likely to trust the results and are more likely to erroneously tag a true positive as a false positive.

The real measure of the effectiveness of a static analysis tool is how well it simultaneously balances the false positive rate with the false negative rate.

Different kinds of bug merit different levels of effort to find, depending on the amount of risk they carry. A buffer overrun in a medical device may be life-threatening, whereas a leak in a game controller that merely forces a reset once a day carries very little risk. The amount of risk determines the false positive rate that users are prepared to accept. In practice, we have found that for serious classes of flaws, such as buffer overruns and null pointer dereferences, users are often prepared to accept a false positive rate as high as 75 to 90 percent. For less risky classes, a false positive rate of more than 50 percent is usually considered unacceptable.

An important aspect of these tools is their usability. Once a false positive has been identified and labeled, the user will not want to see that warning again. If a warning is a true positive, tools should allow users to annotate it with comments or assign it to a user. Warnings are typically saved in a database so that managers can create views of high-level properties, such as which modules are most risky or how code quality is changing over time.

