
By Scott Trappe, Lawrence Markosian
November 1, 2000
The design of the Java language has done much to overcome the limitations of C and C++. However, testing and debugging continue to account for much of the cost of developing Java applications. Once you've deployed a Java application, it's even more difficult and costly to fix software faults.
Unfortunately, most conventional test methodologies assess only about 60% of the code in any Java application. And as applications become larger and more complex, even less code is covered by conventional test methodologies. So the challenge facing Java developers remains: How do you effectively debug applications before you deploy them?
New automated inspection techniques are becoming available from independent QA providers that make it practical to inspect and evaluate source code before you test. In fact, inspection at various stages of development can isolate critical, crash-causing faults before Java applications are compiled and tested, resulting in higher quality code with less reliance on software testing. Unlike testing, automated software inspection (ASI) can cover 100% of Java source code in a fraction of the time.
ASI is a new approach that provides early feedback to developers about crash-causing, data-corrupting software defects. ASI reads the source code, analyzes the structure of the software, determines how data flows through the program, and identifies code-level defects such as array bounds violations, dereferencing null objects, and race conditions. For example, conventional procedures don't test every possible application thread interaction in a multithreaded Java application. ASI, on the other hand, can provide complete code coverage, examining every thread for points of potential race conditions. Similarly, testing for out-of-bounds array access errors is impractical since it requires developing test cases that cover every possible access. Here, ASI can perform a comprehensive static analysis without having to test every array access and can uncover potential problems before testing. ASI requires neither target hardware nor test cases, and thus can detect errors that would escape conventional testing. It's ideal for isolating hard-to-find problems in Java applications.
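As a hypothetical illustration (the code below is ours, not the output of any particular tool), consider a method in which one branch can leave an object null before it's dereferenced. A data-flow analysis can flag the dereference without ever running the code, whereas testing finds it only if some test case happens to drive execution down the offending path:

public class OrderFormatter {
    static String format(String customer) {
        String label = null;
        if (customer != null) {
            label = customer.trim();
        }
        // Defect: when customer == null, label is still null here, so
        // label.length() throws a NullPointerException.
        return "Order for " + label + " (" + label.length() + " chars)";
    }

    public static void main(String[] args) {
        System.out.println(format("ACME")); // the tested path passes
        System.out.println(format(null));   // the untested path crashes
    }
}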
First Inspect, Then Test
It's always best to find defects before they become bugs - ideally, immediately after coding. One of the best lines of defense in improving the overall quality of software is to improve software testing. However, while testing is acknowledged to be a critical part of total quality assurance, it has limitations:
- It's expensive and time-consuming to create, run, validate, and maintain test cases and processes.
- The number of statements tested, that is, code coverage, inexorably drops as the system grows larger: the bigger the application, the less of its source code testing can validate.
- Even when testing uncovers a fault, it's difficult to trace a failure from a test case back to the code causing the problem.
- Testing can't uncover all potential bugs. A study conducted by Capers Jones concluded that even the best testing processes would remove, at most, 85% of all software defects. Additional research conducted by the Standish Group determined that most QA organizations are only 30-40% effective at identifying software defects.
The lines in Figure 1 show the execution paths of several test cases. In the ideal case shown there, every statement is executed, achieving total code coverage, although such comprehensive testing is rare. And even when total code coverage is achieved, many defects can remain: effective testing requires total path coverage in addition to statement coverage, meaning every potential path connecting statements should be exercised. The lines in Figure 2 show several paths the test cases missed. A contrived example of the difference follows.
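In this sketch (ours, independent of the article's figures), two test cases execute every statement, yet the one input combination that fails follows a path neither test takes:

public class PathCoverage {
    static int run(boolean a, boolean b) {
        String s = "hello";
        if (a) {
            s = null;          // covered by run(true, false)
        }
        if (b) {
            return s.length(); // covered by run(false, true)
        }
        return 0;
    }

    public static void main(String[] args) {
        run(true, false); // passes
        run(false, true); // passes; statement coverage is now 100%
        run(true, true);  // untested path: throws NullPointerException
    }
}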
ASI can minimize this problem by providing both code coverage and path coverage. Experts agree that inspecting software before testing is the most effective way to find software faults. Studies by Capers Jones show that up to 65% of all coding defects can be identified prior to testing, thus realizing substantial savings in both time and dollars.
The Cost of Debugging
While ASI isn't a replacement for testing, it is a cost-effective, complementary QA technique that shortens time-to-market and increases reliability for Java applications. Applying automated inspection early and at critical stages in the development process saves development time and resources.
Outsourcing inspection to a qualified service provider may seem expensive. Some developers have considered trying to do their own software inspection, but this is truly a specialized skill requiring a high degree of technical sophistication, not to mention substantial dedicated computing power. Still, the savings you can realize by using a software inspection service can be substantial. Consider the following examples.
Let's review a typical Java application consisting of 500,000 lines of code. Most applications average one software defect for every 1,500 lines of source code, so you'd expect our Java application to have about 330 defects. The cost of thoroughly inspecting a Java application of that size using ASI would be about $50,000.
That may seem like a substantial investment, but consider the costs of alternative approaches, such as inspecting the same application manually. It would take a good programmer an average of about five hours of dedicated work to isolate a single software problem. The typical software developer costs the company approximately $8,500 per month, and there are about 170 work-hours per month. That averages out to a cost of $250 per defect, or a total cost of $82,500 to manually inspect a 500,000-line application, and it would take about 10 man-months. What that estimate doesn't take into account is the loss of productivity and additional loss of revenue resulting from dedicating valuable development time to inspection. ASI, on the other hand, costs about $50,000 and would take less than two weeks. And ASI provides higher accuracy with repeatable results.
Now let's consider the same example and the cost of automated inspection versus testing. A typical QA engineer costs the company about $6,000 per month and can isolate only about eight defects per month, which averages out to roughly $750 per defect. Thoroughly testing 500,000 lines of code would therefore take about three and a half man-years at a total cost of about $250,000. ASI could be done in two weeks at 20% of that cost.
Finally, let's consider the cost of ASI versus improving the defect removal rate. It costs an average of $14,000 to isolate a bug after the software is deployed (according to the Cutter Consortium), and an average of 5,000 bugs are introduced before inspection or testing begins. Even with the best possible average defect removal rate of 85%, the remaining 15%, some 750 bugs, will make their way into the first commercial release. Even if ASI improves the defect removal rate by only 5%, that represents a potential savings of $3.5 million.
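The arithmetic behind these three comparisons is simple enough to restate in code; the sketch below uses only the figures quoted above.

public class InspectionCosts {
    public static void main(String[] args) {
        int loc = 500000;
        int defects = loc / 1500;                     // ~333; rounded to "about 330" above

        // Manual inspection: 5 hours per defect at $8,500 per 170-hour month.
        double hourlyRate = 8500.0 / 170.0;           // $50 per hour
        double manualCost = defects * 5 * hourlyRate; // ~$83,000
        double manualMonths = defects * 5.0 / 170.0;  // ~10 man-months

        // Testing: $6,000 per month, about eight defects isolated per month.
        double costPerDefect = 6000.0 / 8.0;          // $750 per defect
        double testingCost = defects * costPerDefect; // ~$250,000

        // Field defects: 5% better removal of 5,000 bugs at $14,000 each.
        double fieldSavings = 5000 * 0.05 * 14000;    // $3,500,000

        System.out.printf("manual inspection: $%,.0f over %.1f man-months%n",
                manualCost, manualMonths);
        System.out.printf("testing: $%,.0f%n", testingCost);
        System.out.printf("savings from a 5%% better removal rate: $%,.0f%n",
                fieldSavings);
    }
}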
Clearly, ASI can represent substantial savings in Java development. But how would you apply ASI to isolate bugs unique to Java? Here are a few examples.
Types of Java Defects Found by Automated Inspection
Since Java is considered a relatively "safe" programming language that performs a lot of runtime checking, many programmers believe extensive testing is sufficient to guarantee high-quality code. Not so. When a runtime check detects an exceptional condition, Java throws an exception. If the programmer hasn't handled the exception properly, the program will either crash or the environment will be corrupted in a way that prevents normal continuation. Even if the exception is handled, the computation performed is likely incomplete and may cause data corruption or further exceptions. So even with extensive testing, latent defects can remain that will crash an application in the field. Finally, threading problems are a serious issue in many Java applications and are extremely difficult to isolate with testing. Java provides mechanisms, such as synchronization of shared resources, that can prevent some of these problems, but defects arise wherever those mechanisms are omitted or misapplied. ASI can uncover these faults.
In addition to finding the point in the code where an exception or other problem can occur, such as a null object dereference, ASI can provide the information needed to correct the underlying problem (for example, the place in the code where a null object can be generated). Testing places a greater burden on the programmer to interpret the raw results and trace back to find where the null pointer was generated.
The Java defect classes we expect to detect are a subset of the Java runtime exceptions plus threading defects. We haven't made a final determination of which defect classes will be implemented first, but likely candidates include the following (two are sketched after the list):
- NullPointerException
- IndexOutOfBoundsException
- NegativeArraySizeException
- SecurityException
- Race conditions
- Deadlock
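Two of these defect classes are easy to illustrate with short hypothetical fragments (the code is ours, for illustration only):

public class DefectClasses {
    // IndexOutOfBoundsException: when reading pairs of adjacent elements,
    // the loop bound should be values.length - 1; the last iteration
    // accesses values[values.length].
    static void printPairs(int[] values) {
        for (int i = 0; i < values.length; i++) {
            System.out.println(values[i] + "," + values[i + 1]); // defect
        }
    }

    // NegativeArraySizeException: a size derived from input is never
    // range-checked, so requested == 0 allocates new int[-1].
    static int[] makeBuffer(int requested) {
        return new int[requested - 1]; // defect when requested < 1
    }
}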
The trick in defect detection is to isolate only the defects that are truly important: crash-causing faults and other problems that warrant allocating the resources to fix them. This rules out problems such as:
- Violations of most coding standards
- All false positives
- All "low value" defects
False positives are simply code fragments that look like defects but really aren't. They're usually caused by the inability of an inspection tool to perform a sufficiently deep analysis of the program to rule out the apparent defect. An example would be a division whose divisor is never checked for zero within the enclosing method. If you examine only the method where the division is performed and find no check for a zero divisor, you might be tempted to report a defect. However, the check may be performed elsewhere, so the method is never called with a zero divisor. In that case, reporting a potential arithmetic exception for division by zero would be a false positive.
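A sketch of that situation (the code is ours): the division looks unguarded in isolation, but its only caller checks the divisor first, so the apparent defect can never fire.

public class Stats {
    // In isolation, average() looks like a potential division by zero:
    // there is no local check that count != 0.
    private static int average(int sum, int count) {
        return sum / count;
    }

    public static void report(int[] samples) {
        if (samples.length == 0) { // the guard lives in the caller
            System.out.println("no samples");
            return;
        }
        int sum = 0;
        for (int s : samples) {
            sum += s;
        }
        // average() is only ever called with a nonzero count, so reporting
        // a division-by-zero defect inside it would be a false positive.
        System.out.println("average = " + average(sum, samples.length));
    }
}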
Another useful characterization of defects is "low value." These are true defects but, for whatever reason, they're of little or no interest to the developer. For example, in C and C++, a null pointer dereference on an out-of-memory condition is a true defect, but it's often (although not always) a low-value defect because the expectation is that the application (and maybe the system) is about to crash anyway.
A number of Java coding problems illustrate how inspection can isolate crash-causing faults that testing misses, that is, testing's false negatives. We'll also consider a similar example that looks like a problem but on closer inspection isn't. Both examples involve race conditions in Java.
Listing 1 is a synchronized method that adds an element to a table.
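Listing 1 itself is not reproduced here, so the following minimal sketch stands in for it, based on the description below; the addRow and Row names and the rows field are our assumptions.

public synchronized void addRow(Table t) {
    Row row = new Row(t.dateTime, t.id, t.status); // create the new table entry
    setMaxColSize(t);                              // recalculate the column sizes
    rows.addElement(row);                          // add the new row to the table
}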
Each thread calls this method at a different time. The method locks the table, creates a new table entry, recalculates the column sizes by calling setMaxColSize, and then adds the new row to the table. The code for setMaxColSize is shown below and sets a number of instance variables.
private void setMaxColSize(Table t) {
    // Recompute the column sizes from the new row's fields. These are
    // shared instance fields, written without synchronizing their readers.
    dateTimeSize = t.dateTime.toString().length();
    idSize = t.id.length();
    statusSize = t.status.length();
}
This application has an event-driven user interface. At any point in the execution of the above code, a user could press a button in a frame causing another process to execute. In the File-Save operation, for example, which is invoked by pressing a button, there is a call to a method called getMaxColSize. This method isn't synchronized.
Thus, it's possible that, while a thread is in setMaxColSize setting instance fields, the main event-handling thread could trigger a file save that would read those column sizes. With a bad interleaving the reading thread could get some new values and some old values for the column sizes:
public int[] getMaxColSize() {
    // Not synchronized: this can run while setMaxColSize is mid-update,
    // so the returned array may mix old and new column sizes.
    int[] sizes = { dateTimeSize, idSize, statusSize };
    return sizes;
}
This is a potentially dangerous situation that can cause unexpected results. While this race condition is hard to detect with conventional testing, inspection would identify it directly.
A False Positive
It's common practice in thread programming for the programmer to rely on each thread retaining a copy of all required variables rather than using synchronization to share memory between threads. This assumes that sharing variables isn't required within the application. This is an acceptable way of avoiding race conditions, assuming that none of the variables accessed by the threads are static or in any way accessible by multiple threads.
The example in Listing 2 is adapted from Eckel's Thinking in Java. It relies on three variables: countdown, threadcnt, and threadnum. A perfunctory analysis of this code might conclude that a potential race condition exists. However, each thread has its own copy of two of these variables. The threadcnt variable looks a little more troublesome because it's static, but note that it's modified only in the constructor, not in a start or run method, so multiple threads won't access it. Again, here's a nonfault that might be incorrectly identified as a fault during a superficial inspection.
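Listing 2 is likewise not reproduced here. The sketch below follows the SimpleThread example in Thinking in Java, using the three variable names the text cites; the class name and output details are assumptions.

public class SimpleThread extends Thread {
    private int countdown = 5;        // per-instance: each thread has its own copy
    private static int threadcnt = 0; // static, but written only during construction
    private int threadnum;            // per-instance thread number

    public SimpleThread() {
        threadnum = ++threadcnt; // runs on the constructing (main) thread only
    }

    public void run() {
        while (true) {
            System.out.println("Thread " + threadnum + "(" + countdown + ")");
            if (--countdown == 0) {
                return;
            }
        }
    }

    public static void main(String[] args) {
        // Every write to threadcnt happens on the main thread, in the
        // constructor, before each new thread starts; no two threads race.
        for (int i = 0; i < 5; i++) {
            new SimpleThread().start();
        }
    }
}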
Conclusion
From these examples it's clear that automated software inspection can be extremely valuable in Java development. Testing can uncover many problems, but it's often difficult to assess the severity of the defects and trace them back to their source. Automated software inspection, on the other hand, provides more complete coverage of Java code, not only identifying many defects earlier in the software development life cycle but also providing information necessary to correct the problem.
ASI isn't a replacement for testing. However, it's a cost-effective, complementary QA technique that shortens time-to-market and increases software reliability. Applying automated inspection early in the development process - as soon as a build can be done - and at various strategic later stages, saves time and resources. It also saves developer time and allows for creation of higher quality applications.
Scott Trappe is president and COO at Reasoning, Inc. (www.reasoning.com), an application service provider specializing in software quality and modernization. Lawrence Markosian, a founder of Reasoning, is a product manager for its automated software inspection service.
Reasoning is headquartered in Mountain View, California, with sales and service locations in North America, Europe, and Japan.