Accelerate processor verification through testbench infrastructure reuse

Posted: 01 Sep 2011

Keywords: verification, IP, simulation, testbench

In other types of processors, especially DSPs, changes to the core ISA can differ significantly from product to product, through the re-encoding of instructions, for example. Even apparently easy changes, such as the addition of a single instruction for optimising a favourite algorithm, can prove very complex for test generation if there are special use cases to take into account. A common source of difficulty is software-exposed VLIW changes, such as new addition slots, interlocks, and so on. These don't affect the ISA directly, but they certainly affect test generation. This situation is illustrated in figure 3b, where the reusable test collateral is shown to be very small in comparison to the total verification space.

In short, implementation complexity and quickly evolving ISAs can have a huge impact on the ability to reuse testbench infrastructure, and particularly test generators, from project to project.

All too often, the end result is expensive, unscheduled rewrites of infrastructure code.

Figure 3: Shown is the required increment in test generation capability for different project types.

Project size
The nature of the project-to-project changes notwithstanding, approaches to processor verification depend on the scale of the undertaking. This may be massive, as in the examples of the Intel Atom and the ARM Cortex, or more moderate. Examples of the latter category include domain-specific, proprietary architectures, and also enhancements to existing ones, such as those of ARC or Tensilica.

Massive projects are attempted only by large companies that have been in the game for a considerable time. Any new processor development therefore leverages existing infrastructure and test suites (backwards compatibility with existing architectures is a frequent requirement), and new tests and testing components are added to this infrastructure as needed.

Such companies generally consider functional verification to be a core competence, and they produce and maintain their own functional verification tools almost entirely in-house. This approach makes sense, up to a point. The production and maintenance of large amounts of code (usually C or C++) costs a lot of money. Further, to get the right return on this investment, a company must maintain a strong focus on the software activity. This means well-defined and managed projects for the different components of the functional verification infrastructure, with dedicated and motivated owners. As these infrastructure projects age, this kind of focus can be very hard to maintain. The constraints solver guru leaves the company, gets ill, or is hit by a meteorite, and his replacement wants to "clean up" thousands of lines of code.

Before we consider ways to reduce exposure to such risks, let us turn to the second category of projects. Processor architectures in "moderate" projects are simpler than those of their massive counterparts (at least, they start that way), and the infrastructure is built and maintained by smaller teams. The engineers involved often come from a hardware development background, and this is one of the reasons for the frequent choice of Hardware Verification Languages (HVLs) such as e, Vera and SystemVerilog. These are conceptually very close to Hardware Description Languages (HDLs) such as Verilog and VHDL, which may also be used for verification tasks. An HVL is similar in look and feel to an HDL, but it includes specialised verification features. The most important of these is a constraints engine, which is a difficult function to implement from scratch (though it typically only takes about 1% of the code in a full test generator).
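To make the role of the constraints engine concrete, the following is a minimal sketch, written in C++ rather than an HVL, of constrained-random field selection using naive rejection sampling. The instruction format and the constraints are hypothetical, invented for this illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <random>

// Hypothetical fields of a simple load instruction.
struct LoadInstr {
    uint8_t rd;      // destination register, 0..31
    uint8_t rs;      // base register, 0..31
    int16_t offset;  // signed immediate offset
};

// A stand-in for the kind of relation an HVL solver handles declaratively:
// never write register 0, and keep the offset word-aligned.
bool satisfies_constraints(const LoadInstr& i) {
    return i.rd != 0 && (i.offset % 4) == 0;
}

// Naive rejection sampling: draw random field values until the constraints hold.
// An HVL constraints engine solves such relations analytically and copes with
// thousands of interdependent constraints, which is what is hard to write from scratch.
LoadInstr randomize(std::mt19937& rng) {
    std::uniform_int_distribution<int> reg(0, 31);
    std::uniform_int_distribution<int> off(-2048, 2047);
    LoadInstr i{};
    do {
        i.rd = static_cast<uint8_t>(reg(rng));
        i.rs = static_cast<uint8_t>(reg(rng));
        i.offset = static_cast<int16_t>(off(rng));
    } while (!satisfies_constraints(i));
    return i;
}

int main() {
    std::mt19937 rng(42);
    for (int n = 0; n < 5; ++n) {
        const LoadInstr i = randomize(rng);
        std::cout << "ld x" << int(i.rd) << ", " << i.offset
                  << "(x" << int(i.rs) << ")\n";
    }
}
```

Even this toy example hints at why the solver matters: once constraints span many interdependent fields, rejection sampling becomes hopelessly inefficient and an analytical solver is needed.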

The main drawback of HVL-based verification systems is that they tend to reach a saturation level, beyond which it becomes inefficient and costly to incorporate new features. HVLs may be verification oriented, but they are not processor verification oriented, and using an HVL for processor verification is a little like using Excel to manage company accounts. It's very useful for getting started, and you need the flexibility it provides in the flow, but once a certain level of complexity is reached you also need to invest in specialised tools. At this point, if you wish to continue along the path of creating and maintaining your own tools, the only choice is to write them at the C/C++ level.

We therefore see that the "massive" and "moderate" project approaches are the same in one crucial respect: they both require a large commitment to coding and maintaining the testbench infrastructure, especially the test generators.

Commitment to infrastructure
Each time a test generator is developed or updated for a new project, the team runs certain risks. An inadequate or inefficient test generator may be unable to cover the functionality of the design sufficiently, or it may be too slow to do so in a reasonable amount of time. It may also lack the constraints needed to test or avoid certain behaviours. And if the test generator is not completed on schedule, it stands to compromise the schedule for the entire project.

Suppose, for example, that a static test generator (where tests are generated using architecturally visible state only) is discovered to be generating insufficient coverage of a complex processor pipeline. The current coverage metrics are increasing too slowly to meet deadlines, and we realise that our approach is not working. At this point, our options would be either to re-allocate engineering resources to hand-write vast numbers of directed tests, or to upgrade to dynamic test generation, where corner cases are provoked by using state information that is only available from a running implementation. Of course, the difference between a static and a dynamic generator is a fundamental one, and it may be hard to add dynamic capability to an existing, static generator. This illustrates how projects can easily arrive at a verification impasse: new implementation features, added for performance, expand the verification space beyond what the existing test generation infrastructure can cover.
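To illustrate the distinction, here is a hedged C++ sketch contrasting a static generator, which sees only architecturally visible state, with a dynamic one that also consults hypothetical pipeline state exposed by a running implementation. The structures and heuristics are invented for this example, not taken from any particular tool.

```cpp
#include <cstdint>
#include <random>
#include <string>

// Architecturally visible state only: program counter and register values.
struct ArchState {
    uint64_t pc;
    uint64_t regs[32];
};

// Hypothetical micro-architectural state exposed by a running implementation
// (e.g. through a simulator backdoor); invisible to a static generator.
struct PipelineState {
    int  load_store_queue_occupancy;
    bool branch_mispredict_pending;
};

// A static generator decides the next instruction from architectural state alone.
std::string static_next(const ArchState& arch, std::mt19937& rng) {
    // Bias branches near 4 KB page boundaries: the only hint architectural state offers.
    if ((arch.pc & 0xFFF) > 0xFF0) return "beq";
    std::uniform_int_distribution<int> pick(0, 1);
    return pick(rng) ? "add" : "ld";
}

// A dynamic generator can steer towards corner cases the static one only hits by luck,
// for instance issuing more loads when the load/store queue is nearly full.
std::string dynamic_next(const ArchState& arch, const PipelineState& pipe,
                         std::mt19937& rng) {
    if (pipe.load_store_queue_occupancy >= 14) return "ld";   // stress the full queue
    if (pipe.branch_mispredict_pending)        return "beq";  // pile on a second branch
    return static_next(arch, rng);
}
```

The point is not the heuristics themselves but that they depend on information a static generator simply cannot see, which is why retrofitting dynamic capability is rarely a small change.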

Of course, the ideal situation for a company involved in processor verification is to be able to limit its own work on testbench infrastructure to the strict minimum that differentiates its product. Further, it is preferable to be in the position of evaluating and configuring tools, rather than writing code and scripts, since the latter are more expensive to maintain than configuration tables, for example. This Holy Grail can only be reached if the necessary generic products are available for building the non-differentiating part of the testbench. Fortunately, because processor verification is such a widespread activity, and since modern processor designs generally use a common set of techniques and building blocks, more and more progress is being made in this direction. Figure 4 divides the major functional blocks of modern test generators into a test generator core and DUT-specific layers. The generator core consists of generic components that may be reused across a wide range of designs. The DUT-specific layers are made up of custom components that are unique to a given design. The 90/10 percent split is a general approximation.

Figure 4: Shown are the modern test generator components.
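One way to picture the split of figure 4 is as a generic generator core that owns sequencing and randomisation, plus a thin DUT-specific layer supplied per project. The interfaces below are hypothetical, a sketch of the idea rather than any vendor's API.

```cpp
#include <memory>
#include <random>
#include <string>
#include <vector>

// DUT-specific layer: the only part a project team would need to write.
// The interface and the example implementation below are hypothetical.
class IsaModel {
public:
    virtual ~IsaModel() = default;
    virtual std::vector<std::string> instruction_pool() const = 0;
    virtual std::string assemble(const std::string& mnemonic, int rd, int rs) const = 0;
};

// Generator core: generic sequencing and randomisation, reusable across designs.
class GeneratorCore {
public:
    explicit GeneratorCore(std::unique_ptr<IsaModel> isa)
        : isa_(std::move(isa)), rng_(1) {}

    std::vector<std::string> generate(int count) {
        const auto pool = isa_->instruction_pool();
        std::uniform_int_distribution<size_t> pick(0, pool.size() - 1);
        std::uniform_int_distribution<int> reg(0, 31);
        std::vector<std::string> test;
        for (int i = 0; i < count; ++i)
            test.push_back(isa_->assemble(pool[pick(rng_)], reg(rng_), reg(rng_)));
        return test;
    }

private:
    std::unique_ptr<IsaModel> isa_;
    std::mt19937 rng_;
};

// Example DUT-specific plug-in for a made-up three-instruction ISA.
class TinyIsa : public IsaModel {
public:
    std::vector<std::string> instruction_pool() const override {
        return {"add", "sub", "ld"};
    }
    std::string assemble(const std::string& m, int rd, int rs) const override {
        return m + " r" + std::to_string(rd) + ", r" + std::to_string(rs);
    }
};
```

Under this split, a project team implements and maintains only its own IsaModel and any implementation-specific hooks, while the core, roughly the 90 percent of figure 4, is carried forward unchanged from project to project.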

Most architectural and implementation features in a new processor are shared by other processors. There are only so many ways to create and test an ADD instruction, or a pipeline, or a multi-threaded architecture. As the architecture × implementation product continues to drive up the complexity of test generators, either teams are going to have to make a much larger commitment to building this infrastructure, or they must start leveraging IP in this field.

Looking forward
In summary, specialised tools can and should be used in certain key parts of the verification process, allowing engineering focus to be reassessed. Rather than being responsible for the evolution and maintenance of major pieces of the testbench, engineers should be directing their efforts towards the layer of the verification system that is product-specific. Processor projects suffering from increasing implementation complexity, or from frequent and radical changes in instruction sets, stand to benefit most from this trend. There are particularly attractive opportunities to do this in the processor domain, where similar problems are seen industry-wide and the scope for generic solutions is therefore especially large.

About the authors
Eric Hennenhoefer is the President, CEO and Co-Founder of Obsidian Software Inc. (Austin, Texas).

Andrew Betts is a Technical Sales & Marketing Consultant for Iconda Solutions.

View the PDF document for more information.

