
A practical way of inspecting IP quality

Posted: 12 Oct 2011

Keywords: IP, quality, regression checks, SoCs

Some of us just don't like surprises. If you are a chip designer working with third-party IP, you have learned that surprises, not always of the good kind, are an inevitable part of the package. And you are not alone: the use of third-party IP, and the cost associated with it, are both on the rise.

So what can you do about it? Do you already have, or plan to put in place, a systematic approach to inspecting IP quality on delivery?

Clearly defining what you mean by "quality" can help both you and your supplier converge more quickly on a better flow. Furthermore, your definition of quality probably needs to expand beyond a bug-centric view. A robust process that can automatically assess quality at incoming inspection can have a large impact on your schedule and overall well-being. Instituting such a system may not prevent issues, but it will ensure that issues are trapped quickly, at the source, before they trigger fire-drills later in the design process.
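To make this concrete, here is a minimal sketch, in Python, of what such an incoming-inspection gate might look like. Everything specific in it is an assumption for illustration only: the "make lint" target, the report file names and formats, and the acceptance thresholds stand in for whatever flow and criteria you and your supplier actually agree on.

#!/usr/bin/env python3
"""Minimal sketch of an automated incoming-inspection gate for a third-party
IP drop. Tool invocations, file names and thresholds are hypothetical
placeholders for your own lint/synthesis flow and acceptance criteria."""

import subprocess
import sys
from pathlib import Path

IP_DROP = Path("ip_delivery")     # unpacked IP deliverable (hypothetical layout)
MAX_AREA_UM2 = 250_000.0          # acceptance bounds agreed with the supplier
MIN_FMAX_MHZ = 400.0

def run_lint() -> bool:
    """Run the team's lint flow on the delivered RTL; fail on any violation."""
    result = subprocess.run(["make", "-C", str(IP_DROP), "lint"],
                            capture_output=True, text=True)
    return result.returncode == 0

def parse_metric(report: Path, key: str) -> float:
    """Pull a single 'key value' pair from a plain-text report (assumed format)."""
    for line in report.read_text().splitlines():
        if line.startswith(key):
            return float(line.split()[-1])
    raise ValueError(f"{key} not found in {report}")

def main() -> int:
    failures = []
    if not run_lint():
        failures.append("RTL is not lint-clean")

    area = parse_metric(IP_DROP / "reports/area.rpt", "total_area_um2")
    fmax = parse_metric(IP_DROP / "reports/timing.rpt", "fmax_mhz")
    if area > MAX_AREA_UM2:
        failures.append(f"area {area:.0f} um2 exceeds bound {MAX_AREA_UM2:.0f}")
    if fmax < MIN_FMAX_MHZ:
        failures.append(f"fmax {fmax:.0f} MHz below bound {MIN_FMAX_MHZ:.0f}")

    for f in failures:
        print("FAIL:", f)
    print("Incoming inspection:", "PASS" if not failures else "FAIL")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())

A gate like this runs in minutes on every drop, so a problem surfaces the day the deliverable arrives rather than weeks later during integration.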

If you are an IP supplier, I'm sure you are already familiar with the concept of "smoke tests" as a quick way to flush out problems in the inner loop of development. This kind of analysis can be used not only to validate correctness but also to give a quick, albeit coarse, assessment of design parameters, as I will explain below. When you are in what-if exploration, this can help you explore more options, more quickly, than full implementation analyses would allow.

Unavoidable quality problems
When you decide to use a particular IP, you probably run a comprehensive evaluation and/or you check with other users who have built production silicon around that IP. You may have personal experience and confidence in the suitability and quality of that IP as well. Coming out of this process, you feel pretty confident you have done sufficient due diligence to ensure the IP is solid. Then things start to go wrong. Why?

To explain, let me divide the issue of quality into three main components:

Objective quality: a measure of issues in the deliverable IP according to purely objective criteria. For example: the RTL is lint-clean, the synthesis and timing constraints follow best practices, the IP functions correctly, and area and performance fall within the bounds implied by the evaluation's reading of the specification. Evaluation should flush out most of these problems, but new issues may emerge on the heels of bug fixes or specification changes, and you don't have time to repeat the full evaluation cycle on each change.

Specification quality: a measure of issues in the deliverable arising from a disagreement over the reading of the specification or, more commonly, from the gap between what a reasonable user might expect and what is actually supported. No matter how carefully you read the specification or conduct the evaluation, you will expect (and fail to test) some corner of behavior which the supplier may not have considered. This is particularly obvious with configurable IP. If an IP has five configuration knobs, each with five possible settings, the supplier must test more than 3,000 configurations (5^5 = 3,125), each over its full range of possible behavior; a short sketch below makes the arithmetic concrete. Guess what. They don't. It wouldn't be possible.

The same consideration applies to behavioral usage. How far do you push corners of behavior for that hypothetical reasonable user? The supplier tests what they feel should be reasonable use-models. Over time, they evolve their testing as more users push on their definition of reasonable. If the supplier is careful (and lucky), most users can live within a relatively small subset of use-models. But don't be surprised if you are the first user to push on an unexplored corner.
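To put the configuration arithmetic above in concrete terms, the small Python fragment below enumerates the configuration space of a hypothetical IP. The knob names and value counts are purely illustrative assumptions, not any real IP's parameters.

from itertools import product

# Hypothetical configuration knobs; five knobs with five settings each
# already gives 5**5 = 3,125 distinct configurations.
knobs = {
    "data_width":   [8, 16, 32, 64, 128],
    "fifo_depth":   [4, 8, 16, 32, 64],
    "num_channels": [1, 2, 4, 8, 16],
    "ecc_mode":     ["none", "parity", "secded", "dec", "custom"],
    "clock_ratio":  ["1:1", "1:2", "1:4", "2:1", "4:1"],
}

configs = list(product(*knobs.values()))
print(f"{len(configs)} configurations to cover exhaustively")   # 3125

# Each configuration still needs its own functional regression, so the real
# cost is (configurations) x (tests per configuration).
tests_per_config = 500   # illustrative
print(f"~{len(configs) * tests_per_config:,} test runs for exhaustive coverage")

Even this toy example lands well past a million test runs, which is why suppliers test a subset of configurations and a subset of use-models, and why your corner case may be the first one ever exercised.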

Regression quality: a measure of changes between releases which don't change the specification but which may still impact a specific design, for example if pins have been added or removed, or the IP contains new critical paths. Changes of this nature may be unavoidable, but they can still negatively impact design productivity. Since it is impossible to model all possible designs and design environments in a regression process, regression issues are common.
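One regression check of this kind is cheap to automate on delivery: comparing the port list of the new top-level module against the previous release, so that added or removed pins are flagged immediately. The sketch below is deliberately naive and rests on assumptions: the file paths are hypothetical, it only handles ANSI-style Verilog port lists, and a production flow would use a proper HDL parser rather than a regular expression.

import re
from pathlib import Path

PORT_RE = re.compile(r"\b(?:input|output|inout)\b[^,;)]*?(\w+)\s*(?=[,)])")

def port_names(rtl_file: Path) -> set[str]:
    """Naive extraction of ANSI-style port names from a Verilog module header."""
    text = rtl_file.read_text()
    header = text[: text.find(");") + 2]     # keep only the module header
    return set(PORT_RE.findall(header))

old_ports = port_names(Path("release_1.0/rtl/ip_top.v"))   # previous drop
new_ports = port_names(Path("release_1.1/rtl/ip_top.v"))   # incoming drop

added = new_ports - old_ports
removed = old_ports - new_ports
if added or removed:
    print("Interface changed between releases:")
    print("  added:  ", sorted(added))
    print("  removed:", sorted(removed))
else:
    print("Port list unchanged")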

Who bears responsibility for a problem really doesn't matter when a design is dead in the water. Because the customer comes first, you, the supplier, send me a fix. Therein lies the seed of one or more new problems, especially if the fix is triggered by a specification change. As the IP consumer, I need the fix urgently, so you do some basic testing and ship it off. But in the limited time you had to think about and make the fix, you didn't consider some subtle consequences.

Running the full regression process is not guaranteed to catch newly introduced latent quality issues, since the existing regression suite was never designed to consider interactions between the specification change and other features.
