EE Times-Asia

Why stitch and ship is no longer workable

Posted: 09 May 2013

Keywords: system on chips, verification, register transfer level

The electronics and semiconductor industries have depended on a modular building-block approach since their earliest days. By defining a limited number of interfaces and using them to connect components, this approach has enabled a fairly clean separation of the functional pieces. These distinct functional pieces are designed separately and integrated at the chip, board or system level. The approach has often been compared to "Lego" building blocks.

Another common design practice has been to minimise the frequency of communications between the functional blocks because those interfaces generally have long latencies (compared to processing speeds) and are often the congestion points in a system. This also simplifies the integration process because it decreases the number of problems that can be created due to temporal interactions.

For many years, companies making the most complex system on chips (SoCs) have been quite successful performing the bulk of their verification at the block level. When the components are integrated, a small number of system-level tests are run to ensure that the blocks were properly interconnected.

This strategy, often called stitch and ship, is increasingly leading to failure because of growing complexity at the system level. In addition, increasing amounts of functionality are defined at this level. New verification strategies are required to bring system-level verification into the mainstream development flow.

System complexity used to be driven by Moore's Law alone and, while that law is still alive and well, two additional laws have been added to the mix. The first is Amdahl's Law, which describes how the total throughput of a system is constrained by its slowest piece. When processor speeds stopped increasing, the industry transitioned to multiple processors and, in the embedded world, these are usually heterogeneous. Processors, memories, buses and peripherals now have many more connection points than they did in the past, and it has become increasingly difficult to analyse them both functionally and in terms of performance.
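As a quick illustration (a sketch, not part of the original article), Amdahl's Law can be written as speedup = 1 / (s + (1 - s)/N), where s is the inherently serial fraction of the work and N is the number of processors. Even a modest serial fraction puts a hard cap on throughput no matter how many processors are added:

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Overall speedup for a workload where `serial_fraction` of the
    work is inherently sequential and the remainder scales perfectly
    across `n_processors`."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# A 10% serial portion caps speedup below 7x even with 16 processors.
print(round(amdahl_speedup(0.10, 16), 2))   # 6.4

# With no serial portion, speedup is ideal.
print(amdahl_speedup(0.0, 8))               # 8.0
```

This is why the bottleneck block, not the fastest one, dominates system-level performance analysis.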

The second new law is Metcalfe's Law, which addresses the complexity and utility of systems in which multiple independent pieces are able to communicate with each other. Systems today contain many pieces of interconnected functionality that can be combined in numerous ways to create any number of user experiences. Contrast this with the single-processor, single-function systems of yesteryear. In addition, the functionality of today's SoCs is not confined to leaf blocks. Voltage and frequency adjustments on power domains are system-level functions, and these have created a new set of verification challenges that cannot be addressed at the block level.
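A rough sketch of the Metcalfe-style growth the article alludes to (an illustration, not the author's analysis): with n independently communicating blocks, the number of possible pairwise links, each a potential interaction that may need verification, is n(n-1)/2 and so grows quadratically:

```python
def pairwise_links(n_blocks):
    """Number of distinct pairwise connections among n communicating
    blocks: n choose 2 = n*(n-1)/2."""
    return n_blocks * (n_blocks - 1) // 2

# Doubling the block count roughly quadruples the interaction count.
for n in (4, 8, 16, 32):
    print(n, pairwise_links(n))   # 4 6 / 8 28 / 16 120 / 32 496
```

The verification burden therefore scales with the interactions, not with the block count, which is why block-level coverage alone says little about the integrated system.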

All of this leads to a growing myth within verification that if the individual blocks work and the communications fabric works then, when integrated, the system will either work or can be fixed in software. This is rarely true and the number of failures in the field is testament to the fact that system-level verification has been ignored for longer than it should have been. Continued insufficient allocation of resources will lead to an increasing number of failures at this level.

Problems with existing verification strategies
Verification methodologies in use today are not appropriate for the system verification task. There used to be a rule in IT: Nobody got fired for buying IBM. Today that rule may be: Nobody got fired for relying on the Universal Verification Methodology (UVM).

This methodology was designed to operate at the block level and cannot detect typical system-level errors. The reasons for this are clear. The first is that UVM is based on hardware verification languages (HVLs) and cannot create code suitable for execution on embedded processors. Many sub-systems contain a processor that must be removed from the design and replaced by a bus functional model before UVM can assist with any verification. This makes it an inaccurate representation of the system and unable to perform verification tasks, such as performance verification, in any meaningful way.

The second problem is that UVM, or any verification strategy based on pseudo-random stimulus generation, becomes less efficient and less effective as the sequential depth of the design increases. To overcome this limitation, the methodology introduced sequences as a way to capture snippets of legal and useful activity. Higher-level sequences can be created from lower-level sequences as the design is integrated. Unfortunately, this is not a scalable process, since new virtual sequences must be defined every time a different configuration or variant is created.
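To see why sequential depth hurts pseudo-random stimulus, consider a deliberately simplified model (an assumption for illustration only): if a random input has probability p of advancing toward a target state on each cycle, the chance of a single test reaching depth d is p^d, so the expected number of random tests needed grows exponentially with depth:

```python
def hit_probability(p_per_step, depth):
    """Chance that one random test takes the 'right' branch on every
    one of `depth` consecutive cycles."""
    return p_per_step ** depth

def expected_tests(p_per_step, depth):
    """Expected number of independent random tests before the deep
    state is reached once."""
    return 1.0 / hit_probability(p_per_step, depth)

print(expected_tests(0.5, 10))   # 1024.0 tests for depth 10
print(expected_tests(0.5, 30))   # over a billion tests for depth 30
```

Sequences mitigate this by packaging known-good multi-cycle activity, but as the text notes, composing them anew for every configuration does not scale.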


