High-bandwidth networking for better fault tolerance

Posted: 16 Feb 2002

Keywords: SoC, DRAM, ATM cell, memory simulation, IP

Networking has benefited from advances in SoC design, notably in terms of specialized intellectual property, including network processors and on-chip buses. Now, the challenge is to keep processors fed with data. The bandwidth bottleneck has shifted to memory subsystems, a trend that can be seen across various end markets, especially in networking, where speed is a differentiating factor.

Memory vendors are addressing this problem by developing new architectures focused on high-bandwidth networking and communications applications. The rapid fragmentation of the DRAM market is largely due to the emergence of networking and consumer applications as key technology drivers.

Price to pay

However, new memory technology does not come without a price. In addition to solving the system-level bandwidth bottleneck, designers must also deal with the increasingly diverse protocols and high speeds of these memories. They need new tools and methodologies to solve the unique issues associated with modern memory subsystem design.

While advances in synthesis and physical design have made 10-million-gate system chips a recent reality, verification technology is still playing catch-up with the latest designs. Engineering teams are struggling with verification efforts that now regularly exceed 70 percent of the total design cycle. The complex memory subsystems required for high-bandwidth SoC designs compound the effort, especially in networking applications. Ensuring that incoming data packets arrive and are disassembled correctly, and that outgoing packets can be assembled and transmitted correctly, is the key to verifying these systems.

Data structures stored in memory get spread out across several physical devices, making it difficult to analyze and verify the transactions. Ideally, designers would perform the verification at a higher level of abstraction and analyze collections of data packets. However, managing these complex data structures across interleaved arrays of physical memory is a daunting task, even in the most advanced verification environments.

Another challenge in verifying these complex systems lies within the memory components themselves. The main workhorse of functional verification is simulation based on hardware description languages. Good simulation models for memory components are essential for system-level verification.

The number of internal system states stored in memory can be orders of magnitude greater than the number of states that are observable at the system's pin-level boundary. Accessing memory increases the observability of the system and enables designers to catch bugs as they happen in memory instead of thousands of cycles later, when they propagate to the system boundary.

These techniques are especially applicable to networking and communications designs, where much of the verification centers on validating the transfer of structured data, especially linked lists, in and out of memory.

This process begins by exposing memory contents to the system-level testbench for analysis during simulation. Once access to the memory data is established, the data stored in physical components can easily be mapped to a contiguous system-level memory space. This requires simple constructs for width, depth and interleaving expansions among the various physical components. Being able to access memory data at a system-level abstraction, as opposed to the physical-memory abstraction, is key to performing more complex system-level verification tasks.
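
As a rough illustration of the width-expansion idea, the Python sketch below presents four hypothetical 8-bit-wide devices to the testbench as one 32-bit logical memory. The class names, depths and widths are assumptions made for illustration, not any vendor's interface, and real memory models would also handle protocol and timing.

# Minimal sketch (not any vendor's API) of mapping narrow physical
# memories into one contiguous, wider system-level memory space.

class PhysicalMemory:
    """Behavioral model of one narrow physical memory device."""
    def __init__(self, depth, width_bits):
        self.depth = depth
        self.width_bits = width_bits
        self.cells = {}                      # sparse storage: address -> value

    def write(self, addr, value):
        self.cells[addr] = value & ((1 << self.width_bits) - 1)

    def read(self, addr):
        return self.cells.get(addr, 0)


class SystemMemory:
    """Width expansion: several narrow devices form one wide logical word."""
    def __init__(self, devices):
        self.devices = devices               # devices[0] is the low-order lane

    def write_word(self, addr, word):
        for lane, dev in enumerate(self.devices):
            dev.write(addr, word >> (lane * dev.width_bits))

    def read_word(self, addr):
        return sum(dev.read(addr) << (lane * dev.width_bits)
                   for lane, dev in enumerate(self.devices))


# Four 8-bit-wide devices presented to the testbench as one 32-bit memory.
system_memory = SystemMemory([PhysicalMemory(16384, 8) for _ in range(4)])
system_memory.write_word(0x100, 0xDEADBEEF)
assert system_memory.read_word(0x100) == 0xDEADBEEF

Depth and interleaving expansions follow the same pattern, with the address rather than the data word split across devices.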

Raising the level

After raising the abstraction level for the memory subsystem, the next step is to raise the level of abstraction of the data stored in that subsystem. For networking applications, the appropriate data abstraction might be ATM cells, Ethernet frames or linked lists. In the case of an ATM cell, the data might occupy a 64-byte buffer stored across four 8-bit-wide physical memories that together form a 32-bit interface to memory. Mapping these data structures makes it possible to view, manipulate and verify them at a system-level abstraction during simulation.
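
A minimal sketch of that cell-level mapping is shown below, assuming a simple word-addressed memory model and a 64-byte padded cell buffer; the helper names store_cell and load_cell and the big-endian layout are hypothetical choices for illustration.

# Hypothetical sketch of the ATM-cell abstraction: a cell held in a
# 64-byte buffer is scattered into sixteen 32-bit words and gathered
# back as one object, so the testbench compares whole cells, not words.

CELL_BYTES = 64                              # padded cell buffer, per the example
WORD_BYTES = 4                               # 32-bit interface to memory

def store_cell(mem, base, cell):
    """Scatter one cell into consecutive 32-bit words starting at 'base'."""
    assert len(cell) == CELL_BYTES
    for i in range(CELL_BYTES // WORD_BYTES):
        mem[base + i] = int.from_bytes(cell[i * WORD_BYTES:(i + 1) * WORD_BYTES], "big")

def load_cell(mem, base):
    """Gather the words back into a 64-byte cell image."""
    return b"".join(mem.get(base + i, 0).to_bytes(WORD_BYTES, "big")
                    for i in range(CELL_BYTES // WORD_BYTES))

# A 53-byte ATM cell padded to 64 bytes, written and read back intact.
memory = {}                                  # word address -> 32-bit value
cell = bytes(range(53)) + bytes(11)
store_cell(memory, 0x200, cell)
assert load_cell(memory, 0x200) == cell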

Now that system-level data abstractions are exposed to the verification environment, any number of verification tasks can be performed to verify the integrity of the system-level data. Placing system-level "assertions" on data and data transactions makes it easy to catch bugs associated with parity violations, invalid data and out-of-bounds or out-of-order memory accesses.
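
The sketch below suggests what such checks might look like in a monitor attached to the memory interface. The even-parity rule and the strict in-order read policy are assumptions made for illustration, not requirements of any particular methodology.

# Rough sketch of system-level assertions over memory transactions:
# parity, address bounds and read ordering.

class TransactionMonitor:
    def __init__(self, base, limit):
        self.base, self.limit = base, limit
        self.pending = []                    # write addresses awaiting reads
        self.errors = []

    def on_write(self, addr, data, parity_bit):
        if not self.base <= addr < self.limit:
            self.errors.append(f"out-of-bounds write at 0x{addr:x}")
        if bin(data).count("1") % 2 != parity_bit:      # even-parity check
            self.errors.append(f"parity violation at 0x{addr:x}")
        self.pending.append(addr)

    def on_read(self, addr):
        if not self.pending or self.pending[0] != addr:
            self.errors.append(f"out-of-order read at 0x{addr:x}")
        else:
            self.pending.pop(0)

mon = TransactionMonitor(base=0x0000, limit=0x4000)
mon.on_write(0x10, 0b1011, parity_bit=0)     # three 1-bits with parity 0: flagged
mon.on_read(0x10)
print(mon.errors)                            # ['parity violation at 0x10']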

General concepts of the data-driven verification approach are somewhat obvious, but they may seem impractical in many cases. Data-driven verification would not be a valuable solution if the interfaces to the memories had to be rewritten every time the memory configuration changed. It would also be far less valuable if it required customization whenever a new memory vendor was used. The same is true of the mechanism for controlling accesses to memories from the top-level testbench.

Commercial solutions are available that make these tasks easier. Memory-simulation products enable designers to quickly simulate all types of memory and provide a consistent error-checking and reporting mechanism. The interface for accessing memory data usually includes utility functions for loading, saving and comparing memory contents without using the simulator to move data through the memory pins.
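
The sketch below suggests the general shape of such backdoor utilities, assuming a simple "address data" hex file format; the function names are hypothetical and do not reflect any particular product's API.

# Illustrative backdoor utilities: load, save and compare memory contents
# directly in the model, without driving data through the memory pins.

def save_memory(mem, path):
    """Dump memory contents as 'address data' hex pairs, one per line."""
    with open(path, "w") as f:
        for addr in sorted(mem):
            f.write(f"{addr:08x} {mem[addr]:08x}\n")

def load_memory(mem, path):
    """Load 'address data' hex pairs straight into the model."""
    with open(path) as f:
        for line in f:
            addr, data = line.split()
            mem[int(addr, 16)] = int(data, 16)

def compare_memory(mem, expected):
    """Return (address, actual, expected) tuples for every mismatch."""
    return [(a, mem.get(a, 0), v) for a, v in expected.items() if mem.get(a, 0) != v]

# Usage: capture a golden image, restore it later, and diff the result.
golden = {0x100: 0xDEADBEEF, 0x101: 0x0000CAFE}
save_memory(golden, "golden.hex")            # hypothetical file name
restored = {}
load_memory(restored, "golden.hex")
assert compare_memory(restored, golden) == []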

- Mark Gogolewski

Chief Operating Officer and Vice President of Engineering, Denali Software Inc.




