Bridge software and hardware to speed up SoC validation
Keywords: systems-on-chip, software, firmware, debug
The SoC used in this example has four capture stations, one per clock domain:

- Capture Station #1, processor clock domain (60MHz), targeting 362 signals
- Capture Station #2, RX Ethernet domain (25MHz), targeting 17 signals
- Capture Station #3, TX Ethernet domain (25MHz), targeting 17 signals
- Capture Station #4, compact flash clock domain (33MHz), targeting 178 signals

Each station operates in parallel and can make selective observations of any combination of signals. The final output of the analyzer tool is a waveform representing the clock-cycle-accurate signal transactions in the actual silicon device, as shown in figure 4.
Figure 4: Example SoC waveform.
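For concreteness, the sketch below captures this station layout as a simple configuration table in C. The type and field names are hypothetical; the article does not define a programming interface for the capture stations.

```c
/* A minimal sketch of the capture-station layout described above.
   All type and field names are hypothetical. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char *clock_domain;   /* clock domain the station observes */
    uint32_t    clock_mhz;      /* domain clock frequency in MHz     */
    uint32_t    signal_count;   /* number of signals targeted        */
} capture_station;

static const capture_station stations[] = {
    { "processor",     60, 362 },   /* Capture Station #1 */
    { "RX Ethernet",   25,  17 },   /* Capture Station #2 */
    { "TX Ethernet",   25,  17 },   /* Capture Station #3 */
    { "compact flash", 33, 178 },   /* Capture Station #4 */
};

int main(void)
{
    /* Each station runs in parallel in its own clock domain and can
       observe any subset of its target signals. */
    for (size_t i = 0; i < sizeof stations / sizeof stations[0]; i++)
        printf("Capture Station #%zu: %s domain, %u MHz, %u signals\n",
               i + 1, stations[i].clock_domain,
               stations[i].clock_mhz, stations[i].signal_count);
    return 0;
}
```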
SoC debug challenges
While both the software and hardware debug infrastructures perform well on the target platform for issues confined to either domain, understanding behavior that involves the interaction of software and hardware is a significant challenge. The table lists a sample of the issues encountered during the development of our test bed; they are representative of the issues we see across the industry.
Table: Example SoC debug issues.
A primary challenge is that while the effects of the unexpected behaviors are "visible" using either the software or hardware debug infrastructures, it is often very difficult to determine whether the observed incorrect behavior is the cause or the symptom. The question often becomes whether the unexpected behavior in the software is a reaction to incorrect hardware behavior, or the other way around. The key is to determine the causal relationship between events, which requires a common reference between the software and hardware debug views.
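One way to see why a common reference matters: if both infrastructures stamp their events against the same time base, the two logs can be merged into a single causally ordered stream. The sketch below illustrates the idea; the event types, field names, and example events are hypothetical, not part of any tool described here.

```c
/* A minimal sketch of how a common reference lets the two debug views
   be ordered causally: events from each infrastructure are merged on a
   shared timestamp. All types and example events are hypothetical. */
#include <stdint.h>
#include <stdio.h>

typedef enum { EV_SOFTWARE, EV_HARDWARE } ev_source;

typedef struct {
    uint64_t   timestamp;   /* common reference, e.g. a global cycle count */
    ev_source  source;      /* which debug infrastructure reported it      */
    const char *what;       /* human-readable description                  */
} debug_event;

/* Merge two timestamp-sorted logs into one causally ordered stream. */
static void merge_logs(const debug_event *sw, size_t n_sw,
                       const debug_event *hw, size_t n_hw)
{
    size_t i = 0, j = 0;
    while (i < n_sw || j < n_hw) {
        const debug_event *next;
        if (j >= n_hw || (i < n_sw && sw[i].timestamp <= hw[j].timestamp))
            next = &sw[i++];
        else
            next = &hw[j++];
        printf("%10llu  %s  %s\n",
               (unsigned long long)next->timestamp,
               next->source == EV_SOFTWARE ? "SW" : "HW",
               next->what);
    }
}

int main(void)
{
    const debug_event sw[] = {
        { 1200, EV_SOFTWARE, "breakpoint: eth_tx_start()" },
    };
    const debug_event hw[] = {
        { 1150, EV_HARDWARE, "trigger: TX FIFO underrun" },
    };
    /* With a common timestamp, the hardware trigger is seen to precede
       the software breakpoint -- a hint at the causal direction. */
    merge_logs(sw, 1, hw, 1);
    return 0;
}
```

In this toy log, the shared timestamp immediately shows that the hardware trigger preceded the software breakpoint, which is exactly the cause-versus-symptom question the isolated debug views cannot answer.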
Event management
Reconstructing a causal relationship between the software and hardware debug views requires integrating the debug state and event processing of the two debug infrastructures, or integrated event management, as shown in figure 5.
Figure 5: Integrated event management.
In this example, the distributed, asynchronous instruments provided by the Clarus Suite allow each capture station to be viewed as autonomous. A shared event bus and a centralized event processor support "cross-triggering" between instruments. The centralized event processor, labeled Access Control in figure 5, communicates debug events and state to the Analyzer software that manages the overall debug infrastructure, enabling effective hardware debug of many functional units and clock domains simultaneously. To create integrated event management, this information propagates into, and collects data from, the software debug infrastructure. With integrated event management in place, the hardware debug infrastructure can detect software breakpoint events and the debug state of the processor; likewise, the software debug infrastructure can detect hardware triggers and the debug state of the hardware debug infrastructure.
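The sketch below shows the shape of that cross-triggering: a centralized event processor forwards each event on the shared bus to the opposite debug infrastructure. The function names and the event-bus model are assumptions for illustration; the Clarus Suite's actual interfaces are not described in this article.

```c
/* A minimal sketch of cross-triggering through a centralized event
   processor: an event raised on one side is broadcast to the other.
   All names and the event-bus model are hypothetical. */
#include <stdio.h>

typedef enum { EV_SW_BREAKPOINT, EV_HW_TRIGGER } event_kind;

/* The software debug infrastructure reacts to hardware triggers ... */
static void notify_software(event_kind kind)
{
    if (kind == EV_HW_TRIGGER)
        printf("SW debugger: halting processor on hardware trigger\n");
}

/* ... and the hardware infrastructure reacts to software breakpoints. */
static void notify_hardware(event_kind kind)
{
    if (kind == EV_SW_BREAKPOINT)
        printf("HW stations: freezing capture on software breakpoint\n");
}

/* Centralized event processor: every event on the shared bus is
   forwarded to the opposite debug infrastructure. */
static void event_processor(event_kind kind)
{
    switch (kind) {
    case EV_SW_BREAKPOINT: notify_hardware(kind); break;
    case EV_HW_TRIGGER:    notify_software(kind); break;
    }
}

int main(void)
{
    event_processor(EV_HW_TRIGGER);     /* a capture station fires   */
    event_processor(EV_SW_BREAKPOINT);  /* debugger hits a breakpoint */
    return 0;
}
```

The design choice worth noting is the central hub: because every event crosses one shared point, each side sees the other's state without the instruments needing to know about one another.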