EE Times-Asia

Examining the most underrated FPGA design tool ever

Posted: 22 Sep 2015

Keywords: FPGAs, high-level design, OpenCL, MATLAB, VHDL

FPGAs keep getting larger, the designs more complex, and the need for high level design (HLD) flows never seems to go away. C-based design for FPGAs has been promoted for over two decades and several such tools are currently on the market. Model-based design has also been around for a long time from multiple vendors. OpenCL for FPGAs has been getting lots of press in the last couple of years. Yet, despite all of this, 90+% of FPGA designs continue to be built using traditional Verilog or VHDL.

No one can deny the need for HLD. New FPGAs contain over 1 million logic elements, with thousands of hardened DSP and memory blocks. Some vendors' devices can even support floating-point as efficiently as fixed-point arithmetic. Data converters and interface protocols routinely run at multiple GSPS (giga-samples per second), requiring highly parallel or vectorised processing. Timing closure, simulation, and verification become ever more time-consuming as design sizes grow. But HLD adoption still lags, and FPGAs are primarily programmed by hardware-centric engineers using traditional hardware description languages (HDLs).
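The vectorisation requirement follows from simple arithmetic: when samples arrive faster than the fabric clock runs, the datapath must consume several samples per cycle. A back-of-the-envelope sketch (the sample rate and clock frequency below are illustrative figures, not taken from the article):

```python
import math

def samples_per_cycle(sample_rate_hz: float, fclk_hz: float) -> int:
    """Minimum parallelism: samples that must be processed each clock cycle."""
    return math.ceil(sample_rate_hz / fclk_hz)

# A hypothetical 4 GSPS data converter feeding logic clocked at 250 MHz
# needs a 16-sample-wide vector datapath.
print(samples_per_cycle(4e9, 250e6))  # 16
```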

The primary reason for this is quality of results (QoR). All high-level design tools have two key challenges to overcome. One is to translate the designer's intent into implementation when the design is described in a high-level format. This is especially difficult when software programming languages are used (C++, MATLAB, or others), which are inherently serial in nature. It is then up to the compiler to decide by how much and where to parallelise the hardware implementation. This can be aided by adding special intrinsics into the design language, but this defeats the purpose. OpenCL addresses this by having the programmer describe serial dependencies in the datapath, which is why OpenCL is often used for programming GPUs. It is then up to the OpenCL compiler to decide how to balance parallelism against throughput in the implementation. However, OpenCL programming is not exactly a common skillset in the industry.
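The serial-language problem can be made concrete with two loops. A running (prefix) sum carries a dependency from one iteration to the next, so a compiler cannot simply replicate the loop body into parallel hardware lanes; an element-wise operation has no such dependency and maps directly onto a wide datapath. An illustrative sketch in Python:

```python
def prefix_sum(xs):
    """Loop-carried dependency: iteration i needs the result of
    iteration i-1, so the iterations cannot run side by side."""
    out, acc = [], 0
    for x in xs:
        acc += x          # serial chain through 'acc'
        out.append(acc)
    return out

def scale(xs, k):
    """No dependency between iterations: every element could be computed
    simultaneously, e.g. by one hardware multiplier per lane."""
    return [k * x for x in xs]

print(prefix_sum([1, 2, 3, 4]))  # [1, 3, 6, 10]
print(scale([1, 2, 3, 4], 2))    # [2, 4, 6, 8]
```

Distinguishing these two cases automatically, without hints from the programmer, is exactly the hard part of compiling a serial language into parallel hardware.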

The second key challenge is optimisation. Most FPGA hardware designers take great pride in their ability to optimise their code to achieve the maximum performance in a given FPGA, in terms of design Fmax, the achievable system clock frequency. This requires closing timing across the entire design, which means setup and hold times have to be met for every circuit in the programmable logic and every routing path in the design. The FPGA vendors provide automated synthesis, fitting, and routing tools, but the achievable results are heavily dependent upon the quality of the Verilog and/or VHDL source code. This requires both experience and design iteration. The timing closure process is tedious and sometimes compared to "Whack-a-Mole," meaning that when a timing problem is fixed in one location of the design, a different problem often surfaces at another location.
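The relationship between timing closure and Fmax is simple to state: the clock period must cover the slowest register-to-register path, i.e. logic delay plus routing delay plus setup time. A simplified model (the delay figures are invented for illustration; real analysis uses the vendor's static timing tools):

```python
def fmax_mhz(logic_ns: float, routing_ns: float, setup_ns: float) -> float:
    """Fmax bounded by the critical path: period >= logic + routing + setup."""
    period_ns = logic_ns + routing_ns + setup_ns
    return 1000.0 / period_ns  # convert ns period to MHz

# A hypothetical critical path: 3.2 ns logic, 1.5 ns routing, 0.3 ns setup
print(fmax_mhz(3.2, 1.5, 0.3))  # 200.0 MHz
```

This is also why timing closure feels like "Whack-a-Mole": shortening one path only raises Fmax until the next-slowest path becomes the new critical path.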

An oft-quoted metric for a high-level design tool is to achieve results that are no more than 10% degraded from a high-quality hand-coded design, both in terms of Fmax and the utilisation of FPGA resources, typically measured in "LEs" (logic elements) or "LCs" (logic cells). In practice, very few tools can reliably deliver such results, and there is considerable scepticism among the FPGA design community when such a tool is promoted by EDA or FPGA vendors.
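The 10% rule of thumb can be stated directly: a tool passes if its Fmax is no more than 10% below the hand-coded design's and its resource usage no more than 10% above. A sketch of that comparison (the figures in the example are invented):

```python
def meets_qor_bar(hand_fmax_mhz, tool_fmax_mhz,
                  hand_les, tool_les, margin=0.10):
    """True if the HLD tool's result is within 'margin' of hand-coded QoR:
    Fmax may be lower, and LE count higher, each by at most the margin."""
    fmax_ok = tool_fmax_mhz >= hand_fmax_mhz * (1.0 - margin)
    area_ok = tool_les <= hand_les * (1.0 + margin)
    return fmax_ok and area_ok

# Hand-coded: 300 MHz, 50,000 LEs; tool: 280 MHz, 54,000 LEs (illustrative)
print(meets_qor_bar(300.0, 280.0, 50_000, 54_000))  # True: both within 10%
```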

Having said this, there is a design tool that is being quietly adopted by FPGA engineers precisely because it not only addresses this QoR gap but, in most cases, extends it in the other direction, meaning that the tool produces results that are usually better than their hand-coded counterparts.

Figure 1: Simulink high level design to optimised hardware.

This tool is called DSP Builder Advanced Blockset (the marketing folks were obviously not at their best when naming this tool). It is a model-based design tool, meaning that design entry is accomplished using models in MathWorks' Simulink environment. The tool was first introduced to the market in 2007.




