
The graphics chip as supercomputer

Posted: 01 Jun 2005

Keywords: GPU, CPU, chip, SIMD, Moore's Law

Imagine that the appendix in primates had evolved. Instead of becoming a prehensile organ routinely removed from humans when it became troublesome, imagine the appendix had grown a dense cluster of complex neurons to become the seat of rational thought, leaving the brain to handle housekeeping and control functions. That scenario would not be far from what has happened to graphics processing units (GPUs).

Once upon a time, graphics computations were just another set of numerically intensive tasks running on a PC or workstation host CPU. Responding to the market value of prettier, more animated images, these computers began to use attached vector-arithmetic hardware to generate, rotate and scale the vectors that, in the early days, constituted the bulk of graphics images.

As the images moved from vector-based line drawings to tessellated, polygon-based surfaces, the hardware evolved as well. The vector-math pipeline was used to compute the locations of the vertices that marked the corners of the polygons, and to transform the polygonal surfaces into short strings of pixels. Hardware was added to perform operations on the hue and intensity of the pixels.
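To make the vertex step concrete: in homogeneous coordinates, transforming a vertex amounts to multiplying its position by a 4x4 matrix that combines rotation, scaling and translation. The C sketch below is purely illustrative; the names vec4, mat4 and transform_vertex are invented here, not drawn from any graphics API of the period.

    /* Minimal sketch of the vertex-transform step described above.
       Names are illustrative, not from a real graphics API. */
    #include <stdio.h>

    typedef struct { float x, y, z, w; } vec4;
    typedef struct { float m[4][4]; } mat4;   /* row-major 4x4 matrix */

    /* Multiply a vertex position by a combined transform matrix. */
    static vec4 transform_vertex(mat4 t, vec4 v)
    {
        vec4 r;
        r.x = t.m[0][0]*v.x + t.m[0][1]*v.y + t.m[0][2]*v.z + t.m[0][3]*v.w;
        r.y = t.m[1][0]*v.x + t.m[1][1]*v.y + t.m[1][2]*v.z + t.m[1][3]*v.w;
        r.z = t.m[2][0]*v.x + t.m[2][1]*v.y + t.m[2][2]*v.z + t.m[2][3]*v.w;
        r.w = t.m[3][0]*v.x + t.m[3][1]*v.y + t.m[3][2]*v.z + t.m[3][3]*v.w;
        return r;
    }

    int main(void)
    {
        /* Uniform 2x scale plus a translation of (1, 0, 0). */
        mat4 t = {{{2,0,0,1}, {0,2,0,0}, {0,0,2,0}, {0,0,0,1}}};
        vec4 v = {1.0f, 1.0f, 1.0f, 1.0f};   /* homogeneous coordinates */
        vec4 r = transform_vertex(t, v);
        printf("(%g, %g, %g, %g)\n", r.x, r.y, r.z, r.w);   /* (3, 2, 2, 1) */
        return 0;
    }

Early vector-graphics hardware did essentially this arithmetic in fixed-function form; the same multiply-accumulate pattern carried over when polygons replaced line segments.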

Further evolution exploited the fact that these computations, on polygons and pixels alike, were highly parallelizable. The hue and intensity of one pixel depend little on those of the surrounding pixels. So, as Moore's Law granted more transistors, additional identical hardware data paths could be added to create single-instruction, multiple-data (SIMD) architectures.
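To see why this parallelizes so well, consider a per-pixel operation in which each output element depends only on the corresponding input element. The C fragment below is a hypothetical sketch (scale_intensity is our name, not a real API):

    /* Sketch of an "embarrassingly parallel" pixel operation: scaling
       the intensity of every pixel. Each iteration reads and writes
       only its own element, so the loop maps directly onto SIMD lanes
       or replicated hardware data paths. */
    #include <stddef.h>

    void scale_intensity(const unsigned char *in, unsigned char *out,
                         size_t n_pixels, float gain)
    {
        for (size_t i = 0; i < n_pixels; i++) {
            float v = in[i] * gain;             /* independent per pixel */
            out[i] = (unsigned char)(v > 255.0f ? 255.0f : v);
        }
    }

Because no iteration reads another iteration's result, a SIMD machine can execute many of them simultaneously simply by replicating the data path, which is exactly the growth path the extra transistors enabled.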

Finally, fixed-function data paths gave way to programmable, almost general-purpose processing elements. This change was forced by the increasing complexity and application specificity of surface-rendering algorithms.
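As a rough idea of what such a programmable element computes, here is a hypothetical C sketch of one classic surface-rendering calculation, Lambertian diffuse shading, evaluated independently at every pixel. The function and type names are invented for illustration.

    /* Hypothetical per-pixel diffuse-lighting routine of the kind a
       programmable shading unit might run at every pixel. */
    typedef struct { float x, y, z; } vec3;

    static float dot3(vec3 a, vec3 b)
    {
        return a.x*b.x + a.y*b.y + a.z*b.z;
    }

    /* Lambertian shading: intensity proportional to the cosine of the
       angle between the surface normal and the light direction.
       Both vectors are assumed to be unit length. */
    float shade_pixel(vec3 normal, vec3 light_dir, float base_intensity)
    {
        float d = dot3(normal, light_dir);
        return base_intensity * (d > 0.0f ? d : 0.0f);
    }

Swapping in a different shading formula once meant new silicon; on a programmable element, it means new instructions, which is why rendering's growing application specificity forced the change.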

And now, behold! The lowly vector generator has become not just a GPU, but also a highly parallel, nearly general-purpose SIMD processor array. In fact, the GPU chips used for high-end gaming today dwarf their host CPUs in raw computing power. They are still specialized architectures, but for applications that can live within those limitations, they are enormous computing resources.

- Ron Wilson

EE Times



