Nvidia scientist sees GPUs in future supercomputers
Keywords: future supercomputers, Nvidia GPUs, graphics processor
By 2012, three of the top five supercomputers in the world will have graphics processors using parallel computing applications to crunch numbers at a clip that's not possible on standard CPU-only set-ups, predicts David Kirk, Nvidia chief scientist.
Kirk, delivering a chalk talk on "The Future of 3D Graphics," touted the advantages of GPU-based parallel computing for powering applications related to oil and gas exploration, computational finance and other computational modeling projects, as well as for faster, more-powerful hybrid rendering within the graphics discipline itself. Because GPU computing is already a "data parallel process," the work of breaking apart computing problems into smaller sets of instructions to be carried out concurrently is more easily done on GPUs than on multicore CPUs, Kirk said.
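As a rough sketch (not code shown in the talk), the data-parallel style Kirk describes maps naturally onto a CUDA kernel, in which each GPU thread independently computes one element of the result:

// Illustrative CUDA kernel (hypothetical example, not from the talk):
// a SAXPY operation in which each thread handles one array element --
// the per-element decomposition that makes the problem data parallel.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];                     // one element per thread
}

Launching thousands of such threads at once is what keeps the GPU's many cores busy on a single problem, with no need to hand-partition the work across a handful of CPU cores.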
Describing a kind of Moore's Law on steroids, he promised 100x performance gains in real-world applications once developers take advantage of Nvidia's general-purpose computing on GPUs (GPGPU) initiative. That effort has already put some 50 million Nvidia GPUs into the field that are capable of running the company's Compute Unified Device Architecture, or CUDA, programming language for parallel computing.
"This is truly the democratization of supercomputing. We ship a million parallel units a week," Kirk said.
CUDA is a C-based programming language developed by Nvidia that lets GPGPU programmers write algorithms for execution on graphics processors. Currently, CUDA runs on Nvidia's GeForce desktop GPUs, as well as its Quadro workstation and Tesla high-performance computing products. And according to Kirk, the graphics chipmaker recently released a CUDA SDK for Mac OS.
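For a sense of what that C-style programming model looks like in practice, here is a minimal, hypothetical end-to-end example (the kernel name and sizes are illustrative, not Nvidia sample code): the host program allocates GPU memory, copies data over, launches a kernel, and copies the result back.

// Hypothetical end-to-end CUDA C example: scale an array on the GPU.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void scale(int n, float factor, float *data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;                  // each thread scales one element
}

int main(void)
{
    const int n = 1 << 20;                  // 1M elements
    size_t bytes = n * sizeof(float);

    float *h_data = (float *)malloc(bytes); // host buffer
    for (int i = 0; i < n; ++i)
        h_data[i] = 1.0f;

    float *d_data;                          // device buffer
    cudaMalloc((void **)&d_data, bytes);
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(n, 2.0f, d_data);   // run on the GPU

    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
    printf("h_data[0] = %f\n", h_data[0]);         // expect 2.0

    cudaFree(d_data);
    free(h_data);
    return 0;
}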
Robot senses
In addition to his prediction about GPU-powered supercomputers, Kirk touched on the potential of work being done by companies like Evolved Machines, which builds simulated models of organic neural circuit growth using GPU acceleration.
"That means we're learning how to produce a computational model of the sense of smell or vision recognition," he said. Asked whether technology that promises self-wiring synthetic neural circuit arrays might presage the onset of "A.I. overlords," Kirk laughed but demurred from answering.
In a question-and-answer session following the talk, Kirk was asked about Nvidia rival AMD's own GPGPU offerings from its ATI division, such as the Close To Metal open thin hardware interface and FireStream stream computing processor.
"ATI let parallel computing sort of happen to them, whereas we actually went out and built a machine to do it," he said, tipping his own company as the prime mover in GPU computing.
If so, we'll know who to blame when self-aware robots start taking over the world.
- Damon Poeter
CRN