EE Times-Asia

Tesla GPUs enable low-cost parallel computing

Posted: 19 Nov 2009

Keywords: GPU, processor, cloud computing

Tesla GPU processor

Nvidia Corp. has launched the Tesla 20-series of parallel processors for the high-performance computing (HPC) market. Codenamed Fermi, the GPUs are based on the company's CUDA processor architecture.

Designed from the ground up for parallel computing, the NVIDIA Tesla 20-series GPUs bring down the cost of computing by delivering the same performance as a traditional CPU-based cluster at one-tenth the cost and one-twentieth the power.

The Tesla 20-series introduces features that enable many new applications to perform dramatically faster using GPU computing. These include ray tracing, 3D cloud computing, video encoding, database search, data analytics, computer-aided engineering and virus scanning.

"NVIDIA has deployed a highly attractive architecture in Fermi, with a feature set that opens the technology up to the entire computing industry," said Jack Dongarra, director of the Innovative Computing Laboratory at the University of Tennessee and co-author of LINPACK and LAPACK.

The Tesla 20-series GPUs combine parallel computing features that have never been offered on a single device before. These include support for the next-generation IEEE 754-2008 double-precision floating-point standard; error-correcting codes (ECC) for uncompromised reliability and accuracy; a multi-level cache hierarchy with L1 and L2 caches; and support for the C++ programming language.

The GPUs also offer up to 1Tbyte of memory, concurrent kernel execution, fast context switching, 10x faster atomic instructions, a 64bit virtual address space, system calls and recursive functions.
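Faster atomics matter most for update-in-place patterns such as histogramming, where many threads increment shared counters concurrently. As a minimal sketch (hypothetical code, not from Nvidia; the kernel and variable names are the author's own):

```cuda
#include <cuda_runtime.h>

#define NUM_BINS 256

// Each thread classifies one input byte and bumps the matching bin.
// atomicAdd serializes only the colliding updates, so throughput
// depends directly on how fast the hardware's atomic units are.
__global__ void histogram(const unsigned char *data, int n,
                          unsigned int *bins)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        atomicAdd(&bins[data[i]], 1u);
}
```

Workloads like this were often bottlenecked on atomic throughput on earlier GPUs, which is why a claimed 10x speedup in atomic instructions is significant for data-analytics-style applications.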

At their core, Tesla GPUs are based on the massively parallel CUDA computing architecture that offers developers a parallel computing model that is easier to understand and program than any of the alternatives developed over the last 50 years.
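In the CUDA model, a computation is expressed as a kernel that each GPU thread runs over its own slice of the data, with the hardware handling scheduling. A minimal double-precision sketch (hypothetical names; the launch parameters are illustrative assumptions):

```cuda
#include <cuda_runtime.h>

// Kernel: each thread computes one element of y = a*x + y (DAXPY).
// Full-speed double precision relies on the IEEE 754-2008 support
// introduced with the Tesla 20-series architecture.
__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard the tail block
        y[i] = a * x[i] + y[i];
}

// Host side: launch enough 256-thread blocks to cover n elements,
// where d_x and d_y are device pointers allocated with cudaMalloc:
// daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, d_x, d_y);
```

The appeal of the model is that the kernel body reads like the inner loop of the sequential version; the decomposition across thousands of threads is expressed in the launch configuration rather than in explicit thread management code.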

"There can be no doubt that the future of computing is parallel processing, and it is vital that computer science students get a solid grounding in how to program new parallel architectures," said Wen-mei Hwu, professor in electrical and computer engineering of the University of Illinois at Urbana-Champaign. "GPUs and the CUDA programming model enable students to quickly understand parallel programming concepts and immediately get transformative speed increases."

The family of Tesla 20-series GPUs includes the Tesla C2050 and C2070 GPU computing processors: single-GPU PCIe Gen-2 cards for workstation configurations, with up to 3Gbyte and 6Gbyte, respectively, of on-board GDDR5 memory and double-precision performance in the range of 520GFlops to 630GFlops.

The Tesla S2050 and S2070 GPU computing systems pack four Tesla GPUs into a 1U product for cluster and data center deployments, with up to 12Gbyte and 24Gbyte, respectively, of total on-board GDDR5 memory and double-precision performance in the range of 2.1TFlops to 2.5TFlops.

The Tesla C2050 and C2070 products will retail for $2,499 and $3,999, and the Tesla S2050 and S2070 will retail for $12,995 and $18,995. Products will be available in Q2 2010. The first Fermi-based consumer products are expected to be available in Q1 2010.
