EE Times Asia

Fragmentation plagues mobile SoC sphere

Posted: 22 Apr 2013

Keywords: SoCs, smartphones, tablets, APIs

Senior engineers from Nvidia and Qualcomm said there is no resolution in sight for the fragmented ways of handling mixtures of CPU, GPU and DSP cores in today's mobile chips. Among the options, Android backs Renderscript, Apple helped launch OpenCL and Microsoft is driving DirectCompute on Windows.

Qualcomm is taking an agnostic approach, trying to create SoCs that can work with any model. It is a member of the Khronos Group working on OpenCL and of the AMD-led Heterogeneous System Architecture alliance.

"There are many APIs still evolving, so our approach is to make sure our system infrastructure is prepped to deal with any of them," said Bob Rychlik, a system architect at Qualcomm, speaking on a panel at the Linley Mobile conference here. "We are hoping with these open standards the industry will find some convergence," he added.

Meanwhile, Qualcomm uses compiler tools from the LLVM Project to make its SoCs more adaptable to different programming models. In a talk here, Rychlik said traditional computer-based MESI cache-coherency models and context-switching methods sometimes burn too much power for mobile chips.

"There are some apps where [traditional approaches are] great and many others where they should not be used," he said. Meanwhile, "there's a renaissance in papers and upcoming interesting work on different ways to achieve cache coherency without the overhead of snooping and invalidation traffic," he said.

Rychlik declined to provide specific references for new techniques well suited for mobile chips, saying Qualcomm was not ready to discuss its work in the area. However, he did refer to a February conference in Shenzhen and one last fall in Minneapolis that contained useful papers on the topic.

For its part, Nvidia is something of a "lone wolf" pursuing its own path, said Kevin Krewell, senior analyst with the Linley Group and moderator of the panel. Nvidia uses its proprietary CUDA language for general-purpose GPU computing and its Chimera API for computational photography.

Using in-house techniques helped the company get products out early, said Brian Cabral, a vice president of engineering at Nvidia and a member of the panel. "As the market matures, we will do the right thing by the market," he said.

Cabral also noted CPU and GPU workloads "are vastly different" and call for different software models.

"There's a tension between the ease of use that cache coherency gives you in sharing data but at cost of power and throughput," Cabral said. "Up to now we have erred on the side of performance because that's what people are buying.

"There may be a way to solve the problem as we get smarter, but for the foreseeable future these are different workloads and should be managed differently," he said.

Nevertheless, all sides on the panel agreed it benefits the industry and the software development community to have fewer programming models. "We don't want to write the same program 10 times," Cabral said.

- Rick Merritt
EE Times
