Intel unfolds technical computing roadmap

Posted: 22 Nov 2013

Keywords: Intel, technical computing, processor, CPU, memory

Intel has revealed its roadmap for the future of technical computing, including plans to customize its high-end Xeon and Xeon Phi processors. The company promised to start housing memory chips in the same package as its processors, and later to integrate stacked memory dies, high-speed switches and optical fabrics onto future processors.

"We have the transistor budget to do customized innovation, and secondly we have a design methodology for SoC and an architectural modularity that allows us the ability to work with our customers to customize products at various levels," said Rajeeb Hazra, Intel's VP of the technical computing group and GM of the data center group. "We are moving forward into workload-optimized architectures at a level of collaboration with our customers that we hadn't done previously."

High-bandwidth in-package memory

Intel has integrated math coprocessors, memory controllers, graphics, I/O controllers and, with Knights Landing, in-package memory onto its CPUs. Next will come high-speed switches, optical fabrics, next-generation storage, and CPUs with 3D stacked memory. (Source: Intel)

Intel will start by adding in-package memory dies alongside the processor, beginning with the next-generation Xeon Phi—code-named Knights Landing. "I've talked about Knights Landing, and its in-package memory, and the train doesn't stop there. We are looking at various new classes of integrations, from integrating portions of the interconnect as well as next-generation storage and memory much more intimately onto the processor die," said Hazra.

Hazra said Intel would customize its in-package memory architectures for the customer's specific needs, with memory management units that enable customers to choose to implement caches, flat memory spaces, or hybrid combinations of the two.

"We have architected multiple memory usage models. So whether it's a part of the flat memory space or it's used in some form of a cache for applications that were not modified to make use of the new memory architecture—we cater to all of those constraints and needs."

Intel also described an upcoming distribution of Apache Hadoop—the open-source software framework for processing large-scale unstructured data on clusters—that could access its Lustre parallel distributed file system in a manner transparent to application programs.




