EE Times-Asia > EDA/IP

Moore's Law extended: Intel roadmap reveals CMOS at its core

Posted: 07 Apr 2016

Keywords: Intel, CMOS, processor, SRAM, VCO

Intel has revealed its plan for future processors at 10nm and beyond that will maintain CMOS for cores, but the cores will be surrounded by novel circuit architectures using innovative materials that may indefinitely extend Moore's Law.

"Moore's Law was never about scaling, but about the economic benefits of putting more die on wafers," explained keynote speaker IEEE Fellow Kevin Zhang, VP of Intel's Technology and Manufacturing Group and Intel Director of Circuit Technology, who led processor development from the 90nm to 22nm nodes. "Intel is adding new circuitry, such as adaptive voltage control that increases yields over using fixed voltages, by making its analogue circuits digital or at least digitally assisted, and by exploring new materials for specific functions around the scaled CMOS cores."


Figure 1: IEEE Fellow Kevin Zhang, VP of Intel's Technology and Manufacturing Group and Intel Director of Circuit Technology (Source: EE Times)

Zhang's keynote at the International Symposium on Physical Design 2016 (ISPD, April 3-6, Santa Rosa, Calif.) was titled "Circuit Design in Nano-Scale CMOS Technologies." ISPD 2016 is an Association for Computing Machinery (ACM) conference on next-generation chips sponsored by Intel, IBM, Cadence, GlobalFoundries, IMEC, Synopsys, TSMC, Xilinx and other chip makers worldwide.

Zhang used static random-access memory (SRAM) as his first example, because its architecture has remained the same for the last six generations, even though its use in on-chip caches for multi-core processors has become increasingly important (since DRAM speeds are not keeping pace with multi-core processor speeds).


Figure 2: DRAM technology is not keeping up with processor performance (lower right), forcing more and more SRAM caches to be put on the same chip as the processor. (Source: EE Times)

"You need bigger and bigger SRAM caches on processor chips, despite their aging design, because DRAM has not been able to keep up with processor performance," said Zhang. "You can mitigate the problems with SRAM with 3D, but the best way is simply to improve the size and performance of planar on-chip SRAM memory caches."

For the last 20 years, the venerable SRAM cell has remained essentially unchanged, with only minor improvements. Going forward to the 14nm node and beyond, however, Intel has been tinkering with the design of SRAM cells to allow them to continue scaling. The big problem with scaling SRAM further is the growing conflict between read and write conditions. According to Zhang, you can easily improve SRAM read access by minimising the disturbance to the cell while reading, and you can improve write performance by maximising the disturbance to the cell, but "you can't do both at same time."

"No longer is progress just about scaling, but for the last few years it has been about introducing new transistor architectures and new materials," said Zhang, including high-k dielectrics, metal gates and 3D FinFET transistor architectures.

Using SRAM as an example, its new architecture gets the best of both read and write worlds by "turning the supply voltage like a knob," according to Zhang. Specifically, on-chip circuitry now changes the supply voltage between reads and writes, lowering the supply voltage for the selected write column and raising it for the selected read column. This turns the low-disturbance/high-disturbance conflict into a happy medium that improves the overall performance of the SRAM cell.
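The trade-off Zhang describes can be illustrated with a toy numerical model. The sketch below is purely illustrative: the linear margin formulas and all coefficients are invented for this example and are not Intel data, but they capture the qualitative behaviour that read stability improves at higher cell supply while writability improves at lower cell supply, so per-operation column-voltage selection beats any single fixed supply.

```python
# Toy model of the dual-voltage SRAM read/write assist.
# All coefficients below are made-up illustrative values, not silicon data.

def read_margin(vdd):
    """Read stability improves as the cell supply rises (less read disturb)."""
    return 0.25 * vdd - 0.05   # arbitrary linear stand-in, volts

def write_margin(vdd):
    """Writability improves as the cell supply drops (cell is easier to flip)."""
    return 0.30 - 0.20 * vdd   # arbitrary linear stand-in, volts

FIXED_VDD = 0.80                     # single compromise supply
READ_VDD, WRITE_VDD = 0.90, 0.70     # column supply "turned like a knob"

fixed = (read_margin(FIXED_VDD), write_margin(FIXED_VDD))
assisted = (read_margin(READ_VDD), write_margin(WRITE_VDD))

# Per-operation voltage selection improves BOTH margins at once,
# which no single fixed voltage can do in this model.
assert assisted[0] > fixed[0] and assisted[1] > fixed[1]
print(f"fixed   : read={fixed[0]:.3f} V, write={fixed[1]:.3f} V")
print(f"assisted: read={assisted[0]:.3f} V, write={assisted[1]:.3f} V")
```

Raising the read-column supply and lowering the write-column supply each move one margin in the direction that fixed-voltage operation had to compromise on, which is the "happy medium" behaviour the article describes.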
