EE Times-Asia > Memory/Storage

Antifuse memory IP offers lower-power operation

Posted: 17 Nov 2008

Keywords: non-volatile memory, OTP memory, antifuse memory

Embedded non-volatile memory (NVM) is becoming more prevalent in a wide range of chips, particularly for power-sensitive applications. Memory IP for such apps requires the design of both the basic memory bit and the memory macro architecture to minimize power demands. An appropriate one-time programmable (OTP) memory macro can meet NVM requirements while offering low-power operation.

Many applications that require NVM do not need hundreds or thousands of rewrite cycles. Code storage, calibration tables, setup parameters and the like seldom, if ever, need changing once programmed. For cases in which occasional change is required, an appropriate memory management algorithm can skip over outdated information and use previously empty memory to hold the updates. Such management lets a low-cost and secure antifuse-based OTP memory serve as a design's embedded memory just as effectively as a rewritable NVM.
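Such a skip-over scheme can be sketched in a few lines of C. This is an illustrative layout, not Sidense's actual format: it assumes each record carries two one-way flags (antifuse bits can only go from unprogrammed to programmed), so an update retires the old record by setting its "obsolete" flag and appends the new value to the next empty slot.

```c
/* Append-only update management for OTP memory (hypothetical layout).
 * Bits can only be programmed 0 -> 1, never erased, so records are
 * superseded rather than rewritten. */
#include <stdint.h>

#define SLOTS 8

typedef struct {
    uint8_t  used;      /* programmed to 1 when the slot is written */
    uint8_t  obsolete;  /* programmed to 1 when superseded          */
    uint32_t value;     /* the stored parameter                     */
} otp_slot;

static otp_slot otp[SLOTS];   /* all-zero = blank OTP array */

/* Write a new value, retiring the currently valid record. */
static int otp_update(uint32_t value)
{
    for (int i = 0; i < SLOTS; i++) {
        if (!otp[i].used) {
            /* retire any earlier still-valid record (one-way program) */
            for (int j = 0; j < i; j++)
                if (otp[j].used && !otp[j].obsolete)
                    otp[j].obsolete = 1;
            otp[i].used  = 1;
            otp[i].value = value;
            return 0;
        }
    }
    return -1;   /* array exhausted: no empty slots remain */
}

/* Read back the most recent non-obsolete record. */
static int otp_read(uint32_t *out)
{
    for (int i = SLOTS - 1; i >= 0; i--) {
        if (otp[i].used && !otp[i].obsolete) {
            *out = otp[i].value;
            return 0;
        }
    }
    return -1;   /* nothing programmed yet */
}
```

The number of update cycles is bounded by the slot count, which is why this approach suits data that changes rarely, such as calibration tables or setup parameters.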

Antifuse gains
The cost advantages of antifuse-based OTP memory stem from the cell size and process complexity. The antifuse memory's design can be as small as one transistor using technology such as Sidense's 1T-Fuse memory IP. The result is a memory cell area that is much smaller than floating-gate multi-time programmable memories. The small bit cell size results in a smaller memory array footprint, which in turn reduces the area-related cost of the die.

The reliability of antifuse-based OTP stems from the simplicity of its operation. When programmed, the antifuse is a permanent and desired short circuit that cannot be accidentally formed under normal memory read operations. Floating-gate NVM, though rewritable, can wear out because of the need to tunnel electrons on and off the gate during programming and erasure. The tunneling operation will eventually break down the oxide layer isolating the floating gate, creating a permanent, undesired short-circuit in the memory cell.

The simplicity of operation also makes OTP memory an inherently lower-power design than other NVMs. By transistor count alone, for example, antifuse-based memory would be expected to draw less power. In addition, its compact cell size means that arrays are physically smaller. That lowers the capacitance of the bit and word lines, reducing both precharge and switching power consumption.

The small single-transistor OTP bit cell minimizes array die area and cost while cutting read power consumption.

Although OTP bit-cell design determines the minimum power needed to read a memory cell, of equal importance are many other design factors that go into the arrays of bit cells that constitute the full memory macro. Current sensing, for instance, should be avoided, because it requires the presence of DC through the cell and through a reference. That DC runs all the time, burning power even when the memory is idle. By contrast, a low-power charge-sensing scheme collects all the charge leaking through the cells to create a voltage signal for the sense amplifier. The consistent programmed-state characteristics of the single transistor split-channel cell make such charge sensing practical and reliable.

Cutting DC
Another memory cell design factor that can reduce power demands is to make the sense amplifiers not consume any DC power. Sidense Low Power (SLP) OTP memory macrocells use a cross-coupled latch-type sense amplifier that doesn't turn on until there is sufficient voltage on an input to be correctly read. Once a sense amp turns on, positive feedback drives it to one of two zero-current states. Because everything else in the design follows simple static CMOS logic, the only DC is leakage.

A technique for reducing average power consumption in an OTP array is minimizing the number of unnecessary switching and precharge operations. This is achieved on the system level by optimizing the address switching sequence.

Some switching and precharge operations can also be eliminated by holding the internal address decoding stable rather than returning it to an unselected state between memory read cycles. If only a few of the address bits change from one cycle to the next, the approach can save power. Designers can maximize the benefit by optimizing address space allocations.
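One system-level way to keep the decode stable, offered here as an assumption rather than anything the article prescribes, is to step through the array in Gray-code order, so that consecutive read addresses differ in exactly one bit:

```c
/* Sketch: Gray-code address sequencing so consecutive accesses toggle
 * only one address bit, keeping most of the internal address decode
 * stable between read cycles. (Illustrative technique, not a
 * documented Sidense feature.) */
#include <stdint.h>

/* Standard binary-to-reflected-Gray conversion. */
static uint32_t to_gray(uint32_t n)
{
    return n ^ (n >> 1);
}

/* Count how many address bits toggle between two consecutive accesses. */
static int bits_toggled(uint32_t a, uint32_t b)
{
    uint32_t d = a ^ b;
    int c = 0;
    while (d) { c += (int)(d & 1u); d >>= 1; }
    return c;
}
```

In plain binary order, the step from address 7 to 8 toggles four bits (0111 to 1000); in Gray order, every step toggles exactly one, minimizing switching in the decoders.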

The internal array structure can also reduce power. The typical memory array decodes the address to form word lines that activate a row of memory cells. The cell outputs form bit lines from which a multiplexer selects one line to pass to the bit sense amplifier. One disadvantage is that raising a word line activates all the cells in a row, burning power in cells that are not being read. Further, the bit lines connect to every cell in the column, which increases bit-line capacitance and, in turn, increases the power used in a read operation as well as slowing the memory. Multiple-connected bit lines also create problems in advanced processing technologies because of increased leakage; increased voltage swings may be required to read the signal correctly.

One approach to reducing the power lost along the word line is to introduce hierarchy by using block decoding. Instead of using a global word line to activate an entire row, another decoding layer is added to create local word lines.

By then rearranging the columns so that all the bits in a word are on the same local word line, a memory array can minimize the number of cells a local word line activates. In addition, the long global word and bit lines can operate at a lower voltage, which reduces power during read operations.

A similar approach can reduce capacitance on the bit lines to lower power. Instead of connecting directly to the global bit line, you can use an access mechanism to turn on a local bit line only when the local word line activates. This ensures that only a small number of memory cells are active and connected during a read.
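The hierarchical decode described above amounts to splitting a flat address into row, block and column fields. The field widths below are illustrative assumptions, not a Sidense layout:

```c
/* Sketch of hierarchical (block) decoding: a flat address splits into
 * a global row, a block select and a column, so only one local word
 * line -- and hence only the cells holding the addressed word -- is
 * driven. Field widths are illustrative. */
#include <stdint.h>

#define COL_BITS 3   /* 8 columns per block     */
#define BLK_BITS 2   /* 4 blocks per global row */

typedef struct { uint32_t row, block, col; } decoded_addr;

static decoded_addr decode(uint32_t addr)
{
    decoded_addr d;
    d.col   =  addr & ((1u << COL_BITS) - 1);
    d.block = (addr >> COL_BITS) & ((1u << BLK_BITS) - 1);
    d.row   =  addr >> (COL_BITS + BLK_BITS);
    return d;
}

/* A local word line fires only when both its global row and its block
 * are selected; with these widths a read activates 8 cells instead of
 * the 32 on the full global row. */
static int local_wordline_active(decoded_addr d, uint32_t row, uint32_t blk)
{
    return d.row == row && d.block == blk;
}
```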

A traditional non-hierarchical memory array, with its long bit and word lines, dissipates considerable power.

Controlling array voltages
One additional method is to reduce the voltages used in the array. The Sidense SLP macros require two voltage levels in addition to the core VDD supply voltage: VPP for programming and VRR for read. VPP can be supplied either from an external pin or from a charge pump. VRR sets the active word line voltage and is required during a single-ended read to determine the state of a memory cell, either programmed or unprogrammed. A differential read mode is also available, which eliminates the need for VRR but requires two physical bit cells for each addressable memory bit. A standalone integrated power supply macro generates all the necessary voltage levels from the standard array voltage supply, letting users optimize the power supply scheme.

The combination of small memory cells, arrays designed to minimize capacitance and cell activation, and optimized voltage levels can be effective in producing low-power embedded memories. The typical read power for a Sidense 256Kbit macro, for example, is P_total = 5mW/MHz + 0.4mW/MHz per I/O bit. Typical standby currents (non-read mode) are less than 50nA.
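Applying the quoted figure is straightforward. The helper below simply evaluates the article's formula; the operating point chosen in the comment is an arbitrary example, not a vendor specification:

```c
/* Evaluate the quoted read-power figure for the 256Kbit macro:
 * P_total = 5 mW/MHz + 0.4 mW/MHz per I/O bit, scaled by read clock.
 * Example: a 16-bit interface read at 1MHz gives
 * (5 + 0.4 * 16) * 1 = 11.4mW. */
static double read_power_mw(double freq_mhz, int io_bits)
{
    return (5.0 + 0.4 * (double)io_bits) * freq_mhz;
}
```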

When frequent rewriting of stored data is not required, carefully designed antifuse-based OTP memory arrays can offer designers an extremely low-power alternative to other NVM options.

The compact size of such memory arrays also helps keep silicon costs down and performance up by minimizing the die area and trace lengths.

- Jim Lipman
Director of Marketing, Sidense Corp.
