Performing math operations in FPGAs (Part 3)

Posted: 20 Jan 2014

Keywords: FPGA, BCD, floating-point, fixed-point, truncation

I won't yammer on about this right now (that's for next time); suffice it to say that we would need a lot of bits to represent either a really big number or a really small one. Floating-point solves this problem by breaking the number up into three pieces: the sign, the mantissa (a.k.a. significand or coefficient), and the exponent (a.k.a. characteristic or scale). This gives us a fairly large dynamic range. The generic form is as follows:

n = ±x × b^y

Where:
n = the number being represented
± = the sign of the number
x = the mantissa of the number
b = the number system base (10 in decimal; 2 in binary)
y = the exponent (power) of the number (which can itself be positive or negative)
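
As a quick sanity check, here's a minimal C sketch (my own illustration, not anything taken from the standard) that uses the C library's frexp() function to split a value into a mantissa and a base-2 exponent in exactly this form; note that frexp() folds the sign into the mantissa rather than returning it separately:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double n = -6.25;   /* the number being represented */
    int    y;           /* the exponent (power of 2)    */

    /* frexp() returns the mantissa x such that n = x * 2^y,
       with |x| in the range [0.5, 1.0) for nonzero n */
    double x = frexp(n, &y);

    printf("%g = %g * 2^%d\n", n, x, y);   /* prints: -6.25 = -0.78125 * 2^3 */
    return 0;
}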

Easy, right? Well, maybe not so: there are some tricks involved, as well as a variety of benefits and drawbacks. So, how do we represent floating-point numbers in our device? Well, there are plenty of different ways to do this: there's your way, there's my way, and there's some other guy's way.

For example, the exponent is usually an integer. We could extend this by allowing the exponent to have a fractional representation if we really wanted. In general, though, I don't know why we'd want to do that, as the result would just be another fractional number that we could easily represent (unless the exponent and the mantissa were both negative, in which case we'd have a complex number, and there are easier ways to represent those).

For the purposes of this column we will focus on the IEEE 754-2008 floating-point standard (hereinafter referred to as "754"). 754 defines several formats, based primarily on the widths of their mantissas. These are Half, Single, Double, Double Extended, and Quad precision. The binary representations of these are as follows:

Format            Sign bits   Exponent bits   Mantissa bits   Total bits
Half                  1             5              10             16
Single                1             8              23             32
Double                1            11              52             64
Double Extended       1            15              64             80
Quad                  1            15             112            128

754 also includes some special formatting for certain values, such as NaN (not a number), infinity, and some others. I'll leave it to you to research those. For clarity, I'll stick to the half-precision (16-bit) format in this article. Except for the range of possible values and biases (which I'll blather on about in a bit), things work the same for each type.
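
Just to make the layout concrete, here's a minimal C sketch (my own, purely illustrative) that pulls the three half-precision fields out of a raw 16-bit pattern using the 1/5/10 bit widths shown in the table above:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t bits = 0x4170;                    /* example half-precision bit pattern */

    unsigned sign     = (bits >> 15) & 0x1;    /* 1 sign bit       */
    unsigned exponent = (bits >> 10) & 0x1F;   /* 5 exponent bits  */
    unsigned mantissa =  bits        & 0x3FF;  /* 10 mantissa bits */

    printf("sign = %u, exponent = 0x%02X, mantissa = 0x%03X\n",
           sign, exponent, mantissa);
    return 0;
}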

First, there's the sign bit. If our number is negative, then the sign bit will be a "1"; otherwise it will be a zero (a negative zero is also possible, which helps keep divide-by-zero results "honest"). Easy, right?
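
If you want to see the "honest" part in action, here's a small C sketch (again, just an illustration) showing that negative zero carries a set sign bit and steers a divide-by-zero toward negative infinity:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double pz =  0.0;   /* positive zero: sign bit clear */
    double nz = -0.0;   /* negative zero: sign bit set   */

    printf("signbit(+0.0) = %d, signbit(-0.0) = %d\n",
           (int)!!signbit(pz), (int)!!signbit(nz));
    printf("1.0 / +0.0 = %f\n", 1.0 / pz);   /* +inf */
    printf("1.0 / -0.0 = %f\n", 1.0 / nz);   /* -inf */
    return 0;
}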

Next is the exponent. Here, there is a trick: the exponent field does not have a sign bit of its own, yet we (should) all know that exponents can be negative. The trick is that the stored exponent carries a bias, which must be subtracted in order to find its true value. The bias can be computed as follows:

b = 2^(n-1) - 1

Or, equivalently:

b = (2^n)/2 - 1

Where:
b = the bias
n = the number of bits in the exponent
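
As a worked example, here's a short C sketch (mine, assuming the half-precision field widths given earlier) that computes the bias from the exponent width and then recovers a true exponent from a stored one:

#include <stdint.h>
#include <stdio.h>

/* bias = 2^(n-1) - 1, where n is the number of bits in the exponent */
static int bias(int n)
{
    return (1 << (n - 1)) - 1;
}

int main(void)
{
    uint16_t bits  = 0x4170;               /* same example half-precision pattern as before */
    int stored_exp = (bits >> 10) & 0x1F;  /* the 5-bit stored (biased) exponent */
    int true_exp   = stored_exp - bias(5); /* subtract the half-precision bias   */

    printf("half-precision bias   = %d\n", bias(5));   /* 15   */
    printf("single-precision bias = %d\n", bias(8));   /* 127  */
    printf("double-precision bias = %d\n", bias(11));  /* 1023 */
    printf("stored exponent = %d, true exponent = %d\n", stored_exp, true_exp);
    return 0;
}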

More simply, the biases are shown in the table below:

Format            Exponent bits   Bias
Half                    5            15
Single                  8           127
Double                 11          1023
Double Extended        15         16383
Quad                   15         16383
