MAD 3401
IEEE Notes - Sect. 3
Single precision (32-bit) representation

 ====================

The 32-bit (single precision) IEEE floating-point representation is laid out as

s eeeeeeee fffffffffffffffffffffff

in the IEEE 754 standard. There is 1 bit for the sign (s), Ne = 8 bits in the exponent field, and Nf = 23 bits for the fractional part of the mantissa.

We store bias+p in the exponent field; the bias is 01111111 (binary) = 7F (hex) = 127 (decimal).

To allow for the representation of special values (0, Inf, NaN), as described in Section 4, two exponent bit patterns (all zeros and all ones) are reserved, limiting the power p to the range [-126, 127].
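As a concrete illustration (an added sketch, not part of the original notes), the following C program pulls the three fields out of a float's bit pattern and removes the bias. It assumes that C's float type is IEEE single precision; the value -6.25 is just an arbitrary example.

/* Sketch: unpack sign, exponent, and fraction fields of a 32-bit IEEE float. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float x = -6.25f;                        /* arbitrary example value      */
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);          /* reinterpret the 32 bits      */

    uint32_t s = bits >> 31;                 /* 1 sign bit                   */
    uint32_t e = (bits >> 23) & 0xFFu;       /* 8 exponent bits (bias + p)   */
    uint32_t f = bits & 0x7FFFFFu;           /* 23 fraction bits             */
    int p = (int)e - 127;                    /* remove the bias              */

    printf("s = %u, e = %u (p = %d), f = 0x%06X\n",
           (unsigned)s, (unsigned)e, p, (unsigned)f);
    return 0;
}

For x = -6.25 = -1.5625 x 2^2 this prints s = 1, e = 129 (p = 2), f = 0x480000.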

Since the mantissa has a total of 24 bits (when you count the hidden bit) and is rounded, the magnitude of the relative error in a number is bounded by 2^{-24} = 5.96... x 10^{-8}.
This means we get a bit more than 7 decimal digits of precision.
(Counting the hidden bit, the 24-bit mantissa can take on 2^{24} = 16777216 distinct values, and log_{10}(2^{24}) = 7.22..., i.e. just over 7 digits.)
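A small sketch (again an addition, not from the lecture) shows the 2^{-24} bound in action: 1 + 2^{-24} lies exactly halfway between 1 and the next float, so round-to-nearest (ties to even) sends it back to 1, while adding one unit in the last place (2^{-23}) does not disappear.

/* Sketch: the rounding error bound 2^-24 for single precision. */
#include <stdio.h>

int main(void)
{
    float a = 1.0f + 0x1p-24f;   /* exactly half an ulp of 1.0f; tie rounds to even */
    float b = 1.0f + 0x1p-23f;   /* exactly one ulp of 1.0f; representable exactly  */
    printf("1 + 2^-24 == 1 ? %d\n", a == 1.0f);   /* prints 1 */
    printf("1 + 2^-23 == 1 ? %d\n", b == 1.0f);   /* prints 0 */
    return 0;
}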

The largest positive number that can be stored is
1.11111111111111111111111 x 2^{127} = 3.402823... x 10^{38}.
Notice that 1.11111111111111111111111 = 2 - 2^{-23}.
Also note that log_{10}(largest) = 38.531839...
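As a quick check (an added sketch, not part of the notes), the largest value can be built directly from the formula above and compared with FLT_MAX from the standard C header <float.h>:

/* Sketch: largest finite single-precision value, (2 - 2^-23) * 2^127. */
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    float largest = (2.0f - 0x1p-23f) * 0x1p127f;
    printf("largest = %.6e\n", largest);        /* 3.402823e+38 */
    printf("FLT_MAX = %.6e\n", FLT_MAX);        /* same value   */
    printf("log10   = %f\n", log10(largest));   /* 38.531839... */
    return 0;
}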

The smallest positive normalized number is
1.00000000000000000000000 x 2^{-126} = 1.175494... x 10^{-38}.
Note that log_{10}(smallest) = -37.929779...
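Similarly (again an added sketch), the smallest positive normalized value 2^{-126} is what <float.h> provides as FLT_MIN:

/* Sketch: smallest positive normalized single-precision value, 1.0 * 2^-126. */
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    float smallest = 0x1p-126f;                   /* 1.0 x 2^-126 */
    printf("smallest = %.6e\n", smallest);        /* 1.175494e-38 */
    printf("FLT_MIN  = %.6e\n", FLT_MIN);         /* same value   */
    printf("log10    = %f\n", log10(smallest));   /* -37.929779   */
    return 0;
}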

Examples from lecture.

 ====================

Next: Section 4: Special values

 ====================

This material is © Copyright 1996, by James Carr. FSU students enrolled in MAD-3401 have permission to make personal copies of this document for use when studying. Other academic users may link to this page but may not copy or redistribute the material without the author's permission.
