Double-Precision Floating Point

The XDR standard defines the encoding for the double-precision floating-point data type, called a double.

A double is 64 bits (8 bytes) long. Doubles are encoded using the IEEE 754 standard for normalized double-precision floating-point numbers.

The value of a double-precision floating-point number is represented as follows:

(-1)**S * 2**(E-Bias) * 1.F
Item Description
S    Sign of the number. This 1-bit field is 0 for positive or 1 for negative.
E    Exponent of the number in base 2. This field contains 11 bits. The exponent is biased by 1023.
F    Fractional part of the number's mantissa in base 2. This field contains 52 bits.
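The formula and field widths above can be checked in code. The sketch below, which assumes a host with IEEE 754 doubles, uses Python's struct module to pack a value in the big-endian layout XDR specifies and then splits the 64 bits into the S, E, and F fields; the function name xdr_double_fields is illustrative, not part of any XDR library.

```python
import struct

def xdr_double_fields(value):
    """Split a double, in its XDR (big-endian IEEE 754) encoding,
    into the S (1-bit sign), E (11-bit biased exponent), and
    F (52-bit fraction) fields described above."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", value))
    s = bits >> 63                # sign bit
    e = (bits >> 52) & 0x7FF      # 11-bit exponent, biased by 1023
    f = bits & ((1 << 52) - 1)    # 52-bit fraction
    return s, e, f

# Reconstructing a normalized number from its fields follows the formula:
# (-1)**S * 2**(E - 1023) * (1 + F / 2**52)
s, e, f = xdr_double_fields(-0.75)
assert (-1) ** s * 2 ** (e - 1023) * (1 + f / 2 ** 52) == -0.75
```

For -0.75 = -1.1 (binary) * 2**-1, the fields come out as S = 1, E = 1022 (that is, -1 + 1023), and F = 2**51 (the binary fraction .1).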

See the Double-Precision Floating Point figure (Figure 1).

Figure 1. Double-Precision Floating-Point
The first line of this diagram lists bytes 0 through 7. The second line shows the corresponding fields and their respective lengths: S (1 bit) and E (11 bits) occupy byte 0 and the upper half of byte 1, while F (52 bits) extends from the lower half of byte 1 through byte 7. The third line shows the total length of bytes 0 through 7, which is 64 bits.

The most and least significant bytes of a number are 0 and 7. The most and least significant bits of a double-precision floating-point number are 0 and 63. The beginning (and most significant) bit offsets of S, E, and F are 0, 1, and 12, respectively. These numbers refer to the mathematical positions of the bits, not to their physical locations, which vary from medium to medium.

Consult the IEEE specifications when encoding signed zero, signed infinity (overflow), and denormalized numbers (underflow). According to the IEEE specifications, NaN (not a number) is system-dependent and should not be used externally.
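Those special cases are all distinguished by the E and F fields alone: E = 0 marks signed zero (F = 0) or a denormalized number (F != 0), and E = 2047 (all ones) marks signed infinity (F = 0) or a NaN (F != 0). The sketch below, a hypothetical helper rather than part of any XDR implementation, classifies eight XDR octets accordingly:

```python
import math
import struct

def classify_xdr_double(octets):
    """Classify 8 XDR octets (big-endian IEEE 754 double) as
    zero, denormal, infinity, NaN, or a normal number."""
    (bits,) = struct.unpack(">Q", octets)
    e = (bits >> 52) & 0x7FF      # 11-bit biased exponent
    f = bits & ((1 << 52) - 1)    # 52-bit fraction
    if e == 0:
        return "zero" if f == 0 else "denormal"
    if e == 0x7FF:
        return "infinity" if f == 0 else "NaN"
    return "normal"

assert classify_xdr_double(struct.pack(">d", -0.0)) == "zero"
assert classify_xdr_double(struct.pack(">d", math.inf)) == "infinity"
assert classify_xdr_double(struct.pack(">d", math.nan)) == "NaN"
assert classify_xdr_double(struct.pack(">d", 5e-324)) == "denormal"
```

A receiver that detects the NaN pattern should treat it as system-dependent data rather than assume any particular meaning.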