128-bit long double floating-point data type

The AIX® operating system supports a 128-bit long double data type that provides greater precision than the default 64-bit long double data type. The 128-bit data type can handle up to 31 significant digits (compared to 17 handled by the 64-bit long double). However, while this data type can store numbers with more precision than the 64-bit data type, it does not store numbers of greater magnitude.

The following special issues apply to the use of the 128-bit long double data type:

  • Compiling programs that use the 128-bit long double data type
  • Compliance with the IEEE 754 standard
  • Implementing the 128-bit long double format
  • Values of numeric macros

Compiling programs that use the 128-bit long double data type

To compile C programs that use the 128-bit long double data type, use the xlc128 command. This command is an alias for the xlc command with added support for the 128-bit long double data type. The xlc command supports only the 64-bit long double data type.

The libc128.a library provides replacements for the libc.a routines that are implicitly sensitive to the size of a long double. Link with the libc.a library alone when compiling applications that use the 64-bit long double data type. Link applications that use 128-bit long double values with both the libc128.a and libc.a libraries, and specify the libc128.a library before the libc.a library in the library search order.
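
For example, the following program is a minimal sketch that uses the 128-bit long double data type; the build command shown in the comment is illustrative only, and the exact options depend on the compiler release and installation:

    /*
     * Sketch of a program that uses the 128-bit long double data type.
     * A typical build, following the linking guidance above, might be:
     *     xlc128 demo.c -o demo -lc128 -lc
     * (illustrative only; options vary by compiler release).
     */
    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        long double third = 1.0L / 3.0L;

        /* With the 128-bit type, up to 31 significant decimal digits are meaningful. */
        printf("sizeof(long double) = %d bytes\n", (int)sizeof(long double));
        printf("LDBL_DIG            = %d\n", LDBL_DIG);
        printf("1/3                 = %.31Lg\n", third);
        return 0;
    }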

Compliance with the IEEE 754 standard

The 64-bit implementation of the long double data type is fully compliant with the IEEE 754 standard, but the 128-bit implementation is not. Use the 64-bit implementation in applications that must conform to the IEEE 754 standard.

The 128-bit implementation differs from the IEEE standard for long double in the following ways:

  • Supports only round-to-nearest mode. If the application changes the rounding mode, results are undefined.
  • Does not fully support the IEEE special numbers NaN and INF.
  • Does not support IEEE status flags for overflow, underflow, and other conditions. These flags have no meaning for the 128-bit long double implementation.
  • Does not support the following math APIs: atanhl, cbrtl, copysignl, exp2l, expm1l, fdiml, fmal, fmaxl, fminl, hypotl, ilogbl, llrintl, llroundl, log1pl, log2l, logbl, lrintl, lroundl, nanl, nearbyintl, nextafterl, nexttoward, nexttowardf, nexttowardl, remainderl, remquol, rintl, roundl, scalblnl, scalbnl, tgammal, and truncl.
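
Because only the 64-bit format is fully compliant, code that relies on IEEE 754 semantics may need to detect which long double format it was built with. The following minimal sketch assumes only the standard float.h macros; it uses the fact that LDBL_MANT_DIG is 106 for the 128-bit format and 53 for the 64-bit format:

    /*
     * Minimal sketch: distinguish the 64-bit (IEEE 754) and 128-bit long
     * double formats by using LDBL_MANT_DIG from float.h.
     */
    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
    #if LDBL_MANT_DIG == 106
        /* 128-bit (double-double) format: round-to-nearest only, and limited
           support for NaN, INF, and the IEEE status flags. */
        puts("long double is the 128-bit format; do not rely on full IEEE 754 semantics");
    #else
        puts("long double is the 64-bit IEEE 754 format");
    #endif
        return 0;
    }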

Implementing the 128-bit long double format

A 128-bit long double number consists of an ordered pair of 64-bit double-precision numbers. The first member of the ordered pair contains the high-order part of the number, and the second member contains the low-order part. The value of the long double quantity is the sum of the two 64-bit numbers.

Each of the two 64-bit numbers is itself a double-precision floating-point number with a sign, exponent, and significand. The low-order member has a magnitude that is less than 1 unit in the last place of the high part, so the values of the two 64-bit numbers do not overlap and the entire significand of the low-order number adds precision beyond the high-order number.
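
The following sketch makes this layout visible. It assumes the double-double representation described above and a build in which long double is 128 bits; it is not portable to platforms that use other long double formats:

    /*
     * Sketch: view a 128-bit long double as its ordered pair of doubles.
     * Assumes the AIX double-double layout described above.
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        long double x = 1.0L / 3.0L;   /* a value with a nonzero low-order part */
        double pair[2];

        if (sizeof(long double) != sizeof pair) {
            puts("long double is not the 128-bit format in this build");
            return 1;
        }

        /* Copy the 128-bit value into an ordered pair of 64-bit doubles. */
        memcpy(pair, &x, sizeof pair);

        printf("high-order part = %.17g\n", pair[0]);
        printf("low-order part  = %.17g\n", pair[1]);

        /* The long double value is the sum of the two parts. */
        printf("sum of parts    = %.31Lg\n", (long double)pair[0] + pair[1]);
        printf("original value  = %.31Lg\n", x);
        return 0;
    }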

This representation results in several issues that must be considered in the use of these numbers:

  • The precision of the 128-bit long double data type is greater than the precision of the double data type, but the exponent range is the same. Therefore, the largest magnitude that can be represented by the 128-bit long double data type is only slightly greater than the largest magnitude that can be represented by the 64-bit double-precision data type.
  • As the magnitude of a value decreases toward the denormal range, the additional precision available in the low-order part also decreases. When the value to be represented is in the denormal range, this representation provides no more precision than the 64-bit double-precision data type.
  • The actual number of bits of precision can vary. If the low-order part is much less than 1 ULP of the high-order part, significant bits (either all 0's or all 1's) are implied between the significands of the high-order and low-order numbers. Certain algorithms that rely on having a fixed number of bits in the significand can fail when using 128-bit long double numbers, as illustrated in the sketch after this list.
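
The following sketch illustrates the variable precision. It assumes the 128-bit (double-double) behavior described above; the values are chosen only for illustration and the results may vary:

    /*
     * Sketch of the variable-precision effect. Assumes the 128-bit
     * (double-double) behavior described above; illustrative only.
     */
    #include <stdio.h>

    int main(void)
    {
        /* Near 1.0 the low-order part can hold an increment far smaller than
           1 ULP of a double, so the extra precision is visible. */
        long double one = 1.0L;
        printf("1.0 + 1.0e-20 > 1.0        : %d\n", one + 1.0e-20L > one);

        /* In the denormal range the low-order part cannot hold a comparably
           small relative increment, so the same relative addition is
           typically lost. */
        long double tiny = 1.0e-310L;
        printf("tiny + tiny*1.0e-20 > tiny : %d\n", tiny + tiny * 1.0e-20L > tiny);
        return 0;
    }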

Values of numeric macros

Because of the storage method for the long double data type, more than one number can satisfy certain values that are available as macros. The representation of 128-bit long double numbers means that the following macros, which standard C requires in the float.h file, do not have a clear meaning:

  • Number of bits in the mantissa (LDBL_MANT_DIG)
  • Epsilon (LDBL_EPSILON)
  • Maximum representable finite value (LDBL_MAX)

Number of bits in the mantissa

The number of bits in the significand is not fixed, but for a correctly formatted number (except in the denormal range) the minimum number available is 106. Therefore, the value of the LDBL_MANT_DIG macro is 106.

Epsilon

The ANSI C standard defines the value of epsilon as the difference between 1.0 and the least representable value greater than 1.0, that is, b**(1-p), where b is the radix (2) and p is the number of base b digits in the number. This definition requires that the number of base b digits is fixed, which is not true for 128-bit long double numbers.

The smallest representable value greater than 1.0 is this number:

0x3FF0000000000000, 0x0000000000000001

The difference between this value and 1.0 is this number:

0x0000000000000001, 0x0000000000000000
0.4940656458412465441765687928682213E-323

Because 128-bit numbers usually provide at least 106 bits of precision, an appropriate minimum value for p is 106. Thus, b**(1-p), which is 2**(-105), yields this value:

0x3960000000000000, 0x0000000000000000
0.24651903288156618919116517665087070E-31

Both values satisfy the definition of epsilon according to standard C. The long double subroutines use the second value because it better characterizes the accuracy provided by the 128-bit implementation.

Maximum long double value

The value of the LDBL_MAX macro is the largest 128-bit long double number that can be multiplied by 1.0 and yield the original number. This value is also the largest finite value that can be generated by primitive operations, such as multiplication and division:

0x7FEFFFFFFFFFFFFF, 0x7C9FFFFFFFFFFFFF
0.1797693134862315907729305190789002575e+309
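
As a quick check of the values discussed in this section, the following sketch prints the three macros. It assumes the standard float.h definitions and a build that uses the 128-bit long double format; the exact output formatting will vary:

    /*
     * Sketch: print the numeric macros discussed above.
     * Assumes the 128-bit long double format and the standard float.h macros.
     */
    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);    /* 106 */
        printf("LDBL_EPSILON  = %.34Lg\n", LDBL_EPSILON); /* about 2**(-105) */
        printf("LDBL_MAX      = %.34Lg\n", LDBL_MAX);     /* about 1.7976931e+308 */
        return 0;
    }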