1 reply - Latest Post 2012-12-14T00:29:00Z by Robin500
PavankumarM
10 Posts

Pinned topic Help on conversion of Binary Float to Decimal Float

2012-03-09T17:52:32Z
We are facing precision issues because our program declares Binary Float data types. Are there any utilities to automatically convert the code, or compiler options, to convert these to Decimal Float? - regards Pavan
Updated on 2012-12-14T00:29:00Z by Robin500
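For context, the precision issue being described can be sketched quickly in Python (illustrative only, not the PL/I code in question): binary floating point cannot represent most decimal fractions exactly, which is exactly where a decimal floating-point type helps.

```python
from decimal import Decimal

# Binary floating point: 0.1 has no exact base-2 representation,
# so repeated addition accumulates rounding error.
binary_sum = sum([0.1] * 10)
print(binary_sum == 1.0)             # False on IEEE 754 binary64

# Decimal floating point represents 0.1 exactly.
decimal_sum = sum([Decimal("0.1")] * 10)
print(decimal_sum == Decimal("1"))   # True
```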
  • Robin500
    8 Posts

    Re: Help on conversion of Binary Float to Decimal Float

    2012-12-14T00:29:00Z in response to PavankumarM
     You specify precision in binary as: FLOAT BINARY (n),
    where the constant n is the (minimum) number of binary digits that you want in the mantissa.
    If you specify just FLOAT BINARY, you get the default precision (one word),
    or about 21 binary digits for hexadecimal floating-point.
    You can get up to 53 binary digits.
     
    For the PC (IEEE float), you have up to 64 binary digits.
     
    By specifying FLOAT or FLOAT(m) you will get either float binary or float decimal,
    depending on the compiler. In this case, m is the number of decimal digits of
    precision that you want; e.g., FLOAT (18) gives 64 binary digits on the PC (IEEE float).
    FLOAT by itself gives the default precision.
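The decimal-to-binary digit mapping can be checked with a quick sketch (Python here for illustration; `binary_digits_needed` is a hypothetical helper, not part of any PL/I compiler):

```python
import math

def binary_digits_needed(decimal_digits: int) -> int:
    """Minimum mantissa bits needed to carry the given count of decimal digits."""
    return math.ceil(decimal_digits * math.log2(10))

# FLOAT (18): 18 decimal digits need about 60 bits, which fits in the
# 64-bit mantissa of x87 extended precision -- hence "64 binary digits".
print(binary_digits_needed(18))   # 60
print(binary_digits_needed(16))   # 54 -> already exceeds the 53-bit double mantissa
```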