We are facing precision issues because our program declares binary float datatypes. Are there utilities to automatically convert the code, or compiler options, to convert these to decimal float? Regards, Pavan
1 reply. Latest post: 2012-12-14T00:29:00Z by Robin500
Pinned topic Help on conversion of Binary Float to Decimal Float
Robin500 (270005XCVK8) — ACCEPTED ANSWER
Re: Help on conversion of Binary Float to Decimal Float
2012-12-14T00:29:00Z, in response to PavankumarM

You specify precision in binary as FLOAT BINARY (n), where the constant n is the (minimum) number of binary digits that you want in the mantissa. If you specify just FLOAT BINARY, you get the default precision (one word), or about 21 binary digits for hexadecimal floating-point. You can get up to 53 binary digits. For the PC (IEEE float), you have up to 64 binary digits.

By specifying FLOAT or FLOAT(m), you will get either float binary or decimal, depending on the compiler. In this case, m is the number of decimal digits of precision that you want; e.g., FLOAT (18) gives 64 binary digits on the PC (IEEE float). Just FLOAT by itself gives default precision.
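The precision issue the original poster describes stems from binary floating point being unable to represent most decimal fractions exactly, whereas a decimal floating-point type stores decimal digits directly. A minimal illustration of that difference in Python (not PL/I — the thread's language can't be run here, so this is purely to show the concept):

```python
from decimal import Decimal

# Binary float: 0.1 has no exact binary representation,
# so repeated addition accumulates rounding error.
binary_sum = sum([0.1] * 10)
print(binary_sum == 1.0)   # False on IEEE 754 binary doubles

# Decimal float: 0.1 is represented exactly,
# so the sum is exactly 1.
decimal_sum = sum([Decimal("0.1")] * 10)
print(decimal_sum == Decimal("1"))   # True
```

The same distinction applies in PL/I between FLOAT BINARY and FLOAT DECIMAL declarations, which is why converting the datatypes (rather than just widening the precision) may be what the poster needs.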