Pinned topic: Help on conversion of Binary Float to Decimal Float
2012-03-09T17:52:32Z by PavankumarM
This topic has been locked. 1 reply. Latest post: 2012-12-14T00:29:00Z by Robin500.

We are facing precision issues because our program declares BINARY FLOAT data types. Are there utilities to automatically convert the code, or compiler options, that would convert these to DECIMAL FLOAT?

Regards,
Pavan


ACCEPTED ANSWER
Re: Help on conversion of Binary Float to Decimal Float
2012-12-14T00:29:00Z, in response to PavankumarM

You specify precision in binary as:

    FLOAT BINARY (n)

where the constant n is the (minimum) number of binary digits that you want in the mantissa. If you specify just FLOAT BINARY, you get the default precision (one word), or about 21 binary digits for hexadecimal floating point. You can get up to 53 binary digits. On the PC (IEEE float), you can have up to 64 binary digits.

By specifying FLOAT or FLOAT (m), you will get either float binary or float decimal, depending on the compiler. In this case, m is the number of decimal digits of precision that you want; e.g., FLOAT (18) gives 64 binary digits on the PC (IEEE float). Just FLOAT by itself gives default precision.
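
For illustration, the declarations described above might look like this in PL/I (a sketch only; the variable names are hypothetical, and which precisions are available depends on your compiler and target platform):

    declare Short_B  float binary  (21);  /* default short binary float       */
    declare Long_B   float binary  (53);  /* long binary float                */
    declare Dec_F    float decimal (16);  /* decimal float, 16 decimal digits */

Note that changing a declaration from FLOAT BINARY to FLOAT DECIMAL is a source-level change; there is no utility that rewrites the declarations for you, so each DECLARE statement would need to be edited (or generated) accordingly.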