Do We Need Accuracy In Solvers?
Paul Rubin commented on the first post that one of the problems leading to accuracy issues is poor scaling. I tend to agree, and would like to offer two examples. I encountered the first one when working with a large supply chain management software company. They were experiencing numerical precision issues in an application for scheduling production of material. Looking at the details, we discovered that quantities were expressed in grams and durations in seconds.
So far so good, until we realized that the quantities were in thousands of metric tons, and that the scheduling horizon was a month. Some of the constraints of the problem involved products of time and quantity. Thousands of tons are 10-digit numbers when expressed in grams, and durations of up to a month are 7-digit numbers when expressed in seconds. Some numbers appearing in constraints therefore had up to 17 digits, which is more than a double precision floating point number can represent. Moreover, the effective precision of solvers like CPLEX, which use double precision floating point numbers, is smaller still. As a rule of thumb, anything beyond six or seven digits of accuracy in linear programming is subject to caution, except in specific cases. In mixed integer programming, precision can be greater thanks to integer computation, but since the computation relies partly on the underlying LP relaxation, accuracy claims remain a bit suspect.

We explained this to the customer and had them rescale their numbers, using tons as the quantity unit. We also got them to change the time unit when scheduling over a month, moving to time buckets of 15 minutes. The resulting numbers required far fewer digits of accuracy, and application development proceeded with good results.

The above is called rescaling: multiplying or dividing input numbers by large factors so that all numbers have roughly the same order of magnitude. What is really bad is having both very small and very large coefficients in the same constraint. Solvers do some scaling automatically, but it is much better to input only numbers of similar magnitude.

Once numbers have been rescaled, most problems run fine with CPLEX. This means that standard double precision is indeed enough for most cases. This should close the discussion, shouldn't it? Both John D. Cook and Samik Raychaudhuri seem right.
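To see concretely why 17-digit products overflow double precision while the rescaled model is safe, here is a small Python sketch. The quantities and durations are made up for illustration; they are not from the actual customer model.

```python
# Thousands of tons in grams (~10 digits) times roughly a month in
# seconds (~7 digits): the product needs up to 17 significant digits,
# but a double carries only about 15-16.
quantity_g = 5_234_567_891        # about 5,235 tons, expressed in grams
duration_s = 2_591_999            # one second short of 30 days

exact = quantity_g * duration_s                    # Python ints are exact
as_double = float(quantity_g) * float(duration_s)  # what a double sees
print(exact - int(as_double))     # nonzero: the double lost the low digits

# Rescaled: tons and 15-minute buckets give products of ~11 digits,
# comfortably inside a double's 15-16 digit precision.
quantity_t = 5_234.567891         # same quantity, in tons
duration_buckets = 2_880          # 30 days of 15-minute buckets
print(quantity_t * duration_buckets)  # representable without losing units
```

The first print shows an absolute error of whole units: the double rounds the 17-digit product to the nearest representable value. After rescaling, the error drops to the relative 1e-16 level, which is harmless for a scheduling model.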
Well, let's look at my second example. CPLEX is being used as part of a future financial services application to optimize the settlement of trades made on various exchanges. I cannot say more now, as the project is still going on, but I'll certainly communicate about it when the system goes live. There is a nice description of a (vaguely) similar system in this paper: INDE

These applications deal with amounts in banking accounts, and computations involving banking accounts cannot be subject to numerical issues. They have to be exact to the cent, even if the amount is in trillions. As a matter of fact, many banking applications are required to represent numbers with 18 digits and to carry out numerical computations at that precision. It was definitely one of the requirements for the trade settlement project I am talking about. I was even told by the customer that a requirement for handling 22-digit numbers is forthcoming. Needless to say, the accuracy of any technology using double precision floating point numbers, such as CPLEX, is insufficient.

What we recommended was to perform the optimization piece using the available double precision, store the results (e.g. which trades are to be settled), and then redo the computation using the exact precision routines used in banking. Another option would have been to use quad precision inside CPLEX, but that would have required a substantial rework that was incompatible with the project schedule. Sometimes greater accuracy is indeed required.
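To illustrate the gap between double precision and the 18-digit banking requirement, here is a quick sketch. Python's `decimal` module stands in for the exact-precision routines banks actually use, and the amounts are invented for the example.

```python
from decimal import Decimal, getcontext

# A trillion-scale amount tracked to the cent already needs 15+
# significant digits, at the very edge of what a double can hold.
balance = 7_234_567_890_123.45

# Adding one cent in double precision does not add exactly one cent:
# the spacing between adjacent doubles at this magnitude is ~0.001.
print(balance + 0.01 - balance)        # prints 0.009765625, not 0.01

# Exact decimal arithmetic at the required 18 digits:
getcontext().prec = 18
exact_balance = Decimal("7234567890123.45")
print(exact_balance + Decimal("0.01")) # 7234567890123.46, exact to the cent
```

This is why the recommendation was to let CPLEX optimize in double precision and then recompute the final settlement amounts with exact arithmetic: the optimizer only needs to pick the right trades, while the bookkeeping must be exact.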

For a model whose parameters are estimated/forecast/snatched from a place the moon don't shine, obsessing about numerical accuracy (or, with a MIP model, squeezing the gap down to zero) is probably not energy well spent.
Right, this was Samik's point, I guess. My point was not to lose track of the cases where accuracy is required.