Knowing The Optimum Helps A Bit
A 2x speedup is good. No doubt about it.
However, to get this speedup a lot of information is provided: the solver is given the very result it is supposed to compute. Indeed, solving a MIP amounts to finding an optimal solution, doesn't it?
Then, why isn't the speedup larger?
Let's look more closely at what needs to be done when an optimal solution is provided as input. To be precise, what is being input is a feasible solution whose objective value happens to be the best possible one. But the solver isn't told that this is the best possible objective value. That's the trick. Then the solver does two things:

1. It finds a solution at least as good as the input one. This part comes for free: the input solution itself qualifies.
2. It proves that no feasible solution has a better objective value.
What about part 2? Can there be a quick (polynomial time) way to prove that there are no better feasible solutions? Most theoreticians believe this is not always possible, because it is a coNP-complete problem (*). If they are right, then there is no general way to quickly prove that a solution is optimal. Carvajal and Achterberg's experiments confirm this.

Does it matter in practice? One could argue that we should not worry about proving optimality. Indeed, there are optimization methods, e.g. metaheuristics, that do not prove optimality. Others will say that proving optimality has value; see for instance What Is The Gap? I won't repeat that discussion here. Let me just make the following point: in Carvajal and Achterberg's experiments, proving optimality requires between a quarter and a half of the solving time on average. This is not that expensive, so why not do it?

(*) For those interested in knowing more about coNP-completeness, here is a very short introduction. Let us define the following decision problem:

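The cost of part 2 can be seen in miniature with a toy branch-and-bound. The sketch below is purely illustrative, assuming a tiny 0/1 knapsack as a stand-in for a MIP; it is not Carvajal and Achterberg's experimental setup. We count the nodes explored when solving from scratch, and again when the optimal objective value is seeded as the incumbent. Even with the optimum in hand, nodes must still be explored: that remaining work is the proof that no better solution exists.

```python
# Toy 0/1 knapsack branch-and-bound -- a hypothetical miniature of a MIP solver.
# We count nodes explored (a) from scratch and (b) with the optimal objective
# value seeded as the incumbent. Case (b) still explores nodes: that is the
# proof of optimality (part 2), which pruning alone does not make free.

def solve(values, weights, capacity, incumbent=float("-inf")):
    """Depth-first branch and bound; returns (best objective, nodes explored)."""
    n = len(values)
    # Sort items by value density so the greedy fractional bound is tighter.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def bound(i, cap, val):
        # Greedy fractional relaxation: upper bound on any completion of this node.
        while i < n and w[i] <= cap:
            cap -= w[i]
            val += v[i]
            i += 1
        return val + (v[i] * cap / w[i] if i < n else 0.0)

    best, nodes = incumbent, 0
    stack = [(0, capacity, 0)]  # (next item, remaining capacity, value so far)
    while stack:
        i, cap, val = stack.pop()
        nodes += 1
        best = max(best, val)  # every partial selection is feasible
        if i == n or bound(i, cap, val) <= best:
            continue  # pruned: this subtree cannot beat the incumbent
        stack.append((i + 1, cap, val))  # branch: skip item i
        if w[i] <= cap:
            stack.append((i + 1, cap - w[i], val + v[i]))  # branch: take item i
    return best, nodes

values, weights, capacity = [10, 10, 12], [7, 7, 8], 14
opt, cold = solve(values, weights, capacity)
_, warm = solve(values, weights, capacity, incumbent=opt)
print(opt, cold, warm)  # warm < cold, but warm > 1: the optimality proof is not free
```

On this instance the fractional relaxation at the root is strictly above the integer optimum, so even the warm-started run must branch before it can conclude that no better solution exists, mirroring the quarter-to-half of solving time observed in the experiments.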