Faster To What?
Jean-Francois Puget
Marc-Andre Carle had a great tweet about the INFORMS 2012 conference.
This triggered two interesting blog entries, by Nathan Brixius and Marc-Andre Carle, on what it means for a mathematical optimization solver to be better. Not surprisingly, the discussion quickly narrows to better = faster, since speed is the easiest to measure among the dimensions Nathan outlined. I agree with most of what Nathan and Marc-Andre wrote and won't repeat their arguments here, except on two items: determinism, and how "faster" is defined.
Let's deal with the less important one first. Marc-Andre wrote that one issue in measuring solver performance is the lack of determinism of stochastic or parallel algorithms. Let me simply remind readers that CPLEX has offered deterministic parallel MIP algorithms for several releases. The MIP solver uses various stochastic methods internally, driven by a random seed. The latest release (12.5) lets users provide the random seed and set deterministic time limits, so completely reproducible results can be ensured with that release.
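The reproducibility property at stake can be illustrated with a small sketch in plain Python (not the CPLEX API; the function below is a made-up stand-in for a solver's internal stochastic heuristics): running a randomized search twice with the same user-supplied seed yields bit-for-bit identical results.

```python
import random

def randomized_search(costs, iterations=1000, seed=None):
    """Toy randomized search: repeatedly sample a candidate index
    and keep the lowest cost seen. A stand-in for the stochastic
    methods a MIP solver uses internally."""
    rng = random.Random(seed)  # dedicated RNG, isolated per run
    best = float("inf")
    for _ in range(iterations):
        i = rng.randrange(len(costs))
        best = min(best, costs[i])
    return best

costs = [17.0, 9.5, 23.1, 4.2, 11.8]
# Same seed -> identical result on every run, hence reproducible.
assert randomized_search(costs, seed=42) == randomized_search(costs, seed=42)
```

The same idea scales up: once every source of randomness is driven by a seed the user controls, two runs of the same model with the same settings produce the same answer.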
Let us look at the other issue: how "faster" is defined. At first sight, what matters is indeed the speed at which a solver can solve your problems. Results obtained on a set of benchmark problems are irrelevant unless your problems are really close to the ones used in that benchmark; Marc-Andre made that argument rather convincingly. This is why we often recommend that customers actually run the various solvers they are considering on their own problems.
Is that the only precaution to take? No, as we'll see right now. Let's have a look at the use cases Nathan outlined for using a mathematical optimization solver: academic research, rapid prototyping, consulting engagements, and production systems.
In each of these cases, the person developing the mathematical models and algorithms has limited time to do it. The limit may be the submission deadline for academic research, the demo due date for rapid prototyping, the consulting budget for a consulting engagement, or the go-live date for a production system. What matters, then, is the performance one can achieve within those time limits.
What matters is the performance of what you can develop in a given amount of time.
So, is it better to have a fast solver that is hard to use, or a solver that lets you quickly try various model formulations? The answer is probably both. That's why modeling languages were designed: they complement solvers by providing easier ways to state and tune models. Next time you compare options for using optimization technology, think about performance along both dimensions: the speed at which the solver solves a given model, and the speed at which your team can develop and tune the models to be solved. This is reminiscent of a previous post on where effort needs to be spent during an optimization project.