Question & Answer
Is there any parameter to instruct CPLEX MIP to stop when a given value is reached as solution?
CPLEX does offer an absmipgap parameter that tells CPLEX to stop the optimization when the difference between the objective of the current solution and the best bound is within a specified absolute value. CPLEX also offers the mipgap tolerance parameter, which applies the same test as a relative difference, and hence involves a percentage. However, there is no parameter that tells CPLEX to stop once the current solution has reached a given absolute value.
You can use CPLEX callbacks. For example, you can use the informational callback, which CPLEX calls each time a new solution is found. At that point you can check the objective value and abort the optimization if the value satisfies your criterion. See the CPLEX documentation for more information on callbacks. While you could also use the incumbent callback for this, doing so would disable CPLEX's default tree search methodology, potentially slowing performance.
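As a sketch, assuming the legacy CPLEX Python API (the same pattern exists in the C, C++, and Java APIs), an informational callback that aborts once the incumbent objective drops below a target could look like the following; the function name, the model path argument, and the target of 0 are illustrative placeholders:

```python
def solve_until_target(model_path, target=0.0):
    """Solve a MIP, aborting as soon as an incumbent with objective < target
    is found (sketch using the legacy CPLEX Python callback API)."""
    import cplex
    from cplex.callbacks import MIPInfoCallback

    class StopAtTarget(MIPInfoCallback):
        # CPLEX invokes __call__ regularly during branch and cut,
        # including when new incumbents are found.
        def __call__(self):
            if (self.has_incumbent()
                    and self.get_incumbent_objective_value() < target):
                self.abort()  # stop the optimization early

    cpx = cplex.Cplex(model_path)
    cpx.register_callback(StopAtTarget)
    cpx.solve()
    return cpx
```

The decision logic lives entirely in the callback, so the rest of the model setup is unchanged.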
Alternatively, you can use existing CPLEX parameters. Suppose you have a minimization problem and want to stop as soon as you have a feasible solution with objective value < 0. You can set the following parameters:
- uppercutoff to 0
- MIP integer solution limit to 1
You can check the documentation for more information regarding these parameters.
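Assuming the CPLEX Python API, these two settings map to the uppercutoff tolerance and the solutions limit; a minimal sketch (the function name and model path are placeholders):

```python
def stop_at_first_negative(model_path):
    """Sketch: for a minimization MIP, stop at the first feasible solution
    with objective < 0, using only parameter settings."""
    import cplex

    cpx = cplex.Cplex(model_path)
    # Prune any node whose relaxation objective value is >= 0 ...
    cpx.parameters.mip.tolerances.uppercutoff.set(0.0)
    # ... and stop as soon as one integer feasible solution is found.
    cpx.parameters.mip.limits.solutions.set(1)
    cpx.solve()
    return cpx
```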
This will cause CPLEX to prune any nodes with a relaxation objective value >= 0. So, the first feasible solution CPLEX finds will have an objective < 0, and the solution limit of 1 will stop the optimization as soon as CPLEX finds it. This will work for any objective value, not just 0.
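To see why this stops at the first solution below the target, here is a toy search in plain Python (an illustration of the pruning argument, not the CPLEX API): with an upper cutoff of 0, any node whose relaxation bound is >= 0 is pruned, so the first integer feasible solution that survives must have an objective < 0.

```python
def first_solution_with_cutoff(nodes, cutoff=0.0):
    """Each node is (relaxation_bound, integer_objective_or_None).
    Returns the first surviving integer solution, mimicking
    uppercutoff=cutoff combined with a solution limit of 1."""
    for bound, objective in nodes:
        if bound >= cutoff:        # upper cutoff: prune this node
            continue
        if objective is not None:  # integer feasible: solution limit stops here
            return objective
    return None                    # no feasible solution below the cutoff

nodes = [
    (1.5, 1.5),    # integer solution with objective 1.5: pruned by the cutoff
    (0.2, None),   # fractional node with bound 0.2: pruned
    (-0.7, None),  # promising node, but not yet integer feasible
    (-0.4, -0.4),  # first surviving integer solution -> search stops
    (-2.0, -2.0),  # never reached
]
print(first_solution_with_cutoff(nodes))  # -0.4
```

Note the last node, with the better objective of -2.0, is never examined: the solution limit of 1 trades solution quality for an early stop.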
However, changing the cutoff as described above can alter the course of the MIP optimization, and there is some potential for performance to degrade. Specifically, without this cutoff CPLEX may first find some solutions with objectives > 0, and RINS or another local improvement heuristic may then use one or more of those solutions to help find a solution with objective < 0. With the upper cutoff set to 0, those heuristics may no longer succeed, which could increase the run time. We advise you to run some tests on your models to assess which approach works best for you. Using parameters is easier to implement, but the informational callback is more robust since it does not change the behavior of the optimizer.
16 June 2018