Hi,
I have heard that adding a constraint parallel to the objective function degrades performance. If that is the case, how should I add a new upper bound for the objective function at each node of my branch-and-cut tree?
Thanks
What is the sense of your objective function: minimization or maximization?
Assuming you are minimizing, an upper bound is provided by any feasible solution, which you can add through a heuristic callback (or just provide in advance as a MIP start). A lower bound would be a dual bound on the objective. This is the kind of bound that does not help performance much, except when it allows you to cut off a node. So, if you have a way of calculating a valid dual bound at each node that is stronger than the LP relaxation bound, you could do this inside a branch callback: if your bound allows you to prune the node, just create 0 children. If it does not allow you to prune the node, do nothing and accept the branching decision that was suggested by CPLEX.
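To make the "create 0 children when the bound prunes" idea concrete, here is a minimal pure-Python sketch. It is a toy branch-and-bound for a made-up problem (pick exactly 2 of 4 items at minimum cost), not CPLEX callback code; the `dual_bound` function stands in for whatever node bound you compute, and returning an empty child list stands in for creating 0 branches in the branch callback.

```python
# Toy branch-and-bound (minimization): prune a node by returning
# zero children whenever a custom dual bound meets or exceeds the
# incumbent. The problem and bound are illustrative stand-ins for
# CPLEX's tree and your callback -- this is NOT the CPLEX API.

# Hypothetical problem: pick exactly 2 of the 4 items, minimize cost.
COSTS = [5, 3, 8, 4]
PICK = 2

incumbent = float("inf")   # best feasible objective found so far

def dual_bound(fixed):
    """Valid lower bound for all completions of `fixed` (list of 0/1):
    cost of the fixed picks plus the cheapest remaining picks."""
    cost = sum(c for c, x in zip(COSTS, fixed) if x)
    need = PICK - sum(fixed)
    rest = sorted(COSTS[len(fixed):])
    if need < 0 or need > len(rest):
        return float("inf")          # node is infeasible
    return cost + sum(rest[:need])

def branch(fixed):
    """Return child nodes, or [] to prune (create 0 children)."""
    global incumbent
    if dual_bound(fixed) >= incumbent:
        return []                    # bound cuts off the node
    if len(fixed) == len(COSTS):     # leaf: feasible solution found
        incumbent = min(incumbent, dual_bound(fixed))
        return []
    return [fixed + [0], fixed + [1]]   # otherwise branch as usual

stack = [[]]                         # depth-first tree search
while stack:
    stack.extend(branch(stack.pop()))

print(incumbent)   # prints 7 (items with costs 3 and 4)
```

In a real CPLEX branch callback the same logic applies: compare your node bound against the incumbent, prune if it is not smaller, and otherwise leave CPLEX's suggested branches untouched.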
If you insist on providing a better dual bound for a node even when it is not strong enough to cut the node off, then I would add an auxiliary variable z to your problem, link it to the objective function with a constraint z >= c*x (for minimization; use <= for maximization), and replace the objective function by z. Then protect z from presolve reductions (so it remains in the problem), and in your branch or cut callback you can simply tighten the lower bound (or upper bound, for maximization) of this z variable.

But such an approach will still typically degrade the performance of CPLEX. The main reason is that adding a cut parallel to the objective function produces a high-dimensional optimal face in the LP relaxation. Any cut or branching decision will then often just push the LP solution to a different vertex on this optimal face, so it looks as if cuts and branching are ineffective at moving the objective function value. Hence, CPLEX would conclude that this type of cut or this particular variable is bad and would disable such cuts and branch on different variables (only to find out that those branches are equally bad). Overall, you would cripple CPLEX' main tool for dynamically evaluating and adapting heuristic decisions such as branching, so that the resulting search procedure boils down to basically a random tree search.
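In symbols, the reformulation described above (for minimization) is the standard epigraph trick:

```latex
\min \{\, c^{\top}x : Ax \ge b \,\}
\;=\;
\min \{\, z : z \ge c^{\top}x,\; Ax \ge b \,\},
```

and a node-specific dual bound $L$ is then imposed by raising the lower bound of the single variable, $z \ge L$, rather than by adding a new row to the LP.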
Tobias
- amindehghanian
- 2013-08-21T19:44:50Z
Tobias, thanks a lot for your comprehensive answer!
In fact, I am calculating a valid tight (dual) bound at each node for the dummy variable z, representing my second stage objective function in a Benders' decomposition.
I totally get your point about the advantage of using this bound in my branching. However, if I am unable to close the optimality gap, I will at least get a smaller gap if I add this bound as a cut!
In another post, I have also explained that I observe a horrible tailing-off effect in my branch-and-cut tree (the optimality gap almost does not move) once I hit an optimality gap of 0.002. Considering your answer, I assume that is just because I am only moving between different vertices of the optimal face?
Thanks,
Yes, it can very well be the case that the tailing off is due to the degeneracy of the LP subproblems, i.e., that they have large optimal faces.
- amindehghanian
- 2013-08-22T00:35:43Z
I forgot to ask this question!
Currently, I am tightening the bound in my cut callback. Is there any computational advantage to doing it in my branch callback instead?
Thanks,
No, I don't think so. In the branch callback you could immediately calculate a new bound for each of the two children. It could then be that solving the LPs for the children is faster than first solving without the bound, then adding the bound in the cut callback, and then continuing to solve with the bound. But I don't think this difference will matter much.