Comments (2)

1 PaulRubin commented

My first reaction was to distinguish scope from scale. In OR, decomposing a problem based on scope tends to lead to suboptimization (solutions that are optimal for a given portion of the system but cause problems for other portions). Decomposing a problem based on scale (all relevant subsystems represented in each chunk) might be safer.
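
A minimal brute-force sketch of that risk (the profit functions and the shared budget are hypothetical, invented purely for illustration): splitting the shared resource by scope up front and letting each "department" optimize alone leaves value on the table relative to the unified model.

```python
B = 4  # shared resource budget (hypothetical)

def profit1(x):
    # Department 1: diminishing returns
    return 10 * x - x * x

def profit2(x):
    # Department 2: constant marginal value
    return 8 * x

# Scope-based decoupling: hand each department a fixed half of the budget
# and let it optimize alone.
half = B // 2
x1 = max(range(half + 1), key=profit1)
x2 = max(range(half + 1), key=profit2)
print("decoupled:", x1, x2, profit1(x1) + profit2(x2))   # total 32

# Unified model: enumerate every feasible joint allocation.
best = max(
    ((a, b) for a in range(B + 1) for b in range(B + 1 - a)),
    key=lambda ab: profit1(ab[0]) + profit2(ab[1]),
)
print("unified:  ", best, profit1(best[0]) + profit2(best[1]))  # total 33
```

Note also that with no budget split at all, each department alone would grab the full resource (x = 4 each), which is jointly infeasible; the fixed split avoids that but still forfeits the optimum at (1, 3).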

 
My second reaction is that what I just typed may be too facile. Perhaps it's a question of decomposition vs. decoupling. If we break the problem into chunks, decouple them, and solve the chunks individually, we open the door wide to suboptimization. If we decompose in a properly coupled manner (which is what Dantzig-Wolfe and Benders decomposition do), we continue to ensure optimality (provided we run the method to completion).
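
Here is a price-coordination sketch of "properly coupled" decomposition on the same toy instance (same hypothetical data as above). It is Lagrangian-flavored rather than Dantzig-Wolfe or Benders proper, but it shows the same principle: the chunks are solved independently yet stay coupled through a price on the shared resource, and a matching dual bound certifies that no optimality was lost.

```python
B = 4

def profit1(x):
    return 10 * x - x * x

def profit2(x):
    return 8 * x

def demand(profit, price):
    # Each subproblem maximizes its own profit net of the resource price.
    return max(range(B + 1), key=lambda x: profit(x) - price * x)

# Master loop: raise the shared-resource price until combined demand fits.
price = 0.0
while demand(profit1, price) + demand(profit2, price) > B:
    price += 0.5

# Primal recovery (sufficient on this toy instance): fix department 1's
# choice, then let department 2 use whatever budget remains.
x1 = demand(profit1, price)
x2 = max(range(B - x1 + 1), key=profit2)
primal = profit1(x1) + profit2(x2)

# Lagrangian dual bound at the final price; equality certifies optimality.
dual = (max(profit1(x) - price * x for x in range(B + 1))
        + max(profit2(x) - price * x for x in range(B + 1))
        + price * B)

print(f"price={price}: x1={x1}, x2={x2}, primal={primal}, dual bound={dual}")
```

The loop stops at price 8, where the recovered allocation (1, 3) is worth 33 and the dual bound is also 33: the decomposed solve recovers the unified optimum. A real D-W or Benders loop exchanges columns or cuts instead of a single price, with a proper convergence argument, but the coupling plays the same role.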
 
In optimization, I have a hard time predicting the time trade-off between "large batch" (unified model) and various decompositions. In principle, we can determine the best approach by experimentation, but for a one-off instance I tend to agree with your conclusion: decompose only if you're sufficiently frustrated trying to solve the unified model.
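
In that spirit, here is one shape such an experiment might take (the solver bodies are hypothetical placeholders; substitute your actual unified model and decomposition loop): time both on representative instances and let the measurements decide.

```python
import time

def solve_unified(instance):
    pass  # hypothetical placeholder: build and solve the monolithic model

def solve_decomposed(instance):
    pass  # hypothetical placeholder: run the D-W / Benders loop

for instance in ["small", "medium", "large"]:  # placeholder instance names
    for solver in (solve_unified, solve_decomposed):
        start = time.perf_counter()
        solver(instance)
        elapsed = time.perf_counter() - start
        print(f"{solver.__name__}({instance}): {elapsed:.3f}s")
```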

2 JeanFrancoisPuget commented

Paul, thank you for the comment.

 
You point to an important way to classify decomposition methods, distinguishing those that lead to suboptimal solutions from those that ensure optimality can be reached. I'd rather call the latter reformulation methods than decomposition methods, but I guess it is too late to rename them, isn't it?