Recently I did a simulation experiment for a particular LP. In each iteration I need to solve many instances. Say, Max(C_i*X_i | X_i in A_i), where X_i is a vector of decision variables and A_i is the feasible set for iteration i, i=1,...,n.
Instead of solving n small problems, I can obtain the same results by solving a large combined problem
Max(\sum_i C_i*X_i| X_i in A_i for i=1,...,n)
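To make the equivalence concrete, here is a minimal sketch (in Python/SciPy rather than the MATLAB I actually used; the random data is just for illustration). It solves n small LPs of the form max C_i*X_i s.t. A_i*X_i <= b_i, X_i >= 0 one at a time, then solves the single combined LP whose constraint matrix is block-diagonal in the A_i, and checks that the total objectives agree:

```python
# Sketch: n independent LPs vs. one block-diagonal combined LP.
# Data is randomly generated for illustration; linprog minimizes,
# so we negate the objective to maximize.
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import block_diag

rng = np.random.default_rng(0)
n, m, k = 5, 4, 3  # n subproblems, each with m constraints and k variables

cs = [rng.uniform(0.1, 1.0, k) for _ in range(n)]
As = [rng.uniform(0.1, 1.0, (m, k)) for _ in range(n)]
bs = [rng.uniform(1.0, 2.0, m) for _ in range(n)]

# Solve the n small problems separately and sum the optimal objectives.
sep_total = sum(-linprog(-c, A_ub=A, b_ub=b).fun
                for c, A, b in zip(cs, As, bs))

# Solve one combined problem: stacked objective, block-diagonal constraints.
res = linprog(-np.concatenate(cs),
              A_ub=block_diag(As),
              b_ub=np.concatenate(bs))
combined_total = -res.fun

print(abs(sep_total - combined_total) < 1e-7)  # objectives agree
```

Since the subproblems share no variables or constraints, the combined LP decomposes exactly, which is why both approaches return the same optimal value.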
Then, much to my surprise, I discovered that when "n" is not too large, it's quicker to solve the large combined problem than to solve the multiple small problems. Since I know little about how the algorithms in modern software packages actually work (by the way, I used MATLAB for the experiment), I'm wondering if anyone has an explanation for this?
I'm guessing the pre-processing steps (a fixed cost paid on every solver call, much like a fixed cost in economics) slow down the process of solving multiple problems, but I'm totally unsure. Thanks.
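For what it's worth, the per-call overhead is easy to measure. A hedged timing sketch (again in Python/SciPy, with made-up random data; which side wins will depend on the solver, the problem sizes, and n, so no outcome is asserted here):

```python
# Timing sketch: n separate linprog calls vs. one block-diagonal call.
# Each call pays fixed setup/presolve overhead, which is the effect
# being investigated; actual results vary by solver and problem size.
import time
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import block_diag

rng = np.random.default_rng(1)
n, m, k = 200, 4, 3
cs = [rng.uniform(0.1, 1.0, k) for _ in range(n)]
As = [rng.uniform(0.1, 1.0, (m, k)) for _ in range(n)]
bs = [rng.uniform(1.0, 2.0, m) for _ in range(n)]

t0 = time.perf_counter()
for c, A, b in zip(cs, As, bs):
    linprog(-c, A_ub=A, b_ub=b)  # one solver call per subproblem
t_separate = time.perf_counter() - t0

t0 = time.perf_counter()
linprog(-np.concatenate(cs), A_ub=block_diag(As), b_ub=np.concatenate(bs))
t_combined = time.perf_counter() - t0

print(f"separate: {t_separate:.3f}s  combined: {t_combined:.3f}s")
```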