• Latest Post - 2013-09-10T15:15:48Z by TobiasAchterberg
taidungman
1 Post

# Pinned topic solving multiple small problems vs. one large problem

2013-09-10T14:15:56Z

Hello everyone,

Recently I ran a simulation experiment involving a particular LP. In each iteration I need to solve many instances: say, Max(C_i*X_i | X_i in A_i), where X_i is a vector of decision variables and A_i is the feasible set for instance i, i=1,...,n.

Instead of solving n small problems, I can obtain the same results by solving a large combined problem

Max(\sum_i C_i*X_i| X_i in A_i for i=1,...,n)

Then, much to my surprise, I discovered that when n is not too large, it's quicker to solve the one large problem than the multiple small problems. Since I know little about how the algorithms in modern software packages actually work (by the way, I used MATLAB for the experiment), I'm wondering if anyone has an explanation for this?

I'm guessing the pre-processing steps (much like a fixed cost paid per solve) slow down the process of solving multiple problems, but I'm totally unsure. Any comments will be highly appreciated. Thanks.
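For concreteness, here is a minimal sketch of the two formulations in Python with SciPy (the original experiment used MATLAB; the dimensions and random data below are made up purely for illustration). Since the combined constraint matrix is block diagonal, the subproblems stay independent and the two approaches yield the same total objective:

```python
import numpy as np
from scipy.linalg import block_diag
from scipy.optimize import linprog

# Hypothetical data: n small LPs, each  max C_i^T x_i  s.t.  A_i x_i <= b_i, x_i >= 0.
rng = np.random.default_rng(0)
n, m, d = 5, 4, 3  # n subproblems, each with m constraints and d variables
C = [rng.uniform(1, 2, d) for _ in range(n)]
A = [rng.uniform(0, 1, (m, d)) for _ in range(n)]
b = [rng.uniform(1, 2, m) for _ in range(n)]

# (1) Solve the n small problems one at a time (linprog minimizes, so negate C_i).
small_total = sum(-linprog(-C[i], A_ub=A[i], b_ub=b[i]).fun for i in range(n))

# (2) Solve the single combined problem: stack the objectives and right-hand sides,
# and put the A_i on the diagonal of one block-diagonal constraint matrix.
big = linprog(-np.concatenate(C), A_ub=block_diag(*A), b_ub=np.concatenate(b))
big_total = -big.fun

print(f"separate: {small_total:.6f}, combined: {big_total:.6f}")
```

The two printed totals agree up to solver tolerance; any timing difference between the two routes then comes purely from solver overhead, not from the optima.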

• TobiasAchterberg
8 Posts

#### Re: solving multiple small problems vs. one large problem

‏2013-09-10T15:15:48Z

If these are only LPs (linear constraints, linear objective, only continuous variables), then solving them as a single big LP should usually perform roughly similarly to solving them individually. It is a trade-off between the overhead of calling the LP solver many times and the overhead of dealing with a larger memory footprint and a larger basis matrix.

If these are mostly easy problems, then you should try to solve them individually and use only a single thread (set the "threads" parameter to 1). This should already eliminate a large portion of the set-up overhead in the LP solves. Moreover, you should check whether the barrier algorithm is faster than the dual simplex.
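As an illustration of that last comparison, SciPy's HiGHS backend exposes both a dual simplex ("highs-ds") and an interior-point/barrier ("highs-ipm") method, so the two algorithms can be timed on the same problem. The LP data below is invented purely for the benchmark, and this is SciPy, not the CPLEX "threads" parameter Tobias refers to:

```python
import time
import numpy as np
from scipy.optimize import linprog

# Hypothetical dense LP just to exercise both algorithms: max c^T x, A x <= b, x >= 0.
rng = np.random.default_rng(1)
m, d = 200, 150
c, A, b = rng.uniform(1, 2, d), rng.uniform(0, 1, (m, d)), rng.uniform(1, 2, m)

results = {}
# "highs-ds" is HiGHS's dual simplex; "highs-ipm" is its interior-point (barrier) method.
for method in ("highs-ds", "highs-ipm"):
    t0 = time.perf_counter()
    res = linprog(-c, A_ub=A, b_ub=b, method=method)
    results[method] = (time.perf_counter() - t0, -res.fun)

for method, (secs, obj) in results.items():
    print(f"{method}: objective {obj:.6f} in {secs:.4f}s")
```

Which method wins depends heavily on problem size and sparsity, so it is worth measuring on the actual instances rather than assuming.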

Tobias