Multiprocessor response time
A multiprocessor can only improve the execution time of an individual program to the extent that the program can run in multithreaded mode.
There are several ways to achieve parallel execution of parts of a single program:
- Making explicit calls to libpthreads.a subroutines (or, in older programs, to the fork() subroutine) to create multiple threads that run simultaneously.
- Processing the program with a parallelizing compiler or preprocessor that detects sequences of code that can be executed simultaneously and generates multiple threads to run them in parallel.
- Using a software package that is itself multithreaded.
Unless one or more of these techniques is used, the program will run no faster in a multiprocessor system than in a comparable uniprocessor. In fact, it may run slower, because it can incur additional locking overhead and cache-related delays from being dispatched to different processors at different times.
Even if all of the applicable techniques are exploited, the maximum improvement is limited by a rule that has been called Amdahl's Law:
- If a fraction x of a program's uniprocessor execution time, t, can only be processed sequentially, the improvement in execution time in an n-way multiprocessor over execution time in a comparable uniprocessor (the speed-up) is given by the equation:

    speed-up = 1 / (x + (1 - x)/n)

As an example, if 50 percent of a program's processing must be done sequentially, and 50 percent can be done in parallel, the maximum response-time improvement is less than a factor of 2 (in an otherwise-idle 4-way multiprocessor, it is at most 1.6).