Amdahl's law describes the speedup that can be gained by parallelizing a computation across multiple processors. It is defined by:
S = 1 / (1 - p + p/n)
S = the speedup gained by running the computation in parallel: the time taken on a single processor divided by the time taken on n processors.
p = the fraction of the job that can be executed in parallel.
n = the number of processors.
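As a minimal sketch, the formula translates directly into a small function (the name amdahl_speedup is an illustrative choice, not from the source):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup for a job with parallel fraction p on n processors."""
    # Total parallel-execution time, with single-processor time normalized to 1:
    # sequential part (1 - p) plus parallelizable part spread over n processors (p / n).
    return 1.0 / ((1.0 - p) + p / n)
```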
The formula is derived as follows. Normalize the time taken to run the computation on a single processor to 1. Then:
Time taken on n processors = time taken on the parallelizable part of the job + time taken on the non-parallelizable part of the job
Time taken on the parallelizable part = p / n, since that fraction of the work is divided among the n processors
Time taken on the non-parallelizable (sequential) part = 1 - p
Note that part of the non-parallelizable, or sequential, portion of the computation may be the overhead of coordinating the multiple processors doing the work.
Thus the speedup, the ratio of the single-processor time to the time taken on n processors, is:
S = 1 / (1 - p + p/n)
For a perfectly parallelizable job (p = 1) running on 5 processors, the speedup is
S = 1 / (1 - 1 + 1/5) = 5, a 5x speedup
For a job that is 80% parallelizable (p = 0.8) on the same 5 processors,
S = 1 / (1 - 0.8 + 0.8/5) = 1 / 0.36 ≈ 2.78, roughly a 2.8x speedup
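The two worked examples can be checked numerically with a short sketch (variable names are illustrative):

```python
# Check both examples: p = parallel fraction, n = number of processors.
for p, n in [(1.0, 5), (0.8, 5)]:
    s = 1.0 / ((1.0 - p) + p / n)
    print(f"p={p}, n={n}: speedup = {s:.2f}")
# p=1.0, n=5: speedup = 5.00
# p=0.8, n=5: speedup = 2.78
```

Note how even a modest sequential fraction sharply limits the benefit of adding processors: with p = 0.8, five processors deliver less than a 3x speedup.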
- Maurice Herlihy and Nir Shavit. 2008. The Art of Multiprocessor Programming. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.