# Booth's multiplication algorithm

**Booth's multiplication algorithm** is a multiplication algorithm that multiplies two signed binary numbers in two's complement notation. The algorithm was invented by Andrew Donald Booth in 1950 while doing research on crystallography at Birkbeck College in Bloomsbury, London.^{[1]} Booth's algorithm is of interest in the study of computer architecture.

## The algorithm

Booth's algorithm examines adjacent pairs of bits of the *N*-bit multiplier *Y* in signed two's complement representation, including an implicit bit below the least significant bit, *y*_{−1} = 0. For each bit *y*_{i}, for *i* running from 0 to *N* − 1, the bits *y*_{i} and *y*_{i−1} are considered. Where these two bits are equal, the product accumulator *P* is left unchanged. Where *y*_{i} = 0 and *y*_{i−1} = 1, the multiplicand times 2^{i} is added to *P*; and where *y*_{i} = 1 and *y*_{i−1} = 0, the multiplicand times 2^{i} is subtracted from *P*. The final value of *P* is the signed product.

The representations of the multiplicand and product are not specified; typically, these are both also in two's complement representation, like the multiplier, but any number system that supports addition and subtraction will work as well. As stated here, the order of the steps is not determined. Typically, it proceeds from LSB to MSB, starting at *i* = 0; the multiplication by 2^{i} is then typically replaced by incremental shifting of the *P* accumulator to the right between steps; low bits can be shifted out, and subsequent additions and subtractions can then be done just on the highest *N* bits of *P*.^{[2]} There are many variations and optimizations on these details.

The algorithm is often described as converting strings of 1s in the multiplier to a high-order +1 and a low-order −1 at the ends of the string. When a string runs through the MSB, there is no high-order +1, and the net effect is interpretation as a negative of the appropriate value.
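As a minimal sketch, the bit-pair rule above can be written directly in Python, using an ordinary integer as the accumulator *P* (the function name and signature are illustrative, not from the source; the multiplier is passed as an *N*-bit two's-complement bit pattern):

```python
def booth_multiply(m, y, n_bits):
    """Multiply the multiplicand m (a Python int) by an n_bits-wide
    two's-complement multiplier given as the bit pattern y."""
    p = 0
    prev = 0                     # the implicit bit y_{-1} = 0
    for i in range(n_bits):
        yi = (y >> i) & 1
        if yi == 0 and prev == 1:
            p += m << i          # end of a block of 1s: add m * 2^i
        elif yi == 1 and prev == 0:
            p -= m << i          # start of a block of 1s: subtract m * 2^i
        # equal bits (00 or 11): p is left unchanged
        prev = yi
    return p
```

Because a block of 1s that runs through the MSB never receives its closing addition, a multiplier pattern such as 1100 (−4 in four bits) contributes only the subtraction at bit 2, which yields the correct negative product.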

## A typical implementation

Booth's algorithm can be implemented by repeatedly adding (with ordinary unsigned binary addition) one of two predetermined values *A* and *S* to a product *P*, then performing a rightward arithmetic shift on *P*. Let **m** and **r** be the multiplicand and multiplier, respectively; and let *x* and *y* represent the number of bits in **m** and **r**.

1. Determine the values of *A* and *S*, and the initial value of *P*. All of these numbers should have a length equal to (*x* + *y* + 1).
    - A: Fill the most significant (leftmost) bits with the value of **m**. Fill the remaining (*y* + 1) bits with zeros.
    - S: Fill the most significant bits with the value of (−**m**) in two's complement notation. Fill the remaining (*y* + 1) bits with zeros.
    - P: Fill the most significant *x* bits with zeros. To the right of this, append the value of **r**. Fill the least significant (rightmost) bit with a zero.
2. Determine the two least significant (rightmost) bits of *P*.
    - If they are 01, find the value of *P* + *A*. Ignore any overflow.
    - If they are 10, find the value of *P* + *S*. Ignore any overflow.
    - If they are 00, do nothing. Use *P* directly in the next step.
    - If they are 11, do nothing. Use *P* directly in the next step.
3. Arithmetically shift the value obtained in the 2nd step a single place to the right. Let *P* now equal this new value.
4. Repeat steps 2 and 3 until they have been done *y* times.
5. Drop the least significant (rightmost) bit from *P*. The result is the product of **m** and **r**.
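The steps above can be sketched in Python, holding each register in an ordinary integer (a minimal illustration under the register widths just described; `booth_asp` is a hypothetical name):

```python
def booth_asp(m, r, x, y):
    """Booth's algorithm via the A/S/P register scheme.
    m and r are signed ints that fit in x and y bits respectively."""
    total = x + y + 1                       # width of A, S, and P
    mask = (1 << total) - 1
    A = (m & ((1 << x) - 1)) << (y + 1)     # m in the top x bits
    S = ((-m) & ((1 << x) - 1)) << (y + 1)  # -m (two's complement) in the top x bits
    P = (r & ((1 << y) - 1)) << 1           # r in the middle, trailing zero bit
    for _ in range(y):
        last_two = P & 0b11
        if last_two == 0b01:
            P = (P + A) & mask              # ignore overflow beyond x+y+1 bits
        elif last_two == 0b10:
            P = (P + S) & mask
        # arithmetic right shift: replicate the sign (top) bit
        sign = P >> (total - 1)
        P = (P >> 1) | (sign << (total - 1))
    P >>= 1                                 # drop the least significant bit
    # interpret the remaining x + y bits as a signed product
    if P & (1 << (x + y - 1)):
        P -= 1 << (x + y)
    return P
```

For example, `booth_asp(3, -4, 4, 4)` reproduces the worked example below; passing a multiplicand field one bit wider than needed (e.g. *x* = 5 for a 4-bit **m**) also covers the most-negative-multiplicand case discussed later.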

## Example

Find 3 × (−4), with **m** = 3 and **r** = −4, and *x* = 4 and *y* = 4:

- m = 0011, −m = 1101, r = 1100
- A = 0011 0000 0
- S = 1101 0000 0
- P = 0000 1100 0
- Perform the loop four times:
    1. P = 0000 110**0 0**. The last two bits are 00.
        - P = 0000 0110 0. Arithmetic right shift.
    2. P = 0000 011**0 0**. The last two bits are 00.
        - P = 0000 0011 0. Arithmetic right shift.
    3. P = 0000 001**1 0**. The last two bits are 10.
        - P = 1101 0011 0. P = P + S.
        - P = 1110 1001 1. Arithmetic right shift.
    4. P = 1110 100**1 1**. The last two bits are 11.
        - P = 1111 0100 1. Arithmetic right shift.
- The product is 1111 0100, which is −12.

The above-mentioned technique is inadequate when the multiplicand is the most negative number that can be represented (e.g. if the multiplicand has 4 bits then this value is −8). This is because an overflow then occurs when computing −m, the negation of the multiplicand, which is needed in order to set S. One possible correction to this problem is to extend A, S, and P by one bit each, while they still represent the same number. That is, while −8 was previously represented in four bits by 1000, it is now represented in 5 bits by 1 1000. This then follows the implementation described above, with modifications in determining the bits of A and S; e.g., the value of **m**, originally assigned to the first *x* bits of A, will now be extended to *x* + 1 bits and assigned to the first *x* + 1 bits of A. Below, the improved technique is demonstrated by multiplying −8 by 2 using 4 bits for the multiplicand and the multiplier:

- A = 1 1000 0000 0
- S = 0 1000 0000 0
- P = 0 0000 0010 0
- Perform the loop four times:
    1. P = 0 0000 001**0 0**. The last two bits are 00.
        - P = 0 0000 0001 0. Right shift.
    2. P = 0 0000 000**1 0**. The last two bits are 10.
        - P = 0 1000 0001 0. P = P + S.
        - P = 0 0100 0000 1. Right shift.
    3. P = 0 0100 000**0 1**. The last two bits are 01.
        - P = 1 1100 0000 1. P = P + A.
        - P = 1 1110 0000 0. Right shift.
    4. P = 1 1110 000**0 0**. The last two bits are 00.
        - P = 1 1111 0000 0. Right shift.
- The product is 11110000 (after discarding the first and the last bit), which is −16.
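The overflow that motivates the extra bit can be checked directly: negating the most negative *x*-bit value does not fit in *x* bits, so S would wrap around to the same bit pattern as A (a small illustration; the helper name is hypothetical):

```python
def negate_twos_complement(v, bits):
    """Return the two's-complement bit pattern of -v in the given width."""
    return (-v) & ((1 << bits) - 1)

# In 4 bits, -(-8) = +8 is not representable: the pattern wraps to 1000,
# which still reads as -8, so A and S would be identical.
# With one extra bit, +8 fits as 0 1000.
```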

## How it works

Consider a positive multiplier consisting of a block of 1s surrounded by 0s. For example, 00111110. The product is given by:

$$M \times {}^{\prime\prime}0\;0\;1\;1\;1\;1\;1\;0\,{}^{\prime\prime} = M \times (2^5 + 2^4 + 2^3 + 2^2 + 2^1) = M \times 62$$

where M is the multiplicand. The number of operations can be reduced to two by rewriting the same as

$$M \times {}^{\prime\prime}0\;1\;0\;0\;0\;0\;{-}1\;0\,{}^{\prime\prime} = M \times (2^6 - 2^1) = M \times 62.$$

In fact, it can be shown that any sequence of 1s in a binary number can be broken into the difference of two binary numbers:

$$(\ldots 0 \overbrace{1 \ldots 1}^{n} 0 \ldots)_{2} \equiv (\ldots 1 \overbrace{0 \ldots 0}^{n} 0 \ldots)_{2} - (\ldots 0 \overbrace{0 \ldots 1}^{n} 0 \ldots)_{2}.$$

Hence, multiplication by a string of ones can be replaced by simpler operations: adding the multiplicand, shifting the partial product the appropriate number of places, and then finally subtracting the multiplicand. It makes use of the fact that nothing but a shift is needed while dealing with 0s in a binary multiplier, and is similar to using the mathematical property that 99 = 100 − 1 when multiplying by 99.

This scheme can be extended to any number of blocks of 1s in a multiplier (including the case of a single 1 in a block). Thus,

$$M \times {}^{\prime\prime}0\;0\;1\;1\;1\;0\;1\;0\,{}^{\prime\prime} = M \times (2^5 + 2^4 + 2^3 + 2^1) = M \times 58$$
$$M \times {}^{\prime\prime}0\;1\;0\;0\;{-}1\;1\;{-}1\;0\,{}^{\prime\prime} = M \times (2^6 - 2^3 + 2^2 - 2^1) = M \times 58.$$

Booth's algorithm follows this scheme by performing an addition when it encounters the first digit of a block of ones (0 1) and a subtraction when it encounters the end of the block (1 0). This works for a negative multiplier as well. When the ones in a multiplier are grouped into long blocks, Booth's algorithm performs fewer additions and subtractions than the normal multiplication algorithm.
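This recoding view can be sketched as a small routine that maps each bit pair to a signed digit (+1, −1, or 0); `booth_recode` is a hypothetical helper name:

```python
def booth_recode(y, n_bits):
    """Recode an n_bits-wide bit pattern into Booth digits, MSB first.
    The pair (y_i, y_{i-1}) maps to: (0,1) -> +1, (1,0) -> -1, equal -> 0."""
    digits = []
    prev = 0                      # implicit y_{-1} = 0
    for i in range(n_bits):
        yi = (y >> i) & 1
        digits.append(prev - yi)  # +1 at the end of a block, -1 at its start
        prev = yi
    return digits[::-1]           # most significant digit first
```

For the multiplier 00111110 (62), this produces the digit string 0 +1 0 0 0 0 −1 0, matching the rewriting shown above; summing each digit times its power of two recovers the original value.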


## References

- ↑ "A Signed Binary Multiplication Technique". *The Quarterly Journal of Mechanics and Applied Mathematics* **IV** (2): 236–240. 1951. http://bwrc.eecs.berkeley.edu/Classes/icdesign/ee241_s00/PAPERS/archive/booth51.pdf. Retrieved 2018-07-16. Reprinted in *A Signed Binary Multiplication Technique*. Oxford University Press. pp. 100–104.
- ↑ *Signal Processing Handbook*. CRC Press. 1992. p. 234. ISBN 978-0-8247-7956-6. https://books.google.com/books?id=10Pi0MRbaOYC&pg=PA234.

## Further reading

- "Andrew Booth's Computers at Birkbeck College". *Resurrection* (London: Computer Conservation Society) (5). Spring 1993. http://www.cs.man.ac.uk./CCS/res/res05.htm#e.
- *Computer Organization and Design: The Hardware/Software Interface* (Second ed.). San Francisco, California, USA: Morgan Kaufmann Publishers. 1998. ISBN 1-55860-428-6. https://archive.org/details/computerorganiz000henn.
- *Computer Organization and Architecture: Designing for Performance* (Fifth ed.). New Jersey: Prentice-Hall. 2000. ISBN 0-13-081294-3. https://archive.org/details/computerorganiza00will.
- "Advanced Arithmetic Techniques". *quadibloc*. 2018. http://www.quadibloc.com/comp/cp0202.htm.

## External links

- Radix-4 Booth Encoding
- Radix-8 Booth Encoding in A Formal Theory of RTL and Computer Arithmetic
- Booth's Algorithm JavaScript Simulator
- Implementation in Python

Original source: https://en.wikipedia.org/wiki/Booth's multiplication algorithm.