Multiplication

This chapter covers the naive multiplication algorithms and their non-trivial, more efficient counterparts, all of which take the form of the DnC strategy.

Integer Multiplication

Recall from grade school that a typical integer multiplication takes a form like the one below:

Figure 1. The grade-school integer multiplication algorithm

In this naive algorithm applied to two 3-digit numbers, the total number of operations is 3 (operations per row for multiplication and addition) · 3 (rows in total) = 9. Thus the running time is roughly c · n², which is bounded by O(n²).
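To make the digit-operation count concrete, here is a minimal Python sketch of grade-school long multiplication (my own illustration, not part of the original notes); the nested loops perform one single-digit multiplication per digit pair, hence roughly n² of them:

def gradeschool_multiply(x, y):
    """Grade-school long multiplication of non-negative integers.

    For two n-digit inputs, the nested loops perform n * n single-digit
    multiplications, matching the O(n^2) estimate above.
    """
    xs = [int(d) for d in str(x)][::-1]      # least-significant digit first
    ys = [int(d) for d in str(y)][::-1]
    result = [0] * (len(xs) + len(ys))       # enough slots for the product

    for i, xd in enumerate(xs):              # one "row" per digit of x
        for j, yd in enumerate(ys):
            result[i + j] += xd * yd         # accumulate partial products

    carry = 0                                # single carry-propagation pass
    for k in range(len(result)):
        total = result[k] + carry
        result[k] = total % 10
        carry = total // 10

    return int("".join(map(str, reversed(result))))


assert gradeschool_multiply(5678, 1234) == 5678 * 1234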

It is so widely adopted that little attention is paid to a more complex, yet more efficient, method.

Karatsuba Multiplication

Assume we are multiplying two n-digit numbers x and y, and divide each of them as: x = 10^(n/2) · a + b, y = 10^(n/2) · c + d

Multiplying them together gives: x · y = 10^n · ac + 10^(n/2) · (bc + ad) + bd

Then, the algorithm in a DnC pattern has complexity: T(n) = 4T(n/2) + O(n)

Applying the math trick discovered by Carl Friedrich Gauss, that (bc + ad) = (a + b)(c + d) - ac - bd, the complexity is reduced to T(n) = 3T(n/2) + O(n)
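A quick numeric check of both identities, using x = 1234 and y = 5678 as an arbitrary example of my own choosing (not from the notes):

# Check the split and Gauss's trick on a concrete example: x = 1234, y = 5678, n = 4.
x, y = 1234, 5678
n = 4
a, b = divmod(x, 10 ** (n // 2))     # a = 12, b = 34
c, d = divmod(y, 10 ** (n // 2))     # c = 56, d = 78

# Four sub-products: x * y = 10^n * ac + 10^(n/2) * (bc + ad) + bd
assert x * y == 10**n * a*c + 10**(n//2) * (b*c + a*d) + b*d

# Gauss's trick recovers the middle term from only three sub-products.
assert b*c + a*d == (a + b) * (c + d) - a*c - b*d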

Pseudocode



MULTIPLY(x, y)
  input: positive integers x and y
  output: their product

  n = max(number of digits of x, number of digits of y)
  if n is 1, return x · y

  xL, xR = leftmost ⌈n/2⌉ digits, rightmost ⌊n/2⌋ digits of x
  yL, yR = leftmost ⌈n/2⌉ digits, rightmost ⌊n/2⌋ digits of y

  P1 = MULTIPLY(xL, yL)
  P2 = MULTIPLY(xR, yR)
  P3 = MULTIPLY(xL + xR, yL + yR)

  return P1 · 10^(2·⌊n/2⌋) + (P3 - P1 - P2) · 10^⌊n/2⌋ + P2

Note: x and y may have an odd or even number of digits; padding with leading zeros before the recursive calls and dropping them before the combination phase still produces the correct result.
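For reference, the following is a minimal runnable Python sketch of the pseudocode above (my own translation, with function and variable names of my choosing); it splits off the lower half of the digits with divmod, so no explicit padding step is required:

import random


def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers in base 10 (a sketch)."""
    if x < 10 or y < 10:                       # single-digit base case
        return x * y

    n = max(len(str(x)), len(str(y)))          # digit count of the longer operand
    half = n // 2                              # split off the lower `half` digits

    x_left, x_right = divmod(x, 10 ** half)    # x = x_left * 10^half + x_right
    y_left, y_right = divmod(y, 10 ** half)

    p1 = karatsuba(x_left, y_left)                       # ac
    p2 = karatsuba(x_right, y_right)                     # bd
    p3 = karatsuba(x_left + x_right, y_left + y_right)   # (a + b)(c + d)

    # Gauss's trick: the middle term is p3 - p1 - p2.
    return p1 * 10 ** (2 * half) + (p3 - p1 - p2) * 10 ** half + p2


# Sanity check against Python's built-in multiplication.
for _ in range(100):
    a, b = random.randrange(10 ** 9), random.randrange(10 ** 9)
    assert karatsuba(a, b) == a * b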

Algorithm Analysis

To analyze the complexity of both the naive and the Karatsuba multiplication, the [Master Method][master-method] comes into play.

In the naive version of integer multiplication, T(n) = O(n²). The DnC multiplication without optimization would be T(n) = 4T(n/2) + O(n) = O(n^(log₂ 4)) = O(n²).

Obviously, the DnC multiplication without optimization merely matches the efficiency of the naive algorithm. After applying Karatsuba multiplication, the overall complexity becomes T(n) = 3T(n/2) + O(n) = O(n^(log₂ 3)) ≈ O(n^1.59).

Hence, Karatsuba multiplication beats the naive integer multiplication algorithm in running-time efficiency.

Matrix Multiplication

Matrix multiplication, also termed the matrix dot product, is a form of multiplication involving two n × n matrices X and Y, as shown below:

Figure 2. Matrix Dot Product Mathematical Notation

The preceding formula indicates an O(n³) running time, given that each entry in the resulting matrix requires O(n) operations and there are O(n²) entries in total.
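As an illustrative sketch (my own, not from the notes), the triple loop below follows the definition directly: O(n²) entries with O(n) work each:

def matmul_naive(X, Y):
    """Naive product of two n x n matrices given as lists of lists."""
    n = len(X)
    Z = [[0] * n for _ in range(n)]
    for i in range(n):                    # n^2 entries in the result ...
        for j in range(n):
            for k in range(n):            # ... and O(n) work per entry
                Z[i][j] += X[i][k] * Y[k][j]
    return Z


assert matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]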

Naive Matrix Split

It is intuitive to devise a DnC pattern like the one below:

Figure 3. Matrix Divided into Blocks

The resulting matrix dot product consists of 8 sub-matrix products plus 4 matrix additions, which gives the recurrence relation:

T(n) = 8T(n/2) + O(n²)

According to the [Master Method][master-method], the recurrence relation above boils down to O(n^(log₂ 8)) = O(n³), the same as the default matrix multiplication method. Still, there is a way to compute the result more efficiently.
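Below is a sketch of the naive block split, assuming n is a power of two (my own illustration; the quadrant names A, B, C, D and E, F, G, H mirror the block labels used in the figures and in the Strassen section that follows):

def matmul_blocks(X, Y):
    """DnC product of n x n matrices (n a power of two) via the naive block split:
    T(n) = 8T(n/2) + O(n^2) = O(n^3)."""
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]

    def quarters(M):
        h = len(M) // 2
        return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
                [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

    def add(M, N):
        return [[a + b for a, b in zip(r, s)] for r, s in zip(M, N)]

    def glue(tl, tr, bl, br):              # reassemble the four quadrants
        return ([r1 + r2 for r1, r2 in zip(tl, tr)] +
                [r1 + r2 for r1, r2 in zip(bl, br)])

    A, B, C, D = quarters(X)
    E, F, G, H = quarters(Y)

    # Eight recursive half-size products, four matrix additions.
    return glue(add(matmul_blocks(A, E), matmul_blocks(B, G)),
                add(matmul_blocks(A, F), matmul_blocks(B, H)),
                add(matmul_blocks(C, E), matmul_blocks(D, G)),
                add(matmul_blocks(C, F), matmul_blocks(D, H)))


assert matmul_blocks([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]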

Strassen Matrix Multiplication

Developed by V. Strassen, an intricate matrix decomposition magically reduces the complexity:

Figure 4. Strassen Multiplication Matrix Decomposition

where

P1 = A(F - H), P2 = (A + B)H, P3 = (C + D)E, P4 = D(G - E), P5 = (A + D)(E + H), P6 = (B - D)(G + H), P7 = (A - C)(E + F)

Thus, the new recurrence relation is: T(n) = 7T(n/2) + O(n²)

By the [Master Method][master-method], the running time is O(n^(log₂ 7)) ≈ O(n^2.81), which beats the naive matrix split method's O(n³).
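Here is a runnable sketch of Strassen's scheme under the same power-of-two assumption (my own code, not from the notes); the seven products correspond to P1 through P7 above:

def strassen(X, Y):
    """Strassen product of n x n matrices (n a power of two): seven half-size
    products P1..P7 instead of eight, so T(n) = 7T(n/2) + O(n^2)."""
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]

    def quarters(M):
        h = len(M) // 2
        return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
                [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

    def add(M, N):
        return [[a + b for a, b in zip(r, s)] for r, s in zip(M, N)]

    def sub(M, N):
        return [[a - b for a, b in zip(r, s)] for r, s in zip(M, N)]

    A, B, C, D = quarters(X)
    E, F, G, H = quarters(Y)

    P1 = strassen(A, sub(F, H))
    P2 = strassen(add(A, B), H)
    P3 = strassen(add(C, D), E)
    P4 = strassen(D, sub(G, E))
    P5 = strassen(add(A, D), add(E, H))
    P6 = strassen(sub(B, D), add(G, H))
    P7 = strassen(sub(A, C), add(E, F))

    top_left     = add(add(P5, P4), sub(P6, P2))   # AE + BG = P5 + P4 - P2 + P6
    top_right    = add(P1, P2)                     # AF + BH
    bottom_left  = add(P3, P4)                     # CE + DG
    bottom_right = sub(add(P1, P5), add(P3, P7))   # CF + DH = P1 + P5 - P3 - P7

    return ([r1 + r2 for r1, r2 in zip(top_left, top_right)] +
            [r1 + r2 for r1, r2 in zip(bottom_left, bottom_right)])


assert strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]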

[master-method]: master-method.md
