Fast Fourier Transform

Please let me off the waitlist for this class.

Polynomial Multiplication

Suppose we have two input polynomials,

$$A(x) = a_0 + a_1x + a_2x^2 + \dots + a_{n-1}x^{n-1}\\ B(x) = b_0 + b_1x + b_2x^2 + \dots + b_{n-1}x^{n-1}$$

such that $n = 2d - 1$.

Relationship Between Polynomial and Integer Multiplication

Given integers $\alpha, \beta$, we want $\gamma = \alpha\beta$. Analogously, given the coefficients of $A$ and $B$, we want to compute the coefficients of the product polynomial $C(x) = A(x)B(x)$. (An integer written in base $b$ is just such a polynomial evaluated at $x = b$, so the two problems are essentially the same.)

Possible Algorithms:

First, we have the straightforward algorithm. We have

$$c_0 = a_0b_0\\ c_1 = a_0b_1 + a_1b_0\\ c_k = \sum_{j=0}^{k} a_j b_{k-j}$$

Total time to compute this is $O(N^2)$, since for each $c_k$ we loop over $j = 0$ to at most $N - 1$. In the other direction, computing each of the middle coefficients $c_{N/4}, \dots, c_{3N/4}$ requires $\geq \frac{N}{4}$ flops, meaning we're going to need $\Theta(N^2)$ flops.
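
For concreteness, here is a minimal sketch of the straightforward algorithm in Python (the function name and the coefficient-list convention are just illustrative):

```python
# Naive polynomial multiplication: coefficient lists, lowest degree first.
# Each product a[j] * b[k] contributes to c_{j+k}, exactly the sum above.
def poly_mult_naive(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for j in range(len(a)):
        for k in range(len(b)):
            c[j + k] += a[j] * b[k]
    return c

# (1 + 2x + 3x^2)(4 + 5x^2) = 4 + 8x + 17x^2 + 10x^3 + 15x^4
print(poly_mult_naive([1, 2, 3], [4, 0, 5]))   # [4, 8, 17, 10, 15]
```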

Second, we have a Karatsuba-related approach. Recall that we have

$$A(x) = a_0 + a_1x + a_2x^2 + \dots + a_{n-1}x^{n-1}\\ B(x) = b_0 + b_1x + b_2x^2 + \dots + b_{n-1}x^{n-1}$$

We can split this up into lows and highs as we did for integer multiplication, that is

$$A(x) = A_l(x) + x^{\frac{n}{2}}A_h(x)\\ B(x) = B_l(x) + x^{\frac{n}{2}}B_h(x)$$

It's the same as last time: naively you'd get four recursive polynomial multiplications of half the degree, but applying the Karatsuba trick, we'd only have three polynomial multiplications, meaning we'd have, just like Karatsuba, a runtime of $\Theta(n^{\log_2 3})$.
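
As a rough sketch of that recursion (my own illustrative code, assuming both inputs have the same power-of-two length), the `mid` product of the sums recovers both cross terms with a single recursive call:

```python
# Illustrative Karatsuba-style polynomial multiplication.
# Coefficient lists, lowest degree first; A and B assumed to have the
# same power-of-two length.
def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [x + y for x, y in zip(p, q)]

def poly_sub(p, q):
    return poly_add(p, [-x for x in q])

def shift(p, k):
    return [0] * k + p               # multiply by x^k

def karatsuba(A, B):
    n = len(A)
    if n == 1:
        return [A[0] * B[0]]
    half = n // 2
    Al, Ah = A[:half], A[half:]      # A(x) = Al(x) + x^(n/2) * Ah(x)
    Bl, Bh = B[:half], B[half:]
    low = karatsuba(Al, Bl)                                 # Al * Bl
    high = karatsuba(Ah, Bh)                                # Ah * Bh
    mid = karatsuba(poly_add(Al, Ah), poly_add(Bl, Bh))     # (Al + Ah)(Bl + Bh)
    cross = poly_sub(poly_sub(mid, low), high)              # = Al*Bh + Ah*Bl
    return poly_add(poly_add(low, shift(cross, half)), shift(high, n))

# (1 + 2x + 3x^2 + 4x^3)(5 + 6x + 7x^2 + 8x^3)
print(karatsuba([1, 2, 3, 4], [5, 6, 7, 8]))   # [5, 16, 34, 60, 61, 52, 32]
```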

Finally, we have an even faster approach using polynomial interpolation.

Polynomial interpolation states that a degree $< N$ polynomial is uniquely determined by its evaluation on $N$ distinct points (refer to CS 70, Lagrange interpolation).

Instead of directly computing the coefficients of $C$, we can determine $C(x_0), \dots, C(x_{n-1})$. We can therefore compute the evaluations of $A(x)$ and $B(x)$ on $N$ distinct points each, multiply them pointwise, and then interpolate to find $C(x)$.
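
Here is a small numpy sketch of that evaluate / multiply pointwise / interpolate pipeline at arbitrary points (the interpolation step here just solves a Vandermonde system; the FFT below is what makes the evaluation and interpolation steps fast):

```python
import numpy as np

A = [1, 2, 3]                       # 1 + 2x + 3x^2, lowest degree first
B = [4, 0, 5]                       # 4 + 5x^2
N = len(A) + len(B) - 1             # C has degree N - 1, so N points determine it

xs = np.arange(N, dtype=float)              # any N distinct points
A_vals = np.polyval(A[::-1], xs)            # np.polyval wants highest degree first
B_vals = np.polyval(B[::-1], xs)
C_vals = A_vals * B_vals                    # pointwise multiplication

# Interpolate: solve V c = C_vals, where V is the Vandermonde matrix of xs.
V = np.vander(xs, N, increasing=True)
C = np.linalg.solve(V, C_vals)
print(np.round(C).astype(int))              # [ 4  8 17 10 15]
```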

Fast Fourier Transform

Clarification: The Discrete Fourier Transform is a matrix, and the Fast Fourier Transform is an algorithm.

In order to increase the efficiency of our algorithm, we can choose our points as $\pm x_0, \pm x_1, \pm x_2$, et cetera, because then the computation for the even powers would be the same at $x_i$ and $-x_i$, saving us a lot of work.

The main issue is that this only works at the top level of recursion; at the second level the points are all squares, meaning none of them can be negative. However, we can use complex numbers to get around this. The main question is: which complex numbers do we use? We can work from the bottom up. We start off with $1$, which we can split into $1$ and $-1$; $-1$ is split into $i$ and $-i$, and we already know what $1$ splits into. We continue until we reach a set of $n$ points to use for our interpolation.

We call these roots of one the $n$th complex roots of unity; in other words, the solutions to $z^n = 1$. For even $n$, the numbers are plus-minus paired, and their squares are the $(n/2)$th roots of unity.
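
A quick numerical sanity check of those two facts (a sketch in Python, taking $n = 8$ just as an example):

```python
import cmath

n = 8
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]           # nth roots of unity
half_roots = [cmath.exp(2j * cmath.pi * k / (n // 2)) for k in range(n // 2)]

for k in range(n // 2):
    # Plus-minus pairing: the (k + n/2)th root is the negative of the kth.
    assert cmath.isclose(roots[k + n // 2], -roots[k])
    # Squaring an nth root of unity gives an (n/2)th root of unity.
    assert cmath.isclose(roots[k] ** 2, half_roots[k])
```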

(Diagram: the complex roots of unity and how they split at each level.)

Of course, in order to pull this off, we split $A(x)$ into its odd and even powers, such that this trick can be used for both.

$$A(x) = A_e(x) + xA_o(x)$$

$A_e(x)$ represents the even powers of $A(x)$; $A_o(x)$ represents the odd powers of $A(x)$.

We factor out $x$ from $A_o(x)$ (the odd powers) so the powers inside become even. We can therefore write our function as

FFT(A, omega):
    # Input: coefficient rep of A(x) of degree n - 1, where n is a power
    # of 2, and omega, an nth root of unity.
    # We want to output A(omega^0), ..., A(omega^{n - 1}) so we can use
    # Lagrange interpolation to find the original polynomial.
    if (omega == 1):
        return A(1)
    even_pol, odd_pol = split(A)            # splits into even/odd powers
    even_results = FFT(even_pol, omega^2)   # refer to the diagram
    odd_results = FFT(odd_pol, omega^2)
    for j in range(n):                      # 0 to n - 1
        A(omega^j) = even_results[j mod n/2] + omega^j * odd_results[j mod n/2]
    return A(omega^0), ..., A(omega^{n - 1})   # computed by the for loop
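
Here is one way to turn that pseudocode into runnable Python (my own sketch; the base case checks the length of the coefficient list rather than comparing omega to 1, which is safer with floating-point roots of unity):

```python
import cmath

def fft(coeffs, omega):
    """Evaluate the polynomial with the given coefficients (lowest degree
    first, length a power of 2) at omega^0, ..., omega^(n-1), where omega
    is a primitive nth root of unity."""
    n = len(coeffs)
    if n == 1:
        return [coeffs[0]]                    # constant polynomial: A(x) = a_0
    even = fft(coeffs[0::2], omega ** 2)      # A_e at the (n/2)th roots of unity
    odd = fft(coeffs[1::2], omega ** 2)       # A_o at the (n/2)th roots of unity
    return [even[j % (n // 2)] + omega ** j * odd[j % (n // 2)] for j in range(n)]

# Evaluate A(x) = 1 + 2x + 3x^2 + 4x^3 at the 4th roots of unity 1, i, -1, -i.
omega = cmath.exp(2j * cmath.pi / 4)
print(fft([1, 2, 3, 4], omega))    # roughly [10, -2-2j, -2, -2+2j]
```

To multiply two polynomials with this, you'd pad both coefficient lists to the product's length, run `fft` on each, multiply pointwise, then invert by running `fft` with $\omega^{-1}$ and dividing by $n$ (per the matrix identity below).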

Matrix Representation

For a polynomial of degree $n-1$, we can represent the set of evaluations $A(x_0),\ \dots,\ A(x_{n-1})$ as a matrix-vector equation. The matrix is called a Vandermonde matrix, which we will call $M(x)$.

$$\begin{bmatrix} A(x_0)\\ A(x_1)\\ A(x_2)\\ \vdots\\ A(x_{n-1}) \end{bmatrix} = \begin{bmatrix} 1 & x_0 & x_0^2 & \dots & x_0^{n-1}\\ \vdots & \vdots & \vdots & & \vdots\\ 1 & x_{n-1} & x_{n-1}^2 & \dots & x_{n-1}^{n-1} \end{bmatrix} \begin{bmatrix} a_0\\ a_1\\ a_2\\ \vdots\\ a_{n-1} \end{bmatrix}$$

Using this same model, we can represent the DFT as $M_n(\omega)$, the Vandermonde matrix whose evaluation points are $x_j = \omega^j$.

Inversion is possible: $M_n(\omega)^{-1} = \frac{1}{n}M_n(\omega^{-1})$.
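
As a quick check of both claims (an illustrative numpy sketch, taking $n = 4$): build the Vandermonde matrix at the points $\omega^j$, confirm it maps coefficients to evaluations, and confirm that $\frac{1}{n}M_n(\omega^{-1})$ undoes it.

```python
import numpy as np

n = 4
omega = np.exp(2j * np.pi / n)
points = omega ** np.arange(n)                    # x_j = omega^j
M = np.vander(points, n, increasing=True)         # rows: 1, x_j, x_j^2, ..., x_j^{n-1}

coeffs = np.array([1, 2, 3, 4])
evals = M @ coeffs                                # A(omega^0), ..., A(omega^{n-1})
direct = np.array([np.polyval(coeffs[::-1], x) for x in points])
print(np.allclose(evals, direct))                 # True

M_inv = np.vander(omega ** -np.arange(n), n, increasing=True) / n
print(np.allclose(M_inv @ evals, coeffs))         # True: recovers the coefficients
```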

Of course, since the recursion works on even and odd powers, we'd have to split the matrix's columns between the even and odd powers, and rearrange the coefficient vector accordingly.
