Suppose we wish to multiply two very large integers. When dealing with such monsters it no longer makes sense to assume that multiplication is constant time. Instead, we treat each integer as an array of bits x_n x_(n-1) ... x_1 x_0 and multiply using the algorithm:

x*y = (∑_i x_i 2^i)(∑_j y_j 2^j) = ∑_i ∑_j x_i y_j 2^(i+j).

Assuming that the addition is cheap, this takes Theta(n^2) time. In practice we can usually speed up this algorithm by a large constant factor by using larger digits, e.g. by multiplying the numbers 16 bits at a time instead of one bit at a time. But recursive algorithms are a natural fit for divide-and-conquer, and a divide-and-conquer approach can do better asymptotically.

The faster algorithm described below is due to Karatsuba and Ofman, in a 1962 paper from Doklady Akademii Nauk SSSR. The paper is written in Russian, so I haven't read it. The description below is adapted from Section 5.2.2 of Goodrich and Tamassia's Algorithm Design (Wiley, 2002).

Imagine splitting x and y into two very huge digits of k bits each: x = x_1 2^k + x_0, y = y_1 2^k + y_0. Then

xy = x_1 y_1 2^(2k) + (x_1 y_0 + x_0 y_1) 2^k + x_0 y_0.

We will assume that shifting (i.e., multiplying by 2^something) is free and that addition takes linear time. We have thus reduced the problem of computing a product on n = 2k bits to the problem of computing four products on k bits, which gives running time T(n) = 4T(n/2) + Theta(n) = Theta(n^2) by the MasterTheorem, no better than before.

The trick to the Karatsuba-Ofman algorithm is to compute the x_1 y_1 and x_0 y_0 terms with one recursive multiplication each, and then compute the sum of the remaining terms x_1 y_0 + x_0 y_1 with a single additional multiplication, using the identity

(x_1 + x_0)(y_1 + y_0) = x_1 y_1 + x_1 y_0 + x_0 y_1 + x_0 y_0.

So to compute xy, compute the three products x_1 y_1, x_0 y_0, and (x_1 + x_0)(y_1 + y_0), and then let

xy = x_1 y_1 2^(2k) + ((x_1 + x_0)(y_1 + y_0) - x_1 y_1 - x_0 y_0) 2^k + x_0 y_0.

Since we can compute the three products recursively using the same algorithm, we have T(n) = 3T(n/2) + Theta(n) = Theta(n^(lg 3)) by the MasterTheorem. This is O(n^1.59) or thereabouts, much better than the straightforward algorithm.

A similar trick can be used to do fast matrix multiplication, by reducing multiplication of two n by n matrices to seven products of n/2 by n/2 matrices. The resulting algorithm, known as Strassen's Algorithm, has a recurrence T(n) = 7T(n/2) + O(n^2) with solution T(n) = Theta(n^(lg 7)). The value of lg 7 is about 2.8, so this is somewhat better than the Theta(n^3) running time for the straightforward algorithm. (See StrassensAlgorithm.)

A still faster method is to apply a FastFourierTransform to the two numbers, which changes convolution (z_i = ∑_j x_j y_(i-j); this is another way to compute the product) into pointwise multiplication (z_i = x_i y_i). We then carry out the pointwise multiplication and invert the FFT. The cost of this approach in bit operations is O(n log n) if implemented very carefully; see KnuthSeries volume 2, Section 4.3.3.C.

Exercise: Give a divide-and-conquer algorithm to search an array for a given integer. The algorithm must solve the following problem: Input: A, an integer array, and k, an integer.
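The grade-school bit-array multiplication above can be sketched as follows; a minimal sketch assuming least-significant-bit-first arrays, with `naive_mul` and `to_int` as illustrative names:

```python
def naive_mul(x, y):
    """Multiply two numbers given as bit arrays (least-significant bit first).

    Implements the Theta(n^2) double sum z_i = sum_j x_j * y_(i-j),
    followed by carry propagation to restore 0/1 bits.
    """
    z = [0] * (len(x) + len(y))
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            z[i + j] += xi * yj          # accumulate partial products
    for i in range(len(z) - 1):
        z[i + 1] += z[i] // 2            # propagate carries upward
        z[i] %= 2
    return z

def to_int(bits):
    """Reassemble a least-significant-first bit array into an integer."""
    return sum(b << i for i, b in enumerate(bits))
```

Since the product of an a-bit and a b-bit number fits in a + b bits, the result array never overflows.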
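The Karatsuba-Ofman three-product recursion can be sketched on ordinary Python integers, with shifts standing in for the free multiplications by 2^k; a sketch only, since a tuned implementation would switch to hardware multiplication below some cutoff rather than recursing all the way to single bits:

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using three recursive products."""
    if x < 2 or y < 2:                      # single-bit base case
        return x * y
    k = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> k, x & ((1 << k) - 1)     # x = x1*2^k + x0
    y1, y0 = y >> k, y & ((1 << k) - 1)     # y = y1*2^k + y0
    a = karatsuba(x1, y1)                   # x1*y1
    b = karatsuba(x0, y0)                   # x0*y0
    c = karatsuba(x1 + x0, y1 + y0)         # (x1+x0)(y1+y0)
    # xy = a*2^(2k) + (c - a - b)*2^k + b
    return (a << (2 * k)) + ((c - a - b) << k) + b
```

Note that c - a - b is exactly the cross term x1*y0 + x0*y1, so only three recursive multiplications are needed per level.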
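Strassen's reduction to seven half-size products can be sketched for square matrices whose dimension is a power of two (a sketch; real implementations pad odd dimensions and fall back to the classical algorithm for small blocks):

```python
def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def msub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) with 7 recursive products."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # split each matrix into four h x h quadrants
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # Strassen's seven products
    M1 = strassen(madd(A11, A22), madd(B11, B22))
    M2 = strassen(madd(A21, A22), B11)
    M3 = strassen(A11, msub(B12, B22))
    M4 = strassen(A22, msub(B21, B11))
    M5 = strassen(madd(A11, A12), B22)
    M6 = strassen(msub(A21, A11), madd(B11, B12))
    M7 = strassen(msub(A12, A22), madd(B21, B22))
    # recombine into the quadrants of the product
    C11 = madd(msub(madd(M1, M4), M5), M7)
    C12 = madd(M3, M5)
    C21 = madd(M2, M4)
    C22 = madd(msub(madd(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

The additions cost O(n^2) per level, which is where the O(n^2) term in the recurrence T(n) = 7T(n/2) + O(n^2) comes from.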
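The FFT route can be sketched with a textbook recursive Cooley-Tukey transform over base-10 digit arrays. This floating-point sketch only illustrates the convolution-to-pointwise-product idea; the carefully engineered versions discussed in KnuthSeries typically use exact modular arithmetic instead of complex floats:

```python
import cmath

def fft(a, invert=False):
    """Recursive Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 2j if invert else -2j
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def fft_multiply(x, y):
    """Multiply nonnegative integers via convolution of their base-10 digits."""
    xd = [int(d) for d in str(x)][::-1]      # least-significant digit first
    yd = [int(d) for d in str(y)][::-1]
    n = 1
    while n < len(xd) + len(yd):             # pad to a power of two
        n *= 2
    fx = fft(xd + [0] * (n - len(xd)))
    fy = fft(yd + [0] * (n - len(yd)))
    z = fft([a * b for a, b in zip(fx, fy)], invert=True)
    digits = [round(c.real / n) for c in z]  # unscaled inverse needs the 1/n factor
    result, carry = 0, 0
    for i, d in enumerate(digits):           # carry and reassemble the number
        carry += d
        result += (carry % 10) * 10 ** i
        carry //= 10
    return result + carry * 10 ** len(digits)
```

The pointwise products fx[i]*fy[i] replace the Theta(n^2) double sum, so the work is dominated by the three transforms.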
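For the exercise, one possible divide-and-conquer solution (assuming A need not be sorted) splits the array in half and searches both halves, giving T(n) = 2T(n/2) + O(1) = O(n); if A happens to be sorted, comparing k against the middle element lets us recurse into only one half, which is binary search in O(log n):

```python
def dc_search(A, k, lo=0, hi=None):
    """Return an index i with A[i] == k in A[lo:hi], or None if absent."""
    if hi is None:
        hi = len(A)
    if hi - lo == 0:
        return None                      # empty range: not found
    if hi - lo == 1:
        return lo if A[lo] == k else None
    mid = (lo + hi) // 2
    left = dc_search(A, k, lo, mid)      # conquer the left half
    return left if left is not None else dc_search(A, k, mid, hi)
```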