


These notes cover the growth of functions in computer science: how to compare the computing times of two algorithms that perform the same task on n inputs and decide which algorithm is better. They introduce orders of magnitude and polynomial-time algorithms, give examples of algorithms with different computing times, show how those times grow with a constant equal to one, and emphasize the value of improving an algorithm by an order of magnitude.
themselves polynomials, referred to by their degrees: linear, quadratic, and cubic. However, there is no integer m such that n^m bounds 2^n; that is, 2^n ≠ O(n^m) for any integer m. The order of such a formula is O(2^n). An algorithm whose computing time is bounded below by Ω(2^n) is said to require exponential time. As n gets large, a tremendous difference emerges between exponential- and polynomial-time algorithms. Finding an algorithm that reduces the time to solve a problem from exponential to polynomial is a great accomplishment.

Figure 6.1 and Table 6.1 show how the computing times for six of the typical functions grow with a constant equal to one. Notice how the times O(n) and O(n log n) grow much more slowly than the others. For large data sets, algorithms with a complexity greater than O(n log n) are often impractical. An exponential algorithm is practical only for very small values of n; even if we decrease the leading constant, say by a factor of 2 or 3, we will not improve the amount of data we can handle by very much.

Figure 6.1: Rate of growth of common computing time functions
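To make the contrast concrete, here is a minimal Python sketch (not part of the original notes) that tabulates the six typical functions with a leading constant of one, in the spirit of Table 6.1; the sample values of n are an assumption for illustration:

```python
import math

# Six typical computing-time functions, each with leading constant 1.
# (Illustrative sketch; the exact entries of Table 6.1 are not reproduced.)
functions = [
    ("log n",   lambda n: math.log2(n)),
    ("n",       lambda n: n),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2",     lambda n: n ** 2),
    ("n^3",     lambda n: n ** 3),
    ("2^n",     lambda n: 2 ** n),
]

# Print a header row, then one row of function values per sample n.
print("n".rjust(6) + "".join(name.rjust(14) for name, _ in functions))
for n in (1, 2, 4, 8, 16, 32):
    print(str(n).rjust(6) + "".join(f"{f(n):14.0f}" for _, f in functions))
```

Even over this small range, the 2^n column overtakes every polynomial column, which is why shaving a constant factor off an exponential algorithm barely extends the usable input size.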
Example 6.2: f(n) = n^2 log n and g(n) = n (log n)^10.
Applying log to both functions:
log f(n) = log(n^2 log n) = log n^2 + log log n = 2 log n + log log n
log g(n) = log(n (log n)^10) = log n + log((log n)^10) = log n + 10 log log n
Since 2 log n > log n and log log n is a much smaller term, f(n) > g(n) asymptotically.

Example 6.3: f(n) = 3 n^√n and g(n) = 2^(√n log n).
g(n) = 2^(log n^√n) = (n^√n)^(log 2) = n^√n, using the identity a^(log b) = b^(log a) with log taken base 2.
Value-wise, 3 n^√n > n^√n, but f(n) and g(n) are of the same asymptotic order, since they differ only by the constant factor 3. (Here we compared the functions directly, without applying log.)

Example 6.4: f(n) = 2^n and g(n) = 2^(2n).
Applying log to both functions:
log f(n) = log 2^n = n log 2 = n
log g(n) = log 2^(2n) = 2n log 2 = 2n
Since n < 2n, f(n) < g(n). Note the caveat: after taking logs, we cannot conclude that two functions are asymptotically equal even when their logs are of the same order; here n and 2n differ only by a constant factor, yet 2^(2n) = 4^n grows strictly faster than 2^n.
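These comparisons can also be checked numerically. Below is a small Python sketch (my addition, with log taken base 2 as in the examples above) that tracks the ratio f(n)/g(n) for Examples 6.2 and 6.4:

```python
import math

# Example 6.2: f(n) = n^2 log n vs g(n) = n (log n)^10.
# f/g = n log n / (log n)^10 -> infinity, so f eventually dominates,
# even though (log n)^10 keeps the ratio below 1 for moderate n.
for n in (2 ** 10, 2 ** 20, 2 ** 40, 2 ** 80):
    lg = math.log2(n)
    print(f"n = 2^{int(lg)}: f/g = {(n ** 2 * lg) / (n * lg ** 10):.3e}")

# Example 6.4: f(n) = 2^n vs g(n) = 2^(2n) = 4^n.
# The logs n and 2n are of the same order, yet f/g = 2^(-n) -> 0,
# so equal orders of the logs do not imply equal orders of the functions.
for n in (10, 20, 40):
    print(f"n = {n}: f/g = {2 ** n / 2 ** (2 * n):.3e}")
```

The Example 6.2 ratio crosses 1 only for very large n, which is why the log comparison, not small-n values, decides the asymptotic order.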