Department: MCA
Semester: II
Subject: Data Structures through C++
Paper: CCMCA 203
Faculty: Avinash Kumar
Syllabus covered in this blog: Asymptotic notations
Asymptotic Notations
When it comes to analyzing the complexity of an algorithm in terms of time and space, we can never give an exact number for the time or the space it requires; instead, we express these requirements using standard notations called asymptotic notations.
The main idea of asymptotic analysis is to measure the efficiency of algorithms in a way that does not depend on machine-specific constants and does not require the algorithms to be implemented and their running times compared. Asymptotic notations are mathematical tools that represent the time complexity of algorithms for asymptotic analysis.
When we analyze an algorithm, we generally derive a formula for the resources it consumes: the time the computer needs to run the lines of code, the number of memory accesses, the number of comparisons, the memory occupied by temporary variables, and so on. This formula often contains unimportant details that don't really tell us anything about the running time.
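To see where such a formula comes from, here is a minimal sketch (the function sumArray and the exact step counts are our own illustration, not part of the syllabus): counting the basic operations of a simple loop yields an expression like 3n + 2.

```cpp
#include <iostream>

long long sumArray(const int a[], int n) {
    long long sum = 0;              // 1 step: initialisation
    for (int i = 0; i < n; ++i) {   // n + 1 comparisons, n increments
        sum += a[i];                // n additions
    }
    return sum;                     // total: about 3n + 2 basic steps
}

int main() {
    int a[] = {3, 1, 4, 1, 5};
    std::cout << sumArray(a, 5) << '\n';  // prints 14
}
```

As the next section shows, the constant 3 and the trailing + 2 in such a formula are exactly the details that asymptotic analysis throws away.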
What is Asymptotic Behavior?
The word asymptotic means approaching a value or curve arbitrarily closely (i.e., as some sort of limit is taken). Remember studying limits in high school? This is the same idea.
The only difference is that here we are not evaluating an expression as n approaches some finite number or infinity. Instead, asymptotic notations use the same model to ignore the constant factors and insignificant parts of an expression, devising a better way of representing the complexity of an algorithm by its single dominant term, so that algorithms can be compared easily.
Let's take an example to understand this:
If we have two algorithms whose execution times are represented by the following expressions, then:
Expression 1: (20n² + 3n - 4)
Expression 2: (n³ + 100n - 2)
Now, as per asymptotic analysis, we should only worry about how each function grows as the input n grows, and that depends entirely on n² for Expression 1 and on n³ for Expression 2.
Hence, we can clearly say that the algorithm whose running time is represented by Expression 2 will grow faster than the other one, simply by looking at the highest-order term and ignoring the leading constant (20 in 20n²) and the insignificant parts of each expression (3n - 4 and 100n - 2).
The main idea behind casting aside the less important parts is to make things manageable. All we need to do is first analyze the algorithm to find an expression for its time requirements, and then analyze how that expression grows as the input n grows.
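As a rough check of this claim, the following sketch (our own illustration) evaluates both expressions for a few growing values of n; the output makes the takeover by the n³ term visible.

```cpp
#include <initializer_list>
#include <iostream>

int main() {
    // Evaluate both expressions for growing n; doubles avoid integer overflow.
    for (double n : {10.0, 100.0, 1000.0, 10000.0}) {
        double e1 = 20 * n * n + 3 * n - 4;    // Expression 1, dominated by 20n^2
        double e2 = n * n * n + 100 * n - 2;   // Expression 2, dominated by n^3
        std::cout << "n = " << n << "  expr1 = " << e1
                  << "  expr2 = " << e2 << '\n';
    }
    // Expression 1 is larger while n < 20 (there 20n^2 > n^3), but once n
    // passes 20 the n^3 term takes over for good.
}
```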
Types of Asymptotic Notations
We use three types of asymptotic notations to represent the growth of an algorithm:
1. Theta (Θ)
2. Omega (Ω)
3. Big Oh (O)
Θ Notation:
The theta notation bounds a function from above and below, so it defines exact asymptotic behavior. A simple way to get the Theta notation of an expression is to drop the low-order terms and ignore the leading constant.
For example, consider the following expression.
3n³ + 6n² + 6000 = Θ(n³)
Dropping the lower-order terms is always fine because there will always be an n0 beyond which Θ(n³) has higher values than Θ(n²), irrespective of the constants involved.
For a given function g(n), we denote by Θ(g(n)) the following set of functions:
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
The above definition means that if f(n) is Theta of g(n), then the value of f(n) always lies between c1*g(n) and c2*g(n) for large values of n (n >= n0). The definition also requires that f(n) be non-negative for all n greater than n0.
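As a sanity check of the definition, here is a small sketch (our own illustration; the constants c1 = 3 and c2 = 4 are one valid choice, not the only one) that searches for an n0 for the example expression above.

```cpp
#include <iostream>

// f(n) = 3n^3 + 6n^2 + 6000 and g(n) = n^3: find the first n0 at which
// 0 <= c1*g(n) <= f(n) <= c2*g(n) holds. It then keeps holding, because
// n^3 - 6n^2 = n^2(n - 6) only grows beyond that point.
int main() {
    const double c1 = 3.0, c2 = 4.0;
    for (int n = 1; n <= 100; ++n) {
        double g = 1.0 * n * n * n;
        double f = 3 * g + 6.0 * n * n + 6000;
        if (c1 * g <= f && f <= c2 * g) {
            std::cout << "bounds hold from n0 = " << n << '\n';  // prints n0 = 21
            return 0;
        }
    }
}
```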
Ω Notation:
Omega notation is used to define the lower bound of an algorithm, that is, its best case. It indicates the minimum time the algorithm requires over all input values, and therefore describes its best case.
Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound. Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. Since the best case performance of an algorithm is generally not useful, the Omega notation is the least used of the three.
For a given function g(n), we denote by Ω(g(n)) the following set of functions:
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0}
Let us consider the Insertion Sort example here. The time complexity of Insertion Sort can be written as Ω(n), but this is not very useful information about Insertion Sort, as we are generally interested in the worst case and sometimes in the average case; a sketch with both cases marked follows below.
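Here is our own rendering of the standard Insertion Sort in C++, with comments marking where the Ω(n) best case and the quadratic worst case come from.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

void insertionSort(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        // Already-sorted input: this condition fails immediately every time,
        // so the whole sort does about n comparisons -> best case Omega(n).
        // Reverse-sorted input: every earlier element gets shifted, giving
        // 1 + 2 + ... + (n - 1) shifts -> quadratic worst case.
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;
    }
}

int main() {
    std::vector<int> v{5, 2, 4, 6, 1, 3};
    insertionSort(v);
    for (int x : v) std::cout << x << ' ';  // prints 1 2 3 4 5 6
}
```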
Big O Notation:
This notation is known as the upper bound of an algorithm, or its worst case. The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n²). Note that O(n²) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases:
1. The worst case time complexity of Insertion Sort is Θ(n²).
2. The best case time complexity of Insertion Sort is Θ(n).
The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm. Many times we can easily find an upper bound simply by looking at the algorithm.
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0}
This is the reason you will most often see Big O notation being used to represent the time complexity of an algorithm: it simply makes the most sense.
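As a quick illustration of reading an upper bound straight off an algorithm, consider the following sketch (hasDuplicate is our own example name): the two nested loops each run at most n times, so the comparison count is at most n·n, giving O(n²) with no finer analysis.

```cpp
#include <cstddef>
#include <vector>

bool hasDuplicate(const std::vector<int>& a) {
    for (std::size_t i = 0; i < a.size(); ++i)          // <= n iterations
        for (std::size_t j = i + 1; j < a.size(); ++j)  // <= n iterations each
            if (a[i] == a[j]) return true;              // may stop early
    return false;       // at most n*n comparisons -> O(n^2)
}
```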
Big O notation | Order of Growth
O(1) | [graph: order of growth of an O(1) algorithm]
O(log n) | [graph: order of growth of an O(log n) algorithm]
O(n) | [graph: order of growth of an O(n) algorithm]
O(n log n) | [graph: order of growth of an O(n log n) algorithm]
O(n²) | [graph: order of growth of an O(n²) algorithm]
O(n³) | [graph: order of growth of an O(n³) algorithm]
O(2ⁿ) | [graph: order of growth of an O(2ⁿ) algorithm]
O(10ⁿ) | [graph: order of growth of an O(10ⁿ) algorithm]
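To make one entry from the table concrete, here is a sketch of binary search (our own example), the textbook O(log n) algorithm: each comparison halves the remaining search range.

```cpp
#include <cstddef>
#include <vector>

// Returns the index of target in the sorted vector a, or -1 if it is absent.
int binarySearch(const std::vector<int>& a, int target) {
    std::size_t lo = 0, hi = a.size();      // search the half-open range [lo, hi)
    while (lo < hi) {                       // the range halves on every pass
        std::size_t mid = lo + (hi - lo) / 2;
        if (a[mid] == target) return static_cast<int>(mid);
        if (a[mid] < target) lo = mid + 1;  // discard the left half
        else hi = mid;                      // discard the right half
    }
    return -1;                              // not found
}
```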