01. Design & Analysis of Algorithms: Intro & Complexity Analysis

Kamalesh Karmakar, Assistant Professor, Dept. of C.S.E. Meghnad Saha Institute of Technology


Algorithm & Complexity Analysis

Algorithm Design Techniques

Lower Bound Theory

Disjoint Set Manipulation

Graph Traversal Algorithm

String Matching Problem

Amortized Analysis

Network Flow

Matrix Manipulation Algorithm

Notion of NP-Completeness

Approximation Algorithms

Text Book:

T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein, “Introduction to Algorithms”

A. Aho, J. Hopcroft and J. Ullman, “The Design and Analysis of Computer Algorithms”

D. E. Knuth, “The Art of Computer Programming”, Vol. 3

Jon Kleinberg and Eva Tardos, “Algorithm Design”

References:

K. Mehlhorn, “Data Structures and Algorithms”, Vol. 1 & Vol. 2

S. Baase, “Computer Algorithms”

E. Horowitz and S. Sahni, “Fundamentals of Computer Algorithms”

E. M. Reingold, J. Nievergelt and N. Deo, “Combinatorial Algorithms: Theory and Practice”, Prentice Hall, 1977

Time & Space Complexity

Different Asymptotic Notation

Their Mathematical Significance

An algorithm is any well-defined sequence of computational steps

that transforms the input into the desired output.

Instance of a Problem

Each combination of inputs to a problem is called an instance of the problem.

Example

Say we have an input sequence (22, 21, 19, 20, 23) in a sorting

problem; the output will be (19, 20, 21, 22, 23). Such an input

sequence is called an instance of the sorting problem. In general, an

instance of a problem consists of all the inputs needed to compute a

solution to the problem, satisfying the conditions given in the

problem statement.

The analysis of an algorithm is primarily concerned with:

The memory space required to store and execute the algorithm.

Communication bandwidth.

The logic gates involved and gate delays.

Computational time.

During the analysis phase of an algorithm it is very difficult to

deal with the first three points above, because they involve

the internal architecture of the system.

It is comparatively easy to deal with the computational time, that is,

the time taken by an algorithm for its execution. This is also called

“the time complexity of an algorithm”.

To determine the time complexity of an algorithm, we need knowledge

of:

Input Size

The input size depends on the problem being studied. For many

problems, such as sorting, the most natural measure is the number of

items in the input, for example the N elements of an array to be sorted.

If the input to an algorithm is a graph, then the input size can be

described by the number of vertices and edges of the graph.

Running Time

The running time of an algorithm is the sum of the running times of all

statements executed.

The running time of a statement is measured as Ci*N, where Ci is the

constant time taken by one execution of the statement and N is the

number of times the statement is executed.
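
As a small illustration (a sketch added here, not part of the original slides), consider a C function that sums an array. Multiplying each statement's constant cost by its execution count and adding everything up gives a linear function of N:

#include <stdio.h>

/* Each statement's constant cost times its execution count sums to
   T(n) = a*n + b, a linear function of n. */
int sum(const int a[], int n) {
    int s = 0;                    /* executes once                          */
    for (int i = 0; i < n; i++)   /* test executes n+1 times (n true, 1 false),
                                     increment executes n times             */
        s += a[i];                /* body executes n times                  */
    return s;                     /* executes once                          */
}

int main(void) {
    int a[] = {1, 2, 3, 4};
    printf("%d\n", sum(a, 4));    /* prints 10 */
    return 0;
}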

Running Time of Insertion Sort:

Best Case Analysis: The best case occurs if the array is already sorted.

For each j = 2, 3, ..., n, we then find that A[i] ≤ key in line 5 when i

has its initial value of j − 1. Thus tj = 1 for j = 2, 3, ..., n, and the

best-case running time is a linear function of n.

This running time can be expressed as an + b.

Worst Case Analysis: If the array is in reverse (decreasing) order, the

worst case results. In this case we must compare each element A[j] with

each element of the entire sorted subarray A[1..j−1], and so tj = j for

j = 2, 3, ..., n.

This worst-case running time can be expressed as an² + bn + c, a

quadratic function of n.
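
To make the analysis concrete, here is a minimal C version of insertion sort (a sketch; the slides give only the analysis, and the 1-based pseudocode indices become 0-based here). The marked comparison is the one counted by tj:

#include <stdio.h>

/* Insertion sort: for each j, insert a[j] into the sorted prefix a[0..j-1]. */
void insertion_sort(int a[], int n) {
    for (int j = 1; j < n; j++) {
        int key = a[j];
        int i = j - 1;
        /* Comparison counted by t_j: it runs once per j in the best case
           (already sorted) and j times per j in the worst case (reversed). */
        while (i >= 0 && a[i] > key) {
            a[i + 1] = a[i];         /* shift the larger element one slot right */
            i--;
        }
        a[i + 1] = key;
    }
}

int main(void) {
    int a[] = {22, 21, 19, 20, 23};  /* the instance used earlier */
    insertion_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);         /* prints: 19 20 21 22 23 */
    printf("\n");
    return 0;
}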

We very often study the asymptotic efficiency of algorithms, that is,

we are concerned with how the running time of an algorithm increases

with the size of the input. Usually an algorithm that is asymptotically

more efficient will be the best choice for all but small inputs.

[Figure: comparison graph of the functions n, n² and log₂ n]

Big-O notation gives an asymptotic upper bound. A function f(n) is of

the order of g(n), written f(n) = O(g(n)), if there exist positive

constants C and n0 such that:

f(n) ≤ C*g(n) for all n ≥ n0

Example

Say, f(n) = 3*n + 2

3*n + 2 ≤ 4*n if n ≥ 2

f(n) ≤ C*g(n)

so f(n) is in O(n)

Here f(n) = 3*n + 2 satisfies the inequality 3*n + 2 ≤ 4*n with

C = 4 and n0 = 2.
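
Such a witness pair (C, n0) can be spot-checked mechanically. The following throwaway C sketch (not part of the slides) verifies f(n) = 3n + 2 ≤ 4n over a range of n ≥ 2; reversing the inequality and setting C = 3, n0 = 1 would check the Ω witness used below in the same way:

#include <stdio.h>

/* Spot-check the big-O witness: f(n) = 3n + 2 <= C*n with C = 4, n0 = 2. */
int main(void) {
    const int C = 4, n0 = 2;
    for (int n = n0; n <= 1000000; n++) {
        if (3 * n + 2 > C * n) {
            printf("bound fails at n = %d\n", n);
            return 1;
        }
    }
    printf("3n + 2 <= %d*n holds for all tested n >= %d\n", C, n0);
    return 0;
}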

Table 3.1: The names of common big-oh expressions.

Expression     Name
O(1)           constant
O(log n)       logarithmic
O(log² n)      log squared
O(n)           linear
O(n log n)     n log n
O(n²)          quadratic
O(n³)          cubic
O(2ⁿ)          exponential

The big-omega notation gives an asymptotic lower bound. A function f(n)

is of the order of g(n), written f(n) = Ω(g(n)), if there exist positive

constants C and n0 such that:

f(n) ≥ C*g(n) for all n ≥ n0

Example

Say, f(n) = 3*n + 2

3*n + 2 ≥ 3*n if n ≥ 1

f(n) ≥ C*g(n)

so f(n) is in Ω(n)

Here f(n) = 3*n + 2 satisfies the inequality 3*n + 2 ≥ 3*n with

C = 3 and n0 = 1.

The Θ notation gives an asymptotically tight bound. A function f(n) is

in Θ(g(n)) if there exist positive constants C1, C2 and n0 such that:

C1*g(n) ≤ f(n) ≤ C2*g(n) for all n ≥ n0

Example

Say, f(n) = 3*n + 2

3*n ≤ 3*n + 2 ≤ 4*n if n ≥ 2

C1*g(n) ≤ f(n) ≤ C2*g(n)

so f(n) is in Θ(n)

where C1 = 3, C2 = 4 and n0 = 2.

The definition of Θ(g(n)) requires that every member f(n) ∈ Θ(g(n)) be asymptotically nonnegative, that is, that f(n) be nonnegative whenever n is sufficiently large.

Example

Let us briefly justify this intuition by using the formal definition to

show that (1/2)n² − 3n = Θ(n²). To do so, we must determine

positive constants c1, c2, and n0 such that

c1*n² ≤ (1/2)n² − 3n ≤ c2*n² for all n ≥ n0.

Dividing through by n² yields c1 ≤ 1/2 − 3/n ≤ c2.

The right-hand inequality can be made to hold for any value of n ≥ 1

by choosing c2 ≥ 1/2. Likewise, the left-hand inequality can be made

to hold for any value of n ≥ 7 by choosing c1 ≤ 1/14. Thus, by

choosing c1 = 1/14, c2 = 1/2, and n0 = 7, we can verify that

(1/2)n² − 3n = Θ(n²).
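
These constants can also be checked numerically; this illustrative C sketch (not from the slides) tests the sandwich inequality with c1 = 1/14, c2 = 1/2 and n0 = 7, and the same loop works for the Θ(n) example above with C1 = 3, C2 = 4:

#include <stdio.h>

/* Spot-check c1*n^2 <= n^2/2 - 3n <= c2*n^2 for n >= n0,
   with c1 = 1/14, c2 = 1/2, n0 = 7. */
int main(void) {
    const double c1 = 1.0 / 14.0, c2 = 0.5;
    for (double n = 7; n <= 100000; n++) {
        double f = n * n / 2.0 - 3.0 * n;
        if (f < c1 * n * n || f > c2 * n * n) {
            printf("bound fails at n = %.0f\n", n);
            return 1;
        }
    }
    printf("(1/14)n^2 <= n^2/2 - 3n <= (1/2)n^2 holds for tested n >= 7\n");
    return 0;
}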

The asymptotic upper bound provided by O-notation may or may not be

asymptotically tight. The bound 2n² = O(n²) is asymptotically tight, but

the bound 2n = O(n²) is not. We use o-notation to denote an upper bound

that is not asymptotically tight. We formally define o(g(n)) (“little-oh

of g of n”) as the set

o(g(n)) = { f(n) : for any positive constant c > 0, there exists a

constant n0 > 0 such that 0 ≤ f(n) < c*g(n) for all n ≥ n0 }.

The definitions of O-notation and o-notation are similar. The main

difference is that in f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ c*g(n) holds

for some constant c > 0, but in f(n) = o(g(n)), the bound 0 ≤ f(n) < c*g(n)

holds for all constants c > 0. Intuitively, in o-notation, the function f(n)

becomes insignificant relative to g(n) as n approaches infinity; that is,

lim (n→∞) f(n)/g(n) = 0.

By analogy, ω-notation is to Ω-notation as o-notation is to O-notation.

We use ω-notation to denote a lower bound that is not asymptotically

tight. One way to define it is by

f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)).

Formally, however, we define ω(g(n)) (“little-omega of g of n”) as the set

ω(g(n)) = { f(n) : for any positive constant c > 0, there exists a

constant n0 > 0 such that 0 ≤ c*g(n) < f(n) for all n ≥ n0 }.

For example, n²/2 = ω(n), but n²/2 ≠ ω(n²). The relation f(n) = ω(g(n))

implies that

lim (n→∞) f(n)/g(n) = ∞,

if the limit exists. That is, f(n) becomes arbitrarily large relative to

g(n) as n approaches infinity.
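
The two limits can be watched numerically; this small C sketch (not in the original slides) prints the ratio 2n/n², which tends to 0 (so 2n = o(n²)), and the ratio (n²/2)/n, which grows without bound (so n²/2 = ω(n)):

#include <stdio.h>

/* Illustrate the limit definitions:
   2n/n^2 -> 0 as n -> infinity, so 2n = o(n^2);
   (n^2/2)/n -> infinity,        so n^2/2 = omega(n). */
int main(void) {
    for (double n = 10; n <= 1e6; n *= 10) {
        printf("n = %8.0f   2n/n^2 = %.6f   (n^2/2)/n = %10.0f\n",
               n, 2.0 * n / (n * n), (n * n / 2.0) / n);
    }
    return 0;
}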

Divide and Conquer:

Binary Search, Merge Sort, Quick Sort and their complexity.

Heap Sort and its complexity

Dynamic Programming:

Matrix Chain Multiplication, all-pairs shortest paths, single-source

shortest path.

Backtracking:

8-queens problem, graph coloring problem.

Greedy Method:

Knapsack problem, job sequencing with deadlines, minimum cost spanning

tree by Prim’s and Kruskal’s algorithms.