TRANSCRIPT
Week 12: Running Time and Performance
Most of the programs you have written in this class run in a few seconds or less.
Some kinds of programs can take much longer: chess algorithms (Deep Blue), routing and scheduling algorithms (the Traveling Salesperson Problem, Purdue classroom scheduling), medical imaging (massive amounts of data), and graphics processing (Pixar: Toy Story, The Incredibles).
You can time the program. But what if it takes 1000 years to run?
You can look at the program. But what are you looking for?
Does the input matter? Sorting 100,000 numbers must take longer than sorting 10 numbers, right?
An algorithm is a sequence of steps used to solve a problem. In CS, we take an algorithm and convert it into code -- Java in this class.
The study of algorithms is intended to:
Make sure that the code you write gets the right answer
Make sure that the code runs as quickly as possible
Make sure that the code uses a reasonable amount of memory
You want to find the definition of the word “rheostat” in the dictionary. The Oxford English Dictionary has 301,100 main entries.
What if you search for the word “rheostat” by starting at the beginning and checking each word to see if it is the one we are interested in? Given a random word, you’ll have to check about 150,550 entries before you find it.
Alternatively, you could jump to the correct letter of the alphabet, saving tons of time. Searching for “zither” takes forever otherwise.
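The two dictionary strategies can be sketched in code. This is a minimal illustration (the class and method names are mine, not from the lecture): linear search checks every entry in order, while binary search on a sorted array, a sharper version of "jump to the correct letter," halves the remaining range on every step.

```java
public class DictionarySearch {
    // Linear search: check each entry in turn. On average about n/2 checks.
    static int linearSearch(String[] words, String target) {
        for (int i = 0; i < words.length; i++)
            if (words[i].equals(target))
                return i;
        return -1; // not found
    }

    // Binary search on a sorted array: halve the range each step, about log2(n) checks.
    static int binarySearch(String[] words, String target) {
        int lo = 0, hi = words.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            int cmp = target.compareTo(words[mid]);
            if (cmp == 0)
                return mid;
            else if (cmp < 0)
                hi = mid - 1;   // target sorts before the middle entry
            else
                lo = mid + 1;   // target sorts after the middle entry
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        // A toy dictionary, already in alphabetical order.
        String[] entries = { "aardvark", "banjo", "rheostat", "zither" };
        System.out.println(linearSearch(entries, "rheostat")); // 2
        System.out.println(binarySearch(entries, "zither"));   // 3
    }
}
```

For 301,100 entries, linear search needs about 150,550 checks on average, while binary search needs only about 18.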
Write a program to sum up the numbers between 1 and n. Easy, right?
Recall that a faster way is to use the summation equation:

1 + 2 + ... + n = n(n + 1) / 2
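Both approaches can be written out as follows (a sketch; the class and method names are mine). The loop does n additions, while the formula does the same job in a fixed handful of operations no matter how large n is.

```java
public class SumDemo {
    // O(n): add the numbers one at a time.
    static long sumLoop(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++)
            sum += i;
        return sum;
    }

    // O(1): the closed-form summation equation n(n + 1)/2.
    static long sumFormula(long n) {
        return n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(100));    // 5050
        System.out.println(sumFormula(100)); // 5050
    }
}
```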
Memory usage is a problem too. If you run out of memory, your program can crash.
Memory usage can have serious performance consequences too.
Remember, there are multiple levels of memory on a computer. Each next level is on the order of 100 times larger and 100 times slower than the one before:
Cache: actually on the CPU; fast and expensive
RAM: primary memory for a desktop computer; pretty fast and relatively expensive
Hard drive: secondary memory for a desktop computer; slow and cheap
If you can do a lot of number crunching without leaving cache, that will be very fast
If you have to fetch data from RAM, that will slow things down
If you have to read and write data to the hard drive (unavoidable with large pieces of data like digital movies), you will slow things down a lot
Memory can be easier to estimate than running time. Depending on your input, you will allocate a certain number of objects, arrays, and primitive data types, and it is possible to count the storage for each item allocated.
A reference to an object or an array costs an additional 4 bytes (on a typical 32-bit JVM) on top of the size of the object.
A graph is a set of nodes and edges. It’s a really simple way to represent almost any kind of relationship.
The numbers on the edges could be distances, costs, time, whatever.

[Figure: an example graph with nodes A, B, C, D, and E joined by edges with weights 2, 3, 4, 5, 6, 8, and 16.]
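A weighted graph like the one in the figure can be represented in code many ways; one simple sketch (my own, with hypothetical names) is an adjacency map from each node to its neighbors and edge weights:

```java
import java.util.HashMap;
import java.util.Map;

public class WeightedGraph {
    // Adjacency map: node -> (neighbor -> edge weight).
    private final Map<String, Map<String, Integer>> adj = new HashMap<>();

    // Undirected edge: record the weight in both directions.
    void addEdge(String a, String b, int weight) {
        adj.computeIfAbsent(a, k -> new HashMap<>()).put(b, weight);
        adj.computeIfAbsent(b, k -> new HashMap<>()).put(a, weight);
    }

    int weight(String a, String b) {
        return adj.get(a).get(b);
    }

    public static void main(String[] args) {
        WeightedGraph g = new WeightedGraph();
        g.addEdge("A", "B", 5);
        g.addEdge("B", "C", 3);
        System.out.println(g.weight("C", "B")); // 3
    }
}
```

Other common representations include an n x n matrix of edge weights, which the Traveling Salesperson examples below implicitly use.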
There’s a classic CS problem called the Traveling Salesperson Problem (TSP)
Given a graph, find the shortest path that visits every node and comes back to the starting point
Like a lazy traveling salesperson who wants to visit all possible clients and then return home as soon as possible
Strategies:
Always visit the nearest neighbor next
Randomized approaches
Try every possible combination
How can you tell what’s the best solution?
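The first strategy can be sketched as code (a sketch of my own, not from the lecture), given a symmetric matrix of distances between cities. It is greedy: from the current city it always jumps to the closest unvisited city, then returns home at the end. As the next slide warns, this is fast but can miss the best tour.

```java
import java.util.ArrayList;
import java.util.List;

public class NearestNeighborTour {
    // dist[i][j] is the distance between cities i and j (complete, symmetric graph).
    static List<Integer> tour(double[][] dist) {
        int n = dist.length;
        boolean[] visited = new boolean[n];
        List<Integer> path = new ArrayList<>();
        int current = 0;          // start at city 0
        visited[0] = true;
        path.add(0);
        for (int step = 1; step < n; step++) {
            int next = -1;
            for (int j = 0; j < n; j++)   // pick the closest unvisited city
                if (!visited[j] && (next == -1 || dist[current][j] < dist[current][next]))
                    next = j;
            visited[next] = true;
            path.add(next);
            current = next;
        }
        path.add(0);              // return home
        return path;
    }

    public static void main(String[] args) {
        double[][] d = {
            { 0, 1, 4, 3 },
            { 1, 0, 2, 5 },
            { 4, 2, 0, 1 },
            { 3, 5, 1, 0 },
        };
        System.out.println(tour(d)); // [0, 1, 2, 3, 0]
    }
}
```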
We are tempted to always take the closest neighbor, but there are pitfalls
In a completely connected graph, we can try any sequence of nodes. If there are n nodes, there are (n – 1)! tours.
For 30 cities, 29! = 8,841,761,993,739,701,954,543,616,000,000. If we can check 1,000,000,000 tours in one second, it will only take about 20,000 times the age of the universe to check them all.
We will (eventually) get the best answer!
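The arithmetic is easy to check with `java.math.BigInteger`, since 29! overflows even a `long` (the class name here is mine):

```java
import java.math.BigInteger;

public class TourCount {
    static BigInteger factorial(int n) {
        BigInteger f = BigInteger.ONE;
        for (int i = 2; i <= n; i++)
            f = f.multiply(BigInteger.valueOf(i));
        return f;
    }

    public static void main(String[] args) {
        BigInteger tours = factorial(29);   // (n - 1)! tours for n = 30 cities
        System.out.println(tours);          // 8841761993739701954543616000000
        // Seconds needed at 1,000,000,000 tours per second:
        BigInteger seconds = tours.divide(BigInteger.valueOf(1_000_000_000L));
        System.out.println(seconds);
    }
}
```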
This and similar problems are useful to solve for people who need to plan paths: UPS, military commanders, pizza delivery people, and Internet routing algorithms.
No one knows a solution that always works in a reasonable amount of time. We have been working on it for decades.
The problem is so hard that you can win money if you solve it. The Clay Mathematics Institute has offered a $1,000,000 prize.
You can do one of two things to collect it: find an efficient solution to the general problem, or prove that it is impossible to find an efficient solution in some cases.
How do we measure algorithm running time and memory usage in general?
We want to take into account the size of the problem (TSP with 4 cities is certainly easier than TSP with 4,000 cities)
Some algorithms might work well in certain cases and badly in others
How do we come up with a measuring scheme?
Ideally, we want to classify both problems and algorithms
We want to figure out which problems take a really long time to solve
We want to figure out how we can compare one algorithm to another
We want to prove that some algorithm is the best possible algorithm to solve a particular problem
Computer scientists use Big-Oh notation to give an indication of how much time (or memory) a program will take. Big-Oh notation is:
Asymptotic: it focuses on the behavior as the size of the input gets large
Worst-case: it focuses on the absolute longest the program can take
Add up the operations done by the following code:

int sum = 0;
for (int i = 0; i < n; i++)
    sum += i;
System.out.println("Sum: " + sum);

Initialization: 1 operation
Loop: 1 initialization + n checks + n increments + n additions to sum = 3n + 1
Output: 1 operation
Total: 3n + 3
We could express the time taken by the code on the previous slide as a function of n: f(n) = 3n + 3. This approach has a number of problems:
We assumed that each line takes 1 time unit to accomplish, but the output line takes much longer than an integer operation.
This program is 4 lines long; a longer program is going to have a very messy running time function.
We can get nitpicky about details: does sum += i; take one operation or two if we count the addition and the store separately?
In short, this way of getting a running time function is almost useless: it cannot be used to give us an idea of how long the program really runs in seconds, and it is complex and unwieldy.
The most important thing we learned from the analysis of the code is that the growth of the function is linear (3n + 3). A general description of how the running time grows as the input grows would be useful.
Enter Big-Oh notation. Big-Oh simplifies a complicated running time function into a simple statement about its worst-case growth rate:
All constant coefficients are ignored
All low-order terms are ignored
3n + 3 is O(n)
Big-Oh is a statement that a particular running time is no worse than some function, with appropriate constants.
147n³ + 2n² + 5n + 12083 is O(n³)
15n² + 6n + 7 log n + 145 is O(n²)
log n + 87829 is O(log n)
Note: in CS, we use log₂ unless stated otherwise.
How long does it take to do multiplication by hand?

    123
  x 456
  -----
    738
   615
  492
  -----
  56088

Let’s assume that the length of the numbers is n digits: (n multiplications + n carries) x n digits + (n + 1 digits) x n additions = 3n² + n
Running time: O(n²)
How do we find the largest element in an array?

int largest = array[0];
for (int i = 1; i < array.length; i++)
    if (array[i] > largest)
        largest = array[i];
System.out.println("Largest: " + largest);

Running time: O(n), if n is the length of the array.
What if the array is sorted in ascending order?

System.out.println("Largest: " + array[array.length - 1]);

Running time: O(1)
Given two n x n matrices a and b, the code to multiply them is:

double[][] c = new double[n][n];
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        for (int k = 0; k < n; k++)
            c[i][j] += a[i][k] * b[k][j];

Running time: O(n³)
Is there a faster way to multiply matrices? Yes, but it’s complicated and has other problems.
Here is some code that sorts an array in ascending order. What is its running time?

for (int i = 0; i < array.length; i++)
    for (int j = 0; j < array.length - 1; j++)
        if (array[j] > array[j + 1]) {
            int temp = array[j];
            array[j] = array[j + 1];
            array[j + 1] = temp;
        }

Running time: O(n²)
There is nothing better than constant time (O(1)).
Logarithmic time (O(log n)) means that the problem can become much larger and only take a little longer.
Linear time (O(n)) means that time grows with the problem.
Quadratic time (O(n²)) means that expanding the problem size significantly could make it impractical.
Cubic time (O(n³)) is about the reasonable maximum if we expect the problem to grow.
Exponential and factorial time mean that we cannot solve anything but the most trivial problem instances.
Looking at the code and determining the order of complexity is one technique
Sometimes this only captures worst-case behavior
The code can be complicated and difficult to understand
Another possibility is running experiments to see how the running time changes as the problem size increases
The doubling hypothesis states that it is often possible to determine the order of the running time of a program by progressively doubling the input size and measuring the changes in time
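The procedure can be sketched as follows, here timing the find-the-largest loop from earlier (a rough sketch with my own class name; a careful experiment would warm up the JVM and average many trials):

```java
import java.util.Random;

public class DoublingTest {
    // The code being timed: find the largest element in an unsorted array, O(n).
    static int largest(int[] array) {
        int largest = array[0];
        for (int i = 1; i < array.length; i++)
            if (array[i] > largest)
                largest = array[i];
        return largest;
    }

    public static void main(String[] args) {
        Random random = new Random();
        long previous = 0;
        for (int size = 1024; size <= 8192; size *= 2) {
            int[] array = random.ints(size).toArray();
            long start = System.nanoTime();
            largest(array);
            long elapsed = System.nanoTime() - start;
            // For O(n) code, the ratio between successive rows should approach 2.
            System.out.printf("%6d %10d ns %s%n", size, elapsed,
                previous == 0 ? "-" : String.format("%.2f", (double) elapsed / previous));
            previous = elapsed;
        }
    }
}
```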
Let’s use the doubling hypothesis to test the running time of the program that finds the largest element in an unsorted array:

Size  Time         Ratio
1024  0.035829 ms  -
2048  0.071727 ms  2.00
4096  0.188013 ms  2.62
8192  0.278877 ms  1.48

Looks like a decent indication of O(n).
Let’s check matrix multiplication:

Size  Time             Ratio
128   39.786341 ms     -
256   229.902823 ms    5.78
512   2945.396691 ms   12.81
1024  25957.367417 ms  8.81

A factor of 8 is expected for O(n³). Not too bad.
Let’s try bubble sort:

Size  Time           Ratio
1024  8.648306 ms    -
2048  28.331604 ms   3.28
4096  108.931582 ms  3.84
8192  404.451683 ms  3.71

Success! O(n²) is supposed to give a ratio of 4. We should have done more trials to do this scientifically.