Midterm 1 Review Solutions

MIDTERM 1 REVIEW SHEET 1
COMPUTER SCIENCE 61A
September 15, 2012

All relevant midterm material can be found on the course website under Announcements, in the link to Midterm 1.

Disclaimer: This review session is not meant to be exhaustive. While the material covered during the session is important, there might be material that we can't get to that is equally important. For the exhaustive list of possible material for the exam, refer to the website.

0.1 Warmup

1. Figure out what this code is supposed to do and correct it:

>>> def nondescriptive_name(n):
...     k = 1
...     while k <= n:
...         if n % k == 0:
...             print(k, n//k)
...     k += 1
...     return total

Solution: Move k += 1 inside the while loop, and delete the return total line, since total is never defined; the function's job is to print the factor pairs of n. Also, this function will print out redundant factors, so you only have to go up to the square root of n.

2. The factorial of a number n is the product of all positive integers from n down to 1. That is, 7! is 7 · 6 · 5 · 4 · 3 · 2 · 1. Write prime_factorial, which multiplies only the prime numbers up to n. That is, prime_factorial(7) is 7 · 5 · 3 · 2. Assume you are given the function is_prime.

def prime_factorial(n):



Solution:

k, total = 2, 1
while k <= n:
    if is_prime(k):
        total = total * k
    k = k + 1
return total

1 What Would Python Do?

1. Assume accumulate from lecture has been defined. What would Python print at the interpreter?

>>> from operator import mul
>>> accumulate(mul, 1, 4, lambda x: pow(x, 2))

    Solution: Returned: 576
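The definition of accumulate is not reproduced on this sheet; here is a minimal sketch consistent with the answer above (the lecture version may differ in details):

def accumulate(combiner, base, n, term):
    # Combine term(1), term(2), ..., term(n) with combiner, starting from base.
    total, k = base, 1
    while k <= n:
        total = combiner(total, term(k))
        k = k + 1
    return total

With this definition, accumulate(mul, 1, 4, lambda x: pow(x, 2)) computes 1 * 1 * 4 * 9 * 16 = 576.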

2. Recall the compose function from lecture, reproduced below. Assume that square and cube are given, and behave as expected.

>>> def compose(f, g):
...     def h(x):
...         return f(g(x))
...     return h

    >>> compose(compose(print, square), cube)(3)

Solution: Printed: 729 Returned: None. (cube(3) is 27, square(27) is 729, and print returns None.)

3. It's a lambda free-for-all!

>>> tic = lambda num: lambda y: num + y
>>> tac, toe = 10, 20
>>> tic(tac)(toe)

Solution: 30. (tic(tac) returns a function that adds num = 10 to its argument; applied to toe = 20, it returns 30.)


    2 Higher Order Functions

    1. Try to figure out what Python would print at the interpreter:

>>> f = lambda x: lambda y: x + y
>>> f

Solution: <function <lambda> ...>

    >>> f(4)

Solution: <function <lambda> ...>

    >>> f(4)(5)

    Solution: 9

    >>> f(4)(5)(6)

Solution: Error: TypeError: 'int' object is not callable, since f(4)(5) evaluates to 9, and 9 cannot be called.

    2. Define a function g such that the following returns 3:

>>> g(3)()
3

    def g(n):

    Solution:

g = lambda x: lambda: 3

(Note: lambda x: lambda: x also works, and generalizes to any argument.)

3. Smoothing a function is an important concept in signal processing. If f is a function and dx is some small number, then the smoothed version of f is the function whose value at a point x is the average of f(x-dx), f(x), and f(x+dx). Write a function smooth that takes as input a function f and a number dx and returns a function that computes the smoothed f.


def smooth(f, dx):

    Solution:

def smoothed(x):
    return (f(x - dx) + f(x) + f(x + dx)) / 3
return smoothed
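As a quick check of the solution (the particular call here is just an illustration): smoothing f(x) = x * x with dx = 0.1 at x = 2 averages f(1.9), f(2.0), and f(2.1):

>>> smoothed_square = smooth(lambda x: x * x, 0.1)
>>> smoothed_square(2)    # (3.61 + 4.0 + 4.41) / 3, roughly 4.0067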

    3 Environment Diagrams

1. Draw the environment diagram:

>>> p, q = 3, 6
>>> def foo(f, p, q):
...     return f()
>>> foo(lambda: pow(p, q), 2, 4)
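Key points the diagram should show: the lambda is created in the global frame, so its parent frame is global. When foo calls f(), the body pow(p, q) looks up p and q in the global frame (not in foo's frame, where p is 2 and q is 4), so the call returns pow(3, 6) = 729.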


2. Draw the remaining environment diagram after the last call and fill in the blanks.

>>> from operator import add, sub
>>> def burrito(x, y):
...     return add(x, y)
...
>>> sandwich = (lambda f: lambda x, y: f(x, y))(add)
>>> add = sub
>>> burrito(3, 2)

    >>> sandwich(3, 2)
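Filling in the blanks: burrito(3, 2) returns 1, because burrito's body looks up add in the global frame at call time, where add has been rebound to sub. sandwich(3, 2) returns 5, because f was bound to the original add function when sandwich was created, and that binding lives on in the closure.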


4 Newton's Method and Iterative Improvement

1. In this problem, rather than finding zeros of functions, we will attempt to find the minima of functions. As we will see, both Newton's method and a different iterative improvement method can be used to solve this problem. You can assume that your inputs are functions with exactly one minimum and no maximum (known as convex functions).

Before we discuss a specific iterative improvement algorithm to find a local minimum, let's first consider the problem in general. What should the terminating condition (i.e. the done function) be for this minimization problem?

Solution: |f(x_new) − f(x_old)| ≈ 0; that is, terminate when the value of f stops changing between successive guesses.

2. You learned in calculus that the minima of a function occur at zeros of the first derivative. Hey! We already know how to find derivatives and zeros! Here are the relevant functions:

def iter_improve(update, done, guess=1, max_updates=1000):
    """Iteratively improve guess with update until
    done returns a true value."""
    k = 0
    while not done(guess) and k < max_updates:
        guess = update(guess)
        k = k + 1
    return guess

def approx_derivative(f, x, delta=1e-5):
    """Return an approximation to the derivative of f at x."""
    df = f(x + delta) - f(x)
    return df/delta

def find_root(f, guess=1):
    """Return a guess of a zero of the function f, near guess."""
    return iter_improve(newton_update(f), lambda x: f(x) == 0, guess)
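find_root relies on newton_update from lecture, which is not reproduced on this sheet; here is a sketch consistent with its use here (the lecture version may differ in details):

def newton_update(f):
    # Newton's method: follow the tangent line of f at x to where it crosses zero.
    def update(x):
        return x - f(x) / approx_derivative(f, x)
    return update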

    Use them to find the minimum of f by solving for the root of its derivative!

def find_minimum(f, guess=1):
    """Return a guess of a minimum of the function f, near guess."""


Solution:

df = lambda x: approx_derivative(f, x)
return iter_improve(newton_update(df), lambda x: df(x) == 0, guess)

Note: the derivative must be wrapped as a function of one argument before it is passed to newton_update, and the done test checks the derivative, not f itself.

3. There is a very commonly used iterative improvement algorithm for finding the closest local minimum of a function, known as gradient descent. The idea behind gradient descent is to take a tiny step at each point in the direction of the minimum. Both the direction and the size of this step are given by a (small) constant factor of the derivative. [Figure omitted: an illustration of gradient descent stepping downhill toward the minimum.]

Since this is not a math class, we'll make your life easier by giving you the update equation, for some small α:

x ← x − α · f′(x)

Given all of this, implement the gradient descent algorithm by filling in the skeleton below. You may use the functions given in (2).

Hint: To check for termination, your current guess must remember the old value you were just at. Therefore, your guess should be a tuple of 2 numbers, and you should update it like (x_old, x_new) → (x_new, x_next).

def gradient_done(f, size=pow(10, -6)):
    """Return a function to evaluate whether our answer is
    good enough, that is, when the change is less than size.
    """

Solution:

def done(guess):
    before = guess[0]
    if before is None:
        return False
    after = guess[1]
    return abs(f(after) - f(before)) < size
return done

def gradient_step(f, alpha=0.05):
    """Return an update function for f using the
    gradient descent method."""

Solution:

def update(x):
    current = x[1]
    return (current, current - alpha * approx_derivative(f, current))
return update

def gradient_descent(f, guess=1):
    """Performs gradient descent on the function f,
    starting at guess."""
    guess = (None, guess)

Solution:

return iter_improve(gradient_step(f), gradient_done(f), guess)[1]
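As a sanity check (assuming the pieces above are assembled into one program): for the convex function f(x) = (x - 3)**2, whose minimum is at x = 3, gradient descent should land very close to 3:

>>> gradient_descent(lambda x: (x - 3) ** 2)
# returns a value within a few thousandths of 3, with the default alpha and size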

    5 Data Abstraction

    1. Consider the following data abstraction for 2-element vectors:

def make_vect(xcor, ycor):
    return (xcor, ycor)

def xcor(vect):
    return vect[0]

def ycor(vect):
    return vect[1]

Change this definition of vector_sum so that it does not violate the abstraction barrier.

def vector_sum(vect_tuple):
    xsum, ysum, i = 0, 0, 0
    while i < len(vect_tuple):  # len returns the length of a tuple
        xsum = vect_tuple[i][0] + xsum
        ysum = vect_tuple[i][1] + ysum
        i += 1
    return xsum, ysum

Solution:

def vect_sum(vect_tuple):
    xsum = 0
    ysum = 0
    i = 0
    while i < len(vect_tuple):
        xsum = xcor(vect_tuple[i]) + xsum
        ysum = ycor(vect_tuple[i]) + ysum
        i += 1
    return make_vect(xsum, ysum)

2. Suppose we redefine make_vect as follows:

def make_vect(xcor, ycor):
    vect = lambda f: f(xcor, ycor)
    return vect

    What functions would we need to change? Go ahead and redefine them below.

Solution:

def xcor(vect):
    return vect(lambda x, y: x)

def ycor(vect):
    return vect(lambda x, y: y)
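A quick check that the redefined selectors behave like the originals:

>>> v = make_vect(3, 4)
>>> xcor(v)    # v(lambda x, y: x) applies the selector to (3, 4)
3
>>> ycor(v)
4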

CS61A Fall 2012: John DeNero, with Akihiro Matsukawa, Hamilton Nguyen, Phillip Carpenter, Steven Tang, Varun Pai, Joy Jeng, Keegan Mann, Allen Nguyen, Stephen Martinis, Andrew Nguyen, Albert Wu, Julia Oh, Shu Zhong
