    Digital Image Processing Question & Answers


    1. Explain about gray level interpolation.

The distortion correction equations yield noninteger values for x' and y'. Because the distorted image g is digital, its pixel values are defined only at integer coordinates. Thus, using noninteger values for x' and y' causes a mapping into locations of g for which no gray levels are defined. Inferring what the gray-level values at those locations should be, based only on the pixel values at integer coordinate locations, then becomes necessary. The technique used to accomplish this is called gray-level interpolation.

The simplest scheme for gray-level interpolation is based on a nearest neighbor approach. This method, also called zero-order interpolation, is illustrated in Fig. 6.1. This figure shows

(A) The mapping of integer (x, y) coordinates into fractional coordinates (x', y') by means of the following equations

x' = c1x + c2y + c3xy + c4

and

y' = c5x + c6y + c7xy + c8

    (B) The selection of the closest integer coordinate neighbor to (x', y');

    and

    (C) The assignment of the gray level of this nearest neighbor to the pixel located at (x, y).

    Fig. 6.1 Gray-level interpolation based on the nearest neighbor concept.

Although nearest neighbor interpolation is simple to implement, this method often has the drawback of producing undesirable artifacts, such as distortion of straight edges in images of high resolution. Smoother results can be obtained by using more sophisticated techniques, such as cubic convolution interpolation, which fits a surface of the sin(z)/z type through a much larger number of neighbors (say, 16) in order to obtain a smooth estimate of the gray level at any desired point. Typical areas in which smoother approximations generally are required include 3-D graphics and medical imaging. The price paid for smoother approximations is additional computational burden. For general-purpose image processing, a bilinear interpolation approach that uses the gray levels of the four nearest neighbors usually is adequate. This approach is straightforward. Because the gray level of each of the four integral nearest neighbors of a nonintegral pair of coordinates (x', y') is known, the gray-level value at these coordinates, denoted v(x', y'), can be interpolated from the values of its neighbors by using the relationship

v(x', y') = ax' + by' + cx'y' + d

where the four coefficients are easily determined from the four equations in four unknowns that can be written using the four known neighbors of (x', y'). When these coefficients have been determined, v(x', y') is computed and this value is assigned to the location in f(x, y) that yielded the spatial mapping into location (x', y'). It is easy to visualize this procedure with the aid of Fig. 6.1. The exception is that, instead of using the gray-level value of the nearest neighbor to (x', y'), we actually interpolate a value at location (x', y') and use this value for the gray-level assignment at (x, y).
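As an illustration, here is a minimal NumPy sketch of this bilinear step; the array name g, the fractional coordinates xp and yp, and the helper function itself are illustrative assumptions rather than anything defined in the text.

import numpy as np

def bilinear(g, xp, yp):
    # Integer neighbors surrounding the fractional location (xp, yp).
    x0, y0 = int(np.floor(xp)), int(np.floor(yp))
    x1, y1 = min(x0 + 1, g.shape[0] - 1), min(y0 + 1, g.shape[1] - 1)
    # Fractional offsets inside the unit cell.
    dx, dy = xp - x0, yp - y0
    # v(x', y') = ax' + by' + cx'y' + d, evaluated implicitly by
    # weighting the four known integer-coordinate neighbors.
    return ((1 - dx) * (1 - dy) * g[x0, y0] +
            dx * (1 - dy) * g[x1, y0] +
            (1 - dx) * dy * g[x0, y1] +
            dx * dy * g[x1, y1])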

2. Explain about the Wiener filter used for image restoration.

The inverse filtering approach makes no explicit provision for handling noise. The Wiener filtering approach, in contrast, incorporates both the degradation function and statistical characteristics of noise into the restoration process. The method is founded on considering images and noise as random processes, and the objective is to find an estimate f̂ of the uncorrupted image f such that the mean square error between them is minimized. This error measure is given by

e^2 = E{ (f - f̂)^2 }

where E{·} is the expected value of the argument. It is assumed that the noise and the image are uncorrelated; that one or the other has zero mean; and that the gray levels in the estimate are a linear function of the levels in the degraded image. Based on these conditions, the minimum of the error function is given in the frequency domain by the expression

F̂(u, v) = [ H*(u, v) Sf(u, v) / ( Sf(u, v) |H(u, v)|^2 + Sη(u, v) ) ] G(u, v)

= [ (1 / H(u, v)) |H(u, v)|^2 / ( |H(u, v)|^2 + Sη(u, v) / Sf(u, v) ) ] G(u, v)

where we used the fact that the product of a complex quantity with its conjugate is equal to the magnitude of the complex quantity squared. This result is known as the Wiener filter, after N. Wiener [1942], who first proposed the concept in the year shown. The filter, which consists of the terms inside the brackets, also is commonly referred to as the minimum mean square error filter or the least square error filter. The Wiener filter does not have the same problem as the inverse filter with zeros in the degradation function, unless both H(u, v) and Sη(u, v) are zero for the same value(s) of u and v.

The terms in the above equation are as follows:

H(u, v) = degradation function

H*(u, v) = complex conjugate of H(u, v)

|H(u, v)|^2 = H*(u, v) H(u, v)

Sη(u, v) = |N(u, v)|^2 = power spectrum of the noise

Sf(u, v) = |F(u, v)|^2 = power spectrum of the undegraded image.

As before, H(u, v) is the transform of the degradation function and G(u, v) is the transform of the degraded image. The restored image in the spatial domain is given by the inverse Fourier transform of the frequency-domain estimate F̂(u, v). Note that if the noise is zero, then the noise power spectrum vanishes and the Wiener filter reduces to the inverse filter.

When we are dealing with spectrally white noise, the spectrum |N(u, v)|^2 is a constant, which simplifies things considerably. However, the power spectrum of the undegraded image seldom is known. An approach used frequently when these quantities are not known or cannot be estimated is to approximate the equation as

F̂(u, v) = [ (1 / H(u, v)) |H(u, v)|^2 / ( |H(u, v)|^2 + K ) ] G(u, v)

    where K is a specified constant.
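A minimal frequency-domain sketch of this approximation follows, assuming NumPy arrays g (the degraded image) and H (the degradation function sampled on the same grid); the function name and the default value of K are illustrative.

import numpy as np

def wiener_restore(g, H, K=0.01):
    # G(u, v): transform of the degraded image.
    G = np.fft.fft2(g)
    H2 = np.abs(H) ** 2
    # F^(u, v) = [ (1/H) |H|^2 / (|H|^2 + K) ] G = [ H* / (|H|^2 + K) ] G
    F_hat = (np.conj(H) / (H2 + K)) * G
    # The restored image is the inverse transform of F^(u, v).
    return np.real(np.fft.ifft2(F_hat))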

    3. Explain a Model of the Image Degradation/Restoration Process.

As Fig. 6.3 shows, the degradation process is modeled as a degradation function that, together with an additive noise term, operates on an input image f(x, y) to produce a degraded image g(x, y). Given g(x, y), some knowledge about the degradation function H, and some knowledge about the additive noise term η(x, y), the objective of restoration is to obtain an estimate f̂(x, y) of the original image. The estimate should be as close as possible to the original input image and, in general, the more we know about H and η, the closer f̂(x, y) will be to f(x, y).

    The degraded image is given in the spatial domain by

g(x, y) = h(x, y) * f(x, y) + η(x, y)

where h(x, y) is the spatial representation of the degradation function and the symbol * indicates convolution. Convolution in the spatial domain is equivalent to multiplication in the frequency domain, hence

    G (u, v) = H (u, v) F (u, v) + N (u, v)

where the terms in capital letters are the Fourier transforms of the corresponding terms in the above equation.

Fig. 6.3 Model of the image degradation/restoration process.
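A minimal sketch of this degradation model follows, assuming a NumPy image f, a blur kernel h already zero-padded to the image size, and additive Gaussian noise with a chosen standard deviation; all of these names and choices are illustrative.

import numpy as np

def degrade(f, h, noise_sigma=5.0):
    # G(u, v) = H(u, v) F(u, v): convolution in the spatial domain
    # implemented as multiplication in the frequency domain.
    G = np.fft.fft2(h) * np.fft.fft2(f)
    blurred = np.real(np.fft.ifft2(G))
    # Add the noise term eta(x, y).
    eta = np.random.normal(0.0, noise_sigma, f.shape)
    return blurred + eta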


4. Explain about the restoration filters used when the image degradation is due to noise only.

    If the degradation present in an image is only due to noise, then,

g(x, y) = f(x, y) + η(x, y)

    G (u, v) = F (u, v) + N (u, v)

    The restoration filters used in this case are,

1. Mean filters
2. Order-statistic filters and

    3. Adaptive filters

Also read the answers to questions 5, 6 and 7.

    5. Explain Mean filters.

    There are four types of mean filters. They are

    (i) Arithmetic mean filter

This is the simplest of the mean filters. Let Sxy represent the set of coordinates in a rectangular subimage window of size m x n, centered at point (x, y). The arithmetic mean filtering process computes the average value of the corrupted image g(x, y) in the area defined by Sxy. The value of the restored image f̂ at any point (x, y) is simply the arithmetic mean computed using the pixels in the region defined by Sxy. In other words,

f̂(x, y) = (1 / mn) Σ g(s, t), where the sum is taken over all (s, t) in Sxy.

This operation can be implemented using a convolution mask in which all coefficients have value 1/mn.
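A minimal sketch of this averaging follows, assuming SciPy is available; the function name and default window size are illustrative.

import numpy as np
from scipy.ndimage import uniform_filter

def arithmetic_mean(g, m=3, n=3):
    # Convolution with a mask whose coefficients are all 1/(m*n):
    # each output pixel is the average of g over the m x n window S_xy.
    return uniform_filter(g.astype(float), size=(m, n))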

    (ii) Geometric mean filter

    An image restored using a geometric mean filter is given by the expression

f̂(x, y) = [ Π g(s, t) ]^(1 / mn), where the product is taken over all (s, t) in Sxy.

Here, each restored pixel is given by the product of the pixels in the subimage window, raised to the power 1/mn. A geometric mean filter achieves smoothing comparable to the arithmetic mean filter, but it tends to lose less image detail in the process.

    (iii) Harmonic mean filter

The harmonic mean filtering operation is given by the expression

f̂(x, y) = mn / Σ ( 1 / g(s, t) ), where the sum is taken over all (s, t) in Sxy.

The harmonic mean filter works well for salt noise, but fails for pepper noise. It also does well with other types of noise, such as Gaussian noise.

(iv) Contraharmonic mean filter

The contraharmonic mean filtering operation yields a restored image based on the expression

f̂(x, y) = Σ g(s, t)^(Q+1) / Σ g(s, t)^Q, with both sums taken over all (s, t) in Sxy,

where Q is called the order of the filter. This filter is well suited for reducing or virtually eliminating the effects of salt-and-pepper noise. For positive values of Q, the filter eliminates pepper noise. For negative values of Q it eliminates salt noise. It cannot do both simultaneously. Note that the contraharmonic filter reduces to the arithmetic mean filter if Q = 0, and to the harmonic mean filter if Q = -1.
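A minimal sketch of the contraharmonic filter follows, assuming SciPy is available; the small eps guard against division by zero is an illustrative addition, not part of the definition above.

import numpy as np
from scipy.ndimage import uniform_filter

def contraharmonic_mean(g, m=3, n=3, Q=1.5, eps=1e-12):
    g = g.astype(float)
    # Windowed sums of g^(Q+1) and g^Q over S_xy (uniform_filter returns
    # the window mean; the common 1/(m*n) factor cancels in the ratio).
    num = uniform_filter(np.power(g + eps, Q + 1), size=(m, n))
    den = uniform_filter(np.power(g + eps, Q), size=(m, n))
    # Q > 0 removes pepper noise, Q < 0 removes salt noise;
    # Q = 0 gives the arithmetic mean, Q = -1 the harmonic mean.
    return num / (den + eps)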

    6. Explain the Order-Statistic Filters.

    There are four types of Order-Statistic filters. They are

    (i) Median filter


The best-known order-statistics filter is the median filter, which, as its name implies, replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel:

f̂(x, y) = median{ g(s, t) }, taken over all (s, t) in Sxy.

The original value of the pixel is included in the computation of the median. Median filters are quite popular because, for certain types of random noise, they provide excellent noise-reduction capabilities, with considerably less blurring than linear smoothing filters of similar size. Median filters are particularly effective in the presence of both bipolar and unipolar impulse noise.

    (ii) Max and min filters

Although the median filter is by far the order-statistics filter most used in image processing, it is by no means the only one. The median represents the 50th percentile of a ranked set of numbers, but the reader will recall from basic statistics that ranking lends itself to many other possibilities. For example, using the 100th percentile results in the so-called max filter, given by

f̂(x, y) = max{ g(s, t) }, taken over all (s, t) in Sxy.

This filter is useful for finding the brightest points in an image. Also, because pepper noise has very low values, it is reduced by this filter as a result of the max selection process in the subimage area Sxy.

The 0th percentile filter is the min filter:

f̂(x, y) = min{ g(s, t) }, taken over all (s, t) in Sxy.

This filter is useful for finding the darkest points in an image. Also, it reduces salt noise as a result of the min operation.

    (iii) Midpoint filter

The midpoint filter simply computes the midpoint between the maximum and minimum values in the area encompassed by the filter:

f̂(x, y) = (1/2) [ max{ g(s, t) } + min{ g(s, t) } ], with the max and min taken over Sxy.

Note that this filter combines order statistics and averaging. It works best for randomly distributed noise, such as Gaussian or uniform noise.

(iv) Alpha-trimmed mean filter

It is a filter formed by deleting the d/2 lowest and the d/2 highest gray-level values of g(s, t) in the neighborhood Sxy. Let gr(s, t) represent the remaining mn - d pixels. A filter formed by averaging these remaining pixels is called an alpha-trimmed mean filter:

f̂(x, y) = (1 / (mn - d)) Σ gr(s, t), where the sum is taken over the remaining pixels in Sxy,

where the value of d can range from 0 to mn - 1. When d = 0, the alpha-trimmed filter reduces to the arithmetic mean filter. If d = (mn - 1)/2, the filter becomes a median filter. For other values of d, the alpha-trimmed filter is useful in situations involving multiple types of noise, such as a combination of salt-and-pepper and Gaussian noise.
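A minimal sketch that evaluates these order-statistic outputs on a single neighborhood follows; the helper that gathers the window contents into the array named window is assumed, and the function name is illustrative.

import numpy as np

def order_statistic_outputs(window, d=2):
    # window: the m*n gray levels of the neighborhood S_xy.
    z = np.sort(window.ravel())
    mn = z.size
    median = np.median(z)                  # median filter output
    zmax, zmin = z[-1], z[0]               # max and min filter outputs
    midpoint = 0.5 * (zmax + zmin)         # midpoint filter output
    # Alpha-trimmed mean: drop the d/2 lowest and d/2 highest values,
    # then average the remaining mn - d pixels.
    trimmed = z[d // 2: mn - d // 2].mean()
    return median, zmax, zmin, midpoint, trimmed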

    7. Explain the Adaptive Filters.

Adaptive filters are filters whose behavior changes based on statistical characteristics of the image inside the filter region defined by the m x n rectangular window Sxy.

    Adaptive, local noise reduction filter:

The simplest statistical measures of a random variable are its mean and variance. These are reasonable parameters on which to base an adaptive filter because they are quantities closely related to the appearance of an image. The mean gives a measure of average gray level in the region over which the mean is computed, and the variance gives a measure of average contrast in that region.

This filter operates on a local region, Sxy. The response of the filter at any point (x, y) on which the region is centered is to be based on four quantities: (a) g(x, y), the value of the noisy image at (x, y); (b) ση^2, the variance of the noise corrupting f(x, y) to form g(x, y); (c) mL, the local mean of the pixels in Sxy; and (d) σL^2, the local variance of the pixels in Sxy.

The behavior of the filter is to be as follows:


1. If ση^2 is zero, the filter should return simply the value of g(x, y). This is the trivial, zero-noise case in which g(x, y) is equal to f(x, y).

2. If the local variance is high relative to ση^2, the filter should return a value close to g(x, y). A high local variance typically is associated with edges, and these should be preserved.

3. If the two variances are equal, we want the filter to return the arithmetic mean value of the pixels in Sxy. This condition occurs when the local area has the same properties as the overall image, and local noise is to be reduced simply by averaging.

An adaptive local noise reduction filter based on these conditions is given by

f̂(x, y) = g(x, y) - (ση^2 / σL^2) [ g(x, y) - mL ]

The only quantity that needs to be known or estimated is the variance of the overall noise, ση^2. The other parameters are computed from the pixels in Sxy at each location (x, y) on which the filter window is centered.
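A minimal sketch of this filter follows, assuming SciPy is available and a known (or estimated) noise variance sigma_eta2; clipping the variance ratio at 1 so the correction never overshoots is a common practical safeguard added here, not something stated in the text.

import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_local_noise_filter(g, sigma_eta2, m=7, n=7):
    g = g.astype(float)
    # Local mean m_L and local variance sigma_L^2 over each window S_xy.
    local_mean = uniform_filter(g, size=(m, n))
    local_sq_mean = uniform_filter(g * g, size=(m, n))
    local_var = local_sq_mean - local_mean ** 2
    # f^(x, y) = g(x, y) - (sigma_eta^2 / sigma_L^2) [ g(x, y) - m_L ]
    ratio = np.minimum(sigma_eta2 / np.maximum(local_var, 1e-12), 1.0)
    return g - ratio * (g - local_mean)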

    Adaptive median filter:

The median filter performs well as long as the spatial density of the impulse noise is not large (as a rule of thumb, Pa and Pb less than 0.2). Adaptive median filtering can handle impulse noise with probabilities even larger than these. An additional benefit of the adaptive median filter is that it seeks to preserve detail while smoothing nonimpulse noise, something that the "traditional" median filter does not do. The adaptive median filter also works in a rectangular window area Sxy. Unlike those filters, however, the adaptive median filter changes (increases) the size of Sxy during filter operation, depending on certain conditions. The output of the filter is a single value used to replace the value of the pixel at (x, y), the particular point on which the window Sxy is centered at a given time.

    Consider the following notation:

zmin = minimum gray level value in Sxy

zmax = maximum gray level value in Sxy

zmed = median of gray levels in Sxy

zxy = gray level at coordinates (x, y)

Smax = maximum allowed size of Sxy.


The adaptive median filtering algorithm works in two levels, denoted level A and level B, as follows:

Level A: A1 = zmed - zmin

A2 = zmed - zmax

If A1 > 0 AND A2 < 0, go to level B

Else increase the window size

If window size <= Smax, repeat level A

Else output zxy

Level B: B1 = zxy - zmin

B2 = zxy - zmax

If B1 > 0 AND B2 < 0, output zxy

Else output zmed
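A minimal per-pixel sketch of levels A and B follows; clipping the window at the image border and the default Smax are illustrative choices.

import numpy as np

def adaptive_median_pixel(g, x, y, s_max=7):
    size = 3
    while True:
        half = size // 2
        # Current window S_xy, clipped at the image border.
        win = g[max(x - half, 0): x + half + 1,
                max(y - half, 0): y + half + 1]
        zmin, zmax, zmed = win.min(), win.max(), np.median(win)
        zxy = g[x, y]
        # Level A: A1 > 0 and A2 < 0 means zmin < zmed < zmax.
        if zmin < zmed < zmax:
            # Level B: B1 > 0 and B2 < 0 means zmin < zxy < zmax.
            return zxy if zmin < zxy < zmax else zmed
        size += 2                      # else increase the window size
        if size > s_max:               # window limit reached
            return zxy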

    8. Explain a simple Image Formation Model.

An image is represented by a two-dimensional function of the form f(x, y). The value or amplitude of f at spatial coordinates (x, y) is a positive scalar quantity whose physical meaning is determined by the source of the image. When an image is generated from a physical process, its values are proportional to energy radiated by a physical source (e.g., electromagnetic waves). As a consequence, f(x, y) must be nonzero and finite; that is,

0 < f(x, y) < ∞ (1)

The function f(x, y) may be characterized by two components:

    A) The amount of source illumination incident on the scene being viewed.

    B) The amount of illumination reflected by the objects in the scene.

Appropriately, these are called the illumination and reflectance components and are denoted by i(x, y) and r(x, y), respectively. The two functions combine as a product to form f(x, y):

f(x, y) = i(x, y) r(x, y) (2)

    where


0 < i(x, y) < ∞ (3)

    and

0 < r(x, y) < 1 (4)

Equation (4) indicates that reflectance is bounded by 0 (total absorption) and 1 (total reflectance). The nature of i(x, y) is determined by the illumination source, and r(x, y) is determined by the characteristics of the imaged objects. It is noted that these expressions also are applicable to images formed via transmission of the illumination through a medium, such as a chest X-ray.

    9. Write brief notes on inverse filtering.

The simplest approach to restoration is direct inverse filtering, where F̂(u, v), an estimate of the transform of the original image, is computed simply by dividing the transform of the degraded image, G(u, v), by the degradation function:

F̂(u, v) = G(u, v) / H(u, v)

The divisions are between individual elements of the functions. But G(u, v) is given by

G(u, v) = H(u, v) F(u, v) + N(u, v)

Hence

F̂(u, v) = F(u, v) + N(u, v) / H(u, v)

This tells us that even if the degradation function is known, the undegraded image cannot be recovered exactly [by taking the inverse Fourier transform of F̂(u, v)] because N(u, v) is a random function whose Fourier transform is not known.

If the degradation function has zero or very small values, then the ratio N(u, v)/H(u, v) could easily dominate the estimate F̂(u, v).

One approach to get around the zero or small-value problem is to limit the filter frequencies to values near the origin. H(0, 0) is equal to the average value of h(x, y), and this is usually the highest value of H(u, v) in the frequency domain. Thus, by limiting the analysis to frequencies near the origin, the probability of encountering zero values is reduced.
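A minimal sketch of this idea follows, using the same NumPy conventions as the Wiener example above (arrays g and H); the cutoff radius r0 that limits the analysis to frequencies near the origin, and the eps guard, are illustrative parameters.

import numpy as np

def limited_inverse_filter(g, H, r0=40, eps=1e-6):
    G = np.fft.fft2(g)
    # Direct inverse filtering: F^(u, v) = G(u, v) / H(u, v).
    F_hat = G / (H + eps)
    # Keep only frequencies near the origin, where H is usually largest,
    # so that the ratio N/H does not dominate the estimate.
    M, N = g.shape
    u = np.fft.fftfreq(M) * M
    v = np.fft.fftfreq(N) * N
    radius = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    F_hat[radius > r0] = 0.0
    return np.real(np.fft.ifft2(F_hat))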

    10. Write about Noise Probability Density Functions.

    The following are among the most common PDFs found in image processing applications.

    Gaussian noise

Because of its mathematical tractability in both the spatial and frequency domains, Gaussian (also called normal) noise models are used frequently in practice. In fact, this tractability is so convenient that it often results in Gaussian models being used in situations in which they are marginally applicable at best.

The PDF of a Gaussian random variable, z, is given by

p(z) = (1 / (sqrt(2π) σ)) e^( -(z - μ)^2 / (2σ^2) ) (1)

where z represents gray level, μ is the mean (average) value of z, and σ is its standard deviation. The standard deviation squared, σ^2, is called the variance of z. A plot of this function is shown in Fig. 5.10. When z is described by Eq. (1), approximately 70% of its values will be in the range [(μ - σ), (μ + σ)], and about 95% will be in the range [(μ - 2σ), (μ + 2σ)].

    Rayleigh noise

The PDF of Rayleigh noise is given by

p(z) = (2 / b)(z - a) e^( -(z - a)^2 / b ) for z >= a, and p(z) = 0 for z < a.

The mean and variance of this density are given by

μ = a + sqrt(πb / 4)

σ^2 = b(4 - π) / 4


Figure 5.10 shows a plot of the Rayleigh density. Note the displacement from the origin and the fact that the basic shape of this density is skewed to the right. The Rayleigh density can be quite useful for approximating skewed histograms.

    Erlang (Gamma) noise

The PDF of Erlang noise is given by

p(z) = ( a^b z^(b-1) / (b - 1)! ) e^(-az) for z >= 0, and p(z) = 0 for z < 0,

where the parameters are such that a > 0, b is a positive integer, and "!" indicates factorial. The mean and variance of this density are given by

μ = b / a

σ^2 = b / a^2

    Exponential noise

The PDF of exponential noise is given by

p(z) = a e^(-az) for z >= 0, and p(z) = 0 for z < 0, where a > 0.

The mean and variance of this density function are given by

μ = 1 / a

σ^2 = 1 / a^2

This PDF is a special case of the Erlang PDF, with b = 1.

    Uniform noise

    The PDF of uniform noise is given by

p(z) = 1 / (b - a) for a <= z <= b, and p(z) = 0 otherwise.

The mean and variance of this density function are given by

μ = (a + b) / 2

σ^2 = (b - a)^2 / 12

Impulse (salt-and-pepper) noise

The PDF of (bipolar) impulse noise is given by

p(z) = Pa for z = a, p(z) = Pb for z = b, and p(z) = 0 otherwise.

If b > a, gray level b will appear as a light dot in the image. Conversely, level a will appear as a dark dot. If either Pa or Pb is zero, the impulse noise is called unipolar. If neither probability is zero, and especially if they are approximately equal, impulse noise values will resemble salt-and-pepper granules randomly distributed over the image. For this reason, bipolar impulse noise also is called salt-and-pepper noise. Shot and spike noise also are terms used to refer to this type of noise.
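A minimal sketch that generates two of these noise types, Gaussian and bipolar impulse (salt-and-pepper), for a NumPy image f follows; the parameter names and default values are illustrative.

import numpy as np

def add_gaussian_noise(f, mean=0.0, sigma=10.0):
    # z ~ N(mean, sigma^2), added to every pixel.
    return f + np.random.normal(mean, sigma, f.shape)

def add_impulse_noise(f, pa=0.05, pb=0.05, a=0, b=255):
    g = f.copy()
    r = np.random.rand(*f.shape)
    g[r < pa] = a                 # pepper (dark) dots with probability Pa
    g[r > 1 - pb] = b             # salt (light) dots with probability Pb
    return g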


Fig. 5.10 Some important probability density functions.


11. Enumerate the differences between image enhancement and image restoration.

(i) Image enhancement techniques are heuristic procedures designed to manipulate an image in order to take advantage of the psychophysical aspects of the human visual system, whereas image restoration techniques are basically reconstruction techniques by which a degraded image is reconstructed by using some prior knowledge of the degradation phenomenon.

(ii) Image enhancement can be implemented by spatial and frequency domain techniques, whereas image restoration can be implemented by frequency domain and algebraic techniques.

(iii) The computational complexity of image enhancement is relatively low compared to that of image restoration, since algebraic restoration methods require the manipulation of a large number of simultaneous equations. Under some conditions, however, the computational complexity of restoration can be reduced to the same level as that required by traditional frequency domain techniques.

(iv) Image enhancement techniques are problem oriented, whereas image restoration techniques are general and are oriented towards modeling the degradation and applying the reverse process in order to reconstruct the original image.

(v) Masks are used in spatial domain methods for image enhancement, whereas masks are not used in image restoration techniques.

(vi) Contrast stretching is considered an image enhancement technique because it is based on making the image more pleasing to the viewer, whereas removal of image blur by applying a deblurring function is considered an image restoration technique.

12. Explain about iterative nonlinear restoration using the Lucy-Richardson algorithm.

The Lucy-Richardson algorithm is a nonlinear restoration method used to recover a latent image that has been blurred by a point spread function (psf). It is also known as Richardson-Lucy deconvolution.

With P as the point spread function, the pixels in the observed image are expressed as

ci = Σj Pij uj

Here,

uj = pixel value at location j in the latent image
ci = observed value at the ith pixel location


The L-R algorithm cannot be used in applications in which the psf (Pij) depends on one or more unknown variables.

The L-R algorithm is based on a maximum-likelihood formulation in which Poisson statistics are used to model the image. Maximizing the likelihood of the model yields an equation that is satisfied when the following iteration converges:

f̂_(k+1)(x, y) = f̂_k(x, y) [ h(-x, -y) * ( g(x, y) / ( h(x, y) * f̂_k(x, y) ) ) ]

where * denotes convolution and h is the psf.

    Here,

f̂ = estimate of the undegraded image.

The factor f̂ that appears in the denominator on the right-hand side is what makes the iteration nonlinear. Because the algorithm is nonlinear, there is no simple closed-form stopping criterion; the iteration is stopped when a satisfactory result is obtained.
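A minimal sketch of one common form of this iteration follows, using SciPy's fftconvolve for the convolutions with h(x, y) and h(-x, -y); this is a generic Richardson-Lucy update under those assumptions, not the exact implementation inside deconvlucy.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(g, h, num_iter=10, eps=1e-12):
    f_hat = np.full_like(g, g.mean(), dtype=float)   # initial estimate
    h_flipped = h[::-1, ::-1]                        # h(-x, -y)
    for _ in range(num_iter):
        # Ratio of the observed image to the current blurred estimate.
        blurred = fftconvolve(f_hat, h, mode='same')
        ratio = g / (blurred + eps)
        # Multiplicative update: f_(k+1) = f_k [ h(-x,-y) * ratio ].
        f_hat = f_hat * fftconvolve(ratio, h_flipped, mode='same')
    return f_hat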

The basic syntax of the function deconvlucy, in which the L-R algorithm is implemented, is given below:

fr = deconvlucy(g, PSF, NUMIT, DAMPAR, WEIGHT)

    Here the parameters are,

    g = Degraded image

fr = Restored image

    psf = Point spread function

    NUMIT = Total number of iterations.

    The remaining two parameters are,

    DAMPAR

The DAMPAR parameter is a scalar that specifies the threshold deviation of the resultant image from the degraded image g. Iterations are suppressed for pixels that deviate from their original value by less than DAMPAR, which reduces noise generation while preserving essential image detail.


    WEIGHT

The WEIGHT parameter assigns a weight to each pixel. It is an array of the same size as the degraded image g. In applications where a pixel would corrupt the result, it can be excluded from the solution by assigning it a weight of 0. The pixels may also be given weights according to the flat-field correction required by the imaging array. Weights are also used in applications such as blurring with a specified psf, where they exclude the pixels at the boundary of the image, which are blurred differently by the psf.

If the psf is of size n x n, then the border of zero weights used is ceil(n / 2) pixels wide.
