Bioelectronics Project
Medical Image Processing
May 27th, 2015
Objectives
To introduce the concept of thresholding and the effects of edge detecting, mean
and median filters on an image.
Background
A common method of storing a digital image is as a set of two 2-dimensional
matrices. The first matrix is the color/gray-scale map matrix which may have any
number of rows but must have exactly 3 columns. Each row is interpreted as a color,
with the first column specifying the intensity of red light, the second green, and the
third blue. In the event that the three elements have the same value the row is
interpreted as a gray level rather than a color. Color intensity is normally specified on
the interval 0.0 to 1.0, for example [0 0 0] is black, [1 1 1] is white, [0.5 0.5 0.5] is
gray and [1 0 0] is pure red. The number of rows (R) in the color/gray-scale map
matrix defines the number of different colors/gray-scale levels that may be
represented in the image.
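As an illustration of such a map matrix (a sketch in Python/NumPy rather than the handout's MATLAB, since the m-files are not reproduced in this transcript), a 256-level gray-scale map repeats each intensity level across the three color columns:

```python
import numpy as np

# Gray-scale map with R = 256 rows: each row repeats one level across
# the red, green and blue columns, so it reads as a gray level.
R = 256
levels = np.linspace(0.0, 1.0, R)          # 0.0 (black) ... 1.0 (white)
gray_map = np.column_stack([levels] * 3)   # shape (256, 3)

print(gray_map[0])    # first row:  [0. 0. 0.] -> black
print(gray_map[-1])   # last row:   [1. 1. 1.] -> white
```

Because every row has identical R, G and B values, each row is interpreted as a gray level rather than a color.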
The second matrix, known as the image matrix, is a direct representation of the
image itself: each element holds the color/gray-scale level (an index into the map
matrix) of the corresponding pixel. A common configuration is to use 256
color/gray-scale levels so that each pixel can be represented by a single byte
(1 byte = 8 bits => 2^8 = 256 levels).
The image used in this project represents the abdominal section of an aorta cut in
half longitudinally. The vessel is first stained so that diseased tissue appears darker,
then mounted flat and scanned into a 65 by 339 image with 256 gray-levels. We will
apply a number of different image processing techniques to the original image and to
two versions of the image which have been contaminated respectively by white noise
and by salt and pepper noise.
The following is a brief background on each of the image processing techniques:
Thresholding
A threshold (T) is used to transform a multiple-level image into a binary image.
The operation is described by the following statement:
p'(x,y) = g1 if p(x,y) ≤ T
p'(x,y) = g2 if p(x,y) > T
where p(x,y) and p'(x,y) are the pixel intensities of the original and thresholded
images respectively at the location (x,y), and g1 and g2 are two gray levels chosen to
represent the thresholded image.
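The thresholding rule can be sketched as follows (in Python/NumPy rather than the handout's MATLAB; the mapping of g1/g2 to the hi/lo arguments and the "percent diseased" return value are assumptions about what threshold.m does, not its actual code):

```python
import numpy as np

def threshold(img, T, hi, lo):
    # Assumed convention: pixels at or below T (the darker, diseased
    # tissue) are set to hi, pixels above T to lo.
    out = np.where(img <= T, hi, lo)
    # Assumed "percent diseased": fraction of pixels at or below T.
    pct = 100.0 * np.count_nonzero(img <= T) / img.size
    return out, pct

img = np.array([[ 50, 120],
                [200, 250]])
imgt, pct = threshold(img, 100, 1, 0)
print(imgt)   # [[1 0]
              #  [0 0]]
print(pct)    # 25.0
```

Only the one pixel at intensity 50 falls at or below the threshold of 100, so 25% of this tiny test image is classified as "diseased".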
Edge Detection
We will consider three types of edge detectors: a horizontal edge detector, a
vertical edge detector and a Laplacian filter. The three types of filtering involve the
discrete convolution of N by N sub-arrays of an image with a filter kernel (or mask).
For an N by N kernel (N is odd), this operation is carried out by the equation:
p'(x,y) = sum over m,n = 1..N of M(m,n) * p(x + m - (N+1)/2, y + n - (N+1)/2)
where p(x,y) and p'(x,y) are the pixel intensities of the original and filtered
images respectively at the location (x,y), and M(m,n) is the filter kernel. Thus the
kernel is slid over the image point by point and at each location the intensity value of
the new image is calculated based on the intensity of the corresponding pixel in the
original image and its surrounding pixels.
In this project we will be dealing exclusively with 3 by 3 kernels. The
kernels corresponding to the three edge detectors used in this project are:
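The kernel figures themselves are not reproduced in this transcript; the masks below are typical choices (Prewitt-style horizontal and vertical edge detectors and the standard 4-neighbour Laplacian) and are assumptions, not necessarily the handout's exact values. The sliding-mask operation can be sketched in Python/NumPy:

```python
import numpy as np

# Typical 3x3 masks (assumed; the handout's kernel figures are missing).
horizontal = np.array([[-1, -1, -1],
                       [ 0,  0,  0],
                       [ 1,  1,  1]])   # responds to horizontal edges
vertical = horizontal.T                 # responds to vertical edges
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def mask_filter(img, kernel):
    # Slide the 3x3 mask over the image; border pixels are left at zero
    # (one simple border policy -- the handout's mask_filter.m may differ).
    out = np.zeros(img.shape)
    for x in range(1, img.shape[0] - 1):
        for y in range(1, img.shape[1] - 1):
            out[x, y] = np.sum(kernel * img[x-1:x+2, y-1:y+2])
    return out

# A vertical step edge triggers the vertical detector but not the
# horizontal one.
step = np.array([[0, 0, 1, 1]] * 4, dtype=float)
print(mask_filter(step, vertical))    # nonzero along the step
print(mask_filter(step, horizontal))  # all zeros
```

The step image changes intensity only across columns, so only the vertical detector responds; this is the behaviour the project asks you to observe on the thresholded aorta image.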
Mean (Arithmetic/Average) Filtering
The mean filter is applied in a way identical to the edge detectors described
above in that it also involves the discrete convolution of an N by N sub-array of an
image with a filter kernel. The only difference lies in the values of the elements
comprising the kernel, which are as shown below:
Convolution of this kernel with an image results in a new value for each pixel which
is equal to the mean of the original pixel value and its surrounding pixels. This results
in a smoothing out effect on the image.
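Every element of the mean-filter kernel equals 1/9 (this reconstructs the kernel the handout shows only as a figure), so applying it to a neighbourhood simply averages the nine pixels. A small numerical check:

```python
import numpy as np

# 3x3 mean kernel: all elements 1/9, so the output is the average of
# the pixel and its eight neighbours.
mean_kernel = np.full((3, 3), 1.0 / 9.0)

# A lone bright pixel in a flat region is smoothed down but not removed:
patch = np.array([[10., 10., 10.],
                  [10., 100., 10.],
                  [10., 10., 10.]])
centre = np.sum(mean_kernel * patch)   # (8*10 + 100) / 9 = 20
print(centre)
```

Note that the spike is attenuated (100 becomes 20) but smeared into its neighbours rather than eliminated, which is the smoothing effect described above.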
Median Filtering
The median filter works in a way similar to the mean filter except that the value
of each new pixel is equal to the median of the original pixel value and its
surrounding pixels rather than the mean. The median filtering of an image using a 3
by 3 kernel may be represented by the following equation:
p'(x,y) = median{ p(x+m, y+n) : m, n = -1, 0, 1 }
where the function median{ } determines the median of the numbers within the
braces. The median filter is especially suited to eliminating spikes, such as those
found in salt and pepper noise, since the new pixel value is independent of
abnormally high or low values within the corresponding sub-array.
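A sketch of 3 by 3 median filtering in Python/NumPy (the handout's median_filter.m is not reproduced here; the border handling below is an assumption):

```python
import numpy as np

def median_filter(img):
    # Each interior pixel becomes the median of its 3x3 neighbourhood;
    # border pixels are left unchanged (an assumed border policy).
    out = img.astype(float).copy()
    for x in range(1, img.shape[0] - 1):
        for y in range(1, img.shape[1] - 1):
            out[x, y] = np.median(img[x-1:x+2, y-1:y+2])
    return out

# A single salt spike in a flat region is removed completely, which is
# why the median filter suits salt-and-pepper noise.
img = np.full((3, 3), 10.0)
img[1, 1] = 255.0
print(median_filter(img)[1, 1])   # 10.0 -- the spike is gone
```

Unlike the mean filter, which would only attenuate the spike, the median of the nine values ignores the single outlier entirely.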
Project Procedure and Questions
1. You will receive a copy of the following image files:
orig.bmp contains the original aorta image
rand.bmp contains the image contaminated by Gaussian white noise
s_p.bmp contains the image contaminated by salt and pepper noise
The following Matlab m-files will also be sent to you:
threshold.m performs thresholding
mask_filter.m performs edge detection or other filtering process
median_filter.m performs median filtering
h_gram.m calculates a histogram of pixel values in an image
org_gaus.m for your convenience
org_s_p.m for your convenience
avf_gaus.m for your convenience
mdf_gaus.m for your convenience
avf_s_p.m for your convenience
mdf_s_p.m for your convenience
2. Start Matlab
3. Read in, display and print the three images by executing the following commands:
[imgo,map]=imread('orig.bmp');
[imgn,map]=imread('rand.bmp');
[imgs,map]=imread('s_p.bmp');
4. Find the size of the original image matrix (imgo) and the map matrix (map) using
the size command. Answer the questions: size(imgo) = ? size(map) = ? How
many pixels does the image consist of? What is the maximum number of gray
levels possible in the image? List the map matrix simply by typing: map. Why are
the values of the elements in each row of the matrix the same?
5. Create three new maps using the following commands:
6. Enter the following commands to display the image imgo using each of the three
new maps:
Answer the questions: How do these three renderings of the image differ from the
original rendering? Why are the renderings different even though they use the
same image matrix? Why is the background for all three of the new renderings
still black?
7. The following Matlab function performs thresholding on the image:
Use it to threshold the original image with the parameters listed in the table below:

image matrix  threshold  hi value  lo value  thresholded image  percent diseased
imgo          100        1         0         imgt100            pct100
imgo          175        2         0         imgt175            pct175
imgo          245        3         0         imgt245            pct245

For example:
[imgt100,pct100]=threshold(imgo,100,1,0);

Answer the questions: pct100 = ? pct175 = ? pct245 = ? Why is the value obtained
for pct100 smaller than that obtained for pct245? How is this apparent when
comparing the plots of imgt100 and imgt245?
When we threshold the image to separate the healthy from the diseased tissue we
are assuming that the pixels corresponding to each group belong to different
populations and that a fixed threshold may distinguish one from the other. To test
whether this is true for our image we will calculate a histogram which has
gray-scale level as the x-axis and for which each column represents the number of
pixels having the corresponding gray-scale level. If our assumption regarding the
two populations is true we should see two peaks in the histogram, one
corresponding to the healthy tissue and the other to the diseased tissue. The
following function has been written to do this:
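The h_gram.m listing itself is not reproduced in this transcript; a minimal equivalent of the behaviour described above, sketched in Python/NumPy with an assumed (img, lo, hi) interface, is:

```python
import numpy as np

def h_gram(img, lo, hi):
    # For each gray level g in [lo, hi], count the pixels equal to g.
    levels = np.arange(lo, hi + 1)
    counts = np.array([np.count_nonzero(img == g) for g in levels])
    return levels, counts

img = np.array([[  2, 2, 3],
                [255, 3, 3]])
levels, counts = h_gram(img, 2, 255)
print(counts[0], counts[1])   # 2 pixels at level 2, 3 pixels at level 3
```

Plotting counts against levels gives the histogram; two well-separated peaks would support the two-population assumption.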
Type in the following commands to plot the histogram for the original image. (We
use lo=2 to ignore the background and hi=255 because this gives a nicer
histogram. If you don't believe it, try 256 instead!)
By looking at the histogram, which of the three thresholds best separates the two
populations of tissue (100, 175, or 245)? Could you have chosen a better threshold
than this? If so what value would you have used and why?
8. Combine the three threshold images to form the image matrix imgtcmb and plot
and print this image using the following commands:
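The combining commands themselves are not reproduced in this transcript. One hypothetical combination, consistent with the hi values 1, 2 and 3 from the threshold table in Step 7 and assuming pixels at or below the threshold receive the hi value, is simply to sum the three thresholded images; this yields four distinct levels, which connects to the question about bits per pixel. A sketch in Python/NumPy:

```python
import numpy as np

img = np.array([50, 150, 200, 250])   # sample pixel values

# Hypothetical thresholding convention: pixels at or below T get hi.
def thresh(T, hi):
    return np.where(img <= T, hi, 0)

# Summing the three thresholded images (hi = 1, 2, 3) gives a combined
# image with four distinct levels, representable in 2 bits per pixel.
imgtcmb = thresh(100, 1) + thresh(175, 2) + thresh(245, 3)
print(imgtcmb)             # [6 5 3 0]
print(np.unique(imgtcmb))  # four distinct levels
```

Whatever the handout's actual commands, the combined image quantizes the gray scale into a handful of bands, which is why it recalls quantization error.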
How does this combined image differ from the original image? How many bits
would be necessary to represent one pixel in this image? In what way does this
remind you of the quantization error discussed earlier in our class?
9. The following Matlab function has been written to perform the discrete
convolution of a 3 by 3 filter kernel with an image matrix.
Use this function to apply horizontal and vertical edge detectors and a Laplacian
filter to the imgt175 thresholded image we obtained in Step 7 as follows:
Do all the above edge detectors work (Yes/No)? What are their effects on the
image?
Before going to the next step, convert the matrices into the double format using
the commands:
imgtbin = min(1,imgtbin);
imgtbin = double(imgtbin);
imgo = double(imgo);
imgn = double(imgn);
imgs = double(imgs);
10. Use the mask_filter function to apply a mean filter to the original image:
With reference to the values of the mean filter kernel, describe and explain the
effect the filter had on the image.
11. Use the median_filter function to apply a median filter to the original image and
plot it below the mean filtered image on Figure 9:
Describe and explain the effect the median filter had on the image. Compare and
contrast the effects of the mean and median filters on the original image with
special reference to shifts in the edges of the image outline and the image features
(i.e., the box and the two notches). In conclusion, specify which of the two filters
introduces the least distortion of the image.
12. In this last step we will examine the effect of mean and median filtering on
Gaussian white noise and salt and pepper noise. We will create six figures
which will enable comparison of corresponding images. To facilitate this job and
save you a lot of boring typing, you have been given six Matlab m-files which will
do the work. These files are all similar to org_gaus.m, as given below:
By comparing Figure 10 to Figure 11, what is the main difference between the
Gaussian white noise and salt and pepper noise in terms of the noise pixel
intensity? Explain this in terms of the definition of the two types of noise.
Now, enter the following commands:
The commands should plot the original image contaminated by Gaussian noise at
the top, the same image after mean and median filtering respectively in the middle,
and the absolute difference between the two corresponding images at the bottom.
Compare these two figures together and with Figure 10 and answer the following
questions: Which of the two filters seems to have reduced the Gaussian noise
better? Explain what made you reach this conclusion. Also, by comparing the
unfiltered noise image to the mean filtered noise image describe the effect that the
filter had on the Gaussian noise and explain how this came about; repeat the above
question for the median filter.
Then, enter the following commands:
The commands should plot the original image contaminated by salt and pepper
noise at the top, the same image after mean and median filtering respectively in
the middle, and the absolute difference between the two corresponding images at
the bottom. Compare these two figures together and with Figure 11 and answer
the following questions: Which of the two filters seems to have reduced the salt
and pepper noise better? Explain what made you reach this conclusion. Also, by
comparing the unfiltered noise image to the mean filtered noise image describe the
effect that the filter had on the salt and pepper noise and explain how this came
about; repeat the above question for the median filter.
Project Requirements (Due on 6/3):
You should submit a copy of your answers to the questions raised by this project
with all the printouts (Figures 1, 5, 7, 8) requested in the project.