BLOOD VESSEL - REPORT



CHAPTER 1

INTRODUCTION

    1.1 IMAGE SEGMENTATION

    Segmentation refers to the process of partitioning a digital image into multiple

    segments (sets of pixels, also known as superpixels). The goal of segmentation is to

    simplify and/or change the representation of an image into something that is more

    meaningful and easier to analyze. Image segmentation is typically used to locate objects

    and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the

    process of assigning a label to every pixel in an image such that pixels with the same

    label share certain visual characteristics.

    1.2 APPLICATIONS

    Some of the practical applications of image segmentation are:

• Medical imaging

    o Locate tumors and other pathologies

    o Measure tissue volumes

    o Computer-guided surgery

    o Diagnosis

    o Treatment planning

    o Study of anatomical structure

• Locate objects in satellite images (roads, forests, etc.)

• Face recognition

• Iris recognition


• Fingerprint recognition

• Traffic control systems

• Brake light detection

• Machine vision

• Agricultural imaging (crop disease detection)

    Several general-purpose algorithms and techniques have been developed for image

    segmentation. Since there is no general solution to the image segmentation problem,

    these techniques often have to be combined with domain knowledge in order to

    effectively solve an image segmentation problem for a problem domain.

    1.3 VARIOUS SEGMENTATION METHODS

    There exist various types of image segmentation techniques

    1.3.1 THRESHOLDING

    The simplest method of image segmentation is called the thresholding method.

    This method is based on a clip-level (or a threshold value) to turn a gray-scale image into

    a binary image.

The key of this method is the selection of the threshold value. Several popular methods are used in industry, including the maximum entropy method, Otsu's method (maximum variance), and k-means clustering.
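As a concrete illustration of threshold selection (not code from the report), the sketch below computes Otsu's threshold from a 256-bin gray-level histogram in pure NumPy and binarizes a small synthetic image; the test image and noise level are made up for demonstration.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray-level that maximizes between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

# Synthetic two-class image: bright square on a dark background, plus mild noise.
img = np.zeros((64, 64), dtype=np.uint8)
img[16:48, 16:48] = 200
img = img + np.random.randint(0, 30, img.shape).astype(np.uint8)
t = otsu_threshold(img)
binary = img >= t   # the clip-level turns the gray-scale image into a binary image
print("Otsu threshold:", t, "foreground fraction:", binary.mean())
```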

    1.3.2 COMPRESSION-BASED METHODS

    Compression based methods postulate that the optimal segmentation is the one that

    minimizes, over all possible segmentations, the coding length of the data. The connection

    between these two concepts is that segmentation tries to find patterns in an image and any

    regularity in the image can be used to compress it. The method describes each segment

    by its texture and boundary shape. Each of these components is modeled by a probability

    distribution function and its coding length is computed as follows:


    1. The boundary encoding leverages the fact that regions in natural images tend to

    have a smooth contour. This prior is used by Huffman coding to encode the

    difference chain code of the contours in an image.

    2. Texture is encoded by lossy compression in a way similar to minimum description

    length (MDL) principle.

    For any given segmentation of an image, this scheme yields the number of bits

    required to encode that image based on the given segmentation. Thus, among all

    possible segmentations of an image, the goal is to find the segmentation which

    produces the shortest coding length. This can be achieved by a simple

    agglomerative clustering method.

    1.3.3 HISTOGRAM-BASED METHODS

    Histogram-based methods are very efficient when compared to other image

    segmentation methods because they typically require only one pass through the pixels. In

    this technique, a histogram is computed from all of the pixels in the image, and the peaks

    and valleys in the histogram are used to locate the clusters in the image. Color or

    intensity can be used as the measure.

    A refinement of this technique is to recursively apply the histogram-seeking

    method to clusters in the image in order to divide them into smaller clusters. This is

    repeated with smaller and smaller clusters until no more clusters are formed.

    One disadvantage of the histogram-seeking method is that it may be difficult to

identify significant peaks and valleys in the image. In this technique of image classification, distance metrics and integrated region matching are commonly used.

    Histogram-based approaches can also be quickly adapted to occur over multiple

    frames, while maintaining their single pass efficiency. The histogram can be done in

    multiple fashions when multiple frames are considered. The same approach that is taken

    with one frame can be applied to multiple, and after the results are merged, peaks and

    valleys that were previously difficult to identify are more likely to be distinguishable. The


    histogram can also be applied on a per pixel basis where the information results are used

    to determine the most frequent color for the pixel location. This approach segments based

    on active objects and a static environment, resulting in a different type of segmentation

    useful in Video tracking.
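A minimal sketch of histogram seeking, assuming a roughly bimodal gray-level distribution: the histogram is smoothed, the two main peaks are located, and the deepest valley between them is used as a threshold. The synthetic data, smoothing sigma and prominence value are illustrative choices, not values from the report.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import gaussian_filter1d

def valley_threshold(gray):
    """Pick a threshold at the deepest valley between the two main histogram peaks."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    smooth = gaussian_filter1d(hist.astype(float), sigma=3)     # suppress spurious peaks
    peaks, _ = find_peaks(smooth, prominence=smooth.max() * 0.05)
    if len(peaks) < 2:
        return None                                             # unimodal histogram: no clear valley
    p1, p2 = sorted(peaks[np.argsort(smooth[peaks])[-2:]])      # two highest peaks
    valley = p1 + int(np.argmin(smooth[p1:p2 + 1]))             # deepest point between them
    return valley

# Synthetic bimodal example: dark background plus a brighter region.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 2000), rng.normal(180, 12, 2000)])
img = np.clip(img, 0, 255).reshape(40, 100)
print("valley threshold:", valley_threshold(img))
```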

    1.3.4 EDGE DETECTION

    Edge detection is a well-developed field on its own within image processing.

    Region boundaries and edges are closely related, since there is often a sharp adjustment

    in intensity at the region boundaries. Edge detection techniques have therefore been used

    as the base of another segmentation technique.

The edges identified by edge detection are often disconnected. To segment an object from an image, however, one needs closed region boundaries. An edge is simply the boundary between two regions. Edge detection refers to identifying and locating sharp discontinuities in an image.
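For illustration, the sketch below applies two standard edge detectors from scikit-image (Sobel gradient magnitude and Canny) to a synthetic disc image; the image and the sigma parameter are arbitrary, and the report does not prescribe any particular edge detector.

```python
import numpy as np
from skimage import draw, feature, filters

# Synthetic image: a bright disc whose boundary should be picked up as an edge.
img = np.zeros((128, 128))
rr, cc = draw.disk((64, 64), 30)
img[rr, cc] = 1.0
img += np.random.default_rng(1).normal(0, 0.05, img.shape)  # mild noise

# Gradient-magnitude edges (Sobel) and thinned, hysteresis-linked edges (Canny).
sobel_edges = filters.sobel(img)
canny_edges = feature.canny(img, sigma=2.0)

print("edge pixels found by Canny:", int(canny_edges.sum()))
```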

    1.3.5 REGION GROWING METHODS

    The first region growing method was the seeded region growing method. This

    method takes a set of seeds as input along with the image. The seeds mark each of the

    objects to be segmented. The regions are iteratively grown by comparing all unallocated

    neighbouring pixels to the regions. The difference between a pixel's intensity value and

the region's mean, δ, is used as a measure of similarity. The pixel with the smallest

    difference measured this way is allocated to the respective region. This process continues

    until all pixels are allocated to a region.
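A simplified, single-region sketch of seeded region growing, assuming a gray-scale image and a 4-connected neighbourhood; unlike the description above, which always allocates the globally most similar pixel, this version grows from a queue for brevity. The seed position and threshold are hypothetical.

```python
import numpy as np
from collections import deque

def seeded_region_growing(image, seed, threshold=10.0):
    """Grow one region from `seed`, absorbing 4-connected neighbours whose
    intensity differs from the current region mean by less than `threshold`."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    total, count = float(image[seed]), 1
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                mean = total / count
                if abs(float(image[ny, nx]) - mean) < threshold:
                    region[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    frontier.append((ny, nx))
    return region

# Example: bright square on a dark background, seeded inside the square.
img = np.full((50, 50), 20.0)
img[10:30, 10:30] = 120.0
mask = seeded_region_growing(img, seed=(20, 20), threshold=15.0)
print("region size:", mask.sum())  # expected 400 pixels (the 20x20 square)
```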

    Seeded region growing requires seeds as additional input. The segmentation

    results are dependent on the choice of seeds. Noise in the image can cause the seeds to be

    poorly placed. Unseeded region growing is a modified algorithm that doesn't require

explicit seeds. It starts off with a single region A1; the pixel chosen here does not significantly influence the final segmentation. At each iteration it considers the neighbouring

    pixels in the same way as seeded region growing. It differs from seeded region growing


    in that if the minimum is less than a predefined threshold T then it is added to the

    respective region Aj. If not, then the pixel is considered significantly different from all

    current regions Ai and a new region An + 1 is created with this pixel.

    One variant of this technique is based on pixel intensities. The mean and scatter of

    the region and the intensity of the candidate pixel is used to compute a test statistic. If the

test statistic is sufficiently small, the pixel is added to the region, and the region's mean

    and scatter are recomputed. Otherwise, the pixel is rejected, and is used to form a new

    region.

A special region growing method is called λ-connected segmentation. It is based on pixel intensities and neighborhood linking paths. A degree of connectivity will be calculated based on a path that is formed by pixels. For a certain value of λ, two pixels are called λ-connected if there is a path linking those two pixels and the connectedness of this path is at least λ. λ-connectedness is an equivalence relation.

    1.3.6 SPLIT-AND-MERGE METHODS

    Split-and-merge segmentation is based on a quad tree partition of an image. It is

    sometimes called quad tree segmentation.

This method starts at the root of the tree, which represents the whole image. If it is found non-uniform (not homogeneous), it is split into four child squares (the splitting process), and so on. Conversely, if four child squares are homogeneous, they can be merged as several connected components (the merging process). The node in the tree is a segmented node. This process continues recursively until no further splits or merges are possible. When a special data structure is involved in the implementation of the algorithm, its time complexity can reach O(n log n), which is optimal for this method.

    1.3.7 PARTIAL DIFFERENTIAL EQUATION-BASED METHODS

    Using a partial differential equation (PDE)-based method and solving the PDE

    equation by a numerical scheme, one can segment the image.


    1.3.8 GRAPH PARTITIONING METHODS

Graph partitioning methods can effectively be used for image segmentation. In these methods, the image is modeled as a weighted, undirected graph. Usually a pixel or a

    group of pixels are associated with nodes and edge weights define the (dis)similarity

    between the neighborhood pixels. The graph (image) is then partitioned according to a

    criterion designed to model "good" clusters. Each partition of the nodes (pixels) output

    from these algorithms are considered an object segment in the image. Some popular

    algorithms of this category are normalized cuts, random walker, minimum cut,

    isoperimetric partitioning and minimum spanning tree-based segmentation.

    1.3.9 WATERSHED TRANSFORMATION

    The watershed transformation considers the gradient magnitude of an image as a

    topographic surface. Pixels having the highest gradient magnitude intensities (GMIs)

    correspond to watershed lines, which represent the region boundaries. Water placed on

    any pixel enclosed by a common watershed line flows downhill to a common local

    intensity minimum (LIM). Pixels draining to a common minimum form a catch basin,

    which represents a segment.
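A small illustration of the watershed transformation using scikit-image, assuming two touching bright objects: the Sobel gradient magnitude serves as the topographic surface and distance-transform maxima serve as markers. The marker-selection strategy and all parameters are illustrative, not taken from the report.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import draw, feature, filters, segmentation

# Synthetic image with two touching bright discs to be separated.
img = np.zeros((100, 100))
for centre in [(40, 35), (60, 65)]:
    rr, cc = draw.disk(centre, 20)
    img[rr, cc] = 1.0

# Gradient magnitude acts as the topographic surface.
gradient = filters.sobel(img)

# Markers: local maxima of the distance transform inside the objects.
objects = img > 0.5
distance = ndi.distance_transform_edt(objects)
peaks = feature.peak_local_max(distance, min_distance=10, labels=objects.astype(int))
markers = np.zeros(img.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

labels = segmentation.watershed(gradient, markers, mask=objects)
print("number of catch basins (segments):", labels.max())
```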

    1.3.10 MODEL BASED SEGMENTATION

    The central assumption of such an approach is that structures of interest/organs

    have a repetitive form of geometry. Therefore, one can seek for a probabilistic model

    towards explaining the variation of the shape of the organ and then when segmenting an

    image impose constraints using this model as prior. Such a task involves (i) registration

    of the training examples to a common pose, (ii) probabilistic representation of the

    variation of the registered samples, and (iii) statistical inference between the model and

    the image. State of the art methods in the literature for knowledge-based segmentation

    involve active shape and appearance models, active contours and deformable templates

    and level-set based methods.


    1.3.11 MULTI-SCALE SEGMENTATION

Image segmentations are computed at multiple scales in scale-space and sometimes propagated from coarse to fine scales; see scale-space segmentation.

    Segmentation criteria can be arbitrarily complex and may take into account global as well

    as local criteria. A common requirement is that each region must be connected in some

    sense.

    1.3.12 ONE-DIMENSIONAL HIERARCHICAL SIGNAL SEGMENTATION

    Seminal work in scale space included the notion that a one-dimensional signal

    could be unambiguously segmented into regions, with one scale parameter controlling the

    scale of segmentation.

    A key observation is that the zero-crossings of the second derivatives (minima

    and maxima of the first derivative or slope) of multi-scale-smoothed versions of a signal

    form a nesting tree, which defines hierarchical relations between segments at different

scales. Specifically, slope extrema at coarse scales can be traced back to corresponding features at fine scales. When a slope maximum and slope minimum annihilate each other

    at a larger scale, the three segments that they separated merge into one segment, thus

    defining the hierarchy of segments.

    1.3.13 SEMI-AUTOMATIC SEGMENTATION

In this kind of segmentation, the user outlines the region of interest with mouse clicks, and algorithms are applied so that the path that best fits the edge of the image is shown.

Techniques like SIOX, Livewire, Intelligent Scissors or IT-SNAPS are used in this kind of segmentation.


    1.3.14 NEURAL NETWORKS SEGMENTATION

    Neural Network segmentation relies on processing small areas of an image using

    an artificial neural network or a set of neural networks. After such processing the

decision-making mechanism marks the areas of an image according to the category

    recognized by the neural network. A type of network designed especially for this is the

    Kohonen map.

    Pulse-coupled neural networks (PCNNs) are neural models proposed by modeling

a cat's visual cortex and developed for high-performance biomimetic image processing.

PCNNs have been utilized for a variety of image processing applications, including:

    image segmentation, feature generation, face extraction, motion detection, region

    growing, noise reduction, and so on. A PCNN is a two-dimensional neural network. Each

    neuron in the network corresponds to one pixel in an input image, receiving its

corresponding pixel's color information (e.g., intensity) as an external stimulus. Each

    neuron also connects with its neighboring neurons, receiving local stimuli from them.

    The external and local stimuli are combined in an internal activation system, which

    accumulates the stimuli until it exceeds a dynamic threshold, resulting in a pulse output.

Through iterative computation, PCNN neurons produce temporal series of pulse outputs. These temporal series of pulse outputs contain information about the input images and can be utilized for various image processing applications, such as image segmentation and feature generation.

    PCNNs have several significant merits, including robustness against noise, independence

    of geometric variations in input patterns, capability of bridging minor intensity variations

    in input patterns, etc.
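The following NumPy sketch shows one common formulation of a pulse-coupled neural network iteration (feeding and linking inputs, internal activation, dynamic threshold). All parameter values and the 3×3 linking kernel are assumed for illustration, and the first-firing-time map is used here only as a crude segmentation.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_segment(stimulus, iterations=20, beta=0.2,
                 alpha_f=0.1, alpha_l=1.0, alpha_t=0.3,
                 v_f=0.5, v_l=0.2, v_t=20.0):
    """Minimal pulse-coupled neural network: one neuron per pixel, 3x3 local linking."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    F = np.zeros_like(stimulus)          # feeding input
    L = np.zeros_like(stimulus)          # linking input
    Y = np.zeros_like(stimulus)          # pulse output
    T = np.ones_like(stimulus) * 1e3     # dynamic threshold (starts high)
    fire_time = np.full(stimulus.shape, -1)
    for n in range(iterations):
        neighbours = convolve(Y, kernel, mode='constant')
        F = np.exp(-alpha_f) * F + v_f * neighbours + stimulus   # external + local stimuli
        L = np.exp(-alpha_l) * L + v_l * neighbours
        U = F * (1.0 + beta * L)                                 # internal activation
        Y = (U > T).astype(float)                                # pulse when activation exceeds threshold
        T = np.exp(-alpha_t) * T + v_t * Y                       # raise threshold where neurons fired
        fire_time[(Y > 0) & (fire_time < 0)] = n                 # record first firing iteration
    return fire_time

# Pixels with similar intensity tend to fire at the same iteration,
# so the firing-time map acts as a rough segmentation.
img = np.zeros((40, 40)); img[10:30, 10:30] = 1.0
print(np.unique(pcnn_segment(img)))
```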

    1.4 RETINAL BLOOD VESSEL SEGMENTATION

    The appearance and structure of blood vessels in retinal images play an important

role in the diagnosis of eye diseases. Various methods exist for the segmentation of blood vessels in color retinal images. Diabetic eye disease refers to a group of eye

    problems that people with diabetes may face as a complication. Patients with diabetes are

    more likely to develop eye problems such as cataracts and glaucoma, but the disease's


effect on the retina is the main threat to vision. One such complication, causing abnormalities in the retina and, in the worst case, blindness or severe vision loss, is called Diabetic Retinopathy (DR).

    Non-proliferative retinopathy is the less serious form of diabetic retinopathy and

    occurs when an abnormality develops in the retinal capillaries, allowing fluid to leak into

the tissue of the eye. In this condition, a network of small blood vessels, called choroidal neovascularization (CNV), arises in the choroid, taking a portion of the blood supplying the retina. As the amount of blood supplying the retina is decreased, the sight

    may be degraded and in the severe cases, blindness may occur.

    The most common signs of diabetic retinopathy include hemorrhages, cotton wool

    spots, dilated retinal veins, and hard exudates. Retinal images, also known as fundus or

ocular images, are acquired by photographing the back of the eye. The development of an efficient and effective computer-based approach to the automated segmentation of blood vessels in retinal images would allow eye care specialists to screen large populations for vessel abnormalities.

    The detection and measurement of blood vessels can be used to classify the

    severity of disease, as part of the process of automated diagnosis of disease or in the

    assessment of the progression of therapy. Retinal blood vessels have measurable changes

in diameter, branching angles, and length as a result of disease. Thus, a reliable method of

    blood vessel extraction and segmentation would be valuable for the early detection and

    characterization of changes due to such diseases.


    CHAPTER 2

    LITERATURE REVIEW

2.1 LINE OPERATORS AND SUPPORT VECTOR CLASSIFICATION

In reference [1], within the framework of computer-aided diagnosis of eye diseases, retinal vessel segmentation based on line operators is proposed. A line detector,

    previously used in mammography, is applied to the green channel of the retinal image. It

    is based on the evaluation of the average grey level along lines of fixed length passing

through the target pixel at different orientations. Two segmentation methods are considered. The first uses the basic line detector whose response is thresholded to obtain

    unsupervised pixel classification. As a further development, we employ two orthogonal

    line detectors along with the grey level of the target pixel to construct a feature vector for

    supervised classification using a support vector machine. The effectiveness of both

    methods is demonstrated through receiver operating characteristic analysis on two

    publicly available databases of color fundus images.

    2.2 MATHEMATICAL MORPHOLOGY AND CURVATURE EVALUATION

    This reference [2] presents an algorithm based on mathematical morphology and

    curvature evaluation for the detection of vessel-like patterns in a noisy environment. Such

    patterns are very common in medical images. Vessel detection is interesting for the

    computation of parameters related to blood flow. Its tree-like geometry makes it a usable

    feature for registration between images that can be of a different nature. In order to define

    vessel-like patterns, segmentation will be performed with respect to a precise model. We

    define a vessel as a bright pattern, piece-wise connected, and locally linear. Mathematical

    Morphology is very well adapted to this description, however other patterns fit such a

    morphological description. In order to differentiate vessels from analogous background

    patterns, a cross-curvature evaluation is performed. They are separated out as they have a


    specific Gaussian-like profile whose curvature varies smoothly along the vessel. The

detection algorithm that derives directly from this modeling is based on four steps: 1) noise reduction, 2) linear pattern with Gaussian-like profile improvement, 3) cross-curvature evaluation, and 4) linear filtering. We present its theoretical background and

    illustrate it on real images of various natures, then evaluate its robustness and its accuracy

    with respect to noise.

2.3 2D GABOR WAVELET AND SUPERVISED CLASSIFICATION

    This reference [3] presents a method for automated segmentation of the

    vasculature in retinal images. The method produces segmentations by classifying each

image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform

    responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific

    frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use

    a Bayesian classifier with class-conditional probability density functions (likelihoods)

    described as Gaussian mixtures, yielding a fast classification, while being able to model

    complex decision surfaces. The probability distributions are estimated based on a training

set of labeled pixels obtained from manual segmentations. The method's performance is

    evaluated on publicly available DRIVE and STARE databases of manually labeled

    images. On the DRIVE database, it achieves an area under the receiver operating

characteristic curve of 0.9614, slightly superior to that presented by state-of-the-

    art approaches. We are making our implementation available as open source MATLAB

    scripts for researchers interested in implementation details, evaluation, or development of

    methods.

    2.4 RIDGE BASED

    A method is presented in reference [4] for automated segmentation of vessels in

    two-dimensional color images of the retina. This method can be used in computer

    analyses of retinal images, e.g., in automated screening for diabetic retinopathy. The


    system is based on extraction of image ridges, which coincide approximately with vessel

    centerlines. The ridges are used to compose primitives in the form of line elements. With

    the line elements an image is partitioned into patches by assigning each image pixel to

    the closest line element. Every line element constitutes a local coordinate frame for its

    corresponding patch. For every pixel, feature vectors are computed that make use of

    properties of the patches and the line elements. The feature vectors are classified using a

    NN-classifier and sequential forward feature selection. The algorithm was tested on a

database consisting of 40 manually labeled images. The method achieves an area under

    the receiver operating characteristic curve of 0.952. The method is compared with two

recently published rule-based methods of Hoover and Jiang. The results show that our method is significantly better than the two rule-based methods.


2.6 CENTERLINES AND MORPHOLOGICAL RECONSTRUCTION

The final segmentation is obtained using an iterative region growing method that integrates the contents of several binary images resulting from vessel-width-dependent morphological filters. Our approach was tested on two publicly

    available databases and its results are compared with recently published methods. The

    results demonstrate that our algorithm outperforms other solutions and approximates the

    average accuracy of a human observer without a significant degradation of sensitivity and

    specificity.

    2.7 TWO-DIMENSIONAL MATCHED FILTERS

    Although current literature in reference [6] abounds in a variety of edge detection

    algorithms, they do not always lead to acceptable results in extracting various features in

    an image. In this paper, we address the problem of detecting blood vessels in retinal

    images. Blood vessels usually have poor local contrast and the application of existing

    edge detection algorithms yield results which are not satisfactory. We introduce an

    operator for feature extraction based on the optical and spatial properties of objects to be

    recognized. The gray-level profile of the cross section of a blood vessel is approximated

    by a Gaussian shaped curve. The concept of matched filter detection of signals is used to

    detect piecewise linear segments of blood vessels in these images. We construct 12

    different templates that are used to search for vessel segments along all possible

    directions. We discuss various issues related to the implementation of these matched

    filters. The results are compared to those obtained with other methods. The automatic

    detection of blood vessels in the retina could help physicians in diagnosing ocular

    diseases.

    2.8 DETECTION AND TRACKING BY MATCHED GAUSSIAN AND KALMAN

    FILTERS

    The detection and tracking algorithms of the blood vessel network in the retinal

    images is proposed in reference [7]. Two main groups of algorithms are employed for

    this task, i.e., scanning and tracking. According to the known blood vessel feature, a

    second-order derivative Gaussian matched filter is designed and used to locate the center

    point and width of a vessel in its cross sectional profile. Together with this the Extended


    Kalman Filter is employed for the optimal linear estimation of the next possible location

    of blood vessel segment by appropriate formulation of its pattern changing process and

observation model. To check the bifurcation in the vessel network, a simple branching detection strategy is implemented during tracking. The proposed algorithms all work well in the whole tracking process and can detect a more complete vessel network in the ocular

    fundus photographs.

    2.9 DETECTION AND QUANTIFICATION OF RETINOPATHY USING

    DIGITAL ANGIOGRAMS

    An algorithm is presented for the analysis and quantification of the vascular

    structures of the human retina in reference [8]. Information about retinal blood vessel

    morphology is used in grading the severity and progression of a number of diseases.

    These disease processes are typically followed over relatively long time courses, and

    subjective analysis of the sequential images dictates the appropriate therapy for these

patients. In this research, retinal fluorescein angiograms are acquired digitally in a 1024 × 1024, 16-bit image format and are processed using an automated vessel tracking program to

    identify and quantitate stenotic and/or tortuous vessel segments. The algorithm relies on a

    matched filtering approach coupled with a priori knowledge about retinal vessel

    properties to automatically detect the vessel boundaries, track the midline of the vessel,

    and extract useful parameters of clinical interest. By modeling the vessel profile using

    Gaussian functions, improved estimates of vessel diameters are obtained over previous

    algorithms. An adaptive densitometric tracking technique based on local neighborhood

    information is also used to improve computational performance in regions where the

    vessel is relatively straight.

    2.10 AMPLITUDE MODIFIED SECOND ORDER GAUSSIAN FILTER

    In this reference [9], the fitness of estimating vessel profiles with Gaussian

    function is evaluated and an amplitude-modified second-order Gaussian filter is proposed

    for the detection and measurement of vessels. Mathematical analysis is given and


    supported by a simulation and experiments to demonstrate that the vessel width can be

    measured in linear relationship with the spreading factor of the matched filter when the

    magnitude coefficient of the filter is suitably assigned. The absolute value of vessel

    diameter can be determined simply by using a precalibrated line, which is typically

    required since images are always system dependent. The experiment shows that the

    inclusion of the width measurement in the detection process can improve the performance

    of matched filter and result in a significant increase in success rate of detection.

    2.11 MEASUREMENT BASED ON 2-D MODELLING

In reference [10], changes in retinal vessel diameter are an important sign of

    diseases such as hypertension, arteriosclerosis and diabetes mellitus. Obtaining precise

measurements of vascular widths is a critical and demanding process in automated retinal image analysis as the typical vessel is only a few pixels wide. This paper presents an

    algorithm to measure the vessel diameter to subpixel accuracy. The diameter

    measurement is based on a two-dimensional difference of Gaussian model, which is

    optimized to fit a two-dimensional intensity vessel segment. The performance of the

method is evaluated against Brinchmann-Hansen's half height, Gregson's rectangular profile and Zhou's Gaussian model. Results from 100 sample profiles show that the presented algorithm is over 30% more precise than the compared techniques and is

    accurate to a third of a pixel.


    CHAPTER 3

    EXISTING SYSTEM

Many methods for retinal vessel segmentation have been reported. These can be divided into two groups: rule-based methods and supervised methods. In the first group,

    we highlight methods using vessel tracking, mathematical morphology, matched filtering,

    model-based locally adaptive thresholding or deformable models. On the other hand,

    supervised methods are those based on pixel classification.

    3.1 RULE-BASED METHODS

    3.1.1 VESSEL TRACKING METHODS

    This method attempts to obtain the vasculature structure by following vessel

    center lines. Starting from an initial set of points established automatically or by manual

    labeling, vessels are traced by deciding from local information the most appropriate

    candidate pixel from those close to that currently under evaluation.

    3.1.2 MATHEMATICAL MORPHOLOGY

    This method is to benefit from a priori-known vasculature shape features, such as

    being piecewise linear and connected. Then, by applying morphological operators, the

    vasculature is filtered from the background for final segmentation.

    3.1.3 MATCHED FILTERING TECHNIQUE

This technique usually uses a 2-D linear structural element with a Gaussian cross-

    profile section, extruded or rotated into three dimensions for blood vessel cross-profile

    identification (typically a Gaussian or Gaussian-derivative profile). The kernel is rotated

    into many different orientations (usually 8 or 12) to fit into vessels of different

    configuration. The image is then thresholded to extract the vessel silhouette from the

    background.


3.1.4 MODEL-BASED LOCALLY ADAPTIVE THRESHOLDING

    This general framework based on a verification-based multithreshold probing

    scheme was presented by Jiang. These authors enriched this generic methodology by

    incorporating relevant information related to retinal vessels into the verification process

    with the aim of enabling its application to retinal images.

    3.1.5 DEFORMABLE OR SNAKE MODELS

    A snake is an active contour model that, once placed on the image near the

    contour of interest, can evolve to fit the shape of the desired structure by an iterative

    adaption.

    3.1.6 MULTISCALE FEATURE EXTRACTION

    The local maxima over scales of the gradient magnitude and the maximum

    principal curvature of the Hessian tensor were used in a multiple pass region growing

    procedure. Growth progressively segmented the blood vessels by using both feature and

    spatial information.

    3.1.7 LAPLACIAN AND GRADIENT VECTOR FIELD

    In this method, blood vessel-like objects were extracted by using the Laplacian

    operator and noisy objects were pruned according to centerlines, detected by means of the

    normalized gradient vector field.


    3.2 SUPERVISED METHODS

Supervised methods are based on pixel classification, which consists of

    classifying each pixel into two classes, vessel and non-vessel. Classifiers are trained by

    supervised learning with data from manually-labeled images.

    3.2.1 BACK PROPAGATION MULTILAYER NEURAL NETWORK

    This method is proposed for vascular tree segmentation. After histogram

equalization, smoothing and edge detection, the image was divided into 20×20 pixel

    squares (400 input neurons). The NN was then fed with the values of these pixel windows

    for classifying each pixel into vessel or not.

    3.2.2 MULTILAYER PERCEPTRON NN

    Each pixel in the image was classified by using the first principal component, and

the edge strength values from a 10×10 pixel subimage centered on the pixel under

    evaluation, as input data.

    3.2.3 K-NEAREST NEIGHBOR (KNN) CLASSIFIER

A 31-component pixel feature vector was constructed with the Gaussian and its derivatives up to order 2 at 5 different scales, augmented with the gray-level from the

    green channel of the original image.

    3.2.4 SUPERVISED RIDGE- BASED VESSEL DETECTION METHOD

The assumption that vessels are elongated structures is the basis for the supervised ridge-based vessel detection method. Ridges were extracted from the image and used as

    primitives to form line elements. Each pixel was then assigned to its nearest line element,

    the image thus being partitioned into patches. For every pixel, 27 features were firstly

    computed and those obtaining the best class separability were finally selected.


    3.2.5 KNN-CLASSIFIER AND SEQUENTIAL FORWARD FEATURE

    SELECTION

    Feature vectors were classified by using a kNN-classifier and sequential forward

    feature selection.

    3.2.6 GABOR WAVELET TRANSFORM

    Multiscale analysis was performed on the image by using this transform. The

    gray-level of the inverted green channel and the maximum Gabor transform response

    over angles at four different scales were considered as pixel features.

    3.2.7 SUPPORT VECTOR MACHINE (SVM)

    SVM is used for pixel classification as vessel or nonvessel. They used two

    orthogonal line detectors along with the gray-level of the target pixel to construct the

    feature vector.


    CHAPTER 4

    PROPOSED METHODOLOGY

    This project presents precise measurement of retinal vessel diameter using two

    modules i.e., segmentation and width measurement. A new supervised method is

    proposed for the blood vessel segmentation in retinal images by using gray-level and

moment invariants-based features. The change in width of retinal vessels within the fundus indicates the risk level of diabetic retinopathy. After the segmentation of blood vessels of the retina, the width of the blood vessels is measured, which in turn helps to diagnose hypertension and cardiovascular diseases. This measurement of blood vessels in the retinal image is done by a proposed method known as the graph-theoretic method.

    To evaluate the vessel segmentation methodology described in the next section,

    two publicly available databases containing retinal images, the DRIVE and STARE

    databases, were used. These databases have been widely used by other researchers to test

their vessel segmentation methodologies since, apart from being public, they provide

    manual segmentations for performance evaluation.

The STARE database, given as input for the proposed retinal blood vessel segmentation and originally collected by Hoover, comprises 20 eye-fundus color images (ten of them contain pathology) captured with a Topcon TRV-50 fundus camera at 35° FOV. The images were digitized to 700 × 605 pixels, 8 bits per color channel, and are available

    in PPM format. The database contains two sets of manual segmentations made by two

    different observers. Performance is computed with the segmentations of the first observer

    as ground truth.

    The database is divided into two sets: a test set and a training set, each of them

    containing 20 images. The test set provides the corresponding FOV masks for the images,

which are circular (approximate diameter of 540 pixels) and two manual segmentations


    generated by two different specialists for each image. The selection of the first observer

    is accepted as ground truth and used for algorithm performance evaluation in literature.

    The training set also includes the FOV masks for the images and a set of manual

    segmentations made by the first observer.

    4.1 PROPOSED VESSEL SEGMENTATION METHOD

    This proposes a new supervised approach for blood vessel detection based on a

    NN for pixel classification. The necessary feature vector is computed from preprocessed

    retinal images

    in the neighborhood of the pixel under consideration.

    The following process stages may be identified:

    1) Original fundus image preprocessing for gray-level homogenization and blood vessel

    enhancement,

    2) feature extraction for pixel numerical representation,

    3) Application of a classifier to label the pixel as vessel or nonvessel,

4) Postprocessing for filling pixel gaps in detected blood vessels and removing falsely detected isolated vessel pixels.

4.2 BLOCK DIAGRAM FOR PROPOSED VESSEL SEGMENTATION METHOD

Fig 1: Block diagram for the vessel segmentation method (input image → preprocessing → feature extraction → classification → postprocessing → output image)


    4.3 BLOCK DIAGRAM DESCRIPTION

    Input images are monochrome and obtained by extracting the green band from

    original RGB retinal images. The green channel provides the best vessel-background

    contrast of the RGB-representation, while the red channel is the brightest color channel

    and has low contrast, and the blue one offers poor dynamic range. Thus, blood containing

    elements in the retinal layer (such as vessels) are best represented and reach higher

    contrast in the green channel.

    The application of the methodology to retinas of different size (i.e., the diameter

    in pixels of STARE database retinas is approximately 650 pixels) demands either resizing

    input images to fulfill this condition or adapting proportionately the whole set of used

parameters to this new retina size. Vessel segmentation undergoes the following steps with input fundus images from the STARE database.

    4.3.1 PREPROCESSING

    Color fundus images often show important lighting variations, poor contrast and

    noise. In order to reduce these imperfections and generate images more suitable for

    extracting the pixel features demanded in the classification step, a preprocessing

    comprising the following steps is applied: 1) vessel central light reflex removal.

    2) Background homogenization.

    3) Vessel enhancement.

    4.3.1.1 VESSEL CENTRAL LIGHT REFLEX REMOVAL

    Since retinal blood vessels have lower reflectance when compared to other retinal

    surfaces, they appear darker than the background. Although the typical vessel cross-

    sectional gray-level profile can be approximated by a Gaussian shaped curve (inner

    vessel pixels are darker than the outermost ones), some blood vessels include a light

    streak (known as a light reflex) which runs down the central length of the blood vessel.

    To remove this brighter strip, the green plane of the image is filtered by applying a

    morphological opening using a three-pixel diameter disc, defined in a square grid by


using eight-connectivity, as the structuring element. The disc diameter was fixed to the minimum possible value to reduce the risk of merging close vessels. I denotes the resultant image in what follows.

    Fig 2 shows the fragment of original image containing a vessel with central light reflex

    Fig 3 shows the effect of light reflex removal
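A possible realization of this step with scikit-image, using its bundled sample retina image (fetched on first use) as a stand-in for the report's fundus images; disk(1) approximates the three-pixel-diameter disc described above.

```python
from skimage import data, morphology

rgb = data.retina()            # sample fundus image shipped with scikit-image
green = rgb[:, :, 1]           # green channel: best vessel/background contrast

# Morphological opening with a small disc removes the thin bright strip
# (central light reflex) that runs along vessel centrelines.
selem = morphology.disk(1)     # approximates the three-pixel-diameter disc
I_no_reflex = morphology.opening(green, selem)
print(green.shape, I_no_reflex.dtype)
```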

    4.3.1.2 BACKGROUND HOMOGENIZATION

    Fundus images often contain background intensity variation due to nonuniform

    illumination. Consequently, background pixels may have different intensity for the same

    image and, although their gray-levels are usually higher than those of vessel pixels, the

intensity values of some background pixels are comparable to those of brighter vessel pixels.

    Since the feature vector used to represent a pixel in the classification stage is formed by

    gray-scale values, this effect may worsen the performance of the vessel segmentation

methodology. With the purpose of removing these background lighting variations, a shade-corrected image is obtained from a background estimate. This image is the

    result of a filtering operation with a large arithmetic mean kernel.


    Fig 4: Shows Homogenized images

1) A 3×3 mean filter is applied to smooth occasional salt-and-pepper noise. Further noise smoothing is performed by convolving the resultant image with a Gaussian kernel of dimensions m×m = 9×9, mean μ = 0 and variance σ² = 1.8².

2) A background image IB is produced by applying a 69×69 mean filter. When this filter is applied to the pixels in the FOV near the border, the results are strongly biased by the external dark region. To overcome this problem, out-of-the-FOV gray-levels are replaced by the average gray-level of the remaining pixels in the square. Then, the difference D between I and IB is calculated for every pixel:

D(x,y) = I(x,y) - IB(x,y)    (1)

3) A shade-corrected image ISC is obtained by linearly transforming the D values into integers covering the whole range of possible gray-levels (0-255 for 8-bit images). The proposed shade-correction algorithm is observed to reduce background

    intensity variations and enhance contrast in relation to the original green channel image.

    Besides the background intensity variations in images, intensities can reveal significant

    variations between images due to different illumination conditions in the acquisition

    process. In order to reduce this influence, a homogenized image IH is produced as

    follows: the histogram of ISC is displaced toward the middle of the gray-scale by

    modifying pixel intensities according to the following gray-level global transformation

    function:


Output = 0 if g < 0, 255 if g > 255, and g otherwise    (2)

where

g = Input + 128 - Input_Max    (3)

Input and Output are the gray-level variables of the input and output images (ISC and IH, respectively). The variable Input_Max defines the gray-level presenting the

    highest number of pixels in ISC. By means of this operation, pixels with gray-level

    Input_Max, which are observed to correspond to the background of the retina, are set to

    128 for 8-bit images. Thus, background pixels in images with different illumination

    conditions will standardize their intensity around this value.
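A sketch of steps 1)-3) with SciPy, under the parameters stated in the text (3×3 mean filter, 9×9 Gaussian with σ = 1.8, 69×69 mean filter, histogram mode shifted to 128); the FOV-border correction described above is omitted for brevity, so this is an approximation, not the report's exact procedure.

```python
import numpy as np
from scipy import ndimage as ndi

def homogenize_background(I):
    """Sketch of the shade-correction / homogenization steps on a green-channel image I."""
    I = I.astype(float)
    sm = ndi.uniform_filter(I, size=3)                 # 3x3 mean filter
    sm = ndi.gaussian_filter(sm, sigma=1.8)            # Gaussian smoothing, sigma = 1.8
    IB = ndi.uniform_filter(sm, size=69)               # large mean kernel -> background estimate
    D = sm - IB                                        # Eq. (1)
    ISC = np.round(255 * (D - D.min()) / (D.max() - D.min()))   # linear map to 0..255
    # Eq. (2)-(3): shift the histogram so the most frequent gray-level sits at 128.
    input_max = np.bincount(ISC.astype(int).ravel(), minlength=256).argmax()
    g = ISC + 128 - input_max
    IH = np.clip(g, 0, 255)
    return IH.astype(np.uint8)

# Usage with the image produced by the reflex-removal step (see previous sketch):
# IH = homogenize_background(I_no_reflex)
```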

    4.3.1.3 VESSEL ENHANCEMENT

The final preprocessing step consists of generating a new vessel-enhanced image (IVE), which proves more suitable for the subsequent extraction of moment invariants-based features. Vessel enhancement is performed by estimating the complementary image of the homogenized image IH, denoted IHc, and subsequently applying the morphological top-hat transformation:

IVE = IHc - γ(IHc)    (4)

where γ is a morphological opening operation using a disc of eight pixels in radius.

    Thus, while bright retinal structures are removed (i.e., optic disc, possible presence of

    exudates or reflection artifacts), the darker structures remaining after the opening

    operation become enhanced.


    Fig 5: shows vessel-enhanced image
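A possible implementation of Eq. (4) with scikit-image: the homogenized image is complemented and a top-hat with an 8-pixel-radius disc is applied. The one-line white_tophat variant shown in the comment is an equivalent library call, not the report's own code.

```python
import numpy as np
from skimage import morphology

def enhance_vessels(IH):
    """Eq. (4): complement the homogenized image and apply a morphological top-hat
    with an 8-pixel-radius disc, so dark vessels become bright enhanced structures."""
    IH_c = 255 - IH.astype(np.uint8)                    # complementary image IHc
    selem = morphology.disk(8)
    opened = morphology.opening(IH_c, selem)            # gamma(IHc)
    IVE = IH_c.astype(int) - opened.astype(int)         # top-hat: IHc - gamma(IHc)
    return np.clip(IVE, 0, 255).astype(np.uint8)

# Equivalent one-liner with the library routine:
# IVE = morphology.white_tophat(255 - IH, morphology.disk(8))
```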

    4.3.2 FEATURE EXTRACTION

    The aim of the feature extraction stage is pixel characterization by means of a

    feature vector, a pixel representation in terms of some quantifiable measurements which

    may be easily used in the classification stage to decide whether pixels belong to a real

    blood vessel or not. In this paper, the following sets of features were selected.

    Gray-level-based features: features based on the differences between the gray-level in

    the candidate pixel and a statistical value representative of its surroundings.

Moment invariants-based features: features based on moment invariants for describing

    small image regions formed by the gray-scale values of a window centered on the

    represented pixels.

    4.3.2.1 GRAY-LEVEL-BASED FEATURES

    Since blood vessels are always darker than their surroundings, features based on

describing gray-level variation in the surroundings of candidate pixels seems a good choice. A set of gray-level-based descriptors taking this information into account were

derived from the homogenized image IH, considering only a small pixel region centered on the described pixel (x,y). Sx,y^w stands for the set of coordinates in a w×w sized square window centered on point (x,y). Then, these descriptors can be expressed as

f1(x,y) = IH(x,y) - min{IH(s,t) : (s,t) ∈ Sx,y^w}    (5)

f2(x,y) = max{IH(s,t) : (s,t) ∈ Sx,y^w} - IH(x,y)    (6)

f3(x,y) = IH(x,y) - mean{IH(s,t) : (s,t) ∈ Sx,y^w}    (7)

f4(x,y) = std{IH(s,t) : (s,t) ∈ Sx,y^w}    (8)

f5(x,y) = IH(x,y)    (9)
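A sketch of features (5)-(9) using SciPy rank and averaging filters; the window size w = 9 is an assumed value, since the report does not restate it here.

```python
import numpy as np
from scipy import ndimage as ndi

def gray_level_features(IH, w=9):
    """Features (5)-(9) computed over a w x w window around every pixel."""
    IH = IH.astype(float)
    f5 = IH                                                # Eq. (9)
    local_min = ndi.minimum_filter(IH, size=w)
    local_max = ndi.maximum_filter(IH, size=w)
    local_mean = ndi.uniform_filter(IH, size=w)
    local_sqmean = ndi.uniform_filter(IH ** 2, size=w)
    f1 = IH - local_min                                    # Eq. (5)
    f2 = local_max - IH                                    # Eq. (6)
    f3 = IH - local_mean                                   # Eq. (7)
    f4 = np.sqrt(np.maximum(local_sqmean - local_mean ** 2, 0.0))  # Eq. (8), local std
    return np.stack([f1, f2, f3, f4, f5], axis=-1)

# features = gray_level_features(IH)   # shape (H, W, 5)
```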

    4.3.2.2 MOMENT INVARIANT BASED FEATURES

The vasculature in retinal images is known to be piecewise linear and can be approximated by many connected line segments. For detecting these quasi-linear shapes, which are not all equally wide and may be oriented at any angle, shape descriptors invariant to translation, rotation and scale change may play an important role. Moment invariants provide an attractive solution and are included in the feature vector.

Given a pixel (x,y) of the vessel-enhanced image IVE, a subimage is generated by taking the region defined by Sx,y^17. The size of this region was fixed to 17×17 so that, considering that the region is centered on the middle of a wide vessel (8-9 pixels wide, referred to retinas of approximately 540 pixels in diameter), the subimage includes an approximately equal number of vessel and nonvessel pixels. The moment invariants are computed for this subimage.

A set of seven moment invariants under size, translation, and rotation, known as Hu moment invariants, can be derived from combinations of regular moments. The first two are

φ1 = η20 + η02    (10)

φ2 = (η20 - η02)² + 4η11²    (11)

where ηpq denotes the normalized central moments of the subimage. Computed in this way, the values describing vessel and nonvessel central pixels become sensitive and reflect significant differences between the two classes: both φ1 and φ2 increase, compared to the original values, when the central pixel is a vessel pixel and decrease when it is a


nonvessel central pixel. In conclusion, the following descriptors were considered to be part of the feature vector of a pixel located at (x,y):

f6(x,y) = |log(φ1)|    (12)

f7(x,y) = |log(φ2)|    (13)

Fig 6: Example of obtaining pixel environments for moment invariant calculation

To overcome this problem, the moments are computed on a new subimage, IHu, produced by multiplying the original one.

            p-1a    p-1b    p-2a    p-2b    p-3a    p-3b    p-4a    p-4b
|log(φ1)|   5.26    4.73    4.87    5.02    4.36    4.23    3.96    3.92
|log(φ2)|  11.70   11.29   10.71   11.81   10.92   10.90   10.59   12.11

Table 1: modulus of the φ1 and φ2 moment logarithms calculated from the subimages

            p-1a    p-1b    p-2a    p-2b    p-3a    p-3b    p-4a    p-4b
|log(φ1)|   5.34    2.89    5.16    3.13    4.79    2.34    4.12    2.21
|log(φ2)|  13.57    9.16   12.85   11.11   11.19    8.31   10.82    7.79

Table 2: modulus of the φ1 and φ2 moment logarithms calculated from the subimages IHu
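For illustration, features (12)-(13) can be computed from a 17×17 subimage with scikit-image's moment routines; the synthetic "vessel" stripe and the helper name moment_features are hypothetical, and the IHu weighting step mentioned above is not reproduced here.

```python
import numpy as np
from skimage.measure import moments_central, moments_normalized, moments_hu

def moment_features(IVE, x, y, half=8):
    """Features (12)-(13): |log phi1| and |log phi2| of the Hu moments of the
    17x17 subimage of the vessel-enhanced image centred on pixel (x, y)."""
    sub = IVE[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    mu = moments_central(sub)            # central moments
    nu = moments_normalized(mu)          # normalized central moments (eta_pq)
    phi = moments_hu(nu)                 # seven Hu invariants; phi[0], phi[1] used here
    f6 = abs(np.log(phi[0]))             # Eq. (12)
    f7 = abs(np.log(phi[1]))             # Eq. (13)
    return f6, f7

# Example on a synthetic subimage region containing a bright vertical "vessel".
img = np.zeros((64, 64)); img[:, 30:34] = 200.0
print(moment_features(img, x=32, y=32))
```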


    4.3.3 CLASSIFICATION

    In the feature extraction stage, each pixel from a fundus image is characterized by a

    vector in a 7-D feature space

    F(x,y)=(f1(x,y),..,f7(x,y)) (14)

Now, a classification procedure assigns one of the classes C1 (vessel) or C2 (nonvessel)

    to each candidate pixel when its representation is known. In order to select a suitable

    classifier, the distribution of the training set data (described below) in the feature space

    was analyzed. The results of this analysis showed that the class linear separability grade

    was not high enough for the accuracy level required for vasculature segmentation in

retinal images. Therefore, the use of a nonlinear classifier was necessary.

The following nonlinear classifiers can be found in the existing literature on this topic: the kNN method, support vector machines, the Bayesian classifier, or neural networks.

    A multilayer feedforward NN was selected in this paper.

    Two classification stages can be distinguished:

    DESIGN STAGE, in which the NN configuration is decided and the NN is trained,

    APPLICATION STAGE, in which the trained NN is used to classify each pixel as vessel

    or nonvessel to obtain a vessel binary image.

    4.3.3.1 NEURAL NETWORK DESIGN

    A multilayer feedforward network, consisting of an input layer, three hidden

layers and an output layer, is adopted in this paper. The input layer is composed of a

    number of neurons equal to the dimension of the feature vector (seven neurons).

    Regarding the hidden layers, several topologies with different numbers of neurons were

    tested. A number of three hidden layers, each containing 15 neurons, provided optimal

NN configuration. The output layer contains a single neuron and is attached, as are the remaining units, to a nonlinear logistic sigmoid activation function, so its output ranges between 0 and 1. This choice was grounded on interpreting the NN output as


posterior probabilities. The training set, ST, is composed of a set of candidates for which the feature vector F(n) and the classification result Ck(n) (C1 or C2: vessel or nonvessel) are known:

ST = {(F(n), Ck(n)) | n = 1, ..., N; k ∈ {1, 2}}    (15)

The samples forming ST were collected from manually labeled nonvessel and vessel pixels

    in the DRIVE training images. Specifically, around 30 000 pixel samples, fairly divided

into vessel and non-vessel pixels, were used. Unlike other authors, who selected their training sets by random pixel-sample extraction from available manual segmentations of DRIVE and STARE images, we produced our own training set by hand. Gold-standard images

may contain errors due to the considerable difficulty involved in the creation of these handmade images. To reduce the risk of introducing errors in ST and, therefore, of introducing noise into the NN, we opted for carefully selecting specific training samples covering all possible vessel, background, and noise patterns. This training set was applied to compute method performance with both the DRIVE and STARE databases.

Since the features have very different ranges and values, each of these features is normalized to zero mean and unit variance independently by applying

fi' = (fi - μi) / σi    (16)

where μi and σi stand for the average and standard deviation of the ith feature calculated over ST. Once ST is established, the NN is trained by adjusting the weights of its connections. The back-propagation training algorithm was used for this purpose.
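A sketch of the design stage using scikit-learn's MLPClassifier with the stated topology (three hidden layers of 15 logistic-sigmoid units). The feature matrix here is random stand-in data, whereas the report trains on roughly 30 000 manually labelled DRIVE pixels, and scikit-learn's training loop stands in for the hand-written back-propagation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Feature matrix X (one 7-D row per training pixel) and labels y (1 = vessel, 0 = nonvessel)
# would normally come from manually labelled pixels; random data stands in here.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 7))
y = (X[:, 0] + 0.5 * X[:, 5] > 0).astype(int)

scaler = StandardScaler().fit(X)            # Eq. (16): zero mean, unit variance per feature
Xn = scaler.transform(X)

# Three hidden layers of 15 logistic-sigmoid units, trained by back-propagation.
net = MLPClassifier(hidden_layer_sizes=(15, 15, 15), activation='logistic',
                    solver='adam', max_iter=300, random_state=0)
net.fit(Xn, y)

# For an unseen pixel, the posterior probability of the vessel class:
p_vessel = net.predict_proba(scaler.transform(X[:5]))[:, 1]
print(p_vessel)
```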

    4.3.3.2 NEURAL NETWORK APPLICATION

    At this stage, the trained NN is applied to an unseen fundus image to generate a

binary image in which blood vessels are identified from the retinal background: the pixels' mathematical descriptions are individually passed through the NN. In our case, the NN input units receive the set of features provided by (5)-(9), (12) and (13), normalized

    according to (16). Since a logistic sigmoidal activation function was selected for the


    single neuron of the output layer, the NN decision determines a classification value

    between 0 and 1. Thus, a vessel probability map indicating the probability for the pixel to

    be part of a vessel is produced.

    The bright pixels in this image indicate higher probability of being vessel pixel. In

    order to obtain a vessel binary segmentation, a thresholding scheme on the probability

    map is used to decide whether a particular pixel is part of a vessel or not. Therefore, the

classification procedure assigns one of the classes C1 (vessel) or C2 (nonvessel) to each candidate pixel, depending on whether its associated probability is greater than a threshold Th. Thus, a classification output image is obtained by associating classes C1 and C2 with the gray-levels 255 and 0, respectively:

Classification(x,y) = 255 if p(C1 | F(x,y)) > Th, and 0 otherwise    (17)

where p(C1 | F(x,y)) denotes the probability of a pixel described by feature vector F(x,y) of belonging to class C1.

    4.3.4 POST PROCESSING

    Classifier performance is enhanced by the inclusion of a two-step postprocessing

    stage:

    The first step is aimed at filling pixel gaps in detected blood vessels,

    The second step is aimed at removing falsely detected isolated vessel pixels.

    Fig7: Postprocessing image


    From visual inspection of the NN output, vessels may have a few gaps (i.e., pixels

    completely surrounded by vessel points, but not labeled as vessel pixels). To overcome

    this problem, an iterative filling operation is performed by considering that pixels with at

    least six neighbors classified as vessel points must also be vessel pixels. Besides, small

    isolated regions misclassified as blood vessel pixels are also observed. In order to remove

    these artifacts, the pixel area in each connected region is measured. In artifact removal,

each connected region with an area below 25 pixels is reclassified as nonvessel.
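A possible realization of the two postprocessing steps, assuming a binary vessel image: iterative filling of pixels with at least six vessel neighbours, followed by removal of connected regions smaller than 25 pixels. The iteration limit is an assumed safeguard, not a value from the report.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import morphology

def postprocess(vessel_binary, min_area=25, neighbour_count=6, max_iter=10):
    """Fill small gaps (pixels with >= 6 vessel neighbours become vessel) and
    drop connected regions smaller than `min_area` pixels."""
    filled = vessel_binary.astype(bool).copy()
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    for _ in range(max_iter):                      # iterate until no pixel changes
        neighbours = ndi.convolve(filled.astype(int), kernel, mode='constant')
        new_pixels = (~filled) & (neighbours >= neighbour_count)
        if not new_pixels.any():
            break
        filled |= new_pixels
    cleaned = morphology.remove_small_objects(filled, min_size=min_area)
    return cleaned

# cleaned = postprocess(probability_map > 0.91)   # threshold value used for STARE in the text
```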


    CHAPTER 5

    EXPERIMENTAL RESULTS

    5.1 PERFORMANCE MEASURES

    In order to quantify the algorithmic performance of the proposed method on a

    fundus image, the resulting segmentation is compared to its corresponding gold-standard

    image. This image is obtained by manual creation of a vessel mask in which all vessel

    pixels are set to one and all nonvessel pixels are set to zero. Thus, automated vessel

    segmentation performance can be assessed.

    This algorithm was evaluated in terms of sensitivity (Se), specificity (Sp), positive

    predictive value (Ppv), negative predictive value (Npv) and accuracy (Acc). These metrics

    are defined as

Se  = TP / (TP + FN)
Sp  = TN / (TN + FP)
Ppv = TP / (TP + FP)    (22)
Npv = TN / (TN + FN)
Acc = (TP + TN) / (TP + FN + TN + FP)

where TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively. Se and Sp are the ratios of well-classified vessel and nonvessel pixels, respectively. Ppv is the ratio of pixels classified as vessel pixels that are correctly classified. Npv is the ratio of pixels classified as background pixels that are correctly classified. Acc is a global measure providing the ratio of total well-classified pixels.
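A small helper, assuming binary prediction and gold-standard masks (optionally restricted to the FOV), that evaluates the five metrics of Eq. (22); the tiny test arrays are illustrative only.

```python
import numpy as np

def segmentation_metrics(pred, truth, fov=None):
    """Se, Sp, Ppv, Npv and Acc from a binary prediction and a gold-standard mask,
    optionally restricted to the field of view (FOV)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    if fov is not None:
        pred, truth = pred[fov], truth[fov]
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "Se":  tp / (tp + fn),
        "Sp":  tn / (tn + fp),
        "Ppv": tp / (tp + fp),
        "Npv": tn / (tn + fn),
        "Acc": (tp + tn) / (tp + tn + fp + fn),
    }

# Tiny example with one true positive, one true negative, one false positive, one false negative.
pred  = np.array([[1, 0], [1, 0]], dtype=bool)
truth = np.array([[1, 0], [0, 1]], dtype=bool)
print(segmentation_metrics(pred, truth))
```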

    This algorithm performance was also measured with receiver operating

characteristic (ROC) curves. A ROC curve is a plot of true positive fractions (Se) versus false positive fractions (1 - Sp) obtained by varying the threshold on the probability map. The closer

    a curve approaches the top left corner, the better the performance of the system. The area


    under the curve (AUC), which is 1 for a perfect system, is a single measure to quantify

    this behavior.

5.2 PROPOSED METHOD EVALUATION

This method was evaluated on DRIVE and STARE database images with available gold-standard images. Since the images' dark background outside the FOV is easily detected, Se, Sp, Ppv, Npv and Acc values were computed for each image considering FOV pixels only. Since FOV masks are not provided for STARE images, they were generated with approximate dimensions of 650 × 550 pixels. The average values of Se, Sp, Ppv, Npv and Acc are shown for the 20 images in each database. The threshold value was fixed for all the images in the same database: 0.91 for STARE images. The AUC measured for the ROC curve was 0.9769 for the STARE database.

5.3 COMPARISON TO OTHER METHODS

In order to compare our approach to other retinal vessel segmentation algorithms, Acc and AUC were used as measures of method performance. Since these measurements were performed by other authors, this choice facilitates comparing our results to theirs. The proposed method proves especially useful for vessel detection in STARE images. Its application to this database resulted in the second highest accuracy score among all experiments, and the first when AUC is the reference measurement. Since no labeled training images are available for STARE, Soares et al. performed leave-one-out tests on this database.

The STARE database contains ten images with pathologies, while the test set of DRIVE contains only four. Moreover, abnormal regions are wider in STARE. Regarding performance comparison in terms of accuracy, when results are jointly analyzed for DRIVE and STARE images, our algorithm renders greater accuracy than other authors' algorithms, being outperformed only by Ricci and Perfetti's proposal. This method, however, proved very dependent on the training set. To research the dependence of their classification method on the dataset, these authors carried out the following experiment:


Firstly, the classifier was trained on each of the DRIVE and STARE databases and then tested on the other. It can be observed that performance is worse in this case, since the maximum accuracy strongly decreases from 0.9595 to 0.9266 on DRIVE and from 0.9646 to 0.9452 on STARE database images. Therefore, classifier retraining is necessary before applying their methodology to a new database. Training the classifier with STARE images, the resulting values are shown to facilitate comparisons between both methods under identical conditions. In this case, it is clearly observed that our estimated performance in terms of method accuracy is higher, thus proving higher training set robustness.

Fig 8: Input image
Fig 9: Proposed output image


    CHAPTER 6

    CONCLUSION

This method is based on a NN scheme for pixel classification, with the feature vector representing each pixel composed of gray-level and moment invariants-based

    features.

    The experiments aimed at evaluating the efficiency of the applied descriptors

    prove this method is capable of rendering accurate results, even when these types of

features are used independently. Thus, accuracy improves up to 0.9526 for the 20 test images in the STARE database. Therefore, the method finally adopts a 7-D feature vector composed of the five gray-level and the two moment invariants-based features.

    The proposed method uses a NN for pixel classification as vessel or non-vessel.

This classifier was selected after assessing method accuracy with a kNN and an SVM in place of the NN; the NN showed better accuracy than both for all

    cases. Its application to the STARE database results in the second highest accuracy score

    among all experiments.

    In addition, method simplicity should also be highlighted. Its pixel classification

    procedure is based on computing only seven features for each pixel, thus needing shorter

    computational time. The total time required to process a single image is less than

    approximately one minute and thirty seconds, running on a PC with an Intel Core2Duo

    CPU at 2.13 GHz and 2 GB of RAM. Since our implementation is experimental, this

    performance might still be improved.

    The demonstrated effectiveness and robustness, together with its simplicity and

    fast implementation, make this proposed automated blood vessel segmentation method a

    suitable tool for being integrated into a complete prescreening system for early DR

detection. After segmentation of the retinal blood vessels, the segmented output is proposed to be used to measure the width of the retinal blood vessels. This second module will be continued in

    the next phase. The change in retinal blood vessel diameter indicates that the vessel is

    affected by DR.


    CHAPTER 7

    REFERENCES

[1] E. Ricci and R. Perfetti, "Retinal blood vessel segmentation using line operators and support vector classification," IEEE Trans. Med. Imag., vol. 26, no. 10, pp. 1357-1365, Oct. 2007.

[2] F. Zana and J. C. Klein, "Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation," IEEE Trans. Image Process., vol. 10, no. 7, pp. 1010-1019, Jul. 2001.

[3] J. V. B. Soares, J. J. G. Leandro, R. M. Cesar, Jr., H. F. Jelinek, and M. J. Cree, "Retinal vessel segmentation using the 2D Gabor wavelet and supervised classification," IEEE Trans. Med. Imag., vol. 25, no. 9, pp. 1214-1222, Sep. 2006.

[4] J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, "Ridge-based vessel segmentation in color images of the retina," IEEE Trans. Med. Imag., vol. 23, no. 4, pp. 501-509, Apr. 2004.

[5] B. S. Y. Lam and H. Yan, "A novel vessel segmentation algorithm for pathological retina images based on the divergence of vector fields," IEEE Trans. Med. Imag., vol. 27, no. 2, pp. 237-246, Feb. 2008.

[6] A. M. Mendonça and A. Campilho, "Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction," IEEE Trans. Med. Imag., vol. 25, no. 9, pp. 1200-1213, Sep. 2006.


[7] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, "Detection of blood vessels in retinal images using two-dimensional matched filters," IEEE Trans. Med. Imag., vol. 8, no. 3, pp. 263-269, Sep. 1989.

[8] O. Chutatape, L. Zheng, and S. M. Krishnan, "Retinal blood vessel detection and tracking by matched Gaussian and Kalman filters," in Proc. IEEE Int. Conf. Eng. Med. Biol. Soc., 1998, vol. 20, pp. 3144-3149.

[9] L. Zhou, M. S. Rzeszotarski, L. J. Singerman, and J. M. Chokreff, "The detection and quantification of retinopathy using digital angiograms," IEEE Trans. Med. Imag., vol. 13, no. 4, pp. 619-626, Dec. 1994.

[10] L. Gang, O. Chutatape, and S. M. Krishnan, "Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian filter," IEEE Trans. Biomed. Eng., vol. 49, pp. 168-172, Feb. 2002.

[11] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, and R. L. Kennedy, "Measurement of retinal vessel widths from fundus images based on 2-D modeling," IEEE Trans. Med. Imag., vol. 23, no. 10, pp. 1196-1204, Oct. 2004.