
www.ietdl.org


Published in IET Image Processing
Received on 19th April 2013. Revised on 14th July 2013. Accepted on 5th August 2013.
doi: 10.1049/iet-ipr.2013.0239

© The Institution of Engineering and Technology 2014

ISSN 1751-9659

An approach for chest tube detection in chest radiographs

Cem Ahmet Mercan, Mustafa Serdar Celebi

Informatics Institute, Istanbul Technical University (ITU), Maslak 34469, Istanbul, Turkey

E-mail: [email protected]

Abstract: It is known that overlapping tissues cause highly complex projections in chest radiographs. In addition, artificial objects, such as catheters, chest tubes and pacemakers, can appear on these radiographs. It is important that anomaly detection algorithms are not confused by these objects. To achieve this goal, the authors propose an approach to train a convolutional neural network (CNN) to detect chest tubes present on radiographs. To better detect the chest tube skeleton as the final output, non-uniform rational B-spline curves are automatically fitted to the CNN output. This is the first study conducted to automatically detect artificial objects in the lung region of chest radiographs; other automatic detection schemes work on the mediastinum. The authors evaluated the performance of the model using a pixel-based receiver operating characteristic (ROC) analysis. Each true positive, true negative, false positive and false negative pixel is counted and used to calculate average accuracy, sensitivity and specificity percentages. The results were 99.99% accuracy, 59% sensitivity and 99.99% specificity. The authors therefore obtained promising results on the detection of artificial objects.

1 Introduction

Chest radiography is the most common radiological modality in practice [1, 2]. In the short term, it appears that modern imaging techniques will not replace chest radiography [2], because radiography is fast, dense, inexpensive and more accessible than most modern imaging techniques, such as magnetic resonance (MR) imaging or computed tomography (CT) [3]. In fact, the most important problem underlying these modern techniques is the resulting amount of data that must be handled [4]. CT and MR are volumetric methods, which require radiologists to evaluate up to 1500 slices per patient [4] and up to 300 slices per thorax [5]. In contrast, chest radiography produces only a single image and is therefore widely used for scanning and screening purposes. Even patients in the intensive care unit are monitored daily using chest radiography [6]. On average, 236 chest radiographs are taken per 1000 patients each year [1]. For these reasons, computer-aided diagnosis (CAD) for chest radiography, that is, the computerised analysis of chest radiographs for anomaly detection, is an important and active research area. Moreover, the importance of CAD has increased with the widespread use of picture archiving and communication systems [7].

Unfortunately, chest radiography is one of the most difficult radiological modalities to evaluate [8, 9], because overlapping tissues create a highly complex projection. In addition, artificial objects such as catheters, chest tubes, pacemakers and even clothes can be present in the projection image (Fig. 1). Clinical practice shows that the presence of an artificial object is common and creates further complexities. For example, 33% of chest radiographs contain a catheter [10]. When a researcher attempts to develop a robust CAD approach for chest radiography, he or she must be certain that each algorithm works properly for all chest radiographs that contain foreign objects. Therefore, the anomaly detection algorithm should not be confused by these artificial objects. As a result, detecting foreign objects is a critical issue for CAD research. However, a survey by van Ginneken et al. [2] reported that the detection of artificial objects is one of the unsolved problems of CAD.

Only a few studies in the CAD literature have focused on the detection of artificial objects in chest radiography. Four prominent studies can be highlighted. First, a semi-automated method for tracking the location of nasogastric tubes, endotracheal tubes, chest tubes, peripherally inserted central catheters (PICC) and central venous catheters using five chest radiographs was proposed by Keller and Reeves [11]. The second and third studies investigated the automatic detection of tubes that are located only in the mediastinum: a method for the automatic detection and positioning of endotracheal, feeding and nasogastric tubes using 107 chest radiographs was studied by Sheng et al. [12], and Ramakrishna et al. [13, 14] worked on the automatic detection of endotracheal and nasogastric tubes. All of these automatic methods work for objects located in the mediastinum only. Fourth, the detection and removal of simulated chest tubes from radiographs were studied in our previous work [15].

In this work, we propose the use of a convolutional neural network (CNN) together with non-uniform rational B-splines (NURBS) to detect the presence of chest tubes in chest radiographs (Fig. 2). Although there are some medical image processing and CAD methods that work on chest radiographs

IET Image Process., 2014, Vol. 8, Iss. 2, pp. 122–129. doi: 10.1049/iet-ipr.2013.0239


Fig. 1 Examples of artificial objects in a chest radiograph
(1) Oxygen cannula, (2) connected EKG electrode, (3) non-connected EKG electrode and (4) chest tube


and use a CNN, such as segmentation of bones [16] and detection of cancer on chest radiographs [17, 18], there are no artificial object detection schemes that use a CNN. Moreover, our proposed model is the first study conducted to automatically detect artificial objects in the lung region of chest radiographs; other automatic detection schemes work on the mediastinum.

In our research, we noted that in some cases the CNN output of the form of the detected chest tube skeleton was not clear and had short and long discontinuities. To overcome this problem, we introduced a NURBS-based curve fitting algorithm that was applied to the detected chest tube image to obtain a better chest tube skeleton as a final output. The model was trained with 62 chest radiographs (our training dataset) and was tested with two datasets: our test dataset (21 chest radiographs) and the Standard Digital Image Database of the Project Team of the Scientific Committee of the Japanese Society of Radiological Technology (JSRT) dataset (247 chest radiographs) [19]. Chest radiographs in the JSRT dataset do not contain chest tubes, and they are used here to show that there are no false positives (FPs) for a general dataset. Our results showed that the CNN approach with the NURBS-based curve fitting algorithm is a promising method for detecting a chest tube in a given chest radiograph.

Fig. 2 Flowchart of the proposed system

2 Model for detecting a chest tube

2.1 CNN architecture

A CNN approach is preferred because it has been developed to work on images and it uses pictures directly as an input without requiring a feature extraction stage [20]. Moreover, a CNN is inspired by biological visual systems [21] and can handle some image processing problems internally, such as shifting, scaling and distortion variations. A CNN also reduces the number of free parameters of the neural network (NN) by using shared weights. Instead of the scalars used by a traditional NN, a CNN uses weight matrices, defined in (1) and (3). The main advantage of using matrices is that they work as local receptive fields and preserve local spatial neighbourhoods. In this way, a CNN allows us to simulate both the feature extraction and pattern recognition stages, similar to the classic scheme used for pattern recognition [18, 20].

A CNN requires training with many image pairs. An image pair is composed of a chest image, which contains a chest tube, as the input, and a map image, which contains a line representing the opaque line of a chest tube, as the target. After the training stage, for a given X-ray image that contains a chest tube, the CNN output will be a map image containing the chest tube.

Let $\mathrm{net}_k$ denote the $k$th pre-activation image of the output layer of our NN, which contains $L$ layers (in other words, the output of the $k$th node of the output layer). $O_j$ denotes the $j$th image group, which is the output of the $j$th node at layer $L-1$, and let $J$ be the number of such groups. The bias value at the $k$th node of the output layer is denoted $b_k$. The convolution kernel between the $k$th output node and the $j$th node at layer $L-1$ has width $S$ and height $T$. In addition, $k_w$, $k_h$, $j_w$ and $j_h$ are the column and row indices of $\mathrm{net}_k$ and $O_j$, respectively. $\mathrm{net}_k$ is given by

$$\mathrm{net}_k^{k_w,k_h} = b_k + \sum_{j}^{J} \sum_{s}^{S} \sum_{t}^{T} W_{kj}^{s,t}\, O_j^{j_w,j_h} \qquad (1)$$

where $j_w = (k_w \times m) + s$ and $j_h = (k_h \times n) + t$. These last two relations link the indices of the $k$th output node $(k_w, k_h)$ to the indices of the previous layer's $j$th node $(j_w, j_h)$; the indices $(s, t)$ of the convolution kernel $W_{kj}^{s,t}$ and the step sizes ($m$ for horizontal, $n$ for vertical) define the relationship. Then, $\mathrm{net}_k$ is used to calculate $O_k$ as follows

$$O_k^{k_w,k_h} = f\!\left(\mathrm{net}_k^{k_w,k_h}\right), \quad k = 1, \ldots, K \qquad (2)$$

where $f(\cdot)$ is a transfer function that we select.

Almost the same equations are valid for the hidden layers, with a few differences. Let $\mathrm{net}_j$ denote the $j$th pre-activation image of the $\ell$th layer of our NN (in other words, the input to the $j$th node of the $\ell$th layer). The convolution kernel between the $j$th output image of the $\ell$th layer and the $i$th node at layer $\ell-1$ has width $U$ and height $V$. $O_i$ denotes the $i$th image group, which is the output of the $i$th node at layer $\ell-1$, and let $I$



Table 1 Training and testing datasets

Radiograph content(s)         Our training set   Our test set   JSRT database   Testing total
no chest tube                        24                8              247            255
a single chest tube                  25               11                0             11
two chest tubes                      13                2                0              2
other artificial object(s)           36               13                4             17
total                                62               21              247            268


be the number of such groups. Then

$$\mathrm{net}_j^{j_w,j_h} = b_j + \sum_{i}^{I} \sum_{u}^{U} \sum_{v}^{V} W_{ji}^{u,v}\, O_i^{i_w,i_h} \qquad (3)$$

where $i_w = (j_w \times m') + u$ and $i_h = (j_h \times n') + v$. These two relations link the indices of the $j$th hidden node $(j_w, j_h)$ to the indices of the previous layer's $i$th node $(i_w, i_h)$; the convolution kernel indices $(u, v)$ and the step sizes ($m'$ for horizontal, $n'$ for vertical) define the relationship. Then, $\mathrm{net}_j$ is used to calculate $O_j$ as follows

$$O_j^{j_w,j_h} = f\!\left(\mathrm{net}_j^{j_w,j_h}\right), \quad j = 1, \ldots, J \qquad (4)$$

To train an NN, an optimisation is performed on the weights of the neurons according to an error function of the NN. Our error function is chosen as

$$E = \frac{1}{2} \sum_{k}^{K} \sum_{k_w}^{H_k^{(w)}} \sum_{k_h}^{H_k^{(h)}} \left(t_k^{k_w,k_h} - O_k^{k_w,k_h}\right)^2 \qquad (5)$$

where $K$ is the total node count of the output layer, and the output image of the $k$th output node has width $H_k^{(w)}$ and height $H_k^{(h)}$. The value of each pixel of the output image $O_k^{k_w,k_h}$ produces a local error based on its own target value $t_k^{k_w,k_h}$; the error function is the sum of these local errors.

In the CNN architecture, the back-propagation step is also modified because of the indices of the matrices. Let $\eta$ denote the global learning rate. For the output layer, the weight update $\Delta W_{kj}^{s,t}$ is calculated using the following formulae

$$\delta_k^{k_w,k_h} = \left(t_k^{k_w,k_h} - O_k^{k_w,k_h}\right) f'\!\left(\mathrm{net}_k^{k_w,k_h}\right) \qquad (6)$$

and

$$\Delta W_{kj}^{s,t} = \eta \sum_{k_w}^{H_k^{(w)}} \sum_{k_h}^{H_k^{(h)}} \delta_k^{k_w,k_h}\, O_j^{j_w,j_h} \qquad (7)$$

where $f'(\cdot)$ is the first derivative of the transfer function. The weight update $\Delta W_{ji}^{u,v}$ of the hidden layers is calculated using the following equations

$$\delta_j^{j_w,j_h} = \left(\sum_{k}^{K} \sum_{k_w}^{H_k^{(w)}} \sum_{k_h}^{H_k^{(h)}} \delta_k^{k_w,k_h}\, W_{kj}^{s,t}\right) f'\!\left(\mathrm{net}_j^{j_w,j_h}\right) \qquad (8)$$

where $s = j_w - (k_w \times m)$ and $t = j_h - (k_h \times n)$, and

$$\Delta W_{ji}^{u,v} = \eta \sum_{j_w}^{H_j^{(w)}} \sum_{j_h}^{H_j^{(h)}} \delta_j^{j_w,j_h}\, O_i^{i_w,i_h} \qquad (9)$$
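For concreteness, the output-layer update in (6) and (7) can be sketched as below. This is our own toy illustration, not the authors' implementation; the function name is hypothetical and the loops deliberately follow the summation indices of the equations:

```python
import numpy as np

def output_layer_update(t_k, O_k, net_k, O_j, kernel_shape, stride, eta, f_prime):
    """Output-layer backprop per Eqs. (6)-(7):
    delta_k[kw, kh] = (t_k - O_k) * f'(net_k)
    DW_kj[s, t]     = eta * sum_kw sum_kh delta_k[kw, kh] * O_j[kw*m + s, kh*n + t]"""
    m, n = stride
    S, T = kernel_shape
    delta_k = (t_k - O_k) * f_prime(net_k)            # Eq. (6), elementwise
    H_w, H_h = delta_k.shape
    dW = np.zeros((S, T))
    for s in range(S):
        for t in range(T):
            acc = 0.0
            for kw in range(H_w):
                for kh in range(H_h):
                    acc += delta_k[kw, kh] * O_j[kw*m + s, kh*n + t]
            dW[s, t] = eta * acc                      # Eq. (7)
    return delta_k, dW
```

For a 1 × 1 output node (as in the paper's final layer), the double sum over $(k_w, k_h)$ collapses to a single term, so each kernel entry is updated by the one delta times the corresponding input pixel.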

Apart from the differences that result from the matrix indices and convolution operations, the CNN formulation is the same as the feed-forward NN formulation.

It is known that NN performance is highly dependent on the training parameters and the dataset. A series of procedures for fine-tuning an NN has been reported in the literature to improve its success and efficiency [22–25]. For training purposes, some of these tuning procedures are also implemented in our model. By selecting a step size equal to two at layer 2 and layer 3, subsampling is implemented implicitly [25]. We trained the NN with a single sample per training epoch (i.e. stochastic training) and shuffled the samples to obtain better training performance. We also shifted the average of the input samples to zero and scaled the covariance to one. Selecting good initial weights ($W_{\mathrm{init}}$) is an important process that directly affects the resulting convergence rate. To initialise the weights, we use the following bounds [20]

$$-\frac{2.4}{M} \le W_{\mathrm{init}} \le \frac{2.4}{M} \qquad (10)$$

where $M$ is the number of inputs that feed the node.
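The bounds in (10) correspond to LeCun-style uniform initialisation [20]. A minimal sketch (our own function name and seeding, not the authors' code); for example, a node fed directly by the two 13 × 13 input blocks described in Section 2.2 would have $M = 2 \times 13 \times 13 = 338$:

```python
import numpy as np

def init_weights(shape, fan_in, seed=0):
    """Draw initial weights uniformly within the Eq. (10) bounds
    -2.4/M <= W_init <= 2.4/M, where M (fan_in) is the number of
    inputs feeding the node."""
    rng = np.random.default_rng(seed)
    bound = 2.4 / fan_in
    return rng.uniform(-bound, bound, size=shape)
```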

2.2 Chest tube detection

To detect chest tubes in chest radiographs, a CNN architecture containing $L = 5$ layers is used. There are three types of layers: the input layer ($\ell = 1$), the output layer ($\ell = L$) and the hidden layers ($1 < \ell < L$). The input layer is an abstract layer that contains only the input data, without any calculations. Different node-count combinations for the hidden layers, including 8, 16, 32, 64, 128 and 256, were tested using different learning rates, such as 0.1, 0.01, 0.001 and 0.0001, and gradient descent with a tanh activation function in the hidden layers. We also tested the stochastic diagonal Levenberg-Marquardt update rule and the cross-entropy (CE) method. Our tests and the results of Simard et al. [25] show that the CE method gives the best performance. According to these findings, we decided to use the gradient descent and CE algorithms with a sigmoid activation function. Our final CNN architecture contains 2, 32, 32, 128 and 1 nodes in the successive layers; between the layers, there are 32, 16, 128 and 1 links per node. After a series of tuning tests, the learning rate was selected as 0.1.

To feed the system with a greater input region without increasing the model complexity, a multi-scale input with two scales was used. The input image blocks used by the input layer are cropped from two whole images of a chest radiograph at multi-scale sizes of 1000 × 1000 and 250 × 250 pixels and are used without any registration. The input layer contains two nodes with two input blocks of 13 × 13 pixels each. During the training stage, the blocks are selected from random training images at random block positions. At the three hidden layers, the output sizes are chosen as 5 × 5, 1 × 1 and 1 × 1 pixels. Finally, the output layer contains a single node that gives outputs of 1 × 1 pixels. These outputs are used as a pixel of the resulting image at the proper location according to the input block position.

Fig. 3 Stages of our model
The left image is the input, the middle image is the output of the CNN and the right image is the result of the curve fitting process
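The two-scale block extraction described in Section 2.2 can be sketched as follows. This is an assumption-laden illustration, not the paper's code: the paper does not state how the 250 × 250 image is produced or how borders are handled, so the 4× decimation, the centre-aligned coordinate mapping and all names here are ours:

```python
import numpy as np

def multiscale_blocks(full, block=13, scale=4):
    """For a pixel (r, c) of the fine image (e.g. 1000 x 1000), return a
    13 x 13 block centred there plus a 13 x 13 block centred at the
    corresponding position (r // scale, c // scale) of a coarse version
    (e.g. 250 x 250). Decimation and border handling are assumptions."""
    coarse = full[::scale, ::scale]        # crude 4x downscale by decimation
    half = block // 2

    def crop(img, r, c):
        return img[r - half:r + half + 1, c - half:c + half + 1]

    def blocks_at(r, c):
        return crop(full, r, c), crop(coarse, r // scale, c // scale)

    return blocks_at
```

The coarse block covers a 52 × 52-pixel neighbourhood of the original image, which is how the two 13 × 13 inputs give the network a larger receptive field at no extra cost in model size.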

Fig. 4 Selected successful results from our test dataset
The backgrounds are input X-rays; the 'X's' show the outputs of the proposed model


The training set is solely constructed from our dataset, andcontains 62 radiographs (Table 1). As seen in Table 1, 13 ofthe images contain two chest tubes, 25 contain a single chesttube and 24 out of the 62 radiographs have no chest tube. Thetesting set is constructed with 21 radiographs with similargroupings. To distinguish the chest tubes from otherartificial objects, our training and testing sets contain 36and 13 radiographs, respectively, which include artificialobjects other than chest tubes.

2.3 NURBS curves

In some cases, the output of the CNN algorithm provides an unclear and interrupted chest tube skeleton that is not practical to use. Therefore, we introduced an adaptive curve-fitting approach applied to the output images of our CNN model to obtain a final continuous chest tube skeleton (Fig. 3). To remove the noise and enhance the output, the algorithm starts with the strongest (darkest in the image) and largest continuous line segment and follows the line in both directions. By following the strongest output, it filters noise and provides a clean but discontinuous line. The line's pixels can then be used as control points for curve fitting; the line becomes continuous after the NURBS curve fitting.

NURBS curves $P(t)$ are used for curve fitting, given by the expression

$$P(t) = \sum_{i=1}^{n+1} B_i R_{i,k}(t) \qquad (11)$$



Fig. 5 Pseudocode for selecting the control points of the NURBS curve


where the $B_i$'s are the control polygon vertices and the $R_{i,k}(t)$'s are the rational basis functions

$$R_{i,k}(t) = \frac{h_i N_{i,k}(t)}{\sum_{i=1}^{n+1} h_i N_{i,k}(t)} \qquad (12)$$

where the $N_{i,k}(t)$'s are the basis functions and the $h_i$'s are the homogeneous weighting factors [26].

To reduce the number of control points used for curve fitting, a control point selection process is conducted by walking over the entire output chest tube curve and selecting control points at intervals of 36 pixels. This value of 36 pixels is empirical: it is the maximum interval that does not corrupt the curvature information. When a discontinuity in the curve is met during the walking process, the walk is forced to jump to the point where the next continuous line segment starts. For example, in Fig. 4, NURBS-based piecewise curves fitted over the segmented lines on the output images of our proposed CNN model are shown with 'X's'. The pseudocode for fitting the NURBS curves over the output image is given in Algorithm 1 (see Fig. 5). The main idea behind the curve fitting process is to capture the whole chest tube figure on the X-ray image regardless of any misinterpretation or confusion.

Using NURBS curve fitting eliminates small artefacts in the output images. Sample input-output merged images of our proposed model are presented in Figs. 4 and 6.

Fig. 6 Selected erroneous results from our test dataset
The backgrounds are input X-rays; the 'X's' show the outputs of the proposed model
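The evaluation of (11) and (12), and the 36-pixel walk described above, can be sketched as follows. This is our own illustration under stated assumptions, not the paper's Algorithm 1: the basis functions use the standard Cox-de Boor recursion, and the skeleton is assumed to arrive as an ordered list of per-segment pixel lists:

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the B-spline basis N_{i,k}(t), order k."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        out += (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        out += (knots[i + k] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return out

def nurbs_point(t, ctrl, weights, k, knots):
    """Evaluate P(t) per Eqs. (11)-(12): a rational blend of the control
    polygon vertices B_i with homogeneous weights h_i."""
    N = np.array([bspline_basis(i, k, t, knots) for i in range(len(ctrl))])
    R = weights * N                       # numerators h_i * N_{i,k}(t)
    return (R[:, None] * ctrl).sum(axis=0) / R.sum()

def select_control_points(segments, interval=36):
    """Sketch of the control-point walk: keep one skeleton pixel every
    `interval` pixels (36 is the paper's empirical value) and jump to the
    start of the next segment at each discontinuity."""
    points = []
    for segment in segments:
        points.append(segment[0])         # jump lands at the segment start
        carried = 0
        for idx in range(1, len(segment)):
            carried += 1
            if carried >= interval:
                points.append(segment[idx])
                carried = 0
    return points
```

With all weights $h_i = 1$ and a clamped knot vector, (12) reduces to plain B-spline (Bezier-like) blending, which is a convenient sanity check for the evaluator.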

2.4 Model performance evaluation

We evaluated the performance of the model using a pixel-based ROC analysis, which is defined in detail by Fawcett [27]. Each true positive (TP), true negative (TN), FP and false negative (FN) pixel was counted, and the average TP ($N_{tp}$), TN ($N_{tn}$), FP ($N_{fp}$) and FN ($N_{fn}$) values per image were determined. These values were used to calculate the accuracy ($\psi$), sensitivity ($S_n$) and specificity ($S_p$) using the following formulae [27]

$$\psi = \frac{N_{tp} + N_{tn}}{N_{tp} + N_{tn} + N_{fp} + N_{fn}} \qquad (13)$$



Table 3 Pixel-based average ROC values per image

                    Our test set   JSRT database   All test sets
image count               21             247             268
pixels/image       1 000 000       1 000 000       1 000 000
TP/image                1120               0              88
TN/image             997 341         999 999         999 790
FP/image                 765               1              61
FN/image                 774               0              60
accuracy, %            99.85           99.99           99.99
sensitivity, %         59.13              NA           59.46
specificity, %         99.92           99.99           99.99

TP: true positive, TN: true negative, FP: false positive, FN: false negative. Sensitivity is undefined (NA) for the JSRT database, which contains no chest tubes and therefore no positive pixels.

Table 2 Pixel-based average RMS error values of the raw output of the neural network

                                Our test set          JSRT database
Radiograph content(s)        Rads.   RMS err.       Rads.   RMS err.
no chest tube                   8    0.01857         247    0.006238
chest tube(s)                  13    0.03716           0    NA
other artificial object(s)     13    0.03142           4    0.005366
total                          21    0.03008         247    0.006238

All of the results were obtained using a CNN with five layers (2, 32, 32, 128 and 1 nodes per layer; 32, 16, 128 and 1 links per node).

Fig. 7 Selected examples of our model results using the JSRT test dataset
Since there is no chest tube present, the chest tube markers are not present in these images; the backgrounds are X-rays



$$S_n = \frac{N_{tp}}{N_{tp} + N_{fn}} \qquad (14)$$

and

$$S_p = \frac{N_{tn}}{N_{tn} + N_{fp}} \qquad (15)$$
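Equations (13)-(15) are easily checked against the per-image averages later reported in Table 3 (88 TP, 999 790 TN, 61 FP and 60 FN pixels per image over all test sets). A small sketch with a hypothetical function name:

```python
def roc_metrics(n_tp, n_tn, n_fp, n_fn):
    """Pixel-based metrics per Eqs. (13)-(15)."""
    accuracy = (n_tp + n_tn) / (n_tp + n_tn + n_fp + n_fn)        # Eq. (13)
    # Sensitivity is undefined when there are no positive pixels at all.
    sensitivity = n_tp / (n_tp + n_fn) if (n_tp + n_fn) else float('nan')
    specificity = n_tn / (n_tn + n_fp)                            # Eq. (15)
    return accuracy, sensitivity, specificity

# Averages over all 268 test images (Table 3):
acc, sn, sp = roc_metrics(88, 999790, 61, 60)   # -> 99.99%, 59.46%, 99.99%
```

Because almost every pixel in a radiograph is background, accuracy and specificity are dominated by the true negatives, which is why sensitivity is the more informative of the three figures here.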

3 Results

We tested our model using two datasets (Table 1). The first was our test set, which, as previously mentioned, includes artificial objects and chest tubes. The second test set was created by the JSRT [19], contains 247 images without chest tubes, and is publicly available.

The average RMS errors of the NN stage are summarised in Table 2. At this step, the result needs refinement, because the output of the NN contains small artefacts and discontinuities. The output of the NN was therefore refined by our proposed curve fitting process. However, the curve fitting process creates new problems, namely determining the beginning and the end of the chest tube, especially in very bright (dense) zones (Fig. 6).

The results of our tests using all 268 test images were 99.99% accuracy, 59% sensitivity and 99.99% specificity, with an average of 61 FP pixels/image and 60 FN pixels/image. The details of the test results are given in Table 3.





The majority of the errors came from our database images, which contain very bright (dense) zones that arise from overlapping ribs and/or pleural effusion. The results reveal that there was no confusion between the detection of chest tubes and the detection of other artificial objects. The JSRT database results, except for one image with 347 FP pixels, had no FP or FN pixels. These error values show that the results are well aligned with the input images and are acceptable as an output. Some examples of the JSRT database results are shown in Fig. 7. It is critical to mention that the output is obtained automatically, without specifying any region of interest on the X-ray image. It is very important that the proposed model limits any human intervention in the detection process, which could be a potential source of error.

4 Conclusions

It is important to note that the detection of artificial objects in medical images has critical importance in the medical image analysis field. Detecting the existence of an artificial object and its location on radiographs has three critically important aspects. First, this capability helps practitioners to determine the locations of these objects. Second, artificial objects can be identified in the images for a picture archiving and communication system (PACS). Finally, the measurements required by other CAD methods will not be affected by the existence of an artificial object.

Specifically, a chest radiograph of a human shows vital organs and can contain many different artificial components that target these organs. The research that focuses on the detection of artificial objects in chest radiography is mostly limited to the tubes in the mediastinum. It is critical to note that an artificial object can be found anywhere on the whole chest radiograph, and our proposed model is the first study conducted to automatically detect artificial objects in the lung region.

In our study, the chest tube was chosen as an artificial object because it spans almost the whole chest. By choosing an object that spans a large area, both the area of interest for searching and the difficulty of differentiating within this area are increased. Specifically, ribs can create a pattern that is easily confused with tube patterns. Moreover, while the density on the radiograph images increases with the overlapping ribs at the sides, there can at the same time be zones in which the image density fluctuates with an increase in the rib clearance towards the centre of the lung. In our work, we found that the largest source of error is the high-density zones, where the possible location of a chest tube is very difficult to track. In spite of this difficulty, we obtained promising results.

In addition, we do not filter regions of the input images. It is expected that the selection of a region of interest would increase the performance of the NN and filter out some high-density zones that can cause errors.

It should be noted that it is important not only to successfully detect an artificial object but also to correctly analyse and evaluate the radiographs that have no chest tube. By adding the JSRT dataset, we obtained a larger test set that included different pathological patterns and chest tube configurations. However, for more robust tests, there is a need for openly accessible, larger datasets that include every type of artificial object and diverse pathological patterns.


5 Acknowledgment

The authors would like to acknowledge the computer resources granted by the High Performance Computing Laboratory of the Informatics Institute at Istanbul Technical University.

6 References

1 Speets, A.M., van der Graaf, Y., Hoes, A.W., et al.: 'Chest radiography in general practice: indications, diagnostic yield and consequences for patient management', Br. J. Gen. Pract., 2006, 56, (529), pp. 574–578

2 van Ginneken, B., ter Haar Romeny, B.M., Viergever, M.A.: 'Computer-aided diagnosis in chest radiography: a survey', IEEE Trans. Med. Imaging, 2001, 20, (12), pp. 1228–1241

3 Hardie, R.C., Rogers, S.K., Wilson, T., Rogers, A.: 'Performance analysis of a new computer aided detection system for identifying lung nodules on chest radiographs', Med. Image Anal., 2008, 12, (3), pp. 240–258

4 Partain, C.L., Chan, H.P., Gelovani, J.G., et al.: 'Biomedical imaging research opportunities workshop II: report and recommendations', Radiology, 2005, 236, (2), pp. 389–403

5 Armato, S.G. III, McLennan, G., McNitt-Gray, M.F., et al.: 'Lung image database consortium: developing a resource for the medical imaging research community', Radiology, 2004, 232, (3), pp. 739–748

6 Speets, A., Kalmijn, S., Hoes, A., Graaf, Y., Smeets, H., Mali, W.: 'Frequency of chest radiography and abdominal ultrasound in the Netherlands: 1999–2003', Eur. J. Epidemiol., 2005, 20, (12), pp. 1031–1036

7 Doi, K.: 'Computer-aided diagnosis in medical imaging: historical review, current status and future potential', Comput. Med. Imaging Graph., 2007, 31, (4–5), pp. 198–211

8 Lo, S.C.B., Lin, J.S.J., Freedman, M.T., Mun, S.K.: 'Application of artificial neural networks to medical image pattern recognition: detection of clustered microcalcifications on mammograms and lung cancer on chest radiographs', J. VLSI Signal Process. Syst., 1998, 18, (3), pp. 263–274

9 Schilham, A.M.R., van Ginneken, B., Loog, M.: 'A computer-aided diagnosis system for detection of lung nodules in chest radiographs with an evaluation on a public database', Med. Image Anal., 2006, 10, (2), pp. 247–258

10 van Ginneken, B., Hogeweg, L., Prokop, M.: 'Computer-aided diagnosis in chest radiography: beyond nodules', Eur. J. Radiol., 2009, 72, (2), pp. 226–230

11 Keller, B.M., Reeves, A.P., et al.: 'Semi-automated location identification of catheters in digital chest radiographs', Proc. SPIE Int. Soc. Opt. Eng., 2007, pp. 651410-1–9

12 Sheng, C., Li, L., Pei, W.: 'Automatic detection of supporting device positioning in intensive care unit radiography', Int. J. Med. Robot. Comput., 2009, 5, (3), pp. 332–340

13 Ramakrishna, B., Brown, M., Goldin, J., Cagnon, C., Enzmann, D.: 'Catheter detection and classification on chest radiographs: an automated prototype computer-aided detection (CAD) system for radiologists', in Summers, R.M., van Ginneken, B. (Eds.): 'Medical Imaging 2011: Computer-Aided Diagnosis' (SPIE, 2011), vol. 7963, p. 796333

14 Ramakrishna, B., Brown, M., Goldin, J., Cagnon, C., Enzmann, D.: 'An improved automatic computer aided tube detection and labelling system on chest radiographs'. SPIE Conf. Series, 2012, vol. 8315, pp. 1–7

15 Mercan, C.A., Celebi, M.S.: 'Fully automatic chest tube figure removing from the postero-anterior chest radiography'. Proc. 11th IASTED Int. Conf. Computer Graphics and Imaging, Innsbruck, Austria, 2010, pp. 298–302

16 Cernazanu-Glavan, C., Holban, S.: 'Segmentation of bone structure in X-ray images using convolutional neural network', Adv. Electr. Comput. Eng., 2013, 13, (1), pp. 87–94

17 Lo, S.C.B., Lou, S.L.A., Lin, J.S., Freedman, M.T., Chien, M.V., Mun, S.K.: 'Artificial convolution neural network techniques and applications for lung nodule detection', IEEE Trans. Med. Imaging, 1995, 14, (4), pp. 711–718

18 Lo, S.C.B., Lin, J.S., Freedman, M.T., Mun, S.K.: 'Computer-assisted diagnosis of lung nodule detection using artificial convolution neural network'. Proc. SPIE 1898, Medical Imaging, 1993, pp. 859–869

19 Shiraishi, J., Katsuragawa, S., Ikezoe, J., et al.: 'Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules', Am. J. Roentgenol., 2000, 174, (1), pp. 71–74

20 LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: 'Gradient-based learning applied to document recognition', Proc. IEEE, November 1998, 86, (11), pp. 2278–2324

21 Browne, M., Ghidary, S.S.: 'Convolutional neural networks for image processing: an application in robot vision'. Australian Conf. Artificial Intelligence, 2003 (LNCS, 2903), pp. 641–652

22 LeCun, Y., Bottou, L., Orr, G., Muller, K.: 'Efficient BackProp', in Orr, G.B., Muller, K.-R. (Eds.): 'Neural networks: tricks of the trade' (Springer, 1998), vol. 1524, pp. 5–50

23 LeCun, Y.: 'Generalization and network design strategies', in Pfeifer, R., Schreter, Z., Fogelman, F., Steels, L. (Eds.): 'Connectionism in perspective' (Elsevier, Zurich, Switzerland, 1989)

24 Lawrence, S., Giles, C.L., Tsoi, A.C., Back, A.D.: 'Face recognition: a convolutional neural network approach', IEEE Trans. Neural Netw., 1998, 8, (1), pp. 98–113

25 Simard, P., Steinkraus, D., Platt, J.C.: 'Best practices for convolutional neural networks applied to visual document analysis'. Proc. ICDAR, IEEE Computer Society, 2003, pp. 958–962

26 Rogers, D.F.: 'An introduction to NURBS: with historical perspective' (Morgan Kaufmann, 2001)

27 Fawcett, T.: 'An introduction to ROC analysis', Pattern Recognit. Lett., 2006, 27, pp. 861–874

28 Xu, T., Mandal, M.K., Long, R., Cheng, I., Basu, A.: 'An edge-region force guided active shape approach for automatic lung field detection in chest radiographs', Comput. Med. Imaging Graph., 2012, 36, (6), pp. 452–463
