
On Reducing the Artifacts of Sonar Images with Image Fusion Technique

Dorel Aiordachioaie
“Dunarea de Jos” University of Galati, Romania
Electronics and Telecommunications Department

E-mail: [email protected]

Abstract—This work considers artifact compensation in the context of airborne ultrasonic image generation. Almost all airborne ultrasonic images used in robotics show artifacts and distortions when the real objects are compared with those presented or identified in the ultrasonic images. The differences between the real and the reference images can have various causes, such as asymmetries in the directivity of the ultrasonic transducers, reflections in the explored environment, and nonlinearities of the signal processing blocks. A fusion-based method for reducing the artifacts of sonar images is proposed and partially implemented. The object image from the ultrasonic image is fused with a reference image, selected from a database after an identification procedure. The selection of the rule used in image fusion is treated as a supervised process. The results are encouraging and suggest a trade-off between the complexity of the fusion technique and the quality of the result.

Keywords-ultrasonic image; sonar; artifacts; fusion.

I. INTRODUCTION

Ultrasonic images have many applications in various domains, from relatively old industrial applications to robotics and medical imaging. An ultrasonic image can be generated element by element, by exploring the directions of the environment where the objects are located. Exploring means pulse emission followed by a waiting time, during which the echoes received from the environment are stored over a fixed time window.

Depending on the target, the relative positions and amplitudes of the received echoes change in time. The set of recorded frames (in fact, static images) can be described by a 3D function. An element of the image (which can be assimilated to a pixel) has an intensity proportional to the amplitude of the received signals reflected by the surfaces of the objects in the explored environment. Ultrasonic images have low resolution, e.g. 40 by 40 elements, with 8 bits per element (pixel). More details about image generation and features are available in [1]. In Fig. 1, three raw ultrasonic images are presented, accompanied (in the upper part) by the explored objects: a ball, a box with an edge oriented towards the sonar head, and a box facing the sonar head. Looking at the bottom of each image, some artifacts are observed, i.e. contours that have no link with the explored object. These appear only in the vertical direction.
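
The following numpy sketch only illustrates this representation; the 40-by-40, 8-bit format comes from the text, while the number of frames is an arbitrary assumption.

```python
import numpy as np

# One raw ultrasonic frame: 40 x 40 elements, 8 bits per element,
# each intensity proportional to the received echo amplitude.
frame = np.zeros((40, 40), dtype=np.uint8)

# A recording session yields a set of such frames; stacking them gives
# the 3D function described above (here 10 frames, chosen arbitrarily).
frames = np.stack([frame] * 10, axis=2)
print(frames.shape, frames.dtype)        # (40, 40, 10) uint8
```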

This work considers the artifacts generated mainly by the ultrasonic transducers (receivers) and investigates a solution for compensating the unwanted artifacts in ultrasonic images through an image fusion approach. The ultrasonic transducers have non-uniformities in their directivity characteristics and, as an effect, the obtained ultrasonic images show shape distortions or artifacts. These can create problems in “reading” and understanding the ultrasonic images.

Figure 1. Airborne ultrasonic images with artifacts (e.g. shadows)

Many image processing methods and algorithms are available, given the maturity of the image-processing domain. We refer to general filtering-based methods as in [2], [3], [4], or to more specialized methods based on fusion [5] or on mathematical morphology [6]. In the context of airborne ultrasonic images, equalization and adaptive filtering techniques for image improvement and artifact removal are considered in [7]. Equalization addresses asymmetries in the directivity of the transducers. Filtering techniques affect the entire object, not only parts of it. Given the global effects of these methods, it seems necessary to also use methods that process only parts of the investigated object. This work introduces a solution based on the image fusion paradigm. In ultrasonic image processing, and especially for airborne sonar images, the methods are specialized and adapted to the generation process of such images, which is also the case of the present work.

The work has three main sections. Section II describes the proposed method, based on the fusion of two images: the distorted image and a reference image selected from a database. Section III presents the details of the fusion rules. All results are compared and discussed from the distortion point of view in Section IV. The obtained results can also serve as references for choosing and designing other artifact compensation techniques, and for compensating other types of artifacts than those generated by the transducers.


II. DESCRIPTION OF THE METHOD

The structure of the method is presented in Fig. 2. The unprocessed (raw) ultrasonic image (RUI) is analyzed for the object detection task. A finite set of objects is considered. For each detected object, the classifier selects the best matching image from a database of reference object images (with no artifacts). The image fusion process has two inputs: the raw ultrasonic image (RUI) and the reference image from the database. The fusion can operate at the level of the entire image or at the level of each object in the image, depending on the selected fusion rule. In this work, the selection rule is the result of a supervision process, which considers the effect of each rule on the distorted image, i.e. the obtained quality of the fused image.
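
To make the data flow concrete, a minimal Python sketch of the pipeline is given below. The names detect_object, classify, reference_db and fuse are hypothetical placeholders for the blocks in Fig. 2, not code from the paper.

```python
def fuse_with_reference(rui, detect_object, classify, reference_db, fuse):
    """Sketch of the structure in Fig. 2 (all callables are illustrative):
    detect the object in the raw ultrasonic image (RUI), identify it,
    select the best-matching artifact-free reference, then fuse."""
    detected = detect_object(rui)        # object detection on the RUI
    label = classify(detected)           # identification of the object
    reference = reference_db[label]      # reference image, free of artifacts
    return fuse(rui, reference)          # supervised fusion rule applied here
```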

The image fusion block may contain some preliminary processing steps, such as rotation, scaling, re-sampling and registration. These steps are not presented explicitly in Fig. 2 and are not considered here.

Figure 2. The structure of the proposed method

Since the beginning of the 1990s, data fusion in general, and image fusion in particular, have been widely used to solve various problems in engineering, social and military application fields. Over time, image fusion has been defined in various ways [8], e.g.: (i) a process dealing with data and information from multiple sources to achieve improved information for decision making [9]; (ii) the combination of two or more different images to form a new image by using a certain algorithm [10]; (iii) the process of combining information from two or more images of a scene into a single composite image that is more informative and more suitable for visual perception or computer processing [11]; (iv) a process of combining images, obtained by sensors of different wavelengths simultaneously viewing the same scene, to form a composite image; the composite image is formed to improve the image content and to make it easier for the user to detect, recognize, and identify targets and increase situational awareness [12].

Image fusion thus appears well suited to combining information from multiple sources, or from a single source, in order to remove defects or artifacts. Image fusion can be performed at four levels: signal level, pixel level, feature level, and decision level [8]. The source images are processed separately for information extraction; the extracted information is then combined by applying decision rules.

The literature on image fusion in general, and on ultrasonic image fusion in particular, is quite rich. General discussions are available, e.g., in [13], [14] or [15]. Details and results can be found in sample references such as [16] to [20]. As far as we know, airborne ultrasonic image fusion has not been considered so far, or, to be cautious, there have been only a few attempts in this direction. As stated in [21], "there is no best method; the choice of a proper method depends on the images and the fusion purpose."

Limitations of existing fusion methods are presented in [22]. Analyzing the current state of the art, it can be seen that a common method worldwide is based on wavelets. Other methods exist and can be combined with the wavelet-based ones to obtain a better quality of the fused image. A simple and robust image fusion method is based on the DWT (Discrete Wavelet Transform); for this reason, this work focuses on the wavelet transform. After the experiments, in order to evaluate the quality of the fused image, the Root Mean Square Error (RMSE) is calculated for each wavelet type, decomposition level and rule.

From the structural point of view, image fusion needs two steps: fusion analysis (by fusion rules) and fusion synthesis. The applied rule depends on what we want to obtain at the output. In the analysis step, the input images are decomposed: choose a wavelet, choose the decomposition level N, and run the decomposition up to level N. At the output, two sets of coefficients are obtained: approximation and detail [10]. The level N is chosen according to the desired performance at the output, commonly between 1 and 5. Synthesis means generating the final (fused) image, which combines the two images into one: it is computed using the approximation coefficients from level N and the detail coefficients from levels 1 to N.
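
As a concrete illustration of the two steps, the sketch below uses PyWavelets as a stand-in for the Matlab Wavelet Toolbox cited in [27]; the wavelet ('sym4'), the level N = 2 and the random test image are assumptions for the example.

```python
import numpy as np
import pywt

image = np.random.rand(40, 40)           # stand-in for a 40 x 40 sonar image
N = 2

# Analysis: decompose up to level N; the result is the level-N approximation
# followed by the detail coefficients (horizontal, vertical, diagonal) per level.
coeffs = pywt.wavedec2(image, 'sym4', level=N)

# Synthesis: rebuild the image from the approximation at level N and the
# details of levels 1..N (here without any fusion, so it is a round trip).
reconstructed = pywt.waverec2(coeffs, 'sym4')
assert np.allclose(image, reconstructed)
```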

The combination of the approximation and detail coefficients may be [10]: simple: 'max' (the maximum), 'min' (the minimum), 'mean' (the mean), 'img1' (the first element), 'img2' (the second element) or 'rand' (a randomly chosen element); or parameter-dependent: 'linear', 'UD_fusion' (up-down fusion), 'DU_fusion' (down-up fusion), 'RL_fusion' (right-left fusion), 'LR_fusion' (left-right fusion), 'UserDEF' (user-defined fusion). The task of the ultrasonic image fusion process is to remove the artifacts, having as a priori information the type of the artifact (which imposes the fusion rule) and the distorted object.
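
The simple rules can be read as element-wise operations on a pair of same-size coefficient matrices; the sketch below is our own rendering of the names listed in [10], not toolbox code.

```python
import numpy as np

SIMPLE_RULES = {
    'max':  lambda c1, c2: np.maximum(c1, c2),   # element-wise maximum
    'min':  lambda c1, c2: np.minimum(c1, c2),   # element-wise minimum
    'mean': lambda c1, c2: (c1 + c2) / 2.0,      # element-wise mean
    'img1': lambda c1, c2: c1,                   # keep the first image
    'img2': lambda c1, c2: c2,                   # keep the second image
    'rand': lambda c1, c2: np.where(             # element chosen at random
        np.random.rand(*c1.shape) < 0.5, c1, c2),
}
```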

III. DESCRIPTION OF THE FUSION RULES

In a real fusion application, the decision rule is selected from a set of available rules until the minimum distortion criterion is met. The present work highlights only one decision rule, the down-up selection rule, which provides the best results among the set of twelve rules presented above.
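
Such a supervised selection can be sketched as an exhaustive search: fuse under every candidate rule and keep the rule whose output is closest, in the RMSE sense, to the target image. The function below is illustrative; fuse and the rule dictionary are placeholders, not code from the paper.

```python
import numpy as np

def select_rule(image_a, image_b, target, rules, fuse):
    """Try each fusion rule and return the one minimizing the RMSE
    between the fused result and the distortion-free target image."""
    best_name, best_rmse = None, np.inf
    for name, rule in rules.items():
        fused = fuse(image_a, image_b, rule)            # fuse under this rule
        rmse = np.sqrt(np.mean((target - fused) ** 2))  # distortion criterion
        if rmse < best_rmse:
            best_name, best_rmse = name, rmse
    return best_name, best_rmse
```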

Following the structure presented in [23], [24] or [25], the basic steps for a selection rule are presented below.

1: Read the source images, A (the reference) and B (the image with artifacts).

2: For each image, apply the wavelet decomposition up to level N, to obtain the approximation and detail coefficients, for levels l = 1, 2, ..., N. For each level and image, the input matrices for fusion are the approximations, the vertical high frequencies, the horizontal high frequencies, and the diagonal high frequencies.


3: Apply the selection rule (SR). The approximation coefficients are then

$$LL_f^l = \mathrm{SR}\left( LL_A^l,\; LL_B^l \right) \qquad (1)$$

where $LL_f^l$ are the fused approximations, and $LL_A^l$ and $LL_B^l$ are the input approximations. The remaining coefficients are computed in the same way: $LH_f^l$ (vertical high frequencies), $HL_f^l$ (horizontal high frequencies) and $HH_f^l$ (diagonal high frequencies). In this work, the pattern (shape) which must be corrected imposes a Down-Up selection rule (DU), as described in [26] and [27], which is used for all approximation and detail coefficients. Let $s = \mathrm{size}(C_A^l) = [s_1, s_2]$ be the size of the coefficient matrix at decomposition level $l$. A linearly spaced vector $\mathbf{x}$ is generated, with values from 0 to 1, as

$$\mathbf{x} = [x_1\; x_2\; \ldots\; x_n] = [0\; \ldots\; 1], \quad n = s_1 \qquad (2)$$

Next, a weight matrix is defined, which gives the up-down and down-up fusion rules:

$$P = \begin{bmatrix} x_1 & \ldots & x_1 \\ \vdots & & \vdots \\ x_n & \ldots & x_n \end{bmatrix} = \begin{bmatrix} 0 & \ldots & 0 \\ \vdots & & \vdots \\ 1 & \ldots & 1 \end{bmatrix}_{s_1 \times s_2} \qquad (3)$$

For level $l$, the Down-Up (DU) image fusion rule is the element-wise combination

$$C_f^l = C_A^l \cdot P + C_B^l \cdot (1 - P) \qquad (4)$$

This rule corresponds to the transfer of the bottom side of the reference image to the bottom side of the fused image.

4: Arrange the fused coefficients to form the new coefficient matrix.

5: Apply the inverse wavelet transform to reconstruct the (fused) image.
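
Steps 1 to 5 with the DU rule of Eqs. (2)-(4) can be sketched as follows; PyWavelets is again an assumed stand-in for the Matlab toolbox of [27], and the default wavelet and level are illustrative.

```python
import numpy as np
import pywt

def du_weight(shape):
    """Eqs. (2)-(3): weights linearly spaced from 0 (top row) to 1
    (bottom row), replicated across the columns."""
    s1, s2 = shape
    x = np.linspace(0.0, 1.0, s1)
    return np.tile(x[:, None], (1, s2))

def du_fuse(c_a, c_b):
    """Eq. (4): C_f = C_A * P + C_B * (1 - P), element-wise."""
    P = du_weight(c_a.shape)
    return c_a * P + c_b * (1.0 - P)

def fuse_images_du(img_a, img_b, wavelet='sym4', level=1):
    """Steps 1-5: decompose both images, apply the DU rule to the
    approximation and all detail coefficients, then reconstruct."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)   # A: reference image
    cb = pywt.wavedec2(img_b, wavelet, level=level)   # B: image with artifacts
    fused = [du_fuse(ca[0], cb[0])]                   # approximation, level N
    for (h_a, v_a, d_a), (h_b, v_b, d_b) in zip(ca[1:], cb[1:]):
        fused.append((du_fuse(h_a, h_b),              # horizontal details
                      du_fuse(v_a, v_b),              # vertical details
                      du_fuse(d_a, d_b)))             # diagonal details
    return pywt.waverec2(fused, wavelet)              # step 5: inverse DWT
```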

IV. RESULTS

Results of the fusion of airborne ultrasound images are presented. The objective of the fusion is to remove the artifacts and to decrease the distortion and noise, so that the fused image is closer to the reference image. In this work, the results of four cases are presented, as given in Table I.

The raw ultrasonic images were obtained with a biomimetic sonar head operating at a 40 kHz exploration frequency. For all cases, the DU selection rule was used and applied to the first decomposition level only; considering higher decomposition levels has no major effect.

For each distorted ultrasonic image, a reference (target) image is considered, which is free of distortions. This reference can be obtained by simulation (a synthetic or artificial approach, which is the case in the present work) or by post-processing the real, distorted images (raw ultrasonic images, RUI).

TABLE I. SOURCE IMAGES USED IN FUSION

Case   Raw image name   Source image A   Source image B
#1     box_right        Reference box    box-right
#2     box_left         Reference box    box-left
#3     ball_right       Reference ball   ball-right
#4     ball_left        Reference ball   ball-left

The reference images are stored in a database and correspond to the objects existing in the explored environment. The main results are:
(a) The results of case #1 are presented in Fig. 3, upper side. As can be seen, the resulting image is better than the original one: the defect is almost removed, and the upper part is kept.
(b) The results of case #2 are presented in Fig. 3, bottom side. The result is judged as good, of the same quality as in the previous case: the defect is almost removed, and the upper part is kept.
(c) For case #3, the result is shown in Fig. 4, upper side. The result is good: after fusion, the defect is almost removed, and the upper part is kept. A preprocessing step for registration is necessary to align the two objects before fusing.
(d) For case #4, the result is shown in Fig. 4, bottom side. The result is considered good.

A quantitative analysis was made based on the RMSE (Root Mean Square Error) between the target and fused images. It confirms the qualitative comments presented above; the best compensation result is obtained in the last case. The best wavelet type was selected by exploring a set of seven standard wavelet families, w = {'haar', 'db', 'sym', 'coif', 'bior', 'rbio', 'dmey'}, which stand for Haar, Daubechies, Symlets, Coiflets, Biorthogonal, Reverse Biorthogonal, and the Discrete Approximation of the Meyer wavelet. These wavelets are intensively used in signal analysis, [28] and [29]. For sonar imaging, where the images are distorted versions of the real ones, the quality criteria used to select the best wavelet type are the Normalized Mean Square Error (NMSE), the Root Mean Square Error (RMSE) and the Peak Signal to Noise Ratio (PSNR):

$$\mathrm{NMSE} = \frac{E\left\{ [\hat{A} - A]^2 \right\}}{E\left\{ A^2 \right\}} \qquad (5)$$

$$\mathrm{RMSE} = \sqrt{E\left\{ [\hat{A} - A]^2 \right\}}, \qquad \mathrm{PSNR} = 20 \log_{10} \frac{255}{\mathrm{RMSE}} \;\; [\mathrm{dB}] \qquad (6)$$

where $A$ is the reference image, $\hat{A}$ is the fused image, and $E\{\cdot\}$ is the expectation operator. The results of this quantitative analysis are presented in Table II, with the best obtained values marked with an asterisk. For all cases, the Symlets wavelet type is the winner.
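
Assuming the expectations in Eqs. (5)-(6) are estimated by sample means over the image, the three criteria can be sketched as follows (255 being the peak value of an 8-bit image):

```python
import numpy as np

def nmse(a_ref, a_fused):
    """Eq. (5): normalized mean square error."""
    return np.mean((a_fused - a_ref) ** 2) / np.mean(a_ref ** 2)

def rmse(a_ref, a_fused):
    """Eq. (6), left: root mean square error."""
    return np.sqrt(np.mean((a_fused - a_ref) ** 2))

def psnr(a_ref, a_fused):
    """Eq. (6), right: peak signal-to-noise ratio, in dB."""
    return 20.0 * np.log10(255.0 / rmse(a_ref, a_fused))
```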

TABLE II. QUANTITATIVE RESULTS (NMSE / PSNR) OF THE FUSION

Case   'haar'         'db'           'sym'           'coif'         'bior'         'rbio'         'dmey'
#1     3.25 / 17.52   2.98 / 17.90   2.75 / 18.24*   3.03 / 17.76   3.16 / 17.56   3.16 / 17.56   2.90 / 17.26
#2     8.98 / 18.33   8.24 / 18.70   7.60 / 19.05*   8.38 / 18.62   8.75 / 18.44   8.74 / 18.44   8.92 / 18.35
#3     3.59 / 18.20   3.30 / 18.58   3.06 / 18.92*   3.46 / 18.43   3.64 / 18.22   3.64 / 18.22   4.50 / 18.16
#4     3.04 / 24.11   2.79 / 24.48   2.58 / 24.82*   2.87 / 24.36   3.00 / 24.17   3.00 / 24.17   3.07 / 24.06

(* best value in each case)


Figure 3. Source (box_right and box_left) and fused images

Figure 4. Source (ball_left and ball_right) and fused images

CONCLUSION

A practical and common problem of ultrasonic images obtained by sonar was considered, namely artifact removal. In this work, the artifacts generated by the non-uniform directivity of the ultrasonic transducers were considered. The airborne ultrasonic images were obtained with a biomimetic sonar head. The proposed method is based on the image fusion paradigm and is applied under supervised coordination. A set of four case studies was considered, representing images of two objects, from the left and the right receivers. For each object (image), a target (reference) image is selected from a clean database, i.e. one with no artifacts. The two images are merged according to the decision rule for the approximation and detail coefficients. The decision rule is chosen according to what we want to obtain after fusion. For this application of image fusion to ultrasonic images, and for the artifact at hand, the chosen Down-Up fusion rule provides the best results. For other types of artifacts, or to sharpen the fused image further, other fusion rules might apply. The quality interpretation of the obtained results depends on the application, i.e. the destination of the ultrasonic images. For classification purposes, the results are acceptable. Better results could be obtained by considering more complex selection rules and optimized rule selection methods. The novelty of the work lies in the application to airborne ultrasonic images, in general, and in artifact removal by a fusion process, in particular.

REFERENCES

[1] Aiordachioaie, D., Frangu, L. and Epure, S., "On Airborne Ultrasonic Image Generation with Biomimetic Sonar Head", IET Radar, Sonar & Navigation, 2013, vol. 7, pp. 933-949.
[2] Gonzalez, R.C. and Woods, R.E., Digital Image Processing, Prentice Hall, 2008.
[3] Pratt, W.K., Digital Image Processing, John Wiley & Sons, 2001.
[4] Russ, J.C., The Image Processing Handbook, CRC Press LLC, 1999.
[5] Mitchell, H.B., Image Fusion: Theories, Techniques and Applications, Springer, 2010.
[6] Shih, F.Y., Image Processing and Mathematical Morphology: Fundamentals and Applications, CRC Press, 2009.
[7] Aiordachioaie, D. and Frangu, L., "On Uncertainty Sources and Artifacts Compensation of Airborne Ultrasonic Images", 16th IEEE International Conference on System Theory, Control and Computing (ICSTCC 2012), 12-14 October 2012, Sinaia, Romania, paper 33.
[8] Zheng, Y. (Ed.), Image Fusion and its Applications, InTech, 2011.
[9] Hall, D.L. and Llinas, J., "An introduction to multisensor data fusion", Proceedings of the IEEE, 1997, vol. 85, pp. 6-23.
[10] van Genderen, J.L. and Pohl, C., "Image fusion: Issues, techniques and applications", in Intelligent Image Fusion, Proc. of the EARSeL Workshop, Strasbourg, France, van Genderen and Cappellini (Eds.), 1994, pp. 18-26.
[11] Goshtasby, A.A. and Nikolov, S., "Image fusion: Advances in the state of the art. Guest editorial", Information Fusion, 2007, vol. 8, no. 2, pp. 114-118.
[12] Enhanced vision system web page: http://www.hcltech.com/aerospace-and-defense/enhanced-vision-system, 2013.
[13] Stathaki, T., Image Fusion: Algorithms and Applications, Academic Press, 2008.
[14] Pajares, G. and de la Cruz, J.M., "A wavelet-based image fusion tutorial", Pattern Recognition, 2004, vol. 37, pp. 1855-1872.
[15] Wang, Z., Ziou, D., et al., "A Comparative Analysis of Image Fusion Methods", IEEE Transactions on Geoscience and Remote Sensing, 2005, vol. 43, no. 6, pp. 1391-1402.
[16] Waltz, E. and Waltz, T., "Principles and Practice of Image and Spatial Data Fusion", Chapter 5 in Handbook of Multisensor Data Fusion: Theory and Practice, M.E. Liggins, D.L. Hall and J. Llinas (Eds.), CRC Press, 2009, pp. 89-114.
[17] Zheng, Y., Essock, E.A., et al., "A new metric based on extended spatial frequency and its application to DWT based fusion algorithms", Information Fusion, 2007, vol. 8, no. 2, pp. 177-192.
[18] Burt, P.J. and Adelson, E.H., "The Laplacian Pyramid as a Compact Image Code", IEEE Transactions on Communications, 1983, vol. 31, no. 4, pp. 532-540.
[19] Ukimura, O. (Ed.), Image Fusion, InTech, 2011, pp. 211-236.
[20] Guidongu, L., Yi, S., et al., "On the use of data fusion in ultrasonic imaging", Proceedings of the 21st IEEE Instrumentation and Measurement Technology Conference (IMTC 04), 2004, pp. 760-762.
[21] Flusser, J., Sroubek, F., et al., "Image Fusion: Principles, Methods, and Applications", EUSIPCO, Poznan, Poland, 2007.
[22] Zhang, Y., "Understanding Image Fusion", Photogrammetric Engineering & Remote Sensing, 2004, vol. 70, no. 6, pp. 657-661.
[23] Deepali, A.G. and Dattatraya, S.B., "Wavelet based image fusion using pixel based maximum selection rule", International Journal of Engineering Science and Technology, 2011, vol. 3, no. 7, pp. 5572-5578.
[24] Shen, R., et al., "Cross-Scale Coefficient Selection for Volumetric Medical Image Fusion", IEEE Transactions on Biomedical Engineering, 2013, vol. 60, no. 4, pp. 1-10.
[25] Nikolov, S., et al., "Wavelets for Image Fusion", in Wavelets in Signal and Image Analysis: From Theory to Practice, A. Petrosian and F. Meyer (Eds.), Kluwer Academic Publishers, 2001.
[26] Misiti, M., Misiti, Y., Oppenheim, G. and Poggi, J.-M. (Eds.), Wavelets and their Applications, ISTE, London, UK, 2010.
[27] Misiti, M., et al., Matlab Wavelet Toolbox (Version 4.0): Tutorial and Reference Guide, The MathWorks, Natick, USA, 2007.
[28] Daubechies, I., Ten Lectures on Wavelets, CBMS-NSF Series, SIAM, vol. 61, 1994, pp. 198-202 and pp. 254-256.
[29] Abry, P., Ondelettes et turbulence, Diderot Editeur, Paris, 1997.