COMPUTER VISION FOR IDENTIFICATION OF NARROW
WELDING SEAMS FOR ROBOTIC WELDERS
William A. Croft
under the supervision of
Dr Gu Fang
A thesis submitted in partial fulfilment of the requirements for the degree of
Bachelor of Engineering (Robotics & Mechatronics) (Honours)
School of Engineering
University of Western Sydney
November 2010
Acknowledgements
Dr Gu Fang, my supervisor, for his instruction, assistance and advice in undertaking the
research and during the compilation of this thesis. Enough could not be said of the direction,
purpose, enthusiasm and knowledge conveyed to me by Gu. Without his support, the quality
of all the work conducted in pursuing the honours portion of my degree would not be of the
same standard. Above all else, his approval alone of this thesis determines my personal
measurement of success in this endeavour.
Dr Marie Fellbaum Korpi, thesis writing group instructor, for her instruction, assistance and
advice in undertaking the task of writing a thesis. The ability of Marie to implant the blueprint structure for a thesis is beyond question. In utilising that knowledge for writing this thesis, the
hope is that it lives up to the quality of the teaching provided by her.
Mitchell Dinham, for taking time out of his Ph.D. research to offer his assistance with
acquiring all of the captured images needed for my research. In addition, utilising his
research, he was also able to determine the depth information and convert some of the weld
path outputs of this thesis for the robotic welder. In this, he demonstrated practically how the
results could be implemented on the robotic welder.
Table Of Contents

Chapter 4 - Results
4.1 Implementation Of Methods
4.1.1 Hardware Set-up
4.1.2 MATLAB Program
4.2 Case Studies
4.2.1 Images Captured For Developing The Method
4.2.2 Images Captured For Testing The Method
4.2.3 Final Testing For Advanced Seam Configurations
4.3 Summary
Chapter 5 - Conclusion
5.1 Conclusion
5.2 Future Work
References
Bibliography
Appendix
Appendix A Contents Of CD
List Of Tables
Table 4-1: Case 1 - Welding Seam Deviation
Table 4-2: Case 2 - Welding Seam Deviation
Table 4-3: Case 3 - Welding Seam Deviation
Table 4-4: Case 4 - Welding Seam Deviation
Table 4-5: Case 5 - Welding Seam Deviation
Table 4-6: Case 6 - Welding Seam Deviation
Table 4-7: Case 6 - Welding Seam Deviation After Modification
Table 4-8: Case 7 - Welding Seam Deviation
Table 4-9: Case 7 - Welding Seam Deviation After Modification
Table 4-10: Case 8 - Welding Seam Deviation
Table 4-11: Case 9 - Welding Seam Deviation
Table 4-12: Case 9 - Welding Seam Deviation After Modification
Table 4-13: Case 10 - Welding Seam Deviation
Table 4-14: Case 11 - Welding Seam Deviation
Table 4-15: Case 12 - Welding Seam Deviation
Table 4-16: Case 13 - Welding Seam Deviation
Table 4-17: Case 14 - Welding Seam Deviation
Table 4-18: Case 15 - Welding Seam Deviation
Table 4-19: Case 16 - Welding Seam Deviation
Table 4-20: Case 17 - Welding Seam Deviation
Table 4-21: Case 18 - Welding Seam Deviation
Table 4-22: Case 19 - Welding Seam Deviation
Table 4-23: Case 20 - Welding Seam Deviation
List Of Figures
Figure 2-1: Graphical Histogram
Figure 2-2: Sobel and Prewitt Edge Detection Masks
Figure 2-3: Roberts Edge Detection Masks
Figure 2-4: Laplacian Edge Detection Mask
Figure 3-1: Overview of the Method Developed
Figure 3-2: Product and Background Images
Figure 3-3: Product and Normalised Background Images
Figure 3-4: Product and Product After Threshold Masking Images
Figure 3-5: Product After Threshold Masking and Edge Detected Product Images
Figure 3-6: Cartesian Plane For The Edge Detected Product Image
Figure 3-7: Hough Plane For The Accumulator
Figure 3-8: Hough Lines Superimposed Over The Product Image
Figure 3-9: Edge Detected Product and Boundary Masked Edge Detected Product Images
Figure 3-10: Overview Of The Seam Identification Process
Figure 3-11: Identified Welding Seam For The Product
Figure 3-12: Weld Path Superimposed Over Product Image
Figure 4-1: Case 1 Captured Product
Figure 4-2: Case 1 Captured Background
Figure 4-3: Case 1 Pre-Processing
Figure 4-4: Case 1 Threshold Masking
Figure 4-5: Case 1 Edge Detection
Figure 4-6: Case 1 Accumulator
Figure 4-7: Case 1 Hough Lines
Figure 4-8: Case 1 Boundary Masking
Figure 4-9: Case 1 Seam Identification
Figure 4-10: Case 1 Weld Path
Figure 4-11: Case 2 Captured Product
Figure 4-12: Case 2 Captured Background
Figure 4-13: Case 2 Pre-Processing
Figure 4-14: Case 2 Threshold Masking
Figure 4-15: Case 2 Edge Detection
Figure 4-16: Case 2 Accumulator
Figure 4-17: Case 2 Hough Lines
Figure 4-18: Case 2 Boundary Masking
Figure 4-19: Case 2 Seam Identification
Figure 4-20: Case 2 Weld Path
Figure 4-21: Case 3 Captured Product
Figure 4-22: Case 3 Captured Background
Figure 4-23: Case 3 Pre-Processing
Figure 4-24: Case 3 Threshold Masking
Figure 4-25: Case 3 Edge Detection
Figure 4-26: Case 3 Accumulator
Figure 4-27: Case 3 Hough Lines
Figure 4-28: Case 3 Boundary Masking
Figure 4-29: Case 3 Seam Identification
Figure 4-30: Case 3 Weld Path
Figure 4-31: Case 4 Captured Product
Figure 4-32: Case 4 Captured Background
Figure 4-33: Case 4 Pre-Processing
Figure 4-34: Case 4 Threshold Masking
Figure 4-35: Case 4 Edge Detection
Figure 4-36: Case 4 Accumulator
Figure 4-37: Case 4 Hough Lines
Figure 4-38: Case 4 Boundary Masking
Figure 4-39: Case 4 Seam Identification
Figure 4-40: Case 4 Weld Path
Figure 4-41: Case 5 Captured Product
Figure 4-42: Case 5 Captured Background
Figure 4-43: Case 5 Pre-Processing
Figure 4-44: Case 5 Threshold Masking
Figure 4-45: Case 5 Edge Detection
Figure 4-46: Case 5 Accumulator
Figure 4-47: Case 5 Hough Lines
Figure 4-48: Case 5 Boundary Masking
Figure 4-49: Case 5 Seam Identification
Figure 4-50: Case 5 Weld Path
Figure 4-51: Case 6 Captured Product
Figure 4-52: Case 6 Captured Background
Figure 4-53: Case 6 Pre-Processing
Figure 4-54: Case 6 Threshold Masking
Figure 4-55: Case 6 Edge Detection
Figure 4-56: Case 6 Accumulator
Figure 4-57: Case 6 Hough Lines
Figure 4-58: Case 6 Boundary Masking
Figure 4-59: Case 6 Seam Identification
Figure 4-60: Case 6 Weld Path
Figure 4-61: Case 6 - Boundary Masking and Weld Path After Modification
Figure 4-62: Case 7 Captured Product
Figure 4-63: Case 7 Captured Background
Figure 4-64: Case 7 Pre-Processing
Figure 4-65: Case 7 Threshold Masking
Figure 4-66: Case 7 Edge Detection
Figure 4-67: Case 7 Accumulator
Figure 4-68: Case 7 Hough Lines
Figure 4-69: Case 7 Boundary Masking
Figure 4-70: Case 7 Seam Identification
Figure 4-71: Case 7 Weld Path
Figure 4-72: Case 7 - Boundary Masking and Weld Path After Modification
Figure 4-73: Case 8 Captured Product
Figure 4-74: Case 8 Captured Background
Figure 4-75: Case 8 Pre-Processing
Figure 4-76: Case 8 Threshold Masking
Figure 4-77: Case 8 Edge Detection
Figure 4-78: Case 8 Accumulator
Figure 4-79: Case 8 Hough Lines
Figure 4-80: Case 8 Boundary Masking
Figure 4-81: Case 8 Seam Identification
Figure 4-82: Case 8 Weld Path
Figure 4-83: Case 9 Captured Product
Figure 4-84: Case 9 Captured Background
Figure 4-85: Case 9 Pre-Processing
Figure 4-86: Case 9 Threshold Masking
Figure 4-87: Case 9 Edge Detection
Figure 4-88: Case 9 Accumulator
Figure 4-89: Case 9 Hough Lines
Figure 4-90: Case 9 Boundary Masking
Figure 4-91: Case 9 Seam Identification
Figure 4-92: Case 9 Weld Path
Figure 4-93: Case 9 - Boundary Masking and Weld Path After Modification
Figure 4-94: Case 10 Captured Product
Figure 4-95: Case 10 Captured Background
Figure 4-96: Case 10 Pre-Processing
Figure 4-97: Case 10 Threshold Masking
Figure 4-98: Case 10 Edge Detection
Figure 4-99: Case 10 Accumulator
Figure 4-100: Case 10 Hough Lines
Figure 4-101: Case 10 Boundary Masking
Figure 4-102: Case 10 Seam Identification
Figure 4-103: Case 10 Weld Path
Figure 4-104: Case 11 Captured Product
Figure 4-105: Case 11 Captured Background
Figure 4-106: Case 11 Pre-Processing
Figure 4-107: Case 11 Threshold Masking
Figure 4-108: Case 11 Edge Detection
Figure 4-109: Case 11 Accumulator
Figure 4-110: Case 11 Hough Lines
Figure 4-111: Case 11 Boundary Masking
Figure 4-112: Case 11 Seam Identification
Figure 4-113: Case 11 Weld Path
Figure 4-114: Case 12 Captured Product
Figure 4-115: Case 12 Captured Background
Figure 4-116: Case 12 Pre-Processing
Figure 4-117: Case 12 Threshold Masking
Figure 4-118: Case 12 Edge Detection
Figure 4-119: Case 12 Accumulator
Figure 4-120: Case 12 Hough Lines
Figure 4-121: Case 12 Boundary Masking
Figure 4-122: Case 12 Seam Identification
Figure 4-123: Case 12 Weld Path
Figure 4-124: Case 13 Captured Product
Figure 4-125: Case 13 Captured Background
Figure 4-126: Case 13 Pre-Processing
Figure 4-127: Case 13 Threshold Masking
Figure 4-128: Case 13 Edge Detection
Figure 4-129: Case 13 Accumulator
Figure 4-130: Case 13 Hough Lines
Figure 4-131: Case 13 Boundary Masking
Figure 4-132: Case 13 Seam Identification
Figure 4-133: Case 13 Weld Path
Figure 4-134: Case 14 Captured Product
Figure 4-135: Case 14 Captured Background
Figure 4-136: Case 14 Pre-Processing
Figure 4-137: Case 14 Threshold Masking
Figure 4-138: Case 14 Edge Detection
Figure 4-139: Case 14 Accumulator
Figure 4-140: Case 14 Hough Lines
Figure 4-141: Case 14 Boundary Masking
Figure 4-142: Case 14 Seam Identification
Figure 4-143: Case 14 Weld Path
Figure 4-144: Case 15 Captured Product
Figure 4-145: Case 15 Captured Background
Figure 4-146: Case 15 Pre-Processing
Figure 4-147: Case 15 Threshold Masking
Figure 4-148: Case 15 Edge Detection
Figure 4-149: Case 15 Accumulator
Figure 4-150: Case 15 Hough Lines
Figure 4-151: Case 15 Boundary Masking
Figure 4-152: Case 15 Seam Identification
Figure 4-153: Case 15 Weld Path
Figure 4-154: Case 16 Captured Product
Figure 4-155: Case 16 Captured Background
Figure 4-156: Case 16 Pre-Processing
Figure 4-157: Case 16 Threshold Masking
Figure 4-158: Case 16 Edge Detection
Figure 4-159: Case 16 Accumulator
Figure 4-160: Case 16 Hough Lines
Figure 4-161: Case 16 Boundary Masking
Figure 4-162: Case 16 Seam Identification
Figure 4-163: Case 16 Weld Path
Figure 4-164: Case 17 Captured Product
Figure 4-165: Case 17 Captured Background
Figure 4-166: Case 17 Pre-Processing
Figure 4-167: Case 17 Threshold Masking
Figure 4-168: Case 17 Edge Detection
Figure 4-169: Case 17 Accumulator
Figure 4-170: Case 17 Hough Lines
Figure 4-171: Case 17 Boundary Masking
Figure 4-172: Case 17 Seam Identification
Figure 4-173: Case 17 Weld Path
Figure 4-174: Case 18 Captured Product
Figure 4-175: Case 18 Captured Background
Figure 4-176: Case 18 Pre-Processing
Figure 4-177: Case 18 Threshold Masking
Figure 4-178: Case 18 Edge Detection
Figure 4-179: Case 18 Accumulator
Figure 4-180: Case 18 Hough Lines
Figure 4-181: Case 18 Boundary Masking
Figure 4-182: Case 18 Seam Identification
Figure 4-183: Case 18 Weld Path
Figure 4-184: Case 19 Captured Product
Figure 4-185: Case 19 Captured Background
Figure 4-186: Case 19 Pre-Processing
Figure 4-187: Case 19 Threshold Masking
Figure 4-188: Case 19 Edge Detection
Figure 4-189: Case 19 Accumulator
Figure 4-190: Case 19 Hough Lines
Figure 4-191: Case 19 Boundary Masking
Figure 4-192: Case 19 Seam Identification
Figure 4-193: Case 19 Weld Path
Figure 4-194: Case 20 Captured Product
Figure 4-195: Case 20 Captured Background
Figure 4-196: Case 20 Pre-Processing
Figure 4-197: Case 20 Threshold Masking
Figure 4-198: Case 20 Edge Detection
Figure 4-199: Case 20 Accumulator
Figure 4-200: Case 20 Hough Lines
Figure 4-201: Case 20 Boundary Masking
Figure 4-202: Case 20 Seam Identification
Figure 4-203: Case 20 Weld Path
Abstract
In realising automated robotic welding, visual sensory systems are commonly used to identify the seam and extrapolate the trajectory path for the welding torch. However, the methods currently used impose heavy restrictions on the work pieces or the environment. These restrictions result in specialised systems that work only for specific applications; for other applications, humans must retrain the robots. Current systems lack the intelligence to perform autonomous welding on unknown products, and this limits the ability of manufacturers to implement one-off or small-scale production. To date, no adequate method for a visual-based robotic welder to perform autonomous welding exists in the manufacturing industry.
The aim of this study is to develop an autonomous, flexible and effective seam identification method for a visual-based robotic welder that can be implemented in most environments. In particular, the method developed needed to be capable of identifying narrow welding seams, for which the seam information is difficult to extract and often unintentionally removed during image processing operations. The method was also designed to remove or reduce the current restrictions on the work pieces and environment, to make one-off or small-scale production viable.
In considering the aim, flexibility was seen as one of the most important elements to embed in the solution. The method developed uses computer vision to identify the narrow welding seams of the objects tested. The flexibility of the method is demonstrated by accurately identifying the seams of a number of very different objects, with widely varying features, in different environments and lighting conditions. The method is developed and implemented in this study.
In developing the method, various computer vision techniques are used and improvements are made to a number of them. In particular, a threshold masking technique is developed that uses a comparison between the product and background images to suppress the background. Methods are also developed for Hough line selection in the Hough transform, for using the Hough line information, and for combining techniques to identify the seam information.
The Hough transform uses edge information provided by Prewitt's edge detection method. To develop the method of using the Hough transform information, the Hough line selection method is improved: similar lines are grouped and the highest-valued line is selected for each group. The Hough lines created by this method are unbounded. Intersections of these lines within the image identify points of interest, including the boundaries of the objects and the seam start, change-of-angle and end points.
The methods developed to identify the seam combine a number of techniques. To achieve this, the boundary in the edge information is masked to isolate the seam edges, and the seam edges are correlated with the seam information provided by the Hough transform. Locations identified by this correlation are analysed to identify the seam points. The seam identification process provides an ordered path for the robotic welder in the image frame. This path includes the start weld point, the end weld point and any number of points in between to ensure the robotic welder follows the seam.
The developed methods are tested on 20 cases. Statistical analysis of the results checks for accuracy to within 1 mm of the seam centre. The analysis shows that the implementation of the method developed achieved accurate identification of the seam for all of the targeted products.
Chapter 1 - Introduction
1.1 Background
Designing any robotic welding system requires a basic understanding of the welding process.
The welding process is defined by the American Welding Society (American Welding Society
2001) as the joining of metals by fusion through generating heat at the seam. This can be
accomplished with or without contact between the welding machine and the object to be
welded. It can also be accomplished with or without using additional metal as filler.
Due to the generic nature of the welding process, and for the purpose of this research, it is enough to know that welding is a process of joining metals, that the weld seam is the location between the objects to be joined, and that the different applications used to effect a weld are all essentially the same process.
Suitable robotic welding tools for the various applications of welding have already been
developed and incorporated into robotic systems. The existing robotic welding automation
enables the focus of this research to be considered with regards to automating the process of
welding rather than restricted by the technology available. Sciaky (1999) describes the spot
welding process sequence and this is common to all applications of welding:
Step 1. clamping or holding the objects to be welded together;
Step 2. welding the seam between the objects;
Step 3. cooling of the welded product;
Step 4. releasing the welded product.
The process to identify the weld seam is performed between step 1 and step 2. To automate
this process requires a system that identifies the seam and provides the robot with a trajectory
plan for the welding torch.
Using a visual-based system for this step replicates the sense that a human uses when welding. As noted in the Handbook of industrial robotics (Handbook of industrial robotics 1999), vision is essential to a human welder in both the planning and welding processes. For a
human trained in welding, vision is responsible for identifying the objects to be welded together, identifying the seam between the objects, identifying the starting location of the weld, tracking the seam during welding and determining the final location of the weld. Vision provides the human welder with the flexibility to weld virtually any size, shape and configuration of components. It is this flexibility that is required for true automation (Leavers 1992); as such, vision systems are vital for robotic welders in replicating the flexibility of the human welder.
Gan, Zhang & Wang (2007) identify that current visual sensor technology is well developed;
however, application development to utilise the current technology is lacking. In robotic
welding, image processing methods are used to identify the seam and extrapolate the
trajectory for the welding torch. These methods require heavy limitations on the work pieces
or environment. Limitations result in specialised systems that only work for specific
applications. To perform another application needs humans to retrain the robots. This limits
the implementation of using a welding robot for one off and small scale production. To date,
no adequate system exists for an automated autonomous robotic welder in the manufacturing
industry capable of addressing this issue.
1.2 Aims
The aim of this thesis is to design a flexible and effective method for detecting welding
seams using information obtained from a visual-based system. This will assist in the
application development for an autonomous and flexible robotic welder capable of being
implemented in most environments with little or no re-programming.
The method designed needs to work for narrow welding seams, where the seam information is difficult to extract and can be unintentionally removed during image processing operations. The method also needs to remove or reduce the current restrictions on the work pieces and environment, to make one-off and small-scale production involving welding viable.
In this thesis, 'narrow welding seams' are defined as the welding seams formed when objects are abutted. It is assumed that the weld bead has a width of 2 mm. Therefore, the accuracy requirement for the welding seam detection is no more than 1 mm from the seam centre.
The scope of the design focuses on determining the seam information and providing a weld
path in the Cartesian frame for the welding torch of a welding robot. This is one of the most
important components in achieving a complete autonomous robotic welding system.
1.3 Structure Of The Thesis
The remainder of the thesis contains four chapters.
Chapter 2 incorporates a review of the literature researched focussing on four key areas.
These are robotic welding, computer vision, image processing and analysis techniques, and
the culmination of these areas into vision based robotic welding.
Chapter 3 outlines a method to achieve the aim of the thesis. A core process is identified as a
framework and a structured method incorporating this process is developed.
Chapter 4 describes the implementation and results of testing the method developed for
achieving the aim. For this purpose, the equipment used and the MATLAB program developed are described, followed by the reporting and discussion of the results for 20 case
studies. Results are presented as both observable and measurable in order to communicate a
visual overview in conjunction with a deeper analysis.
Chapter 5 concludes the thesis, outlining the achievements and notable technologies utilised in fulfilling the aim, as well as recommendations for the future research and development opportunities identified.
Chapter 2 - Literature Review
2.1 Overview
In the introduction it was identified that narrow welding seam information is often difficult to extract and can be unintentionally removed during image processing operations. It was also identified that the current image processing methods using visual sensory systems impose heavy restrictions on the work pieces or environment of the robotic welder.
The purpose of the literature review is to provide a background on robotic welding and computer vision, and to research the current methods used to extract welding seam information. The literature review also identifies image processing and analysis techniques. These techniques are to be included in the development of a flexible and effective method for processing the images obtained from the visual-based system of a robotic welder.
The literature review begins with the foundations of robotic welding and computer vision. Following this is focussed research into a variety of image processing and analysis techniques. The literature review concludes by bringing these areas together in an overview of vision-based robotic welding.
2.2 Robotic Welding
Robotic welding is defined as "welding that is performed and controlled by robotic
equipment" (American Welding Society 2001). The function of the robot welder can be all or
part of the welding process for it to be considered robotic welding. Using robots to perform
welding applications is desirable for its benefits over manual welding. Cary (1994) offers the
benefits as increased productivity, consistent and predictable quality, predictable weld time,
reduced training for human operators, better weld appearance and safety for human operators.
Installation of complete robotic welding systems can be complex and costly. Ceroni (1999)
studies arc welding robot systems and identifies the components as: the robot, controllers for
the robot, welding equipment and suitable grippers, positioners, safety barriers and screens.
Installation of a robotic welding system requires these components to be integrated into a
have been developed over time.
Image processing techniques identified during the initial research for potential use included
filtering, region growing, connectivity, thresholding and grayscaling with binary and gray
morphology operations. The decision to include or exclude certain techniques in the research
was considered with regard to the ability of the techniques to enhance the seam information.
This could be accomplished by directly adding information to the seam or indirectly by
removing non-seam information.
Image analysis operations were then considered, including edge detection, boundary detection, corner detection and the Hough transform. Identification of the object features, in particular the
seam information, was considered the key to achieving the research aims. Identification of the
physical objects was not. For unknown products, where no prior information is available,
there is no reason for attempting to identify the objects; therefore, it is considered to be
redundant.
The research that follows focuses on some of the image processing techniques that offer the ability to enhance the seam information directly or indirectly, and some of the image analysis techniques that identify object features. The image processing and analysis techniques identified for this include histogram manipulation, filtering, noise reduction, the image segmentation techniques of thresholding and edge detection, and the Hough transform.
2.4.1 Histogram Manipulation
A histogram is a statistical frequency distribution of the intensity level in an image, and is
often represented in a graphical form (Galbiati 1990; Phillips 1994; Sonka 1999). For a black
and white or binary image, the histogram represents the count of black pixels and white pixels
contained in the image. For a grayscale image, the histogram represents a count of the number
of pixels in the image for each shade of gray. In the red, green and blue colour space, an
image has an individual histogram for each of the red, green and blue colour components.
Each histogram for the image represents the number of pixels at each intensity level of that
colour. For the colour image, the three histograms form an overall histogram.
For statistical histograms, counts are divided by the total count. This is performed in order to obtain probability ratios for each count with respect to the total count. For a statistical
histogram this can be expressed as:

h(i) = n(i) / N    for i = 0, 1, ..., 255    (Equation 2.1)

where: h(i) is the count of pixels at each intensity level i with respect to the total pixel count; n(i) is the count of pixels at each intensity level i; and N is the total pixel count.
For images, graphical histograms are preferred for analysis over statistical histograms.
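As a minimal illustration of Equation 2.1 in MATLAB (the environment used in this study), the statistical histogram can be computed from the pixel counts and displayed in graphical form; the image file name here is hypothetical:

% Compute and plot the statistical histogram of a grayscale image.
I = imread('product.png');      % hypothetical grayscale image file
n = imhist(I);                  % n(i+1) is the count of pixels at intensity i
h = n / numel(I);               % divide by the total pixel count N (Equation 2.1)
bar(0:255, h);                  % graphical form, as in Figure 2-1
xlabel('Intensity level i');
ylabel('h(i)');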
Graphical Histograms
Histograms are represented in a graphical form to visualise a contour formed across the graph
for analysis. An even contour across most of the intensity levels in a histogram is usually the
most desirable for the human observer and this describes a 'normal' histogram. A graphical
histogram is shown in Figure 2-1, where the vertical axis describes the count of pixels for
each intensity level described by the horizontal axis.
Figure 2-1: Graphical Histogram
Histogram Shifting
A histogram shift for the graphical histogram in Figure 2-1 can be performed by applying the
following equation:
H2(i) = H1(i - c)    (Equation 2.3)

where: H1 is the old histogram; H2 is the new histogram; i is the intensity level; and c is the constant for the histogram shifting.
Histogram Stretching
Histogram stretching is often used if there is a single narrow peak. This involves selecting the
intensity level at the centre of the peak to redistribute the values of the surrounding pixels.
This requires multiplying a constant gain by each pixel's difference from the peak intensity level and adding the result to the pixel's intensity level. For stretching, the gain used is greater than 1, and this increases the contrast of an image. A gain between 0 and 1 decreases the contrast. If
performed linearly the peak shape is preserved and the frequencies distributed more evenly
across the range of intensities (McAndrew 2004). The stretching process causes a loss of
definition between pixels similar to the histogram shifting process. This loss occurs at both
high and low intensity ranges as they are combined at the range boundaries respectively.
A histogram stretch for the graphical histogram in Figure 2-1 can be performed by applying
the following equation:
H2(i) = H1(i + g(p - i))    (Equation 2.4)

where: H1 is the old histogram; H2 is the new histogram; i is the intensity level; g is the gain constant for the histogram stretching; and p is the peak intensity level.
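A minimal sketch of shifting and stretching as the equivalent pixel operations, assuming an 8-bit grayscale image; the constants c, g and p are illustrative, and uint8 arithmetic saturates at 0 and 255, which produces the loss of definition at the range boundaries described above:

% Histogram shift: add a constant to every pixel intensity (Equation 2.3).
I = imread('product.png');      % hypothetical grayscale image file
c = 40;                         % shift constant
J_shift = I + c;                % uint8 addition saturates at 255

% Histogram stretch: scale each pixel's distance from the peak by the gain.
g = 1.5;                        % gain greater than 1 increases contrast
p = 128;                        % peak intensity level of the histogram
J_stretch = uint8(p + g * (double(I) - p));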
Histogram Equalisation
Histogram equalisation is performed to obtain an even distribution of intensity levels across
the range of intensities (Sonka 1999). Equalisation involves increasing the contrast of the
peaks in the histogram and decreasing the contrast in the valleys. Image intensity values are integers, so this process causes a loss of definition in the valleys. These valleys can be vital to segmenting image components. Many image analysis techniques rely on separating
components of the image. Therefore, histogram equalisation is not considered suitable for use
in a computer vision system.
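The loss of definition can be observed directly with MATLAB's histeq function; the count of distinct intensity levels never increases, because merged levels cannot be separated again:

% Histogram equalisation merges intensity levels in the valleys.
I = imread('product.png');      % hypothetical grayscale image file
J = histeq(I);                  % equalised image
before = numel(unique(I(:)));   % distinct intensity levels before
after  = numel(unique(J(:)));   % distinct intensity levels after (<= before)
fprintf('Levels before: %d, after: %d\n', before, after);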
Histogram manipulation is desirable for the human observer, who cannot easily distinguish
between similar intensity levels. For image processing, manipulating the histogram is not as
desirable. Each intensity level is distinguishable in a digital system. The use of histogram
manipulation needs to serve a particular defined purpose for image processing in a computer
vision system. All of the histogram manipulations identified cause some loss of definition. This
is a loss of information in the image. Before implementing histogram manipulation for a
vision system, this information loss needs to be considered.
2.4.2 Filtering
Filtering is the processing of pixels using a predetermined function, called a mask. A mask is
an example of neighbourhood processing of pixels. The neighbourhood formed by the
immediate pixels surrounding a pixel is a 3x3 neighbourhood. This is the smallest mask that
can be used for image processing.
For a mask, the neighbourhood surrounding a pixel acts as the inputs for the function. The
output calculated is then applied to the pixel modifying its intensity level (McAndrew 2004;
Sonka, Hlavac & Boyle 1999). Filtering an image requires the mask to process each pixel in
the image and apply the same function. Some applications of filters identify an area of the
image as a boundary and apply the filter in that region only.
The majority of image processing techniques apply a filter in some form. As such, many
variants of filters have been developed (Baxes 1994; Blanchet & Charbit 2006; McAndrew
2004; Phillips 1994; Sonka, Hlavac & Boyle 1999). Reasons for applying filters to an image
include sharpening and blurring, edge enhancing, brightening and contrasting. As with
histogram manipulation, most of these reasons are for human perception. For vision systems,
image processing filters are commonly used for noise reduction.
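A short sketch of neighbourhood processing with a 3x3 mask, here applying an illustrative sharpening mask with MATLAB's imfilter:

% Apply a 3x3 sharpening mask to every pixel neighbourhood.
I = im2double(imread('product.png'));   % hypothetical grayscale image file
mask = [ 0 -1  0;
        -1  5 -1;                       % centre pixel weighted against its
         0 -1  0 ];                     % four immediate neighbours
J = imfilter(I, mask, 'replicate');     % 'replicate' pads the image borders
imshow(J);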
Noise Reduction
Noise, for this study, is considered to be any imperfections acquired during the image
capturing process or unwanted artifacts created whilst applying image processing techniques
(McAndrew 2004; Niku 2001). Noise reduction is the process of removing noise from the
image. Two of the simplest filters that exist for removing noise are average and median.
Average Filter
Average filters reduce the noise in an image by applying an averaging mask to the
neighbourhood. The mask for an average filter calculates the average intensity value of a pixel
and its neighbours. The intensity value of the pixel is then changed to this value (McAndrew
2004). Average filters reduce noise by recombining it with the surrounding pixels. Image
information is lost in this way due to the smoothing or blurring effect on the image.
Median Filter
The median for an ordered group of values is the middle value. Median filters work using a
mask that sorts the intensity values of a neighbourhood. The middle value is identified and the
intensity value of the pixel is changed to the median (McAndrew 2004; Phillips 1994). Unlike
an average filter, a median filter preserves more of the sharp variations of the image (Blanchet
& Charbit 2006). Preserving this information improves the accuracy of the results obtained
with image segmentation techniques.
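A comparison of the two filters on an image corrupted with salt-and-pepper noise; the noise density and mask sizes are illustrative:

% Average versus median filtering of impulse noise.
I = imread('product.png');                       % hypothetical grayscale image
noisy = imnoise(I, 'salt & pepper', 0.05);       % add impulse noise
avg = imfilter(noisy, fspecial('average', 3));   % 3x3 average: blurs the noise
med = medfilt2(noisy, [3 3]);                    % 3x3 median: removes impulses
subplot(1, 3, 1); imshow(noisy); title('Noisy'); % while preserving sharp edges
subplot(1, 3, 2); imshow(avg);   title('Average');
subplot(1, 3, 3); imshow(med);   title('Median');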
2.4.3 Image Segmentation
Image segmentation is used to separate components of interest contained within an image
(McAndrew 2004; Sonka, Hlavac & Boyle 1999). Segmentation is generally the step prior to
image analysis techniques that identify an object or aspects of an object (Baxes 1994; Niku
2001). Image segmentation methods usually transform a grayscale image into a binary image
where the pixels assigned the value of 1 represent components of interest. All other pixels are
assigned the value of 0. The basic methods for segmentation are thresholding and edge
detection.
Thresholding
Thresholding is a relatively simple image segmentation technique that separates a grayscale
image into two component levels using a threshold value. The application of thresholding
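A minimal sketch of thresholding in MATLAB; here graythresh selects the threshold value automatically using Otsu's method, which is one common way of choosing it:

% Segment a grayscale image into a binary image with a threshold.
I = imread('product.png');      % hypothetical grayscale image file
level = graythresh(I);          % automatic threshold (Otsu's method)
BW = im2bw(I, level);           % pixels above the threshold become 1,
imshow(BW);                     % all other pixels become 0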
Edge Detection
Edge detection filters use a mask that identifies a particular gradient change across a
neighbourhood of pixels. If the change across the neighbourhood is equal to or higher than the
mask specified, the pixel is assigned a 1 value. All other pixels are assigned a 0 value. The
change in pixel intensity required is a function of the mask multiplied by a scaling factor. The
result of applying an edge detection filter is a binary image describing edges found in the
image. The accuracy of a particular edge detection method is a function of its ability to
identify the particular edges required when applying the filter.
Numerous edge detection filters and methods have been developed (Baxes 1994; McAndrew
2004; Niku 2001; Phillips 1994; Sonka, Hlavac & Boyle 1999). Edge detection filters can be
implemented in a variety of ways. This is achieved through defining different scaling values
for the mask. Common basic edge detection filters are Sobel, Prewitt and Roberts. Common
advanced edge detection methods include Canny's method and Laplacian filters.
Sobel And Prewitt Filters
Sobel and Prewitt edge detection filters are similar in the approach utilised for detecting
edges. The Sobel and Prewitt filters use a 3 by 3 pixel neighbourhood to detect edges in the
image. The filters are directional and depending on the implementation can find vertical,
horizontal or diagonal edges within the image. A particular direction is selected by rotating the
mask (Phillips 1994). The Sobel and Prewitt directional edge detection masks for detecting
vertical and horizontal edges respectively are shown in Figure 2-2.

[ -1  0  1 ]        [  1  2  1 ]
[ -2  0  2 ]        [  0  0  0 ]
[ -1  0  1 ]        [ -1 -2 -1 ]
 Sobel Vertical      Sobel Horizontal

[ -1  0  1 ]        [  1  1  1 ]
[ -1  0  1 ]        [  0  0  0 ]
[ -1  0  1 ]        [ -1 -1 -1 ]
 Prewitt Vertical    Prewitt Horizontal

Figure 2-2: Sobel and Prewitt Edge Detection Masks
The difference between Sobel and Prewitt edge detection filters is in the strength of the edge
detected; the Sobel filter requires stronger edges present in the image and is therefore slightly
less sensitive to noise when compared to the Prewitt filter.
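Both filters are available through MATLAB's edge function, and the masks in Figure 2-2 can also be applied manually; this sketch assumes a grayscale input image:

% Sobel and Prewitt edge detection.
I = imread('product.png');          % hypothetical grayscale image file
BW_sobel   = edge(I, 'sobel');      % binary edge image from the Sobel masks
BW_prewitt = edge(I, 'prewitt');    % binary edge image from the Prewitt masks

% Manual application of the Prewitt vertical mask: fspecial returns the
% horizontal mask, so its transpose gives the vertical mask.
pv = fspecial('prewitt')';
Gv = imfilter(double(I), pv);       % strong responses mark vertical edges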
Roberts Cross-Gradient Filter
Roberts edge detection filters are similar to the edge detection methods used by Sobel and
Prewitt. The fundamental difference is that Roberts uses a cross-gradient across a two by two
neighbourhood of pixels in the mask as shown in Figure 2-3. Compared to the Sobel and
Prewitt edge detection filters, Roberts method requires less processing; however, the result is
also less accurate (Sonka, Hlavac & Boyle 1999).
[  1  0  0 ]        [  0  1  0 ]
[  0 -1  0 ]        [ -1  0  0 ]
[  0  0  0 ]        [  0  0  0 ]

Figure 2-3: Roberts Edge Detection Masks
Canny's Edge Detection Method
Canny's edge detection method was developed with the purposes of finding edges more accurately, by minimising the distance between the edge detected and the actual edge, and of detecting only one edge for any given edge in the original image (McAndrew 2004; Sonka,
Hlavac & Boyle 1999). Canny's method uses a statistical probability approach and a number
of steps to achieve this purpose. Canny's method is sound in mathematical theory; however, it
is also one of the most complex methods of edge detection.
The Canny edge detection method utilises a Gaussian filter. A Gaussian filter is a smoothing
filter, similar to an averaging filter. This is based on the definition of a Gaussian or normal
probability distribution (McAndrew 2004). For ease of understanding, though not strictly
accurate, this can be considered similar in graphical terms to the more commonly known bell
curve distribution. The value for the standard deviation used for the filter defines the size of
the neighbourhood used by the mask. The mask for a Gaussian filter contains higher
weighting for pixels closer to the centre of the mask.
For the Canny method, the derivative of the Gaussian filter is calculated and multiplied with
the Gaussian filter forming a 'convolution' mask. Applying the convolution mask to the rows
and the columns of the image forms two separate edge detected images.
The final step in the Canny method is to combine the separate edge detected images. This is
performed by calculating the relative location of corresponding edge pixels between the
images. These locations are dependent on the parameters used for the mask. This calculation
is used to correlate the detected edges in both images and identify the actual edge location in
the original image.
Canny filters are theoretically ideal for edge detection; however, the application of Canny's
method on real images with imperfections often results in unwanted artifacts. It should be
noted that Canny later evolved his method to include a thresholding filter to achieve better results
(McAndrew 2004).
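As a hedged illustration (not taken from the thesis), MATLAB's built-in edge function provides a Canny implementation along these lines; the threshold pair and standard deviation shown are arbitrary example values.

    % Illustrative sketch: Canny edge detection via the Image Processing
    % Toolbox; threshold pair and sigma are example values only.
    I  = rgb2gray(imread('product.bmp'));       % hypothetical input image
    BW = edge(I, 'canny', [0.05 0.20], 1.5);    % [low high] thresholds, Gaussian sigma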
Laplacian Filter
The previous edge detection filters operate using a gradient or first derivative approach.
Laplacian edge detection filters operate on the second derivative. These filters are non-directional; as such, the primary benefit of using the second derivative is that the image can be rotated without affecting the detected edge (Sonka, Hlavac & Boyle 1999).
Laplacian filters detect both the rise and fall of the gradient at an edge. To improve the
accuracy of the location of the edge detected by Laplacian edge detection, the edge can be
defined as the point at which the second derivative crosses the zero value, known as the zero crossing method. The basic mask for a Laplacian edge detection filter is shown in Figure 2-4.

$$
\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}
$$

Figure 2-4: Laplacian Edge Detection Mask
One major limitation of Laplacian filters is that they are very sensitive to noise, and this has led to the development of the Marr-Hildreth method for edge detection. The Marr-Hildreth
method uses a number of steps to improve the result of applying Laplacian edge detection
filters. The first steps are to apply a Gaussian filter and convolve the Gaussian filter with the
Laplacian filter to form the mask to be applied to the image. This type of mask is known as a
Laplacian of Gaussian filter. When applying the Laplacian of Gaussian filter to the image the
edges are defined using the zero crossing method. The resulting edge detected is often a vast
improvement on the basic Laplacian edge detection filter.
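As a brief illustration under the same caveats, a Laplacian of Gaussian filter can be constructed and applied in MATLAB as follows; the mask size and standard deviation are example values, and edge with the 'log' option performs the full Marr-Hildreth sequence including zero-crossing detection.

    % Illustrative sketch: Laplacian of Gaussian (Marr-Hildreth) filtering.
    I   = im2double(rgb2gray(imread('product.bmp')));
    LoG = fspecial('log', 13, 2.0);     % 13x13 Laplacian of Gaussian mask, sigma = 2
    R   = conv2(I, LoG, 'same');        % second-derivative response

    % edge() applies the same mask and marks the zero crossings directly:
    BW = edge(I, 'log', [], 2.0);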
As identified previously, image segmentation techniques are usually performed prior to using image analysis techniques to identify an object or certain desirable aspects of an object.
One such image analysis technique is the Hough transform.
2.4.4 Hough Transform
Hough transform is an image analysis technique that offers great flexibility in the area of
feature extraction. The Hough transform was developed to overcome common issues with
image segmentation, such as noise and disconnection between features of the same object
(Leavers 1992; McAndrew 2004). The Hough transform offers a way to reconnect these
features in the binary images produced by the image segmentation processes.
The simplest implementation of the Hough transform is to find disconnected straight lines
within an image. Variants of the method used to find straight lines are capable of identifying
other shapes. Therefore, the Hough transform can be used to detect other regular shapes such
as circles or ellipses. The method for detecting straight lines is discussed below.
The general theory behind the Hough transform is that any regular geometric shape can be
described mathematically in the two dimensional space of a captured image. On this foundation, when the image matrix is described in a Cartesian frame, a line in the image matrix can be described in the same way (Leavers 1992; Sonka, Hlavac & Boyle 1999). A straight line is usually described as a line intersecting two points, A and B. Known as the general form, a straight line can be expressed as:
$$
y = mx + b \qquad \text{(Equation 2.5)}
$$

where: m is the gradient or tangent of the line; and b is the offset along the y axis.
There are limitations for using the general form of the straight line expressed in Equation 2.5.
This occurs when the line is vertical and results in the tangent approaching infinity (Sonka,
Hlavac & Boyle 1999). Using a sine and cosine equivalent function to describe the straight line does not suffer this limitation. For the Hough transform it is better to describe the straight
line in the equivalent form of:
$$
\rho = x\cos\theta + y\sin\theta \qquad \text{(Equation 2.6)}
$$

where: the angle θ from the origin describes a line that intersects the straight line perpendicularly; ρ is the distance to the line along angle θ; and x and y describe the coordinates of any point lying on the straight line.
This relationship for straight lines, between the general form in Equation 2.5 and the sine and cosine equivalent function in Equation 2.6, is illustrated in Figure 2-5.

Figure 2-5: Straight Line Relationship Between Cartesian (x, y) and Polar (ρ, θ) Planes
Mathematically, the Hough transform is an analysis of the image space in the Cartesian (x, y) plane and the transformation of this image information into an accumulator in the Hough or polar (ρ, θ) plane. This is performed by inspecting each pixel of a binary representation of an image. For every identified pixel with a value of 1 in the binary image, the x-y coordinate values are solved for Equation 2.6. These calculations are performed for a full 360 degree range of θ values. For each θ value, the corresponding ρ location calculated is increased by 1 in the accumulator. To analyse the accumulator, the highest values in the accumulator are deemed to be coincident with prominent straight lines in the image (Leavers 1992; McAndrew 2004; Niku 2001; Sonka, Hlavac & Boyle 1999).
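For illustration only, the Image Processing Toolbox provides this analysis directly; note that MATLAB's hough function sweeps θ over −90 to 90 degrees rather than a full range, and the parameter values below are arbitrary.

    % Illustrative sketch: straight line detection from a binary edge
    % image BW using the toolbox Hough functions.
    [H, theta, rho] = hough(BW);              % accumulator in the (rho, theta) plane
    peaks = houghpeaks(H, 5);                 % five highest accumulator cells
    lines = houghlines(BW, theta, rho, peaks, ...
        'FillGap', 10, 'MinLength', 30);      % reconnects disconnected segments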
Computer vision systems are increasingly relied upon to increase performance for welding
robots. Often this is performed for error correction by detecting and adapting to small changes
in the expected position or orientation of the product seam (Ma et al. 2009). However, beyond such basic applications, current advanced applications of vision systems do not adequately perform the tasks required of them.
Seam tracking is one of the more successful and advanced applications of using computer
vision for the robotic welder. This is the method of processing real-time images to track the
seam as it is welded and correct the path of the welding torch as needed. However, successful
application of this technology requires the use of lasers and filters (Gan, Zhang & Wang 2007)
and prior knowledge of the starting point of the weld.
As outlined in Section 2.3, Gan, Zhang & Wang (2007) have identified the need for
application development in order to advance vision based systems and by extension vision
based robotic welding. The difficulty in implementing an intelligent vision system for a robotic welder lies in the processing required of the images obtained. Without advancements in application development for processing these images, research on autonomous welding robots cannot progress.
2.6 Summary
To effectively utilise current technology, a number of key points were identified in the
research performed in this chapter. The method developed needs to use a vision based system
that identifies the seam and generates a path for a robotic welder to weld the seam; success in
this would directly address the maturity disparity of technology for computer vision outlined
in Section 2.3. It should also realise the entire weld path prior to beginning welding; requiring
persistent updating and controlling of the welding robot's motion through the visual-based
sensor creates the limitations identified for seam tracking in Section 2.5.
To achieve this aim for the welding robot, each of the image processing technologies
described in Section 2.4 is considered for its advantages and disadvantages. The resulting
selections based on this research are identified and outlined below.
Threshold masking is useful in providing the image processing method with a targeted approach. None of the information in the background surrounding the product is required. To
include this information during later processing would waste the resources of the system. Often, the inclusion of background information can interfere with later techniques. The minimal time required to separate the product from the background is reclaimed through the streamlining this provides to all later steps in the method.
Median filtering, where required, provides noise reduction with the least loss of image information. Median filtering does not blur an image, and this is vital for accurate feature identification.
Edge detection provides a binary image identifying the object boundaries and seam. Similar to
threshold masking, this assists with targeting later processing to specific components of the
image. This is also a vital step for the implementation of the Hough transform.
The Hough transform enables specific geometric shapes to be determined within the image, depending on the way it is implemented; the most basic geometric shape is a straight line, which can then be recreated. The Hough transform is not dependent on the whole of the shape in the original image still being present. If it detects shapes where none exist, methods can be developed to determine which shapes are real and which are not.
These methods provide a foundation to develop an effective seam identification method to achieve the aim of the thesis. In concluding the literature review, consideration of the flexibility of the method was seen to be the key to autonomous functioning for a robot welder. For flexibility, where technology is unable to be human, emulating human behaviour is required. Masking ensures a focus on the object, and edge detection furthers this focus on the areas of interest. Median filtering can be considered a way to ignore individual pixels that have acquired noise, increasing accuracy. The Hough transform is useful in identifying and reconstructing straight line information that may have become disconnected. These are all techniques that humans use; thus, each of the techniques selected has a basis in emulating human behaviour.
Chapter 3 - Methodology
3.1 Structuring The Method
As stated in Chapter 1, the aim of this thesis is to develop a flexible and effective method for
identifying narrow welding seams using information obtained from a visual-based system for
a robotic welder. A number of steps are required to process the input from a visual-based
sensor and convert this information to an output weld path.
By performing steps that focus on particular information that is present in all captured images,
it was expected that the resulting method would be able to perform effectively in a much wider variety of cases. Therefore, in identifying the steps for the method, categorising
information in all captured images of the objects was considered.
To identify the categories of information, each captured product image was analysed for
common information. It was observed that information for the background, object boundaries,
and the welding seam, was present in all product images. Using these identified categories of
information, a core repeatable process of two stages was conceived. This process is:
Stage 1: analyse the information for relevant features;
Stage 2: exclude information that is not required.
These stages can be repeated for each category of information until the seam information is
obtained. In utilising this process as a framework for developing the method, a number of
image analysis techniques are considered for Stage 1 and a number of image processing
techniques are considered for Stage 2.
The overall method developed contains eight steps. These are the image capturing for the
input; pre-processing of the background image; threshold masking to exclude the background
information; edge detection to isolate the product features; Hough transform to identify the
points of interest; boundary masking to isolate the seam information; seam identification to
obtain accurate seam information; and the weld path output. A flowchart of the method
developed is shown in Figure 3-1.
Figure 3-1: Overview of the Method Developed
Included in this chapter are details of the steps for the method developed, beginning with the
input image capture.
3.2 Image Capturing
Image capturing requires capturing two images for each product to be used as the input for the
method developed. This is performed using a digital camera mounted on a robotic arm. The
first image is the background image and captures the background environment in which the
product is placed. The second image is the product image and captures the product in place,
ready to be welded by the robot.
The criteria used whilst capturing the images of the object were for the objects to be abutted and for the robot arm mounted camera to remain stationary. These decisions ensured that the weld seam was narrow, and that the same perspective was used to capture the background and product images. Maintaining the perspective is a requirement for the threshold masking
step where a comparison between images to isolate the product is performed.
In capturing the images, care was taken to encompass a variety of object and environment scenarios, and variance in the attributes of both was actively sought during the image capture phase. By limiting quality control during the image selection phase, it was anticipated that the stock of abutted object images obtained would represent a wide sample of work pieces and environments. Images captured include marked,
dull and reflective objects in conjunction with plain, high contrast, patterned and reflective
backgrounds.
The images are captured in colour in bitmap format. Bitmap was selected as it does not compress the image, which would result in information loss. Each image is a matrix of pixels identifying intensity levels for each of the three red, green and blue colour components.
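For illustration, reading such a bitmap in MATLAB yields exactly this matrix structure (the file name is hypothetical):

    % Illustrative sketch: a colour bitmap loads as an M-by-N-by-3 matrix
    % of uint8 intensity values, one plane per colour component.
    img = imread('product.bmp');
    R = img(:,:,1);  G = img(:,:,2);  B = img(:,:,3);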
The captured images for the example case can be seen in Figure 3-2.
Figure 3-2: Product and Background Images
Whilst capturing the images, it was noticed that the automatic colour balancing functions of
the camera created a disparity in the tonal intensity between the product and background
images. To correct this disparity, pre-processing was added into the method developed.
3.3 Pre-Processing
Pre-processing is required for reversing the automatic colour balancing feature of the digital
camera used during image capturing. This feature is common in current digital camera
technology. The need to reverse this feature arises due to the threshold masking technique
developed. The threshold masking technique requires the background image and the
background information in the product image to be similar in intensity levels.
A technique to reverse the automatic camera colour balancing needed to be developed. This
could only be based on the information provided by the product and background images. For
the pre-processing technique developed, four steps are performed for each colour component in
the images. These steps are sampling the corners of both images; calculating the average
intensity of the samples in both images; calculating the intensity difference of the averages in
both images; and applying the difference to the background image. Each step is described
below.
Step 1: a 3x3 neighbourhood is sampled from each of the four corners of the product and background images:

$$
I_P = \left\{
\begin{bmatrix} p_{1,1} & p_{1,2} & p_{1,3} \\ p_{2,1} & p_{2,2} & p_{2,3} \\ p_{3,1} & p_{3,2} & p_{3,3} \end{bmatrix},
\begin{bmatrix} p_{1,\max-2} & p_{1,\max-1} & p_{1,\max} \\ p_{2,\max-2} & p_{2,\max-1} & p_{2,\max} \\ p_{3,\max-2} & p_{3,\max-1} & p_{3,\max} \end{bmatrix},
\begin{bmatrix} p_{\max-2,1} & p_{\max-2,2} & p_{\max-2,3} \\ p_{\max-1,1} & p_{\max-1,2} & p_{\max-1,3} \\ p_{\max,1} & p_{\max,2} & p_{\max,3} \end{bmatrix},
\begin{bmatrix} p_{\max-2,\max-2} & p_{\max-2,\max-1} & p_{\max-2,\max} \\ p_{\max-1,\max-2} & p_{\max-1,\max-1} & p_{\max-1,\max} \\ p_{\max,\max-2} & p_{\max,\max-1} & p_{\max,\max} \end{bmatrix}
\right\}
$$

with $I_{Bg}$ sampled identically from the background image,

where: I is the sample intensities of the image; P denotes the product image; Bg denotes the background image; and p_{x,y} is the intensity value at the pixel location x, y.
Step 2: the average intensity for each colour in these neighbourhoods is then calculated:

$$
R_{P\,ave} = \frac{1}{N}\sum_{p \in I_P} R_p \qquad
G_{P\,ave} = \frac{1}{N}\sum_{p \in I_P} G_p \qquad
B_{P\,ave} = \frac{1}{N}\sum_{p \in I_P} B_p
$$

$$
R_{Bg\,ave} = \frac{1}{N}\sum_{p \in I_{Bg}} R_p \qquad
G_{Bg\,ave} = \frac{1}{N}\sum_{p \in I_{Bg}} G_p \qquad
B_{Bg\,ave} = \frac{1}{N}\sum_{p \in I_{Bg}} B_p
$$

where: I is the sample intensities of the image; R is the red intensity values in the sample; G is the green intensity values in the sample; B is the blue intensity values in the sample; P denotes the product image; Bg denotes the background image; ave denotes the average; and N is equal to 36, the number of pixels sampled.
Step 3: the averages are used to calculate the red, green and blue colour intensity difference between the product and background image:

$$
C_R = R_{P\,ave} - R_{Bg\,ave} \qquad
C_G = G_{P\,ave} - G_{Bg\,ave} \qquad
C_B = B_{P\,ave} - B_{Bg\,ave}
$$

where: C is the difference constant for each colour; R, G and B are the red, green and blue intensity values in the sample; P denotes the product image; Bg denotes the background image; and ave denotes the average.
Step 4: applying this difference constant to the background image:

$$
Im_{Bg} = \{\, R_{Im.Bg} + C_R,\; G_{Im.Bg} + C_G,\; B_{Im.Bg} + C_B \,\}
$$

where: Im is the image; Bg denotes the background image; R, G and B are the red, green and blue intensity values in the image; and C is the difference constant for each colour.
Once the difference is applied, the background image is matched to the background
information in the product image.
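A minimal MATLAB sketch of these four steps is given below for illustration; it assumes prodImg and bgImg are equally sized RGB images already in memory, and all variable names are illustrative rather than taken from the thesis program.

    % Illustrative sketch of the four pre-processing steps.
    pImg = double(prodImg);                      % product image as doubles
    bImg = double(bgImg);                        % background image as doubles
    [h, w, ~] = size(pImg);
    rows = [1:3, h-2:h];  cols = [1:3, w-2:w];   % the four 3x3 corner blocks

    for c = 1:3                                  % repeat for R, G and B
        avgP  = mean(reshape(pImg(rows, cols, c), [], 1));  % Steps 1-2: product average
        avgBg = mean(reshape(bImg(rows, cols, c), [], 1));  % Steps 1-2: background average
        C = avgP - avgBg;                        % Step 3: difference constant
        bImg(:,:,c) = bImg(:,:,c) + C;           % Step 4: shift the background image
    end
    bgNorm = uint8(min(max(bImg, 0), 255));      % clamp to the valid intensity range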
The technique developed is based on histogram manipulation as described in Section 2.4.1.
The result is a histogram shift of the background image that uses a component of the product
image as a reference for a normal background histogram.
The application of this technique to the background image can be seen in Figure 3-3, where the background information is now similar for both images.
Figure 3-3: Product and Normalised Background Images
To be successful in reversing the camera balancing using the method developed, the product
must be contained wholly within the image frame or, where that is not possible, the product
must not extend through the corners of the image. With the background image histogram
normalised to the background information of the product image, the threshold masking can be
performed.
3.4 Threshold Masking
A threshold masking method was developed to achieve the separation of the product and the
background after pre-processing. This requires identifying similar information coincident in
the product image and normalised background image. Threshold masking prepares the
product image for the edge detection. The method developed here is to perform the threshold
masking in the RGB colour space. Thresholding is normally performed only in grayscale,
particularly for the Otsu method; therefore, a method to achieve this aim needed to be
developed.
To form the binary threshold mask, the inverted matrices are multiplied element by element. This operation results in a matrix that is the intersection of the set of similar information for all three colour components, ensuring the mask only identifies pixels where the original colour intensities are similar across all three colour components. The resulting matrix forms the binary threshold mask.
Following the threshold masking is the edge detection step that requires a grayscale image;
therefore, the binary mask is converted to a grayscale mask. This is performed by multiplying the matrix by the maximum grayscale value of 255. A 5x5 neighbourhood median filter is then
applied to the grayscale mask to reduce the possibility of a narrow seam being included in the
mask. Details of median filtering are described in Section 2.4.2.
To apply the threshold mask the product image also needs to be converted to grayscale. To
form the grayscale product image, the colour components of the product image matrix are
combined using a weighted sum in the form:

$$
Im.P_{Gray} = 0.2989\, Im.P_R + 0.5870\, Im.P_G + 0.1140\, Im.P_B
$$

where: Im is the image; P denotes the product image; Gray denotes grayscale; and R, G and B denote the red, green and blue intensity values in the image.
The threshold mask is subtracted from the grayscale product image. A 5x5 neighbourhood
median filter is then applied to assist with reducing product imperfection information for the
subsequent edge detection process.
The result of applying the threshold mask to the example product can be seen in Figure 3-4,
where the background information has been removed.
The result of applying the Prewitt edge detection method to the example product can be seen
in Figure 3-5, where the boundaries, seam and some of the major product imperfections are
highlighted.
Figure 3-5: Product After Threshold Masking and Edge Detected Product Images
The Prewitt edge detection method offered the optimum balance, highlighting enough of the product feature information without including too much information from the product imperfections. Relevant edge detection information is required for accuracy in the Hough transform analysis.
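For illustration, the Prewitt detection at this step corresponds to a single toolbox call on the masked grayscale product (the variable name is carried over from the earlier sketch):

    % Illustrative sketch: Prewitt edge detection with the default threshold.
    BW = edge(masked, 'prewitt');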
3.6 Hough Transform
The Hough transform analysis is used to infer straight lines contained within the edge detection information, called Hough lines. The intersections of these Hough lines are deemed
to be points of interest, such as the corner boundaries of the object, start and end points of the
welding seam, and changes in the direction of the welding seam.
Details for the Hough transform for straight lines can be found in Section 2.4.4 including the
relationship between the Cartesian and Hough planes. This forms the basis for the
implementation of the Hough transform in the method developed.
To perform the Hough transform requires two steps: generating the accumulator and
identifying Hough lines.
3.6.1 Generating The Accumulator
Generating the accumulator for the Hough transform requires describing the edge detected
product image matrix with an origin in a Cartesian (x, y) plane. The described Cartesian plane
for the example product is shown in Figure 3-6. The x coordinates are the image matrix rows ranging from 1 to the image height in pixels. The y coordinates are the image matrix columns ranging from 1 to the image width in pixels.
Figure 3-6: Cartesian Plane For The Edge Detected Product Image
The Hough transform converts each individual pixel identified in the Cartesian edge information into the accumulator described by the Hough (ρ, θ) plane as shown in Figure 3-7. The transform is performed by inspecting each pixel of the edge detected product. For every pixel identified by the edge detection, the x-y coordinate values are solved for Equation 2.6 over the θ range from 0 degrees to 270 degrees. For each of these calculations the corresponding ρ-θ coordinate in the accumulator is increased by 1.
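A minimal sketch of this accumulator generation, written as described here with a 0 to 270 degree θ sweep, might look as follows in MATLAB (BW is the edge detected binary product image; variable names are illustrative):

    % Illustrative sketch: generating the (rho, theta) accumulator.
    [xs, ys] = find(BW);                             % x-y coordinates of edge pixels
    thetaDeg = 0:270;
    rhoMax   = ceil(hypot(size(BW,1), size(BW,2)));  % largest possible pixel distance
    acc      = zeros(rhoMax, numel(thetaDeg));

    for k = 1:numel(xs)
        rho   = round(xs(k)*cosd(thetaDeg) + ys(k)*sind(thetaDeg));  % Equation 2.6
        valid = rho >= 1 & rho <= rhoMax;            % discard out-of-range values
        idx   = sub2ind(size(acc), rho(valid), find(valid));
        acc(idx) = acc(idx) + 1;                     % one vote per (rho, theta) pair
    end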
The ρ-θ accumulator coordinate frame and resulting accumulator for the example product can be seen in Figure 3-7, where the accumulator shows the convergence of lines created during the transform to the Hough plane for every pixel identified in the edge information. The θ values range from 0 to 270 degrees. The ρ values range from 1 to a maximum possible pixel
distance value based on the image dimensions.
Figure 3-7: Hough Plane For The Accumulator
By observation, the convergences of lines in the accumulator are indicative of straight line information within the transformed image, called Hough lines. The locations at which these convergences occur correspond to the highest values in the accumulator. Locations near to the convergences are also likely to have higher values. Analysis of the accumulator is required to identify the single highest value in the neighbourhood and retrieve the Hough line information.
3.6.2 Identifying Hough Lines
Hough lines are identified by processing the accumulator for the local maxima of the element values. These local maxima identify the line information in the polar ρ and θ coordinates. Local maxima in the accumulator often identify multiple Hough lines for each line visible by inspection. To identify the most likely Hough line, a line selection process was developed to group similar lines. This enabled selection of the highest value line in a group.
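As an illustrative comparison only, the toolbox function houghpeaks performs a related local-maxima search with neighbourhood suppression, which plays a similar role to the grouping process developed here; the parameters shown are arbitrary example values.

    % Illustrative sketch: local maxima of the accumulator with
    % neighbourhood suppression (parameters are example values).
    peaks      = houghpeaks(acc, 10, 'NHoodSize', [11 11]);
    rhoLines   = peaks(:, 1);                % rho indices of candidate lines
    thetaLines = thetaDeg(peaks(:, 2));      % corresponding theta values (degrees)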
The process developed groups lines where the ρ and θ values are similar. To achieve this, all identified lines are sorted