

Ain Shams University

Faculty of Engineering

DESIGN AND IMPLEMENTATION OF FLEXIBLE

MANUFACTURING CELL FOR THE

INSPECTION OF CERAMIC WALL PLATES

A Thesis Submitted in Partial Fulfilment of the Requirements

for the Degree of M.Sc. in Mechatronics

By

Waleed Abd El-Megeed El-Badry

B.Sc. in Mechanical Engineering (Mechatronics Branch)

Teaching Assistant, Mechatronics Department

College of Engineering

Misr University for Science and Technology

Supervised By

Prof. Farid A. Tolbah

Emeritus Professor in Design and Production Engineering Department

Ain Shams University

Dr. Ahmed M. Aly

Assistant Professor in Design and Production Engineering Department

Ain Shams University

2012


ABSTRACT

Quality inspection is one of the recurring topics in the industrial community. Garnished wall plates may require inspection in terms of geometry (height, width, corner radius, etc.), orientation of drawn objects (angles with respect to a reference axis) and colour of drawn objects. Emulation of human decision making in classification is what inspection is all about.

In this thesis, we implemented a hybrid system of fuzzy logic and fuzzy c-mean clustering for quality inspection of garnished ceramic wall plates. This novel approach is designed to inspect colour, geometry, angles and the existence of colour spots. The study is intended to reduce the error due to human visual inspection. A developed Flexible Manufacturing Cell (FMC) is responsible for inspecting the ceramic plates by applying a Computer Vision System (CVS) and automating the cell. Benchmarking of the proposed algorithm gave promising results compared to other published peers.


SUMMARY

The research effort expended upon the problem of objectively inspecting, analysing and characterizing garnished ceramic tiles is easily justified by the commercial and safety benefits to the industry, such as automating an obsolete manual inspection procedure and providing higher homogeneity within sorted grades, which would consequently improve the stability of the overall inspection process.

Tiredness and lack of concentration are common problems, leading to errors in grading tiles. Gradual changes are difficult for human inspectors to detect, and it is possible that slight and progressive changes will not be noticed at an early stage. Additionally, it is particularly difficult for the human eye to accurately sort tiles into shades under changing light conditions in a factory. Human judgment is, as usual, influenced by expectations and prior knowledge.

Since the early 1990s, several published papers have been concerned with finding new approaches to detect cracks and colour spots in ceramic tiles. However, most of them targeted the inspection algorithm and neglected the other aspects of a machine vision system (variability in illumination and surface reflectivity, image acquisition, the mechanical handling system and the electrical interfacing circuit). Meanwhile, measuring the geometry and colour quality of drawn objects, which is demanded when inspecting garnished wall plates, had not been discussed prior to the conducted research.

This thesis offers a novel approach to geometric and colour quality inspection based on a developed hybrid technique of fuzzy logic and fuzzy c-mean clustering to detect cracks, colour spots and colour mismatch. It also offers scoring for the extracted geometric and colour features.

Live image acquisition is discussed in detail, together with the parameters affecting the quality of the acquired image, such as gamma correction and frame rate. Detailed steps for live acquisition are also discussed.
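As a minimal illustration of the gamma correction mentioned above, the following Python sketch applies a per-pixel power law to an 8-bit image; the value γ = 0.7 is the one used in Figure 2.19, while the NumPy-based formulation is an assumption of this sketch rather than the thesis implementation.

```python
import numpy as np

def apply_gamma(image, gamma=0.7):
    """Gamma correction of an 8-bit image via a per-pixel power law."""
    normalised = image.astype(np.float64) / 255.0          # map to [0, 1]
    return np.uint8(np.clip(255.0 * normalised ** gamma, 0, 255))
```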

An algorithm for image calibration to minimise lens distortion is proposed and demonstrated. The proposed algorithm proved its validity in lens and perspective correction.


To perform colour grouping, fuzzy c-mean clustering was chosen, and a visual representation of the clustering is displayed to offer a visual interpretation of the isolation of the drawn objects rather than abstract clustering nodes.
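The following Python sketch shows the standard fuzzy c-mean update equations applied to pixel colours, purely as an illustration of the clustering idea; the fuzzifier m = 2, the number of clusters and the NumPy implementation are assumptions, not the routine used in the developed software.

```python
import numpy as np

def fuzzy_c_means(pixels, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Standard fuzzy c-mean clustering of feature vectors (e.g. RGB pixels)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(pixels), n_clusters))
    u /= u.sum(axis=1, keepdims=True)        # memberships of each pixel sum to 1
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ pixels) / um.sum(axis=0)[:, None]   # weighted cluster centres
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))
        u = 1.0 / ratio.sum(axis=2)          # membership update from relative distances
    return centres, u

# Example: group the pixels of an RGB image and take the strongest membership per pixel
# pixels = image.reshape(-1, 3).astype(float)
# centres, memberships = fuzzy_c_means(pixels)
# labels = memberships.argmax(axis=1)
```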

Geometric feature extraction takes place by means of edge detection using a tuned Prewitt filter along rays of spokes, with line fitting for evaluation of edge length. Moreover, geometric and colour scores are calculated based on the measured metrics.

A fuzzy classifier having two inputs (geometric and colour scores) and three outputs (acceptable, colour spot and colour mismatch) was built for garnished ceramic grading.
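A minimal Python sketch of such a two-input, three-grade Mamdani-style classifier is given below. The membership break points, the rule base and the 0-100 score scale are illustrative assumptions; the actual membership functions and rules are those of Tables 5.1-5.3.

```python
def tri(x, a, b, c):
    """Triangular membership function evaluated at crisp input x."""
    left = (x - a) / (b - a) if b != a else 1.0
    right = (c - x) / (c - b) if c != b else 1.0
    return max(min(left, right), 0.0)

def grade_plate(colour_score, geometry_score):
    """Mamdani-style min/max inference with a placeholder rule base."""
    # Fuzzification of the two inputs (assumed 0-100 scale)
    cs = {"low": tri(colour_score, 0, 0, 55), "high": tri(colour_score, 45, 100, 100)}
    gs = {"low": tri(geometry_score, 0, 0, 55), "high": tri(geometry_score, 45, 100, 100)}
    # Rule firing strengths (AND -> min) and aggregation (OR -> max)
    strength = {
        "acceptable":      min(cs["high"], gs["high"]),
        "colour mismatch": min(cs["low"],  gs["high"]),
        "colour spot":     max(min(cs["low"], gs["low"]), min(cs["high"], gs["low"])),
    }
    # Crisp decision: the grade with the strongest aggregated firing
    return max(strength, key=strength.get), strength

print(grade_plate(85, 90))   # -> ('acceptable', ...)
```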

A test rig was designed and manufactured for evaluation of the proposed system. The evaluation showed promising results in terms of the detection accuracy of the correct ceramic grade. The effect of ceramic orientation was also studied to demonstrate the efficiency of the proposed calibration algorithm. Meanwhile, a benchmarking was carried out to assess the overall performance, and a comparative study was performed to show the effect of using fuzzy c-mean rather than the nearest neighbour approach, with fuzzy c-mean giving better correct pattern detection.

Parts of the work presented in this thesis were published in [1-3].


ACKNOWLEDGEMENT

The researcher is obliged to his supervisors, Prof. Farid Tolbah and Dr. Ahmed Aly, for their continual support and guidance during the research period. He is also grateful to the venerable referees, Prof. Magdy Abdel Hameed and Prof. Mohammed El-Adawy, for their kind hospitality and for their comments that enhanced this research.

The priceless help of Prof. Mustafa Eissa in reviewing and enhancing this thesis will be remembered for life.

The conducted research could not have been carried out without the devoted assistance of Eng. Mahmoud Hassan, who contributed significantly to the mechanical design of the built flexible cell.


TABLE OF CONTENTS

LITERATURE REVIEW ....................................................................................... 1

1.1 INTRODUCTION .................................................................................................. 1

1.2 MACHINE VISION AND COMPUTER VISION .............................................................. 3

1.3 MACHINE VISION AND HUMAN VISION ................................................................... 6

1.4 INSPECTION OF CERAMIC PLATES ............................................................................ 6

1.5 CHALLENGES IN AUTOMATED INSPECTION ............................................................... 7

1.6 STATEMENT OF THE PROBLEM ............................................................................. 15

1.7 OBJECTIVE OF THE PRESENT WORK ....................................................................... 15

IMAGE ACQUISITION AND PRE-PROCESSING ................................................ 17

2.1 SPATIAL CALIBRATION ....................................................................................... 17

2.1.1 Lens distortion ....................................................................................... 18

2.1.2 Telecentric Lens ..................................................................................... 18

2.1.3 Software based correction .................................................................... 19

2.2 PROPOSED ALGORITHM FOR CORRECTING AND CALIBRATING LENS DISTORTION ............. 20

2.2.1 Colour plane extraction ......................................................................... 20

2.2.2 Automatic Thresholding ........................................................................ 21

2.2.3 Particle grouping ................................................................................... 23

2.2.4 Particles and distance measurements................................................... 24

2.2.5 Performing correction on each pixel ..................................................... 26

2.3 IMAGE ACQUISITION .......................................................................................... 29

2.3.1 Analogue cameras................................................................................. 29

2.3.2 Digital Cameras ..................................................................................... 30
2.3.2.1 Camera Link ........................................................................................ 31
2.3.2.2 USB cameras ....................................................................................... 31
2.3.2.3 IEEE 1394 (Firewire cameras) .............................................................. 31

2.4 ACQUISITION PARAMETERS ................................................................................. 32

2.4.1 Image format ........................................................................................ 32

2.4.2 Speed ..................................................................................................... 33

2.4.3 Bayer colour filter .................................................................................. 33

2.4.4 Brightness and contrast ........................................................................ 35

2.4.5 Gain ....................................................................................................... 36

2.5 GAMMA CORRECTION ....................................................................................... 37

SOFT PARTITIONING AND COLOUR GROUPING ............................................. 39

3.1 CLUSTERING .................................................................................................... 39

3.2 PROBLEM OF CLUSTERING .................................................................................. 40

3.3 ADVANTAGES OF USING CLUSTERING .................................................................... 40

3.4 CONSTRAINTS OF USING CLUSTERING .................................................................... 40


3.5 METHODOLOGIES OF DISTANCE MEASUREMENTS .................................................... 41

3.5.1 Common distance functions .................................................................. 41
3.5.1.1 The Euclidean distance ....................................................................... 41
3.5.1.2 The Manhattan distance (aka taxicab norm or 1-norm) ..................... 41
3.5.1.3 The Hamming distance ....................................................................... 42

3.6 CLUSTERING TECHNIQUES .................................................................................. 43

3.7 EFFECTIVENESS OF COLOUR CLUSTERING ............................................................... 44

3.7.1 The need for fuzzification in colour grouping ........................................ 44

3.7.2 Challenges in fuzzy clustering ............................................................... 44

3.7.3 Proposed soft partitioning and group visualization using fuzzy

clustering (fuzzy c-mean) ................................................................................... 45

GEOMETRIC FEATURE EXTRACTION .............................................................. 49

4.1 DEFINITION OF FEATURE VECTOR ......................................................................... 49

4.2 FEATURES OF INTEREST IN GARNISHED WALL PLATES ................................................ 50

4.3 APPLIED ALGORITHM FOR PRELIMINARY LOCATING OF DRAWN OBJECTS ....................... 50

Methodology of edge detection ............................................................ 50

Formulating edge detectors .................................................................. 51

Tuned Prewitt Filter ............................................................................... 53

4.4 COMPUTING FEATURE SCORE ............................................................................. 55

Calculating the score for colour quality ................................................ 55

Calculating the accuracy of geometric features .................................... 55

CLASSIFICATION OF CERAMIC PLATES USING FUZZY LOGIC ........................... 58

5.1 FUZZY LOGIC AND FUZZY LOGIC SYSTEMS ............................................................... 58

5.2 MAMDANI’S FUZZY LOGIC SYSTEM ....................................................................... 60

5.2.1 Fuzzification .......................................................................................... 60

5.2.2 Rules ...................................................................................................... 61
5.2.2.1 Fuzzy operations ................................................................................. 61
5.2.2.2 Implication .......................................................................................... 63
5.2.2.3 Aggregation ......................................................................................... 63

5.2.3 Defuzzification ....................................................................................... 64
5.2.3.1 Defuzzification techniques .................................................................. 65

5.3 METHODOLOGIES UTILIZED IN BUILDING A FUZZY INFERENCE SYSTEM .......................... 65

5.4 FORMULATING THE PROPOSED FUZZY CLASSIFIER FOR THE ASSESSMENT OF GARNISHED WALL PLATES ................................................................................................................ 67

5.4.1 Fuzzification of input and output .......................................................... 68

5.4.2 Rules for classification ........................................................................... 69

5.4.3 Proposed modified defuzzification method ........................................... 70

RESULTS AND DISCUSSION ........................................................................... 71

6.1 IMPLEMENTATION OF THE PROPOSED SYSTEM ............................................................... 71


6.1.1 Mechanical Design .................................................................................... 71

6.1.2 Electrical Circuitry ..................................................................................... 72

6.1.3 Software Architecture ............................................................................... 74

6.2 RESULTS ................................................................................................................ 75

6.3 DISCUSSION ........................................................................................................... 79

CONCLUSION ................................................................................................ 81

FUTURE WORK .............................................................................................. 82

REFERENCES ................................................................................................. 83

PUBLISHED PAPERS ....................................................................................... 89


LIST OF FIGURES

FIGURE 1.1 MECHATRONICS ENGINEERING DISCIPLINE .................................................................. 1

FIGURE 1.2 ARCHETYPAL MACHINE VISION SYSTEMS APPLICABLE TO A WIDE ..................................... 2

FIGURE 2.1 NOTION OF IMAGE CALIBRATION ............................................................................ 17

FIGURE 2.2 TYPES OF LENS DISTORTION ................................................................................... 18

FIGURE 2.3 IN A TELECENTRIC SYSTEM RAYS GET INTO THE OPTICS ONLY ......................................... 19

FIGURE 2.4 THE PROPOSED ALGORITHM FOR PERSPECTIVE CORRECTION ......................................... 20

FIGURE 2.5 SCREENSHOT OF CALIBRATION GRID USED IN CORRECTION AFTER RED PLANE EXTRACTION .. 21

FIGURE 2.6 ADAPTIVE THRESHOLDING USING CLUSTERING TECHNIQUE ........................................... 22

FIGURE 2.7 RESULTED IMAGE OF BOUNDARY TRACKING OF GRID DOTS ........................................... 23

FIGURE 2.8 DETECTING CENTRES TO CHECK FOR SKEWNESS OF EACH DOT ....................................... 25

FIGURE 2.9 VISUALIZATION OF DETECTED DOTS IN THE GRID ........................................................ 25

FIGURE 2.10 PIXEL MAPPING TO RESTORE EXPECTED GRIDS LOCATIONS .......................................... 26

FIGURE 2.11 ACQUIRED IMAGE OF CERAMIC PLATE .................................................................... 27

FIGURE 2.12 THE CORRECTED IMAGE AFTER APPLYING THE ALGORITHM ......................................... 28

FIGURE 2.13 SCREENSHOT OF THE DEVELOPED SOFTWARE FOR CALIBRATION OF CERAMIC PLATE ......... 28

FIGURE 2.14 CCD ARRAY CONFIGURATION .............................................................................. 30

FIGURE 2.15 IMAGE FORMAT AND DELIVERED SPEED RATE .......................................................... 33

FIGURE 2.16 BAYER FILTER (CONCEPT AND CONFIGURATION)....................................................... 34

FIGURE 2.17 BRIGHTNESS ADJUSTMENT .................................................................................. 35

FIGURE 2.18 GAIN CONTROL ................................................................................................. 36

FIGURE 2.19 GAMMA CORRECTION OF THE ACQUIRED CERAMIC IMAGE (γ=0.7) .............................. 37

FIGURE 2.20 IMPLEMENTATION OF GAMMA CORRECTION .......................................................... 38

FIGURE 3.1 CLUSTERING IS PERFORMED BY GROUPING EACH SIMILAR DATA BASED ON COMMON CRITERIA

........................................................................................................................... 39

FIGURE 3.2 TAXICAB GEOMETRY VS. EUCLIDEAN DISTANCE .......................................................... 42

FIGURE 3.3 EXAMPLE OF HAMMING DISTANCE .......................................................................... 42

FIGURE 3.4 VISUAL REPRESENTATION OF CLUSTERING RESULT ON THE GARNISHED WALL PLATES .......... 46

FIGURE 3.5 THE DEVELOPED SOFTWARE SHOWING FUZZY CLUSTERING EFFECT IN HORIZONTAL

ORIENTATION ......................................................................................................... 47

FIGURE 3.6 THE DEVELOPED SOFTWARE SHOWING FUZZY CLUSTERING EFFECT IN VERTICAL ORIENTATION

........................................................................................................................... 48

FIGURE 4.1 EXTRACTING GEOMETRIC FEATURES ........................................................................ 49

FIGURE 4.2 FEATURE VECTOR AS METRICS FOR INSPECTION.......................................................... 49

FIGURE 4.3 PROCESSED PIXEL AND ITS NEIGHBOURS ................................................................... 51

FIGURE 4.4 KERNEL MULTIPLIED BY THE IMAGE WINDOW ............................................................ 51

FIGURE 4.5 EDGE DETECTION AND EXTRACTING GEOMETRY OF DRAWN OBJECTS .............................. 53

FIGURE 4.6 FITTING DETECTED POINTS ALONG THE PROFILE INTO LINES .......................................... 54

FIGURE 4.7 SCREENSHOT OF THE DEVELOPED SOFTWARE SHOWING THE FEATURE EXTRACTION STAGE .. 56

FIGURE 5.1 AN EXAMPLE OF MEMBERSHIP FUNCTION FOR EVALUATION OF A PROGRAM ..................... 59


FIGURE 5.2 FUZZIFYING CERAMIC PLATES FEATURES VECTOR ........................................................ 61

FIGURE 5.3 EXAMPLE OF IMPLEMENTING FUZZY LOGICAL OPERATIONS ........................................... 62

FIGURE 5.4 AN IMPLICATION EXAMPLE FROM THE PROPOSED RULES .............................................. 63

FIGURE 5.5 EXAMPLE OF AGGREGATING EFFECT ON THE POSSIBILITY OF ACCEPTING THE CERAMIC PLATE

........................................................................................................................... 64

FIGURE 5.6 RESULT OF DIFFERENT DEFUZZIFICATION TECHNIQUES ................................................ 66

FIGURE 5.7 FUZZY LOGIC SYSTEM .......................................................................................... 67

FIGURE 5.8 SCREENSHOT OF THE DEVELOPED MAMDANI’S INFERENCE ENGINE FOR CERAMIC ASSESSMENT

........................................................................................................................... 67

FIGURE 5.9 COLOUR SCORE MEMBERSHIP FUNCTION .................................................................. 68

FIGURE 5.10 GEOMETRIC SCORE MEMBERSHIP FUNCTION ........................................................... 68

FIGURE 5.11 THE OUTPUT MEMBERSHIP FUNCTIONS OF CERAMIC GRADING .................................... 69

FIGURE 6.1 THE TEST RIG OF THE PROPOSED SYSTEM .................................................................. 71

FIGURE 6.2 THE H-BRIDGE CIRCUIT ......................................................................................... 72

FIGURE 6.3 SCHEMATIC DIAGRAM OF THE USB INTERFACING CARD ............................................... 73

FIGURE 6.4 PCB LAYOUT OF THE INTERFACING CARD .................................................................. 73

FIGURE 6.5 ARCHITECTURAL DESIGN OF THE DEVELOPED SOFTWARE .............................................. 74

FIGURE 6.6 ROTATED GARNISHED CERAMIC PLATE ..................................................................... 75

FIGURE 6.7 CERAMIC PLATE WITH COLOUR MISMATCH ............................................................... 75

FIGURE 6.8 CERAMIC PLATE WITH COLOUR SPOTS ...................................................................... 76

FIGURE 6.9 THE EFFECT OF ROTATING CERAMIC PLATES OVER CORRECT DETECTION OF ITS CLASS ......... 77

FIGURE 6.10 DETECTION PERCENTAGE OF CERAMIC GRADE (NEAREST NEIGHBOUR VS. FUZZY C-MEAN) .... 78


LIST OF TABLES

TABLE 1.1 THE DIFFERENCE BETWEEN MACHINE VISION AND COMPUTER VISION ................................ 5

TABLE 2.1 COMPARISON OF AVAILABLE CAMERAS UTILIZED IN MACHINE VISION APPLICATION ............. 32

TABLE 5.1 RANGE OF COLOUR SCORE ...................................................................................... 68

TABLE 5.2 RANGE OF GEOMETRIC SCORE ................................................................................. 68

TABLE 5.3 RULES FOR CLASSIFYING GARNISHED PLATES ............................................................... 69

TABLE 6.1 RESULT OF THE PROPOSED CLASSIFICATION SYSTEM FOR THE STATED FIGURES ................... 76

TABLE 6.2 ACCURACY OF AUTOMATIC DETECTION OF CORRECT CLASSES ......................................... 77

TABLE 6.3 SELECTED METRICS FOR SOFTWARE PERFORMANCE ASSESSMENT .................................... 79

TABLE 6.4 SYSTEM CONFIGURATION ....................................................................................... 79


LIST OF PSEUDO CODE

LIST 2.1 COLOUR PLANE EXTRACTION ALGORITHM ..................................................................... 21

LIST 2.2 PSEUDO CODE OF THRESHOLDING USING CLUSTERING ALGORITHM ............................ 23

LIST 2.3 BOUNDARY EXTRACTION OF GRIDS CONTOUR ................................................................ 24

LIST 2.4 RESTORATION OF GRIDS CIRCULAR SHAPE ..................................................................... 26

LIST 2.5 RETRIEVAL OF GRIDS CORRECTED LOCATIONS ................................................................. 26

LIST 3.1 COLOUR SORTING WITH FUZZY C-MEAN ........................................................................ 46

LIST 3.2 THE ALGORITHM FOR GROUPING OBJECTS BY COLOUR AND LOCATION ................................ 47


LIST OF ABBREVIATIONS AND ACRONYMS

Aka Also Known As
API Application Programming Interface
ASSIST Automatic System for Surface Inspection and Sorting of Tiles
BMU Best Matching Unit
CCD Charge Coupled Device
CFA Colour Filter Array
CS Colour Score
CV Computer Vision
DOM Degree Of Membership
DS Dimension Score
DTS Distance Score
FCM Fuzzy C-Mean
FIS Fuzzy Inference System
FL Fuzzy Logic
FLS Fuzzy Logic System
FMC Flexible Manufacturing Cell
FMS Flexible Manufacturing System
IPT Interactive Prototyping Toolkit
LOM Largest Of Maximum
MF Membership Function
MOM Middle Of Maximum
MV Machine Vision
OGS Overall Geometry Score
OS Orientation Score
RGB Red, Green and Blue colour planes
ROI Region Of Interest
SOM Self-Organising Maps
SOM Smallest Of Maximum
TM Target Machine
TMPLAR Template Learning from Atomic Representations
VB Visual Basic
Vs Versus


Literature Review

1.1 Introduction

Mechatronics is an integration of mechanical engineering, electrical engineering, automatic control and computer science [4]. This discipline has gained momentum in the industrial community since researchers in the field became involved in robotics, automated measurement and machine vision (Figure 1.1).

Figure 1.1 Mechatronics engineering discipline

Machine vision (MV), as shown in Figure 1.2, is concerned with the engineering of integrated mechanical-optical-electronic-software systems for examining objects and materials, human artefacts and manufacturing processes, in order to detect defects and improve quality, operating efficiency and the safety of both products and processes.

In addition to inspection, machine vision can be used to control the machines used in manufacturing and material processing [5]. These may perform such operations as cutting, trimming, grasping, manipulating, packing, assembling, painting, decorating, coating, welding, etc. [6].

[Figure 1.1 content: Mechanical Engineering (handling units, conveyors, stress analysis); Electrical Engineering (interfacing with electronic peripherals, signal conditioning, signal transmission); Computer Science (software engineering, code reusability, hardware interfacing APIs); Automatic Control (dynamic response, intelligent control, system stability).]


Automated visual inspection systems allow manufacturers to monitor and control product quality, thus maintaining or enhancing their competitive position. Machine vision is also being used to ensure greater safety and reliability of manufacturing processes.

Figure 1.2 Archetypal Machine Vision systems applicable to a wide

The confidence being gained by applying machine vision to engineering manufacturing is now spilling over into industries such as food processing, agriculture, horticulture, textile manufacturing, etc., where product variability is intrinsically higher.


Thus, we are at the threshold of what we predict will be a period of rapid growth of interest in machine vision aided flexible manufacturing cells (FMC) in the Egyptian market.

Implicit in the preceding figure is the fact that machine vision is a multi-disciplinary subject that necessarily involves designers in mechanical, optical, electrical, electronic (analogue and digital) and software engineering, as well as mathematical analysis (of image processing procedures) [7].

Less obviously, several aspects of “software engineering” are also required, including human–computer interfacing, work management and quality assurance procedures. Integrating such varied technologies to create a harmonious, unified system is of paramount importance; failure to do so properly will inevitably result in an unreliable and inefficient machine.

We may observe that machine vision is a strong candidate for mechatronics applications, since both fields dig into the same areas [1].

1.2 Machine Vision and Computer Vision

Machine vision is not synonymous with Computer Vision (CV). Computer vision is a branch of computer science in which image manipulation is the primary concern. Computer vision is also known as digital image processing, as it involves image manipulation, analysis, compression and presentation. On the other hand, machine vision is an engineering discipline in which software and hardware are tied to each other.

Several topics comprise machine vision systems, such as illumination, selection of cameras, interfacing circuitry and mechanical actuation systems [1]. Less obviously, several aspects of “multi-domain knowledge” are also required, including human–computer interfacing, work management and quality assurance procedures.

Hence, a team of engineers with a range of skills is needed to design a successful machine vision system. Integrating such varied technologies to create a harmonious, unified system is of paramount importance; failure to do so properly will inevitably result in an unreliable and inefficient machine.


This point cannot be over-emphasised. This is why we insisted in the very first pages of this thesis that Machine Vision is quite distinct from Computer Vision.

The point is made more forcibly in Table 1.1. The distinction between machine vision and computer vision reflects the diversity that exists between engineering and science. Entries in the central column relate to the factory-floor Target Machine (TM), unless otherwise stated, in which case they refer to an Interactive Prototyping Toolkit (IPT). The following table (Table 1.1) depicts the major observations distinguishing Machine Vision from Computer Vision.

Vision systems are currently being used extensively in the manufacturing industry, where they perform a very wide variety of inspection, monitoring and control functions. The areas of manufacturing that have benefited most in the past include electronics, automobiles, aircraft and domestic products from furniture polish and tooth-paste to refrigerators and washing machines.

Vision systems have also been used in the food industry, agriculture and horticulture, although to a smaller extent. Machine vision is a part of many inspection processes nowadays; these may include:

Analysing the shape of whole products as a prelude to processing them using robotic manipulators.
Analysing texture.
Counting.
Detecting foreign bodies.
Grading.
Measuring linear dimensions.


Table 1.1 The difference between machine vision and computer vision [8]

Feature | Machine Vision | Computer Vision
Motivation | Practical | Academic
Advanced in theoretical sense | Unlikely (practical issues are likely to dominate) | Yes; many academic papers contain a lot of “deep” mathematics
Cost | Critical | Likely to be of secondary importance
Dedicated electronic hardware | Possibly needed to achieve high-speed processing | No (by definition)
Use integrated solutions | Yes (e.g., systems are likely to benefit from careful lighting) | No; there is a strong emphasis on proven algorithmic methods
Data source | A piece of metal, plastic, glass, wood, etc. | Computer file
Most important criteria by which a vision system is judged | (a) easy to use; (b) cost-effective; (c) consistent and reliable; (d) fast | Performance
Multi-disciplinary | Yes | No
Criterion for good solution | Satisfactory performance | Optimal performance
Nature of subject | Systems Engineering (practical) | Computer Science, academic (theoretical)
Human interaction | IPT: vision engineer; TM: low skill level during set-up, autonomous in inspection mode | Often relies on the user having specialist skills (e.g., medical background)
Operator skill level required | (a) IPT: medium/high; (b) TM: must be able to cope with low skill level | May rely on the user having specialist skills (e.g., medical background)
Output data | Simple signal, to control external equipment | Complex signal, for a human being
Principal factor determining processing speed | IPT: human interaction; TM: speed of production | Human interaction; often of secondary importance


1.3 Machine Vision and Human Vision

Machine Vision does not set out to emulate human vision. At some time in the future, it may be both possible and convenient to build a machine that can “see” like a human being; at the moment, it is not. Today, an industrial Machine Vision engineer is likely to regard any new understanding that biologists or psychologists obtain about human or animal vision as interesting but largely useless.

The reason is simple: the “computing machinery” used in the natural world (networks of neurons) is quite different from that used by electronic computers [9-11]. Certainly, no person can properly look introspectively at his/her own thought processes in order to understand how he/she analyses visual data. Moreover, there is no need to build machines that see the world as we do. Even if we could determine exactly how a person sees the world, it would not be necessary to build a machine to perform the task in the same way.

In the natural world, there are clearly several different and successful paradigms for visual processing, including insects, fish and mammals. In any case, such a machine would have the same weaknesses and be susceptible to the same optical illusions as we are.

1.4 Inspection of ceramic plates

The ceramic tiles industrial sector is a relatively young industry which has taken significant advantage of the strong evolution in the world of automation in recent years. All production phases have been addressed through various technical innovations, with the exception of the final stage of the manufacturing process. This is concerned with visual surface inspection in order to sort tiles into distinct categories or to reject those found with defects and pattern faults. The generally accepted manual method of tile inspection is labour intensive, slow and subjective.


Automated sorting and packing lines have been in existence for a number of years; however, the complexity of inspecting tiles for damage and selecting them against the individually set quality criteria of a manufacturer has meant that, until recently, automated tile inspection has not been possible.

1.5 Challenges in Automated Inspection

The research effort expended upon the problem of objectively inspecting, analysing and characterizing ceramic tiles is easily justified by the commercial and safety benefits to the industry:

Automation of a currently obsolete and subjective manual inspection procedure.
Significant reduction of the need for human presence in hazardous and unhealthy environments.
More robust and less costly inspection.
Higher homogeneity within sorted classes of products.

Tiredness and lack of concentration are common problems, leading to errors in grading tiles. Gradual changes are difficult for human inspectors to detect, and it is possible that slight and progressive changes will not be noticed at an early stage. Additionally, it is particularly difficult for the human eye to accurately sort tiles into shades under changing light conditions in a factory [12]. Human decision making is, as usual, influenced by expectations and prior knowledge.

However, this problem is not specific to structural defects. In many detection tasks, for example edge detection, there is a gradual transition from presence to absence. On the other hand, in “obvious” cases most naïve observers agree that the defect is there, even when they cannot identify the structure.

The goal of the inspection is not to give a statistical analysis of the production but to classify every tile into quality-constant batches. These tasks are often referred to as visual inspection.


In 1976, “H. Blossel” [13] developed the first successful model for sorting ceramic tiles. The system inspects captured images by means of pixel comparison between a master plate and the plate under inspection. Although the system was genuine in its implementation, it lacked consideration of the mechanical handling system and of interfacing with the suggested controller (a microprocessor). Moreover, it assumed that the image is taken with no noise, which is hardly achievable up to this moment [2, 14, 15]. Meanwhile, to inspect any plate, the same pattern distribution must be retained to perform the pixel comparison successfully.

The system developed by “Desoli” was concerned with corner defects in ceramic tiles [16]; it converts the captured image into a binary image and thereafter searches for edges by comparing each pixel to its surrounding neighbours. He performed a comparison of edge detection filters and showed that the Sobel filter is the proper one to use in his application. The system showed promising results (97.3% correct classification). However, the algorithm was affected by illumination, which degrades edge detection drastically [17]. The specification of the practical implementation was not mentioned either.

The research carried out in [17, 18] was divided into two parts. The first part [18] proposed a group of inclined mirrors for indirect illumination as an approach to minimize reflectivity. They also proved that if an image of the tile is taken from an oblique angle, cracks can be detected easily. In the second part [17], computer software was implemented to perform online inspection. The program runs under DOS and was written in the C language. No information about the hardware used was provided in this publication.

“Boukouvalas” added colour inspection capabilities to his system by using fuzzy logic to transfer the knowledge of a domain expert into the developed software [19]. The system was constrained to six different colours to distinguish between. Colour matching yielded good results; however, “Marques” [20] proposed better rules by adding the change of error during the inspection operation, and therefore his system became a strong candidate for performing trend analysis. A year later, “Boukouvalas” developed a system he named “ASSIST”, an acronym for Automatic System for Surface Inspection and Sorting of Tiles [21].


The system performed inspection in three stages: a Charge Coupled Device (CCD) colour camera for primary colour grading, a monochrome camera for crack detection and a line scan camera for precision colour grading. The system has the ability to distribute its computation over a local area network connected to two servers. However, the system failed to inspect plates in real time due to the excessive computation required by each process.

The oblique orientation of the capturing camera was used again by “Massen” and “Franz” [22]. They illustrated that although dimension measurement is almost impossible from the camera perspective, mounting a system of mirrors to obtain an indirect perpendicular view of the ceramic plate yields better crack detection and less distortion of the view. They argued that such a system could produce better results than human visual inspection. The developed system was able to detect cracks based on a portion of the system proposed in [21] (crack detection).

They tackled the issue of intensive computation by estimating the presence of cracks after performing a Fast Fourier Transform (FFT) of the image and matching it to a master plate to detect the existence of cracks. The system is limited to inspecting cracks only, without colour or shape verification.

The technique developed by “Costa” and “Paretou” was based on mounting four cameras at different angles [23]. A master template is captured by each camera. Thereafter, a feature-based technique is used to perform image registration by means of a linear transformation to calculate translation, rotation and scaling. During the inspection stage, the ceramic plate is captured simultaneously by all cameras and the image is then compared to the master template.


The technique is meant to combine views from different angles to enhance image features that may not be straightforward to obtain with a single camera. The system detects cracks; however, it lacks the capability to perform accurate measurements due to the global nature of the linear transformation method. The system could be enhanced if an intensity-based method were used with a correlation matrix to find the similarity between captured images. The authors pointed out that the limitation of the system is the memory-intensive calculation, which is considered the bottleneck of the system.

“Smith” and “Stamp” in [24] were motivated to investigate the 3D surface of ceramic plates, since conventional vision-based surface inspection relies upon the analysis of abstract projected features at the image plane, from which, in general, it is not possible to reliably distinguish between three-dimensional topographic and two-dimensional chromatic features. Limitations in established techniques generally follow as a consequence of what might be termed a conventional ‘viewer-centred’ approach to object surface representation. They stated that whenever inspection takes place, the analysis of projected and abstracted two-dimensional features within the image domain is restricted; difficulty arises as the appearance of these features is closely dependent upon the viewing and lighting configuration, and as a consequence these aspects will normally be highly constrained.

The developed system is based on capturing a sequence of images using a camera fixed in one position, each image being captured with synchronously controlled illumination, which is also known as the photometric stereo approach [25]. The 3D surface of the ceramic plate is then constructed from the intensity variation of the captured images. Experimental work was performed using a CCD camera with a resolution of 512 x 512 pixels, and the software was developed using the C++ language. The system lacks accuracy, although it succeeded in drawing the general shape of the 3D ceramic surface.


By means of Self-Organizing Maps (SOM), also known as Kohonen feature maps, “Kukkonen” and his colleagues built a system for colour measurement for sorting five classes of brown tiles [26]. The SOM uses the Euclidean distance to compare each input vector to the weight vectors of the neurons; the neuron whose weight vector is most similar to the input is called the Best Matching Unit (BMU). The weights of the BMU and of the neurons close to it in the SOM lattice are adjusted towards the input vector, which is the colour of each pixel and its position with respect to the top left of the image. Thereafter, K-Nearest Neighbour clustering is used for fine-tuning the results. They compared results obtained from a trained 2D (colour and position) SOM. The authors in [25] demonstrated that the proposed system emulates human visual inspection in detecting the preceding tile classes with a success rate of 97.6%. However, they did not provide any practical information regarding their experimental setup, which makes comparing the results to other methods impossible since their experiment cannot be reproduced.
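For illustration only, the following Python sketch shows the BMU selection and neighbourhood update described above on (R, G, B, x, y) feature vectors; the grid size, decay schedules and NumPy implementation are assumptions of this sketch, not the cited system.

```python
import numpy as np

def train_som(samples, grid=(10, 10), n_iter=1000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM training loop; each sample is a feature vector such as (R, G, B, x, y)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, samples.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(n_iter):
        x = samples[rng.integers(len(samples))]
        # Best Matching Unit: neuron whose weight vector is closest (Euclidean distance)
        d = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(d.argmin(), d.shape)
        # Decaying learning rate and neighbourhood radius
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        # Pull the BMU and its lattice neighbours towards the input vector
        dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        influence = np.exp(-dist2 / (2 * sigma ** 2))[:, :, None]
        weights += lr * influence * (x - weights)
    return weights
```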

As a further development of the work published by “Kukkonen”, a hybrid system of auto-associative and probabilistic neural networks to detect surface defects and surface cracks on ceramic tiles [27] was developed by “Hocenski” and “Nyarko”. They explained their usage of the first model to select the best features of the plate, while the latter (the probabilistic neural network) is involved in the classification.

They stated that the use of a Hopfield network minimized computation, since calculating the weights is a one-step process. Meanwhile, the probabilistic neural network, which behaves like K-Nearest Neighbour, yielded better results than a multilayer perceptron neural network. A Gaussian function was selected as the radial basis function for computing the weight of each neighbouring point. For the experimental work, they used an analogue CCD camera with a resolution of 400 x 240 pixels and a speed of 20 frames per second. The illumination technique was not discussed, which makes reproducing the experiment elsewhere difficult.


Computation time and processing power were the principal motivations for “Elbehiery” et al. to propose an algorithm based on the principles of image processing and morphological operations on ceramic tile images [12]. The algorithm converts the captured colour image into grey scale in order to perform intensity adjustment and histogram equalization. Thereafter, the image is thresholded to binary format and an edge detection algorithm is applied by comparing each pixel to its surrounding neighbours. Filling gaps between discontinued lines is done as a final stage before searching for irregular shapes using SDC morphological operations, which are morphology operations specialized for grey-scale images available in MATLAB [28]. Eventually, each pixel value in the binary image is inverted to enhance the visual appearance of the detected cracks.
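A rough re-creation of that pipeline is sketched below in Python with OpenCV; the original work used MATLAB, and the choice of Otsu thresholding, a morphological gradient for the neighbour-based edge step and the kernel sizes are assumptions of this sketch.

```python
import cv2
import numpy as np

def crack_candidates(bgr_image):
    """Illustrative crack-highlighting pipeline loosely following the described steps."""
    grey = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    grey = cv2.equalizeHist(grey)                          # intensity adjustment
    _, binary = cv2.threshold(grey, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.morphologyEx(binary, cv2.MORPH_GRADIENT,
                             np.ones((3, 3), np.uint8))    # neighbour-based edge detection
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                              np.ones((5, 5), np.uint8))   # fill gaps between broken lines
    return cv2.bitwise_not(closed)                         # invert to highlight crack pixels
```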

Although the proposed system is quite easy to implement, it requires a good deal of manual tuning. Furthermore, it totally ignores colour defects, which are critical in the ceramic industry [26]. Regardless of the latter point, the major advantage of such an algorithm is that it can be implemented as a low-cost solution.

With the aid of wavelet analysis [29], “Elbehiery” et al. proposed a more robust technique than their previous work in [12]. They also expanded their work to cover cracks and spots. The selection of wavelet analysis, as they illustrate in their paper, enabled them to inspect the plate regardless of variability in scale and resolution. Since calculating the digital wavelet transform yields a large amount of data, the authors chose only a subset of scales and positions. They employed TMPLAR, which stands for Template Learning from Atomic Representations, with different wavelet families such as Haar, Daubechies and Coiflets.

They trained the system using only five iterations to reach convergence. Thereafter, pattern classification was carried out using measurements of the variance between the template and the ceramic plate under inspection. The proposed algorithm was able to find cracks in textured plates; however, it is observed that no experiments were performed on ceramic plates with more than two different colours.


The published results showed only cracks, without demonstrating the detection of colour spots or any other geometric metrics.

“Novak” and “Hocenski” [15] investigated the methods used to analyse texture, which fall into two main categories. The first, called the statistical or stochastic approach, treats textures as statistical phenomena: the formation of a texture is described by the statistical properties of the intensities and positions of pixels. The second, called the structural approach, introduces the concept of texture primitives, often called texels or textons.

To describe a texture, a vocabulary of texels and a description of their relationships are needed. The goal is to describe complex structures with simpler primitives, for example via graphs. They proposed the local binary operator method for feature extraction, since it is a monotonic process and thus efficient under variable illumination. The method uses two matrices: the first holds the weight of each neighbour, and the second represents the pixel of interest and its surrounding neighbours. This method was originally used as a complementary measure of local image contrast.
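As an illustration of the local binary operator idea (a basic 3x3 variant, not necessarily the exact operator used in [15]), the following Python sketch encodes each pixel from comparisons with its eight neighbours, weighted by powers of two:

```python
import numpy as np

def local_binary_pattern_3x3(img):
    """Basic 3x3 local binary operator: each neighbour brighter than or equal
    to the centre pixel contributes its power-of-two weight."""
    img = img.astype(np.int32)
    weights = [1, 2, 4, 8, 16, 32, 64, 128]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.zeros_like(img)
    centre = img[1:-1, 1:-1]
    for w, (dy, dx) in zip(weights, offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        out[1:-1, 1:-1] += w * (neighbour >= centre)
    return out
```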

By measuring the length of runs of consecutive pixels and comparing it to the master template, the researchers proposed the method as a part of an automated inspection system for ceramic tiles. The proposed algorithm does not cover colour inspection.

“Novak” extended his work in [30] by adding a pixel pair difference method to search the image for surface defects; the method is mainly used to encrypt images. The method simply begins by flattening the image into a vector holding the values of all pixels scanned in a zigzag manner. Thereafter, the difference in grey-scale value of every two successive non-overlapping pixels is calculated. “Novak” used this method to obtain a feature vector, based on the encryption results, by means of the difference in histogram.
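A minimal sketch of such a pair-difference feature vector is given below in Python; the boustrophedon (row-reversing) scan used here as the "zigzag" order and the bin count are assumptions of this illustration, not Novak's exact procedure.

```python
import numpy as np

def pair_difference_histogram(grey, bins=64):
    """Feature vector from differences of successive non-overlapping pixel pairs
    taken along a zigzag (boustrophedon) scan of a grey-scale image."""
    rows = [r if i % 2 == 0 else r[::-1] for i, r in enumerate(grey.astype(np.int32))]
    flat = np.concatenate(rows)             # zigzag-flattened image
    if flat.size % 2:                       # drop a trailing pixel if needed
        flat = flat[:-1]
    diffs = flat[0::2] - flat[1::2]         # non-overlapping pair differences
    hist, _ = np.histogram(diffs, bins=bins, range=(-255, 255))
    return hist
```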


If the defected image has a value other than the expected one, a deeper investigation using the local binary operator he proposed in [15] is performed consequently. The key advantage of such a method is the minimization of the required computational power.

The technique is much slower when implemented on colour images, since all three colour planes, Red, Green and Blue (RGB), must be compared. The system is constrained to cracks and does not investigate geometry, angles or colour spots.

“Ghita” et al. [31] successfully built a complete automated visual inspection system for painted slates. The system employed edge detection using a median filter and the Sobel edge detector, morphological analysis and filtering objects by area to classify defects if they exist. The major issue confronting them was surface reflectivity, which he and his colleagues endeavoured to tackle by using partial lighting for illumination while using an oblique angle for camera acquisition. The system was able to yield promising results. However, with regard to accurate geometric metrics, acquiring images from an oblique angle may not provide satisfactory results. Moreover, the evaluation of defects was the primary concern, while colour homogeneity and drawing geometry were ignored. The system was published in detail, which is a major advantage for researchers carrying out further development.

In 2006, “Vasilic” et al. conducted a survey of edge detection methods used to inspect ceramic plates [31]. The study investigated the Prewitt, Sobel, Median and Canny edge detectors. It was performed over 23 different types of ceramic plates, and the researchers concluded that the Sobel filter sustained variability in shape and illumination and is thus recommended for this type of application.

This conclusion agrees with a later publication by “Vincent” and “Folorunso” [32], who affirmed that the Sobel filter improves the quality of images taken under extremely unfavourable conditions in several ways: brightness and contrast adjustment, edge detection, noise reduction, focus adjustment and motion blur reduction.


Meanwhile, the Sobel operator is based on convolving the image with a small, separable, integer-valued filter in the horizontal and vertical directions, and is therefore relatively inexpensive in terms of computational power.
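The two 3x3 integer Sobel kernels and a gradient-magnitude computation are sketched below in Python; the use of SciPy's convolution routine is an assumption made only for this illustration.

```python
import numpy as np
from scipy.ndimage import convolve

# The two 3x3 integer-valued Sobel kernels (horizontal and vertical derivatives)
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])
KY = KX.T

def sobel_magnitude(grey):
    """Gradient magnitude of a grey-scale image from the Sobel operator."""
    gx = convolve(grey.astype(float), KX)
    gy = convolve(grey.astype(float), KY)
    return np.hypot(gx, gy)
```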

1.6 Statement of the problem

The present problem can be summarized in the following points:

Most of the published work related to ceramic inspection focuses on finding defects without considering practical aspects of implementation such as camera types, lens distortion, illumination, electrical circuitry and the mechanical actuation system.

Since the garnished ceramic plate industry is a fast-growing field, new inspection demands have become mandatory. This involves a wider inspection procedure covering drawing geometry, angles, orientation and the colour of each drawn object.

Practical implementation should be discussed in more detail to allow future work to be carried out more easily.

1.7 Objective of the present work

Since garnished wall plates require more inspection procedures than the conventional peers covered in the preceding survey, the present work aims to:

Develop an algorithm to inspect the geometry, angles, orientation and colour of each drawn object found in the garnished plate. The developed system is required to perform the inspection without stopping the production line.

Design and implement a mechatronic system as a Flexible Manufacturing Cell (FMC) to implement the proposed algorithm. The system will comprise a software program to perform the inspection and control the FMC, a mechanical actuation system for sorting the ceramic plates and an electronic circuit which will serve as an interfacing layer between the software and the mechanical sorting system.


Study the impact of the developed system on productivity.

The suggested algorithm may be structurally distilled into the following pillars (Figure 1.3):

I. Image acquisition and pre-processing: removing noise and correcting the lens distortion by means of the proposed calibration algorithm.

II. Soft partitioning and colour grouping: creating a boundary region around each drawn object in the ceramic plate based on the degree of membership of all its inbound pixels to the expected colour patterns.

III. Geometric feature extraction: whereby the dimensions of drawn objects are extracted via edge detection and fitting of the detected points by means of an array of spokes.

IV. Classification using fuzzy logic: the final stage, which yields the ceramic grade (acceptable, colour spot or colour mismatch) based on the proposed fuzzy inference system.

Figure 1.3 Proposed inspection algorithm for garnished ceramic wall plates (image acquisition and pre-processing, soft partitioning and colour grouping, geometric feature extraction, fuzzy classification)


Image Acquisition and Pre-processing

This chapter depicts the first step in the proposed algorithm, which aims to prepare the image for the processing stage. A proposed algorithm for image calibration is discussed. Thereafter, the experimental setup for image acquisition, gamma correction and colour adjustment is shown in detail.

2.1 Spatial Calibration

Spatial calibration refers to the process of correlating the pixels of

an acquired image to real features in the image. This process can be used

to make accurate measurements in real-world units like millimetres instead

of pixels, and to correct for camera perspective and lens distortion [33]. As

illustrated in Figure 2.1, a grid of equally distant dots is placed in the

camera field of view. Thereafter, an image is taken for the calibration grid

and compared to the expected distribution of dots in order to perform

image correction.

Spatial calibration produces a mapping to depict how each pixel

relates to a real-world location. The image data itself is unaffected by this

process. Image correction actually applies the learned calibration

information to remove camera perspective or lens distortion effects from

an image. This process is computationally intensive, so in many cases it is preferred to store the calibration results with the image and apply them as necessary.

Figure 2.1 Notion of image calibration



2.1.1 Lens distortion

Distortion is one of the worst problems limiting measurement

accuracy; even the best performing optics are affected by some grade of

distortion, while often even a single pixel of difference between the real

image and the expected image could be critical. Distortion is simply

defined as the percentage difference between the distance of an image

point from the image centre and the same distance as it would be measured

in a distortion-free image; it can be thought of as a deviation between the

imaged object and the real dimensions of that object.

Figure 2.2 Types of lens distortion: a) barrel distortion, b) pincushion distortion

Although distortion can be irregular or follow many patterns, the

most commonly encountered distortions are radially symmetric, or

approximately so, arising from the symmetry of a photographic lens. The

radial distortion can usually be classified as one of two main types. In

"barrel distortion", image magnification decreases with distance from the

optical axis. The apparent effect is that of an image which has been mapped

around a sphere (or barrel) as shown in Figure 2.2a.

In "pincushion distortion" (Figure 2.2b), image

magnification increases with the distance from the optical axis. The visible

effect is that lines that do not go through the centre of the image are bowed

inwards, towards the centre of the image, like a pincushion. To correct

such distortion, two approaches are used.

2.1.2 Telecentric Lens

Common lenses give different magnifications at different

conjugates: as such, when the object is displaced, the size of its image

changes almost proportionally with the object-to-lens distance. This is

something anybody can easily experience in everyday life, for example

when taking pictures with a camera equipped with a standard photographic

lens [5].


With telecentric lenses, the image size is left unchanged with object

displacement, provided that the object stays within a certain range often

referred to as Depth of Field or Telecentric Range. This is due to the

particular path of the rays within the optical system: only ray cones whose

principal ray is parallel to the opto-mechanical main axis are collected by

the objective. For this reason, the front lens diameter must be at least as

large as the object field diagonal.

This optical behaviour is obtained by positioning the stop aperture

exactly on the focal plane of the front optical group (Figure 2.3): the

incoming rays aim at the entrance pupil which appears as being virtually

placed at the infinity. The name telecentric is derived from the two

syllables “tele” which means “far” in ancient Greek and “centre” which

accounts for the pupil aperture, the actual centre of an optical system [34].

Figure 2.3 In a telecentric system, rays get into the optics only if their principal ray is parallel to the opto-mechanical main axis

The telecentric lens yields a high level of correction of lens distortion; however, such a lens is too expensive to be used for prototyping in moderately funded projects. Therefore, software-based lens error correction is considered a more economic approach.

2.1.3 Software based correction

Correcting lens distortion by means of computer software has been a challenging problem over the last decade. The key advantage of utilizing software is the reduction in cost compared to the usage of a hardware telecentric lens. The challenge emerges from the complicated image processing techniques that must be combined to reduce such an error, which will be discussed in the next sections.


2.2 Proposed algorithm for correcting and calibrating lens

distortion

Software aided lens distortion correction and calibration has been

developed to overcome the problem of distorted barrel shaped ceramic

plate image by means of image processing.

Figure 2.4 The proposed algorithm for perspective correction

The proposed algorithm offers a low cost solution by implementing software manipulation of the image as shown in Figure 2.4. The key objective is to estimate the original position of each pixel, which corrects the grid spacing and dot radii. The grid used has circles with 2 mm radii placed 5 mm apart from each other.

2.2.1 Colour plane extraction

Every digital coloured image is composed of rows and columns of pixels. Each pixel colour is formed by a combination of red, green and blue components called planes (8 bits for each colour plane). Depending on the value stored in each colour plane, the pixel gains its visual colour. The screenshot annotated by Figure 2.5 shows the resulting image after extracting the red component of each pixel, which produces the grey scale image shown; its pixels are resampled down from the original coloured image (24 bit depth) to 256 shades due to the 8-bit red colour portion. List 2.1 illustrates the pseudo code for extracting the red component from each pixel in the calibration grid. The red plane experimentally proved to yield higher contrast and hence better correction.

The stages shown in Figure 2.4 are: image acquisition, colour plane extraction, automatic thresholding, particle grouping, particle measurements, distance measurements, mapping particles to their expected places and displaying the corrected image.


Figure 2.5 Screenshot of calibration grid used in correction after red plane extraction

List 2.1 Colour plane extraction algorithm
1. Start at the first row and first column in the image.
2. Get the red component of the current pixel located in the current row and column.
3. Save the new pixel value at the same index in the grey image format.
4. Increment the row number.
5. If the row count reaches the last row of the image, increment the column count by one; else go to step 2.
6. If the column count reaches the last column of the image, stop; else go to step 2.
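A minimal NumPy sketch of the same step is shown below (illustrative only; the array layout is an assumption, namely a rows x columns x 3 image ordered R, G, B):

import numpy as np

def extract_red_plane(rgb_image):
    """Return the red component of a 24-bit RGB image as an 8-bit grey image."""
    return rgb_image[:, :, 0].copy()

# Example with a tiny synthetic 2x2 colour image
rgb = np.array([[[200, 10, 10], [10, 200, 10]],
                [[10, 10, 200], [120, 120, 120]]], dtype=np.uint8)
print(extract_red_plane(rgb))  # [[200  10]
                               #  [ 10 120]]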

2.2.2 Automatic Thresholding

Thresholding an image is meant to reduce the pixel representation from 8 bits, which is known as a grey image, to a single bit, or binary image. This can be achieved using several techniques. The chosen method for performing automatic thresholding was a clustering algorithm.

Clustering is considered one of the effective adaptive algorithms in

binary thresholding. It sorts the histogram of the image within a discrete

number of classes corresponding to the number of phases perceived in an

image [35].



The grey values are determined, and a barycentre is computed for each class. This process is repeated until a value that represents the centre of mass of each phase or class is obtained.

Let p(x,y) be the value of the pixel located at the xth row and yth column. The threshold value is the pixel value k for which the following condition is true:

$k = \frac{\mu_1 + \mu_2}{2}$    (2.1)

Where μ1 is the mean of all pixel values that lie between 0 and k,

and μ2 is the mean of all pixel values that lie between k + 1 and 255. In

other words:

$\mu_1 = \frac{\sum_{p(x,y)=0}^{k} p(x,y)}{\text{No. of pixels having value} \le k}\,; \quad 0 \le p(x,y) \le k$    (2.2)

and

$\mu_2 = \frac{\sum_{p(x,y)=k+1}^{255} p(x,y)}{\text{No. of pixels having value} \ge k+1}\,; \quad k+1 \le p(x,y) \le 255$    (2.3)

Figure 2.6 Adaptive thresholding using clustering technique

After this stage (Figure 2.6), the grid is divided into two classes, the white pixels representing dots and the black pixels forming the background; hence the separation of the grid dots is achieved, as implemented using the algorithm depicted in List 2.2.


List 2.2 Pseudo code of thresholding using the clustering algorithm
1. Initialize k = 1.
2. Calculate μ1 and μ2.
3. If k = (μ1 + μ2) / 2, then replace each pixel value less than k with zero, and set every pixel with grey value equal to or above k to one.
4. If step 3 is false, increment the value of k and go to step 2.
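A compact NumPy sketch of this clustering threshold is given below; it is illustrative only and iterates directly on the class means μ1 and μ2 rather than stepping k one grey level at a time, which converges to the same condition k = (μ1 + μ2)/2:

import numpy as np

def cluster_threshold(grey, eps=0.5):
    """Two-class clustering threshold for an 8-bit grey image (Eqs. 2.1-2.3)."""
    k = float(grey.mean())                  # initial guess
    while True:
        low, high = grey[grey <= k], grey[grey > k]
        mu1 = low.mean() if low.size else 0.0
        mu2 = high.mean() if high.size else 255.0
        new_k = (mu1 + mu2) / 2.0
        if abs(new_k - k) < eps:            # converged: k = (mu1 + mu2) / 2
            return new_k
        k = new_k

# Example: binarise a small grey patch into dots (1) and background (0)
grey = np.array([[10, 12, 240], [11, 245, 250]], dtype=np.uint8)
k = cluster_threshold(grey)
binary = (grey >= k).astype(np.uint8)
print(k, binary)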

2.2.3 Particle grouping

After thresholding the image, it is necessary to find all the grid dots that appear after thresholding. To do so, it is required to group neighbouring particles that form a complete dot (circle).

The easiest way is to track the boundary of each particle, following its neighbours until returning to the starting point (aka the seeding point). If tracking reaches an end point which differs from the starting one, the circle is incomplete and it is excluded later from the calibration calculation.

Figure 2.7 Resulted image of boundary tracking of grid dots

Each circle is stored in an array to be treated independently. To correct the geometry and location of each dot, a few measurements are needed. It is observable that incomplete circles exist around the grid border, since the field of view of the camera doesn't cover the whole grid area.



Figure 2.7 shows the detected boundaries of each grid dot, while the algorithm for achieving such a task is illustrated in List 2.3: the seeding point of each boundary is found and the boundary is followed until the seeding point is reached again.

List 2.3 Boundary extraction of grid contours
1. Start at the first row and first column in the image.
2. Search for the first non-zero pixel and label it S.
3. Search the 8 neighbouring pixels until finding another pixel.
4. Make the last found pixel C and start searching for non-zero pixels in its neighbours.
5. If all 8 neighbours are black, store it in the N array.
6. Repeat steps 3 and 4 until returning to pixel S.
7. Increment the search past pixel S.
8. Go to step 2 until the image is totally searched.

2.2.4 Particles and distance measurements

After the boundaries are found, the following measurements are carried out to correct the image:

Area and perimeter of each dot: to correct distorted dots, area

is measured by counting number of pixels inside each boundary

region. Meanwhile, perimeter is calculated as the number of

pixels of the boundary.

Compactness factor: this is known as the ratio of the perimeter squared over the area. The circle has the minimum compactness factor (4π). The compactness of each particle determines whether skewness has taken place so that it can be corrected.

Distance between centres of adjacent particles: there are several methods for measuring the distance between pixels; the best known is the Euclidean distance, which is defined as:

$\text{Distance in pixels} = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$    (2.4)

Where x1, y1 and x2, y2 are coordinates of two adjacent centres

respectively. As shown in Figure 2.8, incomplete circles have odd centres

and thus were excluded from correction process (Figure 2.9).



Figure 2.8 Detecting centres to check for skewness of each dot

Centres can be determined by measuring distance between two

horizontal pixels on boundary of any object and finding the intersecting

point with any other 2 pixels on the same boundary.

These centres serve to correct the location of each dot with respect to its neighbouring peers. Meanwhile, the centre location, accompanied by the compactness factor of each dot, is employed to correct the location of each pixel in this dot.
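The sketch below (illustrative only; the boundary representation and all names are assumptions) shows how the compactness factor and the centre-to-centre Euclidean distance of Eq. 2.4 could be computed for detected dots:

import numpy as np

def compactness(boundary_xy, area_pixels):
    """Perimeter squared over area; a perfect circle gives the minimum 4*pi.

    boundary_xy : (N, 2) array of boundary pixel coordinates
    area_pixels : number of pixels enclosed by the boundary
    """
    perimeter = len(boundary_xy)                # boundary pixel count
    return perimeter ** 2 / area_pixels

def centre_distance(c1, c2):
    """Euclidean distance between two dot centres (Eq. 2.4)."""
    return float(np.hypot(c2[0] - c1[0], c2[1] - c1[1]))

# Example: distance between two adjacent dot centres
print(centre_distance((10.0, 10.0), (10.0, 35.0)))  # 25.0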

Figure 2.9 Visualization of detected dots in the grid (barrel distortion visible)


2.2.5 Performing correction on each pixel

By evaluating the area, perimeter and skewness, it is possible to correct the distorted image by restoring each circle's area and position. This process is not difficult, since prior knowledge about the original grid paper is available in terms of the radii and location of each dot.

Figure 2.10 Pixel mapping to restore expected grids locations

List 2.4 Restoration of grids' circular shape
1. Locate the first dot in the image.
2. Check for compactness.
3. Move pixels until the compactness factor is minimized.
4. Copy pixels around the boundary until reaching the expected area.
5. Store the new pixel locations and create the new pixel-correction lookup table.
6. Repeat for each object until all objects in the image are finished.

List 2.5 Retrieval of grids' corrected locations
1. Locate the first object centre in the top left of the image.
2. Measure the distance between the nearest horizontal and vertical objects.
3. Move the object and its surrounding objects to the expected distance.
4. Locate the next object.
5. Repeat steps 2 to 4 until all objects are processed.
6. Store the new pixel locations.


The procedure simply rearranges pixels to restore the expected grid distribution, as shown in Figure 2.10. In List 2.4, the algorithm for correcting the distorted shape of every grid dot is stated. Meanwhile, relocating the corrected grid dots to their original locations is detailed in List 2.5. After this stage, any acquired image is mapped through the new correction map (aka lookup table), and thus measurement errors are minimized.

Figure 2.11 shows an acquired image of a ceramic plate, which appears distorted like the acquired grid. Such distortion causes drastic skewness, which is observed in the drawn objects near the image border.

Figure 2.11 Acquired image of ceramic plate

Thereafter, the correction pixel map extracted from the calibration stage is applied to correct each pixel location, as shown in Figure 2.12. The minimization of skewness can be seen in the corrected image.
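A minimal sketch of applying such a lookup table is given below (illustrative only; the table is assumed to store, for every output pixel, the source row and column it should be copied from):

import numpy as np

def apply_correction_map(image, lut_rows, lut_cols):
    """Remap a distorted image using a precomputed pixel-relocation lookup table."""
    return image[lut_rows, lut_cols]

# Example: an identity lookup table leaves the image unchanged
img = np.arange(12, dtype=np.uint8).reshape(3, 4)
rows, cols = np.indices(img.shape)
print(np.array_equal(apply_correction_map(img, rows, cols), img))  # True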


Figure 2.12 The Corrected image after applying the algorithm

Figure 2.13 Screenshot of the developed software for calibration of ceramic plate


Since the calibration process performs pixel relocation, moving the dots near the border leaves a few empty places (a black area) about 1 cm wide running along the side length of the ceramic plate. This area is excluded from the inspection stage.

The screenshot in Figure 2.13 depicts the developed software, where an image is loaded offline; thereafter, the proposed algorithm is executed and the overall average execution time is displayed to measure its execution speed.

2.3 Image acquisition

The first stage of any vision system is the image acquisition stage.

After the image has been obtained, various methods of processing can be

applied to the image to perform many different vision tasks required today

[6]. However, if the image has not been acquired satisfactorily then the

intended tasks may not be achievable, even with the aid of some form of

image enhancement. The conducted survey covered the commonly used

cameras in machine vision systems:

2.3.1 Analogue cameras

Analogue cameras are cameras that generate a video signal in

analogue format. The analogue signal is digitized by an image acquisition

device. The video signal is based on the television standard, making

analogue the most common standard for representing video signals.

A charge-coupled device (CCD) is an array of hundreds of

thousands of interconnected semiconductors. Each pixel is a solid-state,

photosensitive element that generates and stores an electric charge when it

is illuminated. The pixel is the building block for the CCD imager, a

rectangular array of pixels on which an image of the scene is focused. In

most configurations, the sensor includes the circuitry that stores and

transfers its charge to a shift register, which converts the spatial array of

charges in the CCD imager into a time-varying video signal. Timing

information for the vertical and horizontal positions and the sensor value

combine to form the video signal [36] .


Figure 2.14 CCD array configuration

For standard analogue cameras, the lines of the CCD are interlaced

to increase the perceived image update rate. This means that the odd-

numbered rows (the odd field) are scanned first. Then the even-numbered

rows (the even field) are scanned.

2.3.2 Digital Cameras

Digital cameras have several advantages over analogue cameras.

Analogue video is more susceptible to noise during transmission than

digital video. By digitizing at the camera level rather than at the image

acquisition device, the signal-to-noise ratio is typically higher, resulting in

better accuracy. Because digital cameras are not required to conform to

television standards, they can offer larger image sizes and faster frame

rates, as well as higher pixel resolutions. Digital cameras come with 10 to

16-bit gray levels of resolution as a standard for machine vision,

astronomy, microscopy, and thermal imaging applications. Digital

cameras use the same CCD type devices for acquiring images as analogue;

they simply digitize the video before sending it to the frame grabber.


2.3.2.1 Camera Link

Camera Link is an interface specification for cables that connect

digital cameras to image acquisition devices. It preserves the benefits of

digital cameras – such as flexibility for many types of sensors, yet it has

only a small connector and one or two identical cables, which work with

all Camera Link image acquisition devices.

Camera Link greatly simplifies cabling, which can be a complex

task when working with standard digital cameras. The constraint of using

such a type is the expensive cost which may not suit prototyping stage in

proposed research.

2.3.2.2 USB cameras

USB cameras have a major advantage over all other types,

portability. In other words, it requires no special hardware other than

dedicated USB port. Although USB cameras offered in market supports

high resolution (up to 6 MP), the acquisition speed is poor.

2.3.2.3 IEEE 1394 (Firewire cameras)

IEEE 1394 is a serial bus standard used by many PC peripherals,

including digital cameras. IEEE 1394 cameras use a simple, flexible, 4 or

6-wire power cable; and in some cases, the bus can supply power to the

camera. However, because IEEE 1394 is a shared bus, there is a bandwidth

limitation of approximately 250 MB/s when no other device is connected

to the bus.

IEEE 1394 cameras also require processor control to move the

image data, which limits available processor bandwidth for image

processing. IEEE 1394 is a standard that also includes functions for

enumerating and setting up the camera capabilities.

A firewire camera with 640 X 480 pixels was selected in the

present work for the following reasons:

It has a high frame rate per second (60 fps) which is sufficient to

acquire images online without the need to stop the conveyor

carrying ceramic plates under inspection.


The cost is relatively moderate compared to other high speed

acquisition peers. For instance, it costs nearly 30% of the price of

the camera link camera.

Table 2.1 Comparison of available cameras utilized in machine vision applications

                       Analogue          Parallel Digital   Camera Link        IEEE 1394
                       Cameras           Cameras            Cameras            Cameras
Data Rate              Up to 4 MB/s      Up to 120 MB/s     Up to 510 MB/s     Up to 250 MB/s
Spatial Resolution     Low               High               High               Medium
Functionality          Simple and easy   Advanced           Advanced           Simple and easy
Pixel Depth
(per colour plane)     8-bit to 10-bit   Up to 16-bit       Up to 16-bit       Typically 8-bit
Cabling                Simple BNC        Thicker, custom    Simple, standard   Simple, standard
                       cabling           cabling            cabling            cabling

2.4 Acquisition parameters

Image acquisition has several attributes to be set. These attributes are affected by external parameters such as the surrounding illumination and object reflectivity. Firewire cameras are accompanied by software Application Programming Interface (API) functions that allow programming languages such as Visual Basic and MATLAB to adjust such parameters. When performing parameter adjustment, the rule of thumb is to use colourful objects to obtain satisfactory results. The following subsections demonstrate the key attributes and the experimental setup of the developed flexible cell for acquisition from the chosen firewire camera.

2.4.1 Image format

The utilized firewire camera has several built in formats. Each

format comprises pre-processing functions such as the image size in pixels

and the colour depth as shown in Figure 2.15. The setup was commenced

using 656 X 490 and then optimized to 640 X 480 pixels.


2.4.2 Speed

The speed is the representation of the data rate mentioned earlier in the comparison table. The speed affects the maximum image size and the delivered frames per second. The camera has a speed of 100 Mb/s.

Figure 2.15 Image format and delivered speed rate

2.4.3 Bayer colour filter

A Bayer filter is a Colour Filter Array (CFA) for arranging RGB

colour filters on a square grid of photo-sensors. Its particular arrangement

of colour filters is used in most single-chip digital image sensors used in

digital cameras, camcorders, and scanners to create a colour image.


The filter pattern is 50% green, 25% red and 25% blue, hence is

also called GRGB or other permutation such as RGGB as in Figure 2.16a.

Figure 2.16 Bayer filter (concept and configuration): a) the Bayer arrangement of colour filters on the pixel array of an image sensor, b) configuring the Bayer pattern and RGB colour gain of the camera mounted on the test rig


This behaviour mimics the physiology of the human eye. The retina

has more rod cells than cone cells and rod cells are most sensitive to green

light. These elements are referred to as sensor elements, sensels, pixel

sensors, or simply pixels; sample values sensed by them, after

interpolation, become image pixels.

2.4.4 Brightness and contrast

Brightness and contrast represent a way to adjust an image. They

come from the display technology, being common controls in all monitors.

The colour brightness/contrast is similar to the grey scale counterparts, in

most cases being applied to all RGB channels. For a grey scale image,

brightness represents an image adjustment where a constant value is added

to all pixel values. The contrast adjustment is a multiplication of the pixel

values with a constant. The values were set using trial and error, as depicted in Figure 2.17.

Figure 2.17 Brightness adjustment
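A minimal sketch of these two adjustments on an 8-bit image is given below (illustrative only; the gain and offset values are assumptions):

import numpy as np

def adjust(grey, brightness=0.0, contrast=1.0):
    """Contrast is a multiplicative gain; brightness is an additive offset."""
    out = grey.astype(float) * contrast + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: add 20 brightness levels and stretch contrast by a factor of 1.2
grey = np.array([[0, 100, 200]], dtype=np.uint8)
print(adjust(grey, brightness=20, contrast=1.2))  # [[ 20 140 255]]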


2.4.5 Gain

Since the camera used in the conducted research has an on-board 10-bit A/D converter, it offers gain control whereby the raw 10 bits of each colour plane are mapped down to 8 bits. The gain controls which part of the A/D range (0-1023) is mapped to the 8-bit counterpart (Figure 2.18).

Figure 2.18 Gain control

A glossy cover was chosen with different colours to serve two

objectives:

To check the colour quality of the acquired image.

To adjust the illumination level to minimize reflectivity without

affecting the colour adjustment.


2.5 Gamma Correction

Gamma correction, gamma nonlinearity, gamma encoding, or

often simply gamma, is the name of a nonlinear operation used to code and

decode luminance values in video or still image systems. Gamma

correction is defined by the following power-law expression:

$V_{out} = V_{in}^{\gamma}$    (2.5)

Where the input voltage from the acquisition camera (Vin) and output

values displayed on the screen (Vout) are non-negative real values, typically

in a predetermined range such as 0 to 1[36]. To implement gamma

correction digitally as shown in Figure 2.20, each pixel value in every

colour plane (Red, Green and Blue) is raised to correction factor (γ) using

the following formula:

𝑃𝑜𝑢𝑡 = 255 × (𝑃𝑖𝑛

255)𝛾

2.6

The range of values used for gamma will depend on the application.

The selected value for correcting the acquired ceramic image was found to

be 0.7 based on trial and error in the three colour planes of the image.
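As a worked check of Eq. 2.6 (the input value is chosen for illustration only): with γ = 0.7, a mid-grey input of $P_{in} = 128$ maps to $P_{out} = 255 \times (128/255)^{0.7} \approx 157$, so mid-tones are brightened while the extremes 0 and 255 remain unchanged.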

Figure 2.19 Gamma correction of the acquired ceramic image (γ = 0.7): a) response of gamma correction, b) pseudo code of gamma correction

For i = 0 To MaxRowCount
    For j = 0 To MaxColumnCount
        colour = GetPixelColour(i, j)
        newRed = 255 * (Red(colour) / 255) ^ gammaCorrection
        newGreen = 255 * (Green(colour) / 255) ^ gammaCorrection
        newBlue = 255 * (Blue(colour) / 255) ^ gammaCorrection
        PutPixelColour(i, j) = RGB(newRed, newGreen, newBlue)
    Next j
Next i


Figure 2.20 Implementation of gamma correction: a) raw image, b) after gamma correction

As shown in Figure 2.20, gamma correction drastically enhanced the acquired image, which had appeared washed out due to the gamma effect.


Soft Partitioning and Colour Grouping

In this chapter, the method of fuzzy c-mean is introduced. The proposed algorithm for colour sorting and grouping is also discussed in detail.

3.1 Clustering

Cluster Analysis encompasses a number of different algorithms and

methods for grouping objects of similar type into respective categories. A

general question facing researchers in many areas of inquiry is how to

organize observed data into meaningful structures, that is, to develop

taxonomies or classes[37]. In other words cluster analysis is an exploratory

data analysis tool which aims at sorting different objects into groups in a

way that the degree of association between two objects is maximal if they

belong to the same group and minimal otherwise as shown in Figure 3.1.

Cluster analysis can be used to discover structures in data without

providing an explanation or interpretation. In other words, cluster analysis

simply discovers structures in data without explaining why they exist.

Figure 3.1 Clustering is performed by grouping each similar data based on common

criteria

There are a countless number of examples in where clustering plays

an important role. For instance, biologists have to organize the different

species of animals before a meaningful description of the differences

between animals is possible.


According to the modern system employed in biology, man

belongs to the primates, the mammals, the amniotes, the vertebrates, and

the animals. The higher the level of aggregation the less similar are the

members in the respective class.

Clustering is one of the effective methods for classifications

nowadays. It has been employed in coal classification used in thermal

power stations[38], for Electroencephalogram signal classification[39],

even as an efficient algorithm for estimation of insects invasion[40] and

for bearing fault detection[41].

3.2 Problem of clustering

Let us consider a dataset X consisting of data points $x_i$ ($1 \le i \le N$) representing objects, patterns, pixels, etc., where $x_i = \{a_{i1}, a_{i2}, \dots, a_{id}\}$ and each element of $x_i$ denotes a nominal attribute (colour, surface roughness, shape, etc.).

The problem is to find adequate criteria, called nodes, around which to group the data points in order to evaluate the overall classification of each data point.

3.3 Advantages of using clustering

Clustering can be a beneficial solution whenever scalability is required. The algorithm is also capable of dealing with different types of data, which in turn allows handling high dimensionality [32, 37].

3.4 Constraints of using clustering

One of the major restrictions of clustering techniques is that they are time consuming when dealing with large sets of data. This behaviour can be prohibitive in time-critical applications like web search engines. Meanwhile, the effectiveness of classification depends on the proper definition of similarity [42].


3.5 Methodologies of distance measurements

An important step in most clustering is to select a distance measure,

which will determine how the similarity of two elements is calculated. This

will influence the shape of the clusters, as some elements may be close to

one another according to one distance and farther away according to

another.

3.5.1 Common distance functions

The following review covers the most commonly used distance measure techniques for distance-based clustering:

3.5.1.1 The Euclidean distance

It is also called 2-norm distance. A review of cluster analysis in

pattern classification research found that the most common distance

measure in published studies in that research area is the Euclidean distance

or the squared Euclidean distance [43].

The Euclidean distance between point p and point q is the length of the line segment $\overline{pq}$. In Cartesian coordinates, if $p = (p_1, p_2, \dots, p_n)$ and $q = (q_1, q_2, \dots, q_n)$ are two points in Euclidean n-space, then the distance from p to q is given by:

$d(p, q) = \sqrt{\sum_{i=1}^{n}(p_i - q_i)^2}$    (3.1)

According to this, in a 2-dimensional space, the distance between

the point (x = 1, y = 0) and the origin (x = 0, y = 0) is always 1

according to the usual norms, but the distance between the point (x =

1, y = 1) and the origin will be √2.

3.5.1.2 The Manhattan distance (aka taxicab norm or 1-norm)

This method was derived by “Hermann Minkowski” in the 19th

century. In comparison to Euclidean geometry, the usual distance function

or metric is replaced by a new metric in which the distance between two

points is the sum of the absolute differences of their coordinates.


Figure 3.2 Taxicab geometry vs. Euclidean distance

In other words:

$d(p, q) = \sum_{i=1}^{n}|p_i - q_i|$    (3.2)

Recalculating the example illustrated in the previous section (section 3.5.1.1), the distance measured using the Manhattan technique will be 2. As shown in Figure 3.2, the red, blue and yellow lines have the same length (12) in taxicab geometry for the same route. In Euclidean geometry, the green line has a length of $6\sqrt{2} \approx 8.48$ and is the unique shortest path.

3.5.1.3 The Hamming distance

Figure 3.3 Example of hamming distance


This method measures the minimum number of substitutions required to change one member into another. For instance, the Hamming distance between 2173896 and 2233796 is 3. In Figure 3.3, two examples are given: the Hamming distance from 0110 to 1110, which is 1 (blue path), and from 0100 to 1001, which is 3 (red path).
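The three measures can be contrasted with a short Python sketch (illustrative only; the sample points reproduce the examples above):

import numpy as np

def euclidean(p, q):
    return float(np.sqrt(np.sum((np.asarray(p, float) - np.asarray(q, float)) ** 2)))

def manhattan(p, q):
    return float(np.sum(np.abs(np.asarray(p, float) - np.asarray(q, float))))

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

print(euclidean((1, 1), (0, 0)))      # 1.414... (the square root of 2)
print(manhattan((1, 1), (0, 0)))      # 2.0
print(hamming("2173896", "2233796"))  # 3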

3.6 Clustering Techniques

Hierarchical algorithms find successive clusters using previously

established clusters. These algorithms usually are either agglomerative

("bottom-up") or divisive ("top-down"). Agglomerative algorithms begin

with each element as a separate cluster and merge them into successively

larger clusters. Divisive algorithms begin with the whole set and proceed

to divide it into successively smaller clusters.

Partitioning algorithms typically determine all clusters at once, but

can also be used as divisive algorithms in the hierarchical clustering.

Density-based clustering algorithms are devised to discover

arbitrary-shaped clusters. In this approach, a cluster is regarded as a region

in which the density of data objects exceeds a threshold.

Subspace clustering methods look for clusters that can only be seen

in a particular projection (subspace, manifold) of the data. These methods

thus can ignore irrelevant attributes. The general problem is also known as

correlation clustering while the special case of axis-parallel subspaces is

also known as two-way clustering, co-clustering or bi-clustering.

In these methods not only the objects are clustered but also the

features of the objects, i.e., if the data is represented in a data matrix, the

rows and columns are clustered simultaneously. They usually do not

however work with arbitrary feature combinations as in general subspace

methods. But this special case deserves attention due to its applications in

bioinformatics.

Many clustering algorithms require the specification of the number

of clusters to produce in the input data set, prior to execution of the

algorithm. Barring knowledge of the proper value beforehand, the

appropriate value must be determined, a problem on its own for which a

number of techniques have been developed.


3.7 Effectiveness of colour clustering

The objective of colour clustering is to divide a colour set into

homogeneous colour clusters. Colour clustering is used in a variety of

applications, such as colour image segmentation and recognition. Fuzzy

clustering models have proved a particularly promising solution to the

colour clustering problem [44]. Such unsupervised models can be used

with any number of features and clusters.

In addition, they distribute membership values across the clusters based on natural groupings in feature space. Of the fuzzy clustering algorithms proposed to date, the fuzzy c-means (FCM) algorithm is the most widely used [45].

3.7.1 The need for fuzzification in colour grouping

Colour clustering is an inherently ambiguous task because colour

boundaries are often blurred. For example, consider the task of dividing a

colour image into colour objects. In colour images, the boundaries between

objects are blurred and distorted due to the imaging acquisition process.

Furthermore, object definitions are not always crisp, and knowledge about

the objects in a scene may be vague [46].

Fuzzy set theory and fuzzy logic are ideally suited to deal with such

uncertainties. In fuzzy clustering, the uncertainty inherent in a system is

preserved as long as possible before decisions are made. As a result of this

approach, fuzzy clustering is less prone to falling into local optima than

most other crisp clustering algorithms [47].

3.7.2 Challenges in fuzzy clustering

There are three major difficulties in fuzzy clustering:

Determining the optimal number of clusters to be created (most

algorithms require the user to specify the number of clusters).

Choosing the initial cluster centroids (most algorithms choose a

random selection because such a selection affects the duration of

the iterative process).

Handling data characterized by large variability in cluster shape,

cluster density, and the number of points in different clusters.


3.7.3 Proposed soft partitioning and group visualization using fuzzy

clustering (fuzzy c-mean)

Colour grouping is meant to isolate objects with similar colours so that they are prepared for feature extraction. Since pixels of the same object may have variability in colour, partitioning can be achieved by the iterative fuzzy c-mean, where each centre depicts the RGB colour components of a painted object. This information is provided manually by a human operator.

Finding all regions that share the same colour, with a degree of membership, is carried out by finding the local minimum of an objective function. To inspect the quality of garnished wall plates, the colour of each drawn object is utilised for segmentation using fuzzy c-mean (aka fuzzy k-mean) clustering. Fuzzy c-mean was developed by J.C. Bezdek [48].

Let $P = (C_1, C_2, C_3, \dots, C_i)$ be the colour clusters observed in the image, where $i \ne 0$, and let $\mu_{c_i}$ be the degree of membership of each pixel RGB value $x_i = (x_{i_r}, x_{i_g}, x_{i_b})$ (the colour feature vector) that belongs to cluster $C_i$, where $0 \le \mu_{c_i} \le 1$. X denotes the set of all pixels in the image. The partitioning is implemented by iterating the formulas below [1]:

$\mu_{c_i}(x) = \frac{1}{\sum_{j=1}^{k}\left(\frac{\|x - v_i\|^2}{\|x - v_j\|^2}\right)}, \quad 1 \le i \le k,\ x \in X$    (3.3)

$v_i = \frac{\sum_{x \in X}\left(\mu_{c_i}(x)\right)^m x}{\sum_{x \in X}\left(\mu_{c_i}(x)\right)^m}, \quad 1 \le i \le k$    (3.4)

$\left\| v_i^{t+1} - v_i^{t} \right\| \le \varepsilon$    (3.5)

Where $v_i$ is the prototype of the ith cluster that minimizes the objective function, m was chosen to be 2 and ε is the acceptable error before stopping the iteration process. The superscript t refers to the previous iteration. Each cluster represents a distinctive colour in the ceramic plate.
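A minimal NumPy sketch of this iteration (with m = 2 as used here) is given below; it is an illustrative re-statement of Eqs. 3.3-3.5, not the code of the developed software, and all variable names are assumptions:

import numpy as np

def fuzzy_c_mean(pixels, centres, m=2.0, eps=0.01, max_iter=100):
    """Iterate Eqs. 3.3-3.5 on RGB pixels.

    pixels  : (N, 3) float array of pixel RGB values
    centres : (k, 3) float array of initial cluster colours (picked by the operator)
    Returns the final centres and the (N, k) membership matrix.
    """
    centres = centres.astype(float).copy()
    for _ in range(max_iter):
        # Squared distances of every pixel to every centre, shape (N, k)
        d2 = ((pixels[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)                     # avoid division by zero
        # Eq. 3.3 (m = 2): mu[n, i] = 1 / sum_j (d2[n, i] / d2[n, j])
        mu = 1.0 / (d2[:, :, None] / d2[:, None, :]).sum(axis=2)
        # Eq. 3.4: membership-weighted mean colour of each cluster
        w = mu ** m
        new_centres = (w.T @ pixels) / w.sum(axis=0)[:, None]
        if np.abs(new_centres - centres).max() <= eps:  # Eq. 3.5 stopping rule
            return new_centres, mu
        centres = new_centres
    return centres, mu

# Example: two colour clusters (reddish and bluish pixels)
pix = np.array([[250, 10, 10], [240, 20, 15], [10, 10, 250], [20, 15, 240]], float)
c0 = np.array([[200, 50, 50], [50, 50, 200]], float)
centres, mu = fuzzy_c_mean(pix, c0)
print(np.round(centres), np.round(mu, 2))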


Figure 3.4 Visual representation of the clustering result on the garnished wall plates

After this step, all pixels in the template image are mapped to their appropriate clusters; isolated pixels within a large group of pixels belonging to a different cluster are treated as part of the surrounding object's features. Figure 3.4 illustrates the impact of clustering in terms of separating features (hollow rectangles) from the background (flat surface). The colour nodes' RGB values were set using the average vote of the blue, red and mild white colour ratings.

List 3.1 Colour sorting with fuzzy c-mean

1. Load the master ceramic image template.
2. The operator selects the RGB components of each distinctive drawn object.
3. Store each distinctive colour in the master image in a structure = (C1, C2, ..., Cn) comprising 3 bytes X1, X2 and X3 (for the red, green and blue components), where Cn represents the nth observed colour in the image.
4. Enter ε = 0.01.
5. Store the index of the processed pixel.
6. Calculate μci(x) using Eq. 3.3.
7. Calculate vi using Eq. 3.4.
8. If ‖vi(t+1) − vi(t)‖ ≥ ε, go to step 6; else stop.


List 3.2 The algorithm for grouping objects by colour and location
1. Start with the first pixel in the first row and first column.
2. Check whether each pixel RGB value among the 8 neighbouring pixels has the same colour with an 85% or higher degree of membership.
3. If true, group the pixel with its matched neighbours.
4. Increment the row count.
5. If the row count has reached the last row in the image, increment the column number by 1.
6. If the column count has reached the last column in the image, go to step 8.
7. Go to step 2.
8. Store the indices of each group of pixels in an array.

List 3.1 shows the implemented algorithm of the fuzzy c-mean

clustering while the latter list (List 3.2) illustrates the proposed grouping

technique for better visualization of clustered objects. After this step, each

pixel is stored in a structure with its current index and the degree of

membership to each distinctive drawn coloured object.

Figure 3.5 The developed software showing fuzzy clustering effect in horizontal

orientation



Figure 3.6 The developed software showing fuzzy clustering effect in vertical

orientation

In Figure 3.5 and Figure 3.6, screenshots of the developed software are presented, which comprise two phases: the learning phase, where the clustering iterations are performed, and the inspection phase, where colour grouping based on the collected data is performed. The execution time is calculated and displayed as well.


Geometric Feature Extraction

4.1 Definition of feature vector

A feature vector is an n-dimensional vector of numerical features

that represent some object. Many algorithms in machine learning require

a numerical representation of objects, since such representations facilitate

processing and statistical analysis. When representing images, the feature

values might correspond to the pixels of an image, when representing texts

perhaps to term occurrence frequencies. In checking the quality of a drawn object r, several features are involved, such as height (Hr), width (Wr), orientation with respect to a predetermined reference (Ør) and colour components in RGB format if the object is coloured. Figure 4.1 depicts the visual representation of the geometric features, while Figure 4.2 illustrates their classification.

Figure 4.1 Extracting geometric features: height HI, width WI, orientation ØI and colour components RGB (Ri, Gi, Bi)

Figure 4.2 Feature vector as metrics for inspection: geometry (lines, arcs), orientation (angle) and colour (pixel RGB colour space)


4.2 Features of interest in garnished wall plates

As shown in Figure 4.1 and Figure 4.2, inspecting garnished wall

plates involves measuring several features such as geometry (lines and

arcs), orientation of drawn objects with respect to reference (figure 4.1a)

and the colour of drawn object (discussed earlier in chapter 3). This

process in metrology is known as gaging. Selection of the appropriate

features has a drastic impact on the quality of the assessment [49] and

eventually the proper class of each ceramic plate.

For online inspection, due to the need to carry out such a task in minimum time so that it does not become the bottleneck of the production process, recently proposed systems carried out their inspection only on certain areas named Regions Of Interest (ROI) [50].

4.3 Applied algorithm for preliminary locating of drawn

objects

To extract the features needed to evaluate the geometry and orientation of the studied ceramic plate, an edge detection process is required, which facilitates locating the boundaries of the drawn objects.

Methodology of edge detection

Edge detection is a fundamental tool in image processing and

computer vision, particularly in the areas of feature detection and feature

extraction, which aim at identifying points in a digital image at which the

image brightness changes sharply or more formally have discontinuities.

There are many methods for edge detection [51], but most of them

can be grouped into two categories. These are search-based and zero-

crossing based. The search-based methods detect edges by first computing

a measure of edge strength, usually a first-order derivative expression such

as the gradient magnitude, then searching for a local directional maximum

of the gradient magnitude using a computed estimate of the local

orientation of the edge, usually the gradient direction.


The zero-crossing based methods search for zero crossings in a

second-order derivative expression computed from the image in order to

find edges, usually the zero-crossings of the Laplacian or the zero-

crossings of a non-linear differential expression. As a pre-processing step

to edge detection, a smoothing stage, typically Gaussian smoothing, is

almost always applied.

Formulating edge detectors

If P(i, j) represents the intensity of the pixel P with the coordinates

(i, j), the pixels surrounding P(i, j) can be indexed as in Figure 4.3 (in case

of a 3 × 3 matrix):

P(i-1,j-1)  P(i-1,j)  P(i-1,j+1)
P(i,j-1)    P(i,j)    P(i,j+1)
P(i+1,j-1)  P(i+1,j)  P(i+1,j+1)

Figure 4.3 Processed pixel and its neighbours

A linear filter assigns to P(i, j) a value that is a linear combination

of its surrounding values. For example:

$P(i,j) = P(i,j-1) + P(i-1,j) + 2P(i,j) + P(i+1,j) + P(i,j+1)$    (4.1)

A nonlinear filter assigns to P(i, j) a value that is not a linear

combination of the surrounding values. For example:

$P(i,j) = \max\left(P(i-1,j-1),\, P(i+1,j-1),\, P(i-1,j+1),\, P(i+1,j+1)\right)$    (4.2)

So, for each pixel P(i, j) in an image, the convolution kernel (Figure 4.4) is centred on P(i, j). Each pixel masked by the kernel is multiplied by the coefficient placed on top of it, and P(i, j) becomes the sum of these products divided by the normalization factor N, which equals the sum of the coefficients or 1, whichever is greater [51].

K(i-1,j-1)  K(i-1,j)  K(i-1,j+1)
K(i,j-1)    K(i,j)    K(i,j+1)
K(i+1,j-1)  K(i+1,j)  K(i+1,j+1)

Figure 4.4 Kernel multiplied by the image window


The pixel P(i, j) is given the value:

$P(i,j) = \frac{1}{N}\sum_{a,b} K(a,b)\, P(a,b)$    (4.3)

Where a is ranging from (i – 1) to (i + 1), and b is ranging from (j

– 1) to (j + 1). N is the normalization factor, equal to ∑ K(a, b) or 1,

whichever is greater.

If the new value P(i, j) is negative, it is set to 0. If the new value P(i,

j) is greater than 255, it is set to 255 (in the case of 8-bit resolution).

The greater the absolute value of a coefficient K(a, b), the more the

pixel P(a, b) contributes to the new value of P(i, j). If a coefficient K(a, b) is 0,

the neighbour P(a, b) does not contribute to the new value of P(i, j).

For example, if the convolution kernel is

$\begin{bmatrix} 0 & 0 & 0 \\ -2 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}$

then:

$P(i,j) = -2P(i-1,j) + P(i,j) + 2P(i+1,j)$    (4.4)
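As a brief numerical check (the pixel values are assumed for illustration): if P(i-1,j) = 100, P(i,j) = 120 and P(i+1,j) = 130, the kernel above gives P(i,j) = -2(100) + 120 + 2(130) = 180; here N = ΣK(a,b) = 1, so no further normalization is applied.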

Filters are mainly used in edge detection, image blurring and

sharpening. Edge detectors also fall into two categories:

Highpass filters emphasize significant variations of the light intensity

usually found at the boundary of objects. Highpass filters help isolating

abruptly varying patterns that correspond to sharp edges, details, and

noise.

Lowpass filters attenuate variations of the light intensity. Lowpass

filters help emphasize gradually varying patterns such as objects and

the background. They have the tendency to smooth images by

eliminating details and blurring edges.

The edge detection methods that have been published mainly differ

in the types of smoothing filters that are applied and the way the measures

of edge strength are computed.


As many edge detection methods rely on the computation of image

gradients, they also differ in the types of filters used for computing

gradient estimates in x- and y-directions.

Tuned Prewitt Filter

The Prewitt edge detector is an appropriate way to estimate the

magnitude and orientation of an edge [51]. Although differential gradient

edge detection needs a rather time-consuming calculation to estimate the

orientation from the magnitudes in x- and y-directions, the Prewitt edge

detection obtains the orientation directly from the kernel with the

maximum response.

Figure 4.5 Edge detection and extracting the geometry of drawn objects: (a) original image, (b) Prewitt filter, (c) feature extraction

Let P(i,j) be the pixel located at the ith row and jth column. A mask of size 3 X 3 pixels is moved along the image, and the new pixel value at the centre of the mask window is calculated using the formula:

$P(i,j) = \max\Big[\, \big|P(i+1,j-1) - P(i-1,j-1) + P(i+1,j) - P(i-1,j) + P(i+1,j+1) - P(i-1,j+1)\big| \,,\; \big|P(i-1,j+1) - P(i-1,j-1) + P(i,j+1) - P(i,j-1) + P(i+1,j+1) - P(i+1,j-1)\big| \,\Big]$    (4.5)

The acquired image is transformed to a grey scale image as discussed in section 2.2.1. Thereafter, a 3 x 3 mask (aka kernel) is applied around each drawn object, whose centre was initially located in the colour grouping stage using the fuzzy clustering.


This way, most of the time needed to locate edges by moving the mask over the whole image is saved, which is what is meant by tuning the usage of such a filter.

In contrast to the time-consuming correlation technique that requires intensive computations, edge detection can serve as a preliminary locator for each shape. By comparing each pixel with a higher grey value (edge) to its neighbours, points on each edge of the drawn object can be found. A rake of parallel lines is used to measure the linear dimensions of each side.

Figure 4.5b shows the ceramic plate after applying the Prewitt filter. Searching for the four edges is fairly trivial: starting at any point on the boundary of any edge, points are sought where two of the surrounding four adjacent points are below a threshold of 120 on the grey scale, chosen based on trial and error. Edges on the same line are used to determine the orientation of the rake used to retrieve the dimension and linearity of that line (Figure 4.5c). This is done as the rake consists of parallel lines; once a line intersects with any edge, a point is drawn.

The points collected from each line are used to evaluate the linearity of the line by finding the best fit and calculating the error along the fitting equation, as shown in Figure 4.6. Fitting plays a major role in noise rejection, characterised by small difference values (less than 2).
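A minimal sketch of this fitting step is given below (illustrative only; the rake points are assumed values); it fits the collected points to a straight line and reports the largest deviation used to judge linearity:

import numpy as np

def fit_line_and_error(xs, ys):
    """Least-squares line through rake points; returns slope, intercept and
    the maximum deviation of the points from the fitted line."""
    slope, intercept = np.polyfit(xs, ys, 1)
    deviation = np.abs(ys - (slope * xs + intercept))
    return slope, intercept, float(deviation.max())

# Example: nearly collinear edge points with one noisy sample
xs = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
ys = np.array([5.0, 5.1, 4.9, 5.0, 6.5])
print(fit_line_and_error(xs, ys))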

Figure 4.6 Fitting detected points along the profile into lines (edge difference plotted against profile length)


4.4 Computing Feature Score

After the feature extraction stage, the extracted parameters need to be evaluated; they are summarized in two main categories:

Colour quality.

Accuracy of geometry (dimension, distance from the reference and orientation).

Calculating the score for colour quality

Let the drawn object detected by the fuzzy c-mean clustering (section 3.7.3) be annotated by Oi, where i represents the index of the drawn object in the ceramic plate under inspection. The drawn object has m pixels within its boundaries, where every pixel k has a degree of membership μk to the expected colour of the drawn object. The colour score (CS) of drawn object i is calculated as follows:

$CS_{O_i} = \frac{\sum_{k=1}^{m}\mu_k}{m}$    (4.6)

Where m is the total number of pixels in the drawn object. The overall score (CSoverall) is then calculated by evaluating the average of the colour scores:

$CS_{overall} = \frac{\sum_{k=1}^{i} CS_k}{i}$    (4.7)

Calculating the accuracy of geometric features

As mentioned earlier (Section 4.2), the geometric feature

extraction involves checking for dimension, distance from the reference

point (shown in Figure 4.7) and expected orientation with respect to this

reference.

Once every dimension is retrieved after fitting operation, the

Dimension Score (DS) of each object is calculated as:

$DS_{O_i} = \frac{\sum_{k=1}^{n}\left(1 - \frac{|D_{t_k} - D_{m_k}|}{D_{t_k}}\right)}{n}$    (4.8)


Where n is the total number of extracted dimensions, and $D_{t_k}$ and $D_{m_k}$ denote the template (expected) and measured values of the kth dimension respectively. The overall score is then calculated as:

$DS_{overall} = \frac{\sum_{k=1}^{i} DS_k}{i}$    (4.9)

The same steps are carried out to evaluate the score of distance of

each object’s centre from the reference which is annotated by (LS) and the

orientation of each object (OS). The overall geometry score (OGS) is then

calculated as:

$OGS = \frac{DS_{overall} + LS_{overall} + OS_{overall}}{3}$    (4.10)
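The three scores can be combined as in the short sketch below (illustrative only; the sample values are assumptions):

import numpy as np

def colour_score(memberships):
    """Eq. 4.6: mean membership of an object's pixels to its expected colour."""
    return float(np.mean(memberships))

def dimension_score(measured, template):
    """Eq. 4.8: mean relative accuracy of the measured dimensions."""
    measured = np.asarray(measured, float)
    template = np.asarray(template, float)
    return float(np.mean(1.0 - np.abs(template - measured) / template))

def overall_geometry_score(ds, ls, os_):
    """Eq. 4.10: average of the dimension, location and orientation scores."""
    return (ds + ls + os_) / 3.0

# Example: one drawn object
print(colour_score([0.95, 0.90, 0.99]))             # ~0.95
print(dimension_score([49.0, 30.5], [50.0, 30.0]))  # ~0.98
print(overall_geometry_score(0.98, 0.95, 0.97))     # ~0.967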

Figure 4.7 Screenshot of the developed software showing the feature extraction stage

The screenshot in Figure 4.7 shows the developed software which

highlights the boundaries of each drawn object, calculates its dimension

and orientation with respect to predetermined reference shown in the lower

left corner.


Highlighting is made using three layers:

Red layer: for highlighting the geometry of each drawn object.

Green layer: for connecting the reference point to the centre of each drawn object to calculate the orientation.

Yellow layer: for pointing to the rejected objects that don't match the expected dimension, colour or orientation. These objects mainly reside near the borders (incomplete shapes).

The performance of the algorithm, in terms of the time consumed to extract the features covering geometry and orientation, is also displayed.


Classification of ceramic plates using fuzzy logic

After extracting the geometric features and collecting the colour information of each drawn object, the classification stage comes into place by stating the rules for assessment based on fuzzy logic.

Fuzzy logic and fuzzy logic systems

Fuzzy logic is a form of multi-valued logic derived from fuzzy set

theory to deal with reasoning that is approximate rather than accurate. In

contrast with "crisp logic", where binary sets have binary logic, fuzzy logic

variables may have a truth value that ranges between 0 and 1 and is not

constrained to the two truth values of classic propositional logic.

Furthermore, when linguistic variables are used, these degrees may be

managed by specific functions [52].

Fuzzy logic (FL) was first proposed by Zadeh in 1965 [53]. Though

fuzzy logic has been applied to many fields, from control theory to

artificial intelligence, it still remains controversial among most

statisticians, who prefer Bayesian logic, and some control engineers, who

prefer traditional two-valued logic. FL is the term given to a system of mathematics developed to model the human brain's curious way of

processing words. The main motivation behind FL was the existence of

imprecision in the measurement process. Zadeh explains that ‘‘As

complexity rises, precise statements lose meaning and meaningful

statements lose precision”.

FL provides capabilities that allow handling both quantitative and

qualitative information within one model. It is a form of multivalued logic

derived from fuzzy set theory to deal with reasoning that is approximate

rather than precise. Fuzzy sets are sets whose elements have degrees of

membership. In classical set theory, the membership of elements in a set

is assessed in binary terms according to a Boolean condition—an element

either belongs or does not belong to the set. By contrast, fuzzy set theory

permits the gradual assessment of the membership of elements in a set; this

is described with the aid of a membership function valued in the real unit

interval [0, 1].


Fuzzy sets generalize classical sets, since the indicator functions of

classical sets are special cases of the membership functions of fuzzy sets,

if the latter only take values 0 or 1 [54]. Formally, a fuzzy set A in U is

expressed as a set of ordered pairs:

$A = \{(x, \mu_A(x)) \mid x \in U\}$    (5.1)

Where U is the universe of discourse, 𝜇𝐴(𝑥) is the membership

function (MF) that portrays the degree of membership of x; it indicates the

degree to which x belongs in set A. The list of common MFs that are used

often includes triangular MFs, trapezoidal MFs, Gaussian MFs, and

generalized bell MFs [55].

Just as in fuzzy set theory the set membership values can range

(inclusively) between 0 and 1, in fuzzy logic the degree of truth of a

statement can range between 0 and 1 and is not constrained to the two truth

values (true, false) as in classic predicate logic. When qualitative values

are used, these degrees may be managed by specific inferential procedures.

Fuzzy logic allows variables to take on qualitative values which are words.

For example, the size variable may assume qualitative values of, e.g., very

large, large, small, and very small. Each qualitative value is associated

with a fuzzy set, each of which has a defined MF.

Figure 5.1 An example of membership functions for the evaluation of a program

Figure 5.1 shows an example for evaluating the developed software

in terms of the number of lines written in the program. The figure

illustrates the linguistic values used in assessment, namely "low", "medium" and "high". Each of these linguistic values is associated with a fuzzy set

defined by a corresponding membership function.


A singleton fuzzy set is a fuzzy set with exactly one element with

full membership of 1. For example, the set A = {(13, 1)} is a singleton.

Fuzzy Logic System (FLS) is the name given to any system that has a

direct relationship with fuzzy concepts (e.g., fuzzy sets and quantitative

values) and fuzzy logic. The most popular fuzzy logic systems in the

literature may be classified into three types: pure fuzzy logic systems, Takagi and Sugeno's fuzzy system, and fuzzy logic systems with a fuzzifier and defuzzifier.

As most of the engineering applications use crisp data as input and

produce crisp data as output, the last type is the most widely used one

where the fuzzifier maps crisp inputs into fuzzy sets and the defuzzifier

maps fuzzy sets into crisp outputs. This type of fuzzy logic system was

first proposed by Mamdani [56]. It has been successfully applied to a

variety of industrial processes and consumer products.

Mamdani’s fuzzy logic system

The four main components of Mamdani’s FLS are briefly described

as follows:

5.2.1 Fuzzification

Fuzzification is the process of converting inputs into their fuzzy

representations.

For example, it converts crisp inputs into fuzzy sets such as

converting a colour score of 0.1 into [Bad, 0.8] and [Good, 0.2], where the first parameter (Bad, Good) depicts the linguistic value and the latter (0.8, 0.2) is the corresponding degree of membership.

Figure 5.2 demonstrates the process of fuzzifying the extracted

features (colour and geometry) in order to be assessed by the proposed

fuzzy rules.
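A minimal MATLAB sketch of this fuzzification step is given below. The triangular membership function shapes are an assumption for illustration, chosen over the normalised score range so that a colour score of 0.1 reproduces the memberships quoted above (Bad = 0.8, Good = 0.2):

% Sketch of the fuzzification step (illustrative; the exact MF shapes
% used in the thesis are assumed to be triangular here).
trimf_local = @(x, a, b, c) max(min((x - a)./max(b - a, eps), ...
                                    (c - x)./max(c - b, eps)), 0);

colour_score = 0.1;                               % crisp input from feature extraction

mu_bad  = trimf_local(colour_score, 0, 0, 0.5);   % "Bad"  spans 0-0.5
mu_good = trimf_local(colour_score, 0, 0.5, 1);   % "Good" spans 0-1

fprintf('Bad  = %.1f\n', mu_bad);    % prints 0.8
fprintf('Good = %.1f\n', mu_good);   % prints 0.2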


Figure 5.2 Fuzzifying ceramic plates features vector (a colour score of 0.1 maps to Bad = 0.8 and Good = 0.2, while a geometric score of 0.7 maps to Tolerated = 0.6 and Accurate = 0.4)

5.2.2 Rules

Rules are the heart of the FLS, and may be provided by experts or

can be extracted from numerical data. In either case, the rules are typically

expressed as a collection of fuzzy IF-THEN rules. The IF-part of a rule is

its antecedent, and the THEN-part of the rule is its consequent. Fuzzy sets

are associated with terms that appear in the antecedents or consequents of

rules, and with the inputs to and outputs of the FLS. A fuzzy IF-THEN rule is of the form:

IF X1 = A1 AND X2 = A2 AND … AND Xn = An THEN Y = B    5.2

IF X1 = A1 OR X2 = A2 OR … OR Xn = An THEN Y = B    5.3

Where the Xi and Y are linguistic variables, and the Ai and B are linguistic

values represented by fuzzy sets. The IF part is the antecedent or premise,

while the THEN part is the consequent or conclusion. An example of a

fuzzy IF-THEN rule is:

IF Size = Low THEN Effort = High    5.4

In a fuzzy logic system, the collection of fuzzy IF-THEN rules is

known as the fuzzy rule base.

5.2.2.1 Fuzzy operations

Since fuzzy logic deals with degrees of membership varying between zero and one rather than a crisp value representing total truth or total falsehood, the logical operators (AND, OR and NOT) take the more generic forms stated in the following points:



AND operator: which is evaluated as either the minimum or the product of the two given degrees of membership.

OR operator: which is calculated as the maximum of the two given degrees of membership. Another, rarely used, technique is the probabilistic OR, given as A + B − A·B.

NOT operator: a unary operator that yields the complement of the degree of membership.

Figure 5.3 depicts an example of using the various fuzzy logical

operations based on the extracted colour and geometric scores obtained

from the assessment of the garnished ceramic plate under investigation.

Figure 5.3 Example of implementing fuzzy logical operations

(Figure 5.3 content: with a colour score of 0.7 and a geometric score of 0.3, ANDmin = 0.3, ANDprod = 0.21, ORmax = 0.7, ORprob = 0.79, and NOT of the colour score = 0.3.)
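The same numbers can be reproduced with a few lines of MATLAB; this is purely illustrative arithmetic, not code from the developed software:

% Fuzzy logical operators applied to two degrees of membership
% (values taken from the Figure 5.3 example).
cs = 0.7;   % colour score membership
gs = 0.3;   % geometric score membership

and_min  = min(cs, gs);          % 0.30  (minimum AND)
and_prod = cs * gs;              % 0.21  (product AND)
or_max   = max(cs, gs);          % 0.70  (maximum OR)
or_prob  = cs + gs - cs*gs;      % 0.79  (probabilistic OR)
not_cs   = 1 - cs;               % 0.30  (complement)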


The selection of one method over another is either inferred from expert knowledge of the domain or determined by means of trial and error.

5.2.2.2 Implication

Implication is meant to propagate the result of the antecedent (the

IF part) to the consequent (the THEN part). In other words, it is the process

of reshaping the output membership function. Meanwhile, rules can be

given a weight (from 0 to 1) which may affect the implication process; however, this is rarely used [57].

Figure 5.4 shows an example of the implication process used to

evaluate one of the proposed rules in the ceramic classification system

(will be stated later in section 5.4). The colour score and geometry score

(discussed earlier in section 4.4) are first fuzzified to represent a degree of

membership.

Figure 5.4 An implication example from the proposed rules: the rule IF (Colour is Bad) AND (Geometry is Inaccurate) THEN (Ceramic Plate has Colour Spot) fires with a colour score of 0.279 (degree of membership 0.558) and a geometry score of 0.168 (degree of membership 0.336), giving a possibility of colour spot of 0.187

The Antecedent part is then evaluated using the product technique

of AND operation and the result eventually propagate to the colour spot

membership function to reshape it.
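A minimal sketch of this implication step is given below, using the degrees of membership quoted in the Figure 5.4 example; the ramp-shaped output membership function is an assumption made only for illustration:

% Sketch of the implication step for the rule
%   IF (Colour is Bad) AND (Geometry is Inaccurate) THEN (Plate has Colour Spot)
% using the degrees of membership from the Figure 5.4 example.
dom_bad        = 0.558;                        % membership of the colour score in "Bad"
dom_inaccurate = 0.336;                        % membership of the geometry score in "Inaccurate"

firing_strength = dom_bad * dom_inaccurate;    % product AND -> about 0.187

% Minimum implication: the consequent MF is truncated at the firing strength.
y             = linspace(0, 1, 101);           % universe of the output variable
mu_colourspot = y;                             % assumed ramp-shaped output MF
implied       = min(mu_colourspot, firing_strength);   % reshaped (clipped) output set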

5.2.2.3 Aggregation

Because decisions are based on the testing of all of the rules in a

Fuzzy Inference System (FIS), the rules must be combined in some manner

in order to make a decision.

Aggregation is the process by which the fuzzy sets that represent

the outputs of each rule are combined into a single fuzzy set. Aggregation

only occurs once for each output variable, just prior to the final step,

defuzzification.



The input of the aggregation process is the list of truncated output

functions returned by the implication process for each rule. The output of

the aggregation process is one fuzzy set for each output variable [57].

As long as the aggregation method is commutative, the order in which the rules are executed is unimportant. Three common methods

are used:

Maximum

Probabilistic OR

Summation of each rule’s output set.
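The sketch below illustrates maximum aggregation of two clipped rule outputs; the membership function shape and firing levels are arbitrary examples, not taken from the thesis:

% Sketch of max-aggregation: the clipped output sets of all rules that share
% the same output variable are combined point-wise.
y      = linspace(0, 1, 101);       % universe of the output variable
mu_out = y;                         % assumed ramp-shaped output MF (e.g. "Acceptable")

rule1 = min(mu_out, 0.60);          % output of rule 1 after implication (example level)
rule2 = min(mu_out, 0.25);          % output of rule 2 after implication (example level)

aggregated = max(rule1, rule2);     % maximum aggregation (order-independent)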

Figure 5.5 Example of aggregating effect on the possibility of accepting the ceramic plate (in the shown example, a colour score of 0.8 and a geometric score of 0.75 yield, after implication and aggregation, Acceptable = 0.33)

5.2.3 Defuzzification

The defuzzifier converts the output of the inference engine into

crisp output. It is needed in many applications where crisp numbers must

be obtained as output.

The input for the defuzzification process is a fuzzy set (the

aggregate output fuzzy set) and the output is a single number. As much as

fuzziness helps the rule evaluation during the intermediate steps, the final

desired output for each variable is generally a single number.



However, the aggregate of a fuzzy set encompasses a range of

output values, and so must be defuzzified in order to resolve a single output

value from the set [57].

5.2.3.1 Defuzzification techniques

Three techniques are used for defuzzification:

5.2.3.1.1 Centroid

Centroid defuzzification returns the centre of area under the curve.

If the area is considered as a plate of equal density, the centroid is the point

along the x axis about which this shape would balance. This is considered

the most used technique [54, 58-61].

5.2.3.1.2 Bisector

The bisector is the vertical line that will divide the region into two

sub-regions of equal area. It is sometimes, but not always coincident with

the centroid line.

5.2.3.1.3 Middle, Smallest and Largest of Maximum

MOM, SOM, and LOM stand for Middle, Smallest, and Largest of

Maximum, respectively. These three methods key off the maximum value

assumed by the aggregate membership function. If the aggregate

membership function has a unique maximum, then MOM, SOM, and LOM

all take on the same value.

A visual evaluation is shown in Figure 5.6 where the outcome of

each technique (crisp value annotated by a line and the name of the

method) is displayed. The selection of the appropriate method depends on

the application and expected result obtained from the domain expert as

stated in [55, 62, 63].
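For illustration, the following MATLAB sketch computes the centroid, bisector and the three maximum-based values for an arbitrary aggregated membership function (the function itself is invented for the example):

% Sketch of the defuzzification techniques applied to an aggregated fuzzy set.
y  = linspace(0, 1, 1001);               % universe of the output variable
mu = max(min(y, 0.6), min(1 - y, 0.3));  % an arbitrary aggregated membership function

% Centroid: centre of the area under the curve
centroid = sum(y .* mu) / sum(mu);

% Bisector: point that splits the area into two equal halves
cumArea  = cumsum(mu);
bisector = y(find(cumArea >= cumArea(end)/2, 1, 'first'));

% Smallest, Middle and Largest Of Maximum
idxMax = find(mu == max(mu));
som = y(idxMax(1));
lom = y(idxMax(end));
mom = (som + lom) / 2;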

Methodologies utilized in building a fuzzy inference system

Fuzzy logic forms an effective technique of combining linguistic

and numerical information, which can be done in two ways:

Page 79: The Development of Mechatronic Machine Vision System for Inspection of Ceramic Plates

66

Use of linguistic information (experts' knowledge) to develop an initial fuzzy logic system, and then adjusting the parameters of the initial fuzzy logic system by using numerical information.

Use of numerical information and linguistic information to develop two separate fuzzy logic systems, and then obtaining the final fuzzy logic system by averaging them.

Figure 5.6 Result of different defuzzification techniques (degree of membership plotted against the aggregated output value)

In the first case, components of the FLS are set based on the

expert’s opinion. These components include number of rules, shape and

position of the membership functions and the shape and position of the

consequents etc.

Historical data is then used to further tune the parameters of the

fuzzy logic system. The second way of coming up with an adaptive FLS is straightforward in the sense that once the parameters of the individual

systems are available, one can average these parameters to produce a final

FLS. Eventually, the overall fuzzy system would comprise all components

illustrated in Figure 5.7.



Figure 5.7 Fuzzy Logic System (crisp input values → fuzzification → fuzzy inference engine with fuzzy rule base → defuzzification → crisp output values)

Formulating the proposed fuzzy classifier for the assessment of garnished wall plates

Mamdani’s inference engine was chosen to be the core inference engine.

Figure 5.8 Screenshot of the developed Mamdani’s inference engine for ceramic

assessment



The following sections describe the flow of the proposed FIS.

MATLAB version 2011a provides a fuzzy logic toolbox with a visual designer that was used for the implementation of the fuzzy classifier, as shown in Figure 5.8.

The proposed fuzzy classifier has two input variables (colour and

geometric score) used in evaluating the possibility of three output grades

(acceptable, colour mismatch and colour spot) in order to determine the

final grading of the garnished ceramic wall plate.

5.4.1 Fuzzification of input and output

Two input parameters were selected to be the input of the fuzzifier,

namely colour and geometric scores obtained from the feature extraction

process (section 4.4). Each score spans from zero to one, which offers a normalized range at no extra calculation cost. For the colour score, the distribution

of each linguistic value is shown in Figure 5.9 while the range is tabulated

in Table 5.1. The same is stated in Figure 5.10 and Table 5.2 for the

developed geometric score membership function.

Table 5.1 Range of colour score

Linguistic Value    Range
Bad                 0 - 0.5
Good                0 - 1
Shiny               0.5 - 1

Figure 5.9 Colour score membership function

Table 5.2 Range of geometric score

Linguistic Value    Range
Inaccurate          0 - 0.5
Tolerated           0 - 1
Accurate            0.5 - 1

Figure 5.10 Geometric score membership function


The proposed output linguistic variables were taken from the common problems encountered in ceramic inspection, which are colour spots and colour mismatch. They are represented by a ramp line to reflect the possibility of grading from zero to one hundred percent, as presented in Figure 5.11. Grading was normalized (from zero to one), where zero stands for zero percent and one stands for one hundred percent.

Figure 5.11 The output membership functions of ceramic grading

5.4.2 Rules for classification

Nine rules were modelled and inferred from the domain expert for

human classification of the garnished plates. These rules are based on the

fuzzified inputs of colour and geometric scores.

Rules are arranged as shown in Table 5.3. The intersected cell

between the colour score linguistic values and the geometric score linguistic values forms the overall classification result. For instance:

IF CS is Good AND GS is Accurate THEN Classification is Acceptable

IF CS is Bad AND GS is Tolerated THEN Classification is Colour Mismatch

Table 5.3 Rules for classifying garnished plates

                    Geometric Score
Colour Score        Inaccurate      Tolerated          Accurate
Bad                 Colour Spot     Colour Mismatch    Colour Mismatch
Good                Colour Spot     Colour Spot        Acceptable
Shiny               Colour Spot     Acceptable         Acceptable


The three linguistic values (bad, good and shiny) describing the quality of colour are combined with the geometric score values (inaccurate,

tolerated and accurate) to conclude the classification grade of the garnished

ceramic plate (acceptable, colour spot and colour mismatch) based on the

specified rules.

Following trial and error for selecting the appropriate parameters of the fuzzy classifier, the AND operation in applying the rules was carried out using the product technique, while implication was applied based on the minimum method.

5.4.3 Proposed modified defuzzification method

The smallest of the maximum (SOM) method proved to yield better

results during the tuning stage of the fuzzy classifier. Since three

possibilities are given after the process of defuzzification (acceptable,

colour spot and colour mismatch), the final grade of the ceramic plate is

obtained by finding the maximum among the three crisp values. In other

words:

Ceramic Grade = max(CV1, CV2, CV3),    0 ≤ CVi ≤ 1    5.5

Where CV1, CV2 and CV3 stand for the crisp values gained after defuzzifying the acceptable, colour spot and colour mismatch membership functions, respectively.
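A minimal sketch of how a classifier with this structure could be assembled using the legacy Fuzzy Logic Toolbox functions available in MATLAB 2011a is shown below. The triangular membership function shapes and the ramp-shaped outputs are assumptions based on the ranges in Tables 5.1 and 5.2; the rule base follows Table 5.3 and the operator choices stated above:

% Minimal sketch of the described classifier using the legacy Fuzzy Logic
% Toolbox API (MATLAB 2011a). MF shapes are assumptions based on the ranges
% in Tables 5.1 and 5.2; the rule base follows Table 5.3.
fis = newfis('ceramic', 'mamdani');
fis.andMethod    = 'prod';   % product AND (as selected in the thesis)
fis.impMethod    = 'min';    % minimum implication
fis.defuzzMethod = 'som';    % smallest of maximum

% Inputs: colour score and geometric score, both normalised to [0, 1]
fis = addvar(fis, 'input', 'ColourScore', [0 1]);
fis = addmf(fis, 'input', 1, 'Bad',        'trimf', [0 0 0.5]);
fis = addmf(fis, 'input', 1, 'Good',       'trimf', [0 0.5 1]);
fis = addmf(fis, 'input', 1, 'Shiny',      'trimf', [0.5 1 1]);

fis = addvar(fis, 'input', 'GeometricScore', [0 1]);
fis = addmf(fis, 'input', 2, 'Inaccurate', 'trimf', [0 0 0.5]);
fis = addmf(fis, 'input', 2, 'Tolerated',  'trimf', [0 0.5 1]);
fis = addmf(fis, 'input', 2, 'Accurate',   'trimf', [0.5 1 1]);

% Outputs: possibility of each grade, represented by a ramp from 0 to 1
outNames = {'Acceptable', 'ColourSpot', 'ColourMismatch'};
for k = 1:3
    fis = addvar(fis, 'output', outNames{k}, [0 1]);
    fis = addmf(fis, 'output', k, 'Possibility', 'trimf', [0 1 1]);
end

% Rules from Table 5.3: [CS GS  Acceptable Spot Mismatch  weight AND]
% (an output index of 0 means the rule does not affect that output)
rules = [1 1  0 1 0  1 1;   % Bad   & Inaccurate -> Colour Spot
         1 2  0 0 1  1 1;   % Bad   & Tolerated  -> Colour Mismatch
         1 3  0 0 1  1 1;   % Bad   & Accurate   -> Colour Mismatch
         2 1  0 1 0  1 1;   % Good  & Inaccurate -> Colour Spot
         2 2  0 1 0  1 1;   % Good  & Tolerated  -> Colour Spot
         2 3  1 0 0  1 1;   % Good  & Accurate   -> Acceptable
         3 1  0 1 0  1 1;   % Shiny & Inaccurate -> Colour Spot
         3 2  1 0 0  1 1;   % Shiny & Tolerated  -> Acceptable
         3 3  1 0 0  1 1];  % Shiny & Accurate   -> Acceptable
fis = addrule(fis, rules);

% Evaluate one plate and pick the grade with the largest crisp possibility
cv = evalfis([0.8 0.75], fis);          % [Acceptable, Spot, Mismatch]
[~, grade] = max(cv);
fprintf('Grade: %s (possibilities: %s)\n', outNames{grade}, mat2str(cv, 3));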


Results and Discussion

In this chapter, the results of the conducted research are discussed, where several tests, including statistical analysis, were carried out to study the impact of the proposed inspection system on productivity and to evaluate its overall efficiency.

6.1 Implementation of the proposed system

A prototype was designed on campus at Misr University for Science and Technology to emulate an industrial vision system. A Charge Coupled Device (CCD) FireWire colour camera is used for image acquisition with a resolution of 640 × 480 and a frame rate of 60 fps. Two flexible arms are used to adjust the illumination to minimise reflectivity due to the nature of the ceramic surface. A black curtain is utilised to isolate the inspection environment from external illumination throughout the day in order to study the illumination variability based on the two flexible arms carrying the light source. The prototype is controlled by a USB interfacing circuit. An offline experiment was performed by placing each plate manually.

6.1.1 Mechanical Design

The design procedure was carried out using the commonly used

Computer Aided Design (CAD) software SolidWorks 2009. The design

was meant to create a generic flexible cell for quality inspection of various

products to serve as a pillar for further research activities in automated

inspection. An iterative design model has been applied to modify design

drafts after manufacturing.

a) Mechanical drawing b) Implementation

Figure 6.1 The test rig of the proposed system

Page 85: The Development of Mechatronic Machine Vision System for Inspection of Ceramic Plates

72

As shown in Figure 6.1, a carriage is transferred to each station via a ball screw. The carriage acts as an automatic feeder using a series of successive rollers.

6.1.2 Electrical Circuitry

Two DC motors are employed in the FMC control approach. The first is responsible for feeding the ceramic plates under inspection, and the second transfers the carriage to each classification station (three stations).

a) Schematic design b) Circuit Layout

c) Testing motors direction

Figure 6.2 The H-bridge circuit


An H-Bridge circuit is responsible for controlling the direction of

each motor by changing the polarity of terminal DC voltage applied to each

motor. Eagle PCB software was chosen to design the schematic diagram

and PCB layout.

Figure 6.3 Schematic diagram of the USB interfacing card

Figure 6.4 PCB layout of the interfacing card


The chosen microcontroller is the Microchip PIC16F877, and the USB functionality was added by utilising the FTD485 First-In First-Out (FIFO) USB-to-parallel chip, as shown in Figure 6.3 and Figure 6.4.

6.1.3 Software Architecture

Reusability and rapid application development (RAD) were the cornerstones of the software design phase. The key pillars of the software entities comprise: Fuzzy C-Mean calculations, the Fuzzy Logic Inference System, Image Processing and FMC control.

Mathworks MATLAB 2011a has a Fuzzy Logic Toolbox (as presented in chapter 5) that provides a very efficient way of performing fuzzy clustering and implementing fuzzy logic. Meanwhile, Microsoft Visual Basic.NET and the .NET framework were used to build robust classes for FMC control and image processing. The developed MATLAB functions (fuzzy c-mean and fuzzy logic) and Visual Basic.NET were linked by means of the MATLAB for .NET builder, provided in MATLAB as a deployment tool to allow interoperability with other software vendors.

The procedure is simple: after developing the function (called an m-file in MATLAB), the MATLAB for .NET builder produces a DLL file with a wrapper class for the mentioned MATLAB functions, which can be integrated easily with any language based on the Microsoft .NET framework (like C# or VB). The MATLAB Component Runtime handles the communication between MATLAB and VB.NET.

Figure 6.5 Architectural design of the developed software: MATLAB hosts the fuzzy logic classifier and the fuzzy c-mean clustering; Visual Basic.NET handles communication with the interfacing circuit, image processing and the user interface; the two sides are linked through the .NET builder and the MATLAB Component Runtime (MCR)


6.2 Results

Images acquired using the Basler A601fc CCD FireWire camera were subjected to variability in illumination. Ceramic plates of size 200 mm × 150 mm were used during the experiment. The results obtained from the automated system were compared to the judgment of an experienced worker.

Figure 6.6 Rotated garnished ceramic plate

Figure 6.7 Ceramic plate with colour mismatch


Figure 6.8 Ceramic plate with colour spots

The above figures (Figure 6.6, Figure 6.7 and Figure 6.8) show an example of an acceptable plate, a plate with colour mismatch and a plate with colour spots, respectively. Table 6.1 shows the results of the classification algorithm proposed in the thesis.

Table 6.1 Result of the proposed classification system for the stated figures

Fig. No.    Colour Quality    Accuracy of Geometry    Acceptable    Colour Mismatch    Colour Spot
6.6         0.61              0.68                    0.29          0.0                0.5
6.7         0.12              0.8                     0.15          0.46               0.1
6.8         0.92              0.85                    0.59          0.0                0.05

The system was tested over 35 images with prior knowledge of their classification. An aggregated result depicting the overall assessment of the proposed system is presented in Table 6.2. The accuracy is calculated as follows:

Accuracy (%)class = 1 − (Expected Found Patterns − Actual Found Patterns) / (Expected Found Patterns)    6.1
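As a small numeric illustration of this measure (the counts are hypothetical, not the thesis data):

% Sketch of the per-class accuracy measure of equation 6.1
% (illustrative counts, not the thesis data).
expected = 12;                        % patterns expected to be found for a class
actual   = 11;                        % patterns actually found by the system
accuracy = (1 - (expected - actual) / expected) * 100;   % about 91.7 %, as a percentage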


Table 6.2 Accuracy of automatic detection of correct classes

Type               Automatic Detection
Acceptable         95.3 %
Colour Spot        97.3 %
Colour Mismatch    91.6 %
Total Accuracy     94.7 %

An experiment was conducted to study the effect of rotation on the accuracy of ceramic classification. The results are charted in Figure 6.9. The horizontal axis represents the orientation of the drawn object with respect to the master ceramic plate orientation; the vertical axis shows the detection rate in percentage.

Figure 6.9 The effect of rotating ceramic plates on the correct detection of their class

A comparative study was also carried out to investigate the impact of utilising fuzzy c-mean instead of traditional nearest neighbour clustering to perform the colour grouping (chapter 3).

In Nearest Neighbour classification, the distance of an input colour vector X = [xr, xg, xb] (as discussed in section 3.6) of unknown class to a class Cj is defined as the distance to the closest sample used to represent that class. In other words:

d(X, Cj) = min_i d(X, Xij)    6.2





Where d(X, Xij) is the distance between X and Xij, the i-th sample of class j. The distance is calculated by means of the Euclidean distance:

d(X, Y) = √( Σ i = 1..n (Xi − Yi)² )    6.3
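The sketch below applies equations 6.2 and 6.3 to assign an unknown colour vector to the class of its nearest representative sample; the RGB values and class samples are invented for illustration:

% Sketch of nearest-neighbour colour assignment using the Euclidean
% distance of equations 6.2 and 6.3 (RGB values are illustrative).
X        = [200 40 35];                        % unknown colour vector [r g b]
classRef = {[210 45 30; 190 50 40], ...        % samples representing class 1
            [30 60 200; 25 70 190]};           % samples representing class 2

d = zeros(1, numel(classRef));
for j = 1:numel(classRef)
    diffs = bsxfun(@minus, classRef{j}, X);    % X - Xij for every sample i
    d(j)  = min(sqrt(sum(diffs.^2, 2)));       % distance to the closest sample
end
[~, assignedClass] = min(d);                   % class with the smallest distance

% For comparison, the fuzzy c-mean grouping used in the thesis can be run with
% the Fuzzy Logic Toolbox function fcm, e.g. [centres, U] = fcm(data, 3);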

Each cluster is determined by averaging the colour dominating each drawn object. The figure below depicts the results of colour-based learning using both fuzzy c-means and nearest neighbour.

a) Correct classification using nearest neighbour colour grouping approach

b) Correct classification using fuzzy c-mean colour grouping approach

Figure 6.10 Detection percentage of ceramic grade (nearest neighbour vs. fuzzy c-mean): fuzzy c-means achieved 94% correct matches (6% incorrect), while nearest neighbour achieved 67% correct matches (33% incorrect)

One noteworthy observation is that, although several papers proposed interesting approaches for tile inspection [12, 27, 29, 30, 64, 65], most of them ignored the execution speed, which has a paramount effect on implementation.

One of the challenges facing such a system is performing the inspection process while the conveyor is moving at its nominal speed, to avoid delaying production. As stated earlier, most ceramic production lines operate at high speed.


There are key factors that are good candidates as metrics for ceramic inspection systems, such as the software decision speed, the signal transmission rate between the host controller and the mechatronic sorting system, and the mechanical actuation speed, which in other words is interpreted as how fast the actuation system responds to the decision obtained from the vision software.

The performance metrics below were recorded using a tower PC with a Pentium 4 processor and 1 GB of RAM running Windows XP:

Table 6.3 Selected metrics for software performance assessment

Criteria                        Average Time
Image acquisition               16.67 ms
Gamma correction                23 ms
Image calibration               86 ms
Prewitt filter                  15 ms
Geometric feature extraction    120 ms
Fuzzy clustering                15 sec
Evaluation of fuzzy rules       69 ms

Table 6.4 System configuration

Criteria                     Specification
USB actual transfer speed    1 MB/s
Inspection area              20 × 20 cm²

6.3 Discussion

Developing an inspection system for garnished ceramic plates has

many challenges to tackle. Image acquisition may affect results in several

ways. One of the common problems is lens distortion. Distorted images yield erroneous measurements, as they provide false information with regard to angles and geometry.

To correct the acquired image, a hardware solution is commonly applied via the utilisation of telecentric lenses.



Even though a telecentric lens may add an extra cost to the imaging system, since it may cost more than the imaging device itself, it is considered an excellent choice for the measurement and detection of small features, such as cracks, that may require high-resolution cameras with error-free lenses.

The second remedy for lens distortion is software calibration, which involves mapping each pixel to a new location based on a calibration grid. Therefore, as illustrated in section 2.2, a soft calibration algorithm was proposed as an attempt to remove, or at least minimise, that distortion using a calibration grid and the illustrated developed algorithm. A visual representation of the corrected image of the garnished plate showed an enhanced image as a proof of the geometric shape restoration of the drawn objects.

Even though the proposed algorithm is considered an economic solution with promising results, as demonstrated in this thesis, trade-offs in measurements due to small deviations arising from the interpolation between grid points used to estimate the correct image profile should be taken into consideration.

The variability observed in correct pattern classification can be justified if the effect of rotating the ceramic plates is carefully considered, as the soft calibration cannot provide the same accuracy as a telecentric lens. The other factor is the illumination, which is hard to keep uniform due to the severe reflectivity of ceramic surfaces. The overall accuracy of the inspection system is summarised in Table 6.2.


Based on the experimentation carried out, fuzzy c-mean yielded better results than the nearest neighbour technique. The result is rational, since the centre of each cluster is modified in every iteration. Nearest neighbour would be adequate whenever the features of each class have no chance of overlapping with those of the other classes. Keeping this notion in mind, the choice of a fuzzy logic classifier to mimic human reasoning proved superior to other techniques for classification based on geometry and colour.


Conclusion

The presented work aimed to develop and implement a flexible

manufacturing cell to inspect garnished ceramic wall plates. We may

summarize the work carried out as follows:

1. A calibration algorithm was proposed to correct the lens distortion by means of software.

2. A new visualization approach for colour grouping was also

proposed using fuzzy c-mean clustering technique.

3. A fuzzy inference engine was built to classify the garnished

ceramic plates into three common categories (acceptable,

ceramic plate having colour spots, and ceramic plate having

colour mismatched drawn objects) based on calculated

geometric and colour score from a feature extraction process.

4. A test rig was developed to emulate the production

environment.

5. The system shows promising results in terms of accuracy of correct classification and robustness against variability in the illumination distribution.

6. The system is considered novel compared to other published

work since it is the only work which considered geometric

features of drawn objects up to the time of submitting this

thesis.


Future work

The conducted thesis could be expanded in the future to cover:

The 3D inspection of surface roughness using stereoscopic

vision systems.

Utilization of parallelism and multi-core technology for concurrent pattern recognition.

The effect of classification by other methodologies such as

Support Vector Machine, Particle Swarm Optimization and

modern published algorithms.

An improved mechanical mechanism for speeding up the

sorting process.


References

[1] F. A. Tolbah, A. M. Aly, and W. A. El-Badry, "Automated grading

system for garnished wall plates: A mechatronic approach,"

presented at the 8th International Conference on Production

Engineering and Design for Development, Cairo, Egypt, 2010.

[2] F. A. Tolba, A. M. Aly, and W. A. El-Badry, "An Enhanced Vision

System for Sorting Ceramic Plates Based on Hybrid Algorithm and

USB Interfacing Circuitry," presented at the 27th National Radio

Science Conference, Menoufia, Egypt, 2010.

[3] A. M. Aly and W. A. El-Badry, "Design and Implementation of

Flexible Manufacturing Cell for Quality Inspection of Garnished

Ceramic Wall Plates," in 19th Conference of French Congress of

Mechanics Marseille, France, 2009.

[4] M. Grimheden and M. Hanson, "Mechatronics--the evolution of an

academic discipline in engineering education," Mechatronics, vol.

15, pp. 179-192, 2005.

[5] C. Connolly, "Machine vision advances and applications,"

Assembly Automation, vol. 29, pp. 106-11, 2009.

[6] H. Golnabi and A. Asadpour, "Design and application of industrial

machine vision systems," Robotics and Computer Integrated

Manufacturing, vol. 23, pp. 630-7, 2007.

[7] S. Gomathinayagam, "Machine vision systems in automated

quality inspection," Control Engineering Europe, vol. 6, pp. 36-9,

2005.

[8] M. Graves and B. G. Batchelor, Machine vision for the inspection

of natural products. London ; New York: Springer, 2003.

[9] L. Yi, C. Tie Qi, C. Jie, Z. JianXin, and A. Tisler, "Machine vision

systems using machine learning for industrial product inspection,"

in Machine Vision and Three-Dimensional Imaging Systems for

Inspection and Metrology II, 29-30 Oct. 2001, USA, 2002, pp. 161-

70.

[10] B. G. Batchelor and J. R. Charlier, "Machine Vision is Not

Computer Vision, Keynote Paper," presented at the SPIE

Conference Proceedings, Machine Vision Systems for Inspection

and Metrology VII, Boston, MA., USA, 1998.

[11] "Machine Vision Applications, Architectures, and Systems

Integration V," in Machine Vision Applications, Architectures, and

Systems Integration V, 18-19 Nov. 1996, USA, 1996.

[12] H. Elbehiery, A. Hefnawy, and M. Elewa, "Surface Defects

Detection for Ceramic Tiles Using Image Processing and


Morphological Techniques," Proceedings Of World Academy Of

Science, Engineering And Technology, vol. 5, pp. 158-162, April

2005.

[13] H. Bloessl, "A fully automatic production line for decorating and

firing wall tiles," Berichte der Deutschen Keramischen

Gesellschaft, vol. 53, pp. 60-1, 1976.

[14] Z. Hocenski, T. Keser, and A. Baumgartner, "A simple and

efficient method for ceramic tile surface defects detection," in 2007

IEEE International Symposium on Industrial Electronics, 4-7 June

2007, Piscataway, NJ, USA, 2007, pp. 1606-11.

[15] I. Novak and Z. Hocenski, "Texture feature extraction for a visual

inspection of ceramic tiles," in Proceedings of the IEEE

International Symposium on Industrial Electronics, 20-23 June

2005, Piscataway, NJ, USA, 2005, pp. 1279-83.

[16] G. S. Desoli, S. Fioravanti, R. Fioravanti, and D. Corso, "A system

for automated visual inspection of ceramic tiles," in Proceedings

of IECON '93 - 19th Annual Conference of IEEE Industrial

Electronics, 15-19 Nov. 1993, New York, NY, USA, 1993, pp.

1871-6.

[17] C. Boukouvalas, F. De Natale, G. De Toni, J. Kittler, R. Marik, M.

Mirmehdi, et al., "An integrated system for quality inspection of

tiles," in QCAV 97 1997 International Conference on Quality

Control by Artificial Vision, 28-30 May 1997, Toulouse, France,

1997, pp. 49-54.

[18] C. Boukouvalas, J. Kittler, R. Marik, and M. Petrou, "Automatic

grading of textured ceramic tiles," in Machine Vision Applications

in Industrial Inspection III, Febrary 8, 1995 - Febrary 9, 1995, San

Jose, CA, USA, 1995, pp. 248-256.

[19] C. Boukouvalas, J. Kittler, R. Marik, and M. Petrou, "Automatic

color grading of ceramic tiles using machine vision," IEEE

Transactions on Industrial Electronics, vol. 44, pp. 132-5, 1997.

[20] J. A. Penaranda Marques, L. Briones, and J. Florez, "Color

machine vision system for process control in the ceramics

industry," in New Image Processing Techniques and Applications:

Algorithms, Methods, and Components II, June 18, 1997 - June 19,

1997, Munich, Ger, 1997, pp. 182-192.

[21] C. Boukouvalas, F. De Natale, G. De Toni, J. Kittler, R. Marik, M.

Mirmehdi, et al., "ASSIST: Automatic system for surface

inspection and sorting of tiles," Journal of Materials Processing

Technology, vol. 82, pp. 179-188, 1998.

[22] R. Massen and T. Franz, "Automatic ceramic tile inspection: better

than humans?," Keramische Zeitschrift, vol. 50, pp. 4pp-4pp, 1998.


[23] C. E. Costa and M. Petrou, "Automatic registration of ceramic tiles

for the purpose of fault detection," Machine Vision and

Applications, vol. 11, pp. 225-30, 2000.

[24] M. L. Smith and R. J. Stamp, "Automated inspection of textured

ceramic tiles," Computers in Industry, vol. 43, pp. 73-82, 2000.

[25] M. L. Smith and L. N. Smith, "Dynamic photometric stereo--a new

technique for moving surface analysis," Image and Vision

Computing, vol. 23, pp. 841-852, 2005.

[26] S. Kukkonen, H. Kalviainen, and J. Parkkinen, "Color features for

quality control in ceramic tile industry," Optical Engineering, vol.

40, pp. 170-7, 2001.

[27] Z. F. Hocenski and E. K. Nyarko, "Surface quality control of

ceramic tiles using neural networks approach," in ISIE 2002.

Proceedings of the 2002 IEEE International Symposium on

Industrial Electronics, 8-11 July 2002, Piscataway, NJ, USA,

2002, pp. 657-60.

[28] KAGI. (2008). Morphological Image Processing for MATLAB.

Available: http://www.mmorph.com/

[29] H. M. Elbehiery, A. Hefnawy, and M. T. Elewa, "Visual Inspection

for Fired Ceramic Tile’s Surface Defects using Wavelet Analysis,"

Artificial Intelligence and Machine Learning, vol. 3, pp. 28-36,

2005.

[30] I. Novak, Z. Hocenski, and D. Sliskovic, "Using pixel pair

difference for visual inspection of ceramic tiles," Tehnicki Vjesnik,

vol. 12, pp. 3-9, 2005.

[31] S. Vasilic and Z. Hocenski, "The edge detecting methods in

ceramic tiles defects detection," in International Symposium on

Industrial Electronics 2006, ISIE 2006, July 9, 2006 - July 13,

2006, Montreal, QC, Canada, 2006, pp. 469-472.

[32] O. R. Vincent and O. Folorunso, "A Descriptive Algorithm for

Sobel Image Edge Detection," presented at the Informing Science

& IT Education Conference (InSITE), Georgia, USA, 2009.

[33] S.-W. Jeng and W.-H. Tsai, "Analytic image unwarping by a

systematic calibration method for omni-directional cameras with

hyperbolic-shaped mirrors," Image and Vision Computing, vol. 26,

pp. 690-701, 2008.

[34] Opto-Engineering. (2010, January 2010). Telecentric Lenses:

Basic Information and Working Principles. Available:

http://www.opto-engineering.com/telecentric-lenses-tutorial.html

[35] S. H. Kwon, "Threshold selection based on cluster analysis,"

Pattern Recogn. Lett., vol. 25, pp. 1045-1050, 2004.


[36] N. Instruments. (2006, 11/06). Image Acquisition. Available:

http://zone.ni.com/devzone/cda/tut/p/id/2808

[37] M. Lee and W. Pedrycz, "The fuzzy C-means algorithm with fuzzy

P-mode prototypes for clustering objects having mixed features,"

Fuzzy Sets and Systems, vol. 160, pp. 3590-3600, 2009.

[38] Y. P. Pandit, Y. P. Badhe, B. K. Sharma, S. S. Tambe, and B. D.

Kulkarni, "Classification of Indian power coals using K-means

clustering and Self Organizing Map neural network," Fuel, vol. 90,

pp. 339-347, 2011.

[39] U. Orhan, M. Hekim, and M. Ozer, "EEG signals classification

using the K-means clustering and a multilayer perceptron neural

network model," Expert Systems with Applications, vol. 38, pp.

13475-13481, 2011.

[40] M. J. Watts and S. P. Worner, "Estimating the risk of insect species

invasion: Kohonen self-organising maps versus k-means

clustering," Ecological Modelling, vol. 220, pp. 821-829, 2009.

[41] C. T. Yiakopoulos, K. C. Gryllias, and I. A. Antoniadis, "Rolling

element bearing fault detection in industrial environments based on

a K-means clustering approach," Expert Systems with Applications,

vol. 38, pp. 2888-2911, 2011.

[42] L. Peihua, "A clustering-based color model and integral images for

fast object tracking," Signal Processing: Image Communication,

vol. 21, pp. 676-687, 2006.

[43] G. Schaefer and H. Zhou, "Fuzzy clustering for colour reduction in

images," Telecommunication Systems, vol. 40, pp. 17-25, 2009.

[44] D.-W. Kim, K. H. Lee, and D. Lee, "A novel initialization scheme

for the fuzzy c-means algorithm for color clustering," Pattern

Recognition Letters, vol. 25, pp. 227-237, 2004.

[45] N. H. Reyes, A. L. Barczak, and C. H. Messom, "Fast Colour Classification for Real-time Colour Object Identification: Adaboost Training of Classifiers," Institute of Information and Mathematical Sciences, Massey University, Auckland, New Zealand, 2006.

[46] M. A. Jaffar, B. Ahmed, N. Naveed, A. Hussain, and A. M. Mizra,

"Color video segmentation using fuzzy C-mean clustering with

spatial information," WSEAS Transactions on Signal Processing,

vol. 5, pp. 198-207, 2009.

[47] C. M. Emre, "Improving the performance of k-means for color

quantization," Image and Vision Computing, vol. 29, pp. 260-271,

2011.

[48] J. C. Bezdek, "Corrections for 'FCM: The fuzzy c-means clustering algorithm' (Bezdek, J. C., Ehrlich, R., and Full, W. E., 1984, Computers & Geosciences, vol. 10, no. 2–3, pp. 191–203)," Computers & Geosciences, vol. 11, p. 660, 1985.

[49] Z. Feng, B. Yang, Y. Chen, Y. Zheng, T. Xu, Y. Li, et al., "Features

extraction from hand images based on new detection operators,"

Pattern Recognition, vol. 44, pp. 1089-1105, 2011.

[50] H. I. Bozma and H. Yalçın, "Visual processing and classification

of items on a moving conveyor: a selective perception approach,"

Robotics and Computer-Integrated Manufacturing, vol. 18, pp.

125-133, 2002.

[51] V. Baglodi, "Edge detection comparison study and discussion of a

new methodology," in IEEE SoutheastCon 2009, 5-8 March 2009,

Piscataway, NJ, USA, 2009, p. 1 pp.

[52] M. Nikravesh, "Evolution of fuzzy logic: from intelligent systems

and computation to human mind," Soft Computing, vol. 12, pp.

207-14, 2008.

[53] L. A. Zadeh, "Fuzzy logic and the calculus of fuzzy if-then rules,"

in Proceedings of the 22nd International Symposium on Multiple-

Valued Logic, May 27, 1992 - May 29, 1992, Sendai, Jpn, 1992, p.

480.

[54] P. Melin, O. Mendoza, and O. Castillo, "An improved method for

edge detection based on interval type-2 fuzzy logic," Expert

Systems with Applications, vol. 37, pp. 8527-8535, 2010.

[55] L. S. Dooley, G. C. Karmakar, and M. Murshed, "A fuzzy rule-

based colour image segmentation algorithm," in Proceedings of

International Conference on Image Processing, 14-17 Sept. 2003,

Piscataway, NJ, USA, 2003, pp. 977-80.

[56] G. J. Klir, "Foundations of fuzzy set theory and fuzzy logic: a

historical overview," International Journal of General Systems,

vol. 30, pp. 91-132, 2001.

[57] "Fuzzy Inference System," 2011 a ed: Mathworks Corporation,

2011.

[58] K. Balasubramanian and I. Obeid, "Fuzzy logic-based spike sorting

system," Journal of Neuroscience Methods, vol. 198, pp. 125-134,

2011.

[59] M. Štěpnička, U. Bodenhofer, M. Daňková, and V. Novák,

"Continuity issues of the implicational interpretation of fuzzy

rules," Fuzzy Sets and Systems, vol. 161, pp. 1959-1972, 2010.

[60] Z. Muzaffar and M. A. Ahmed, "Software development effort

prediction: A study on the factors impacting the accuracy of fuzzy

logic systems," Information and Software Technology, vol. 52, pp.

92-109, 2010.


[61] P. Cintula and P. Hájek, "Triangular norm based predicate fuzzy

logics," Fuzzy Sets and Systems, vol. 161, pp. 311-346, 2010.

[62] "Understanding fuzzy logic: An interview with Lotfi Zadeh {dsp

History}," IEEE Signal Processing Magazine, vol. 24, pp. 102-

105, 2007.

[63] A. V. S. Balasubramanian, N. R. Shankar, S. Subbaraman, and R.

Rengaraj, "A Novel Fuzzy Logic Based Controller To Adjust the

Brightness of the Television Screen with Respect to Surrounding

Light," World Academy of Science, vol. 39, p. 4, 2008.

[64] O. Ghita, T. Carew, and P. Whelan, "A vision-based system for

inspecting painted slates," Emerald Sensor Review, vol. 26, pp.

108-115, 2006.

[65] F. S. Abas and K. Martinez, "Classification of painting cracks for

content-based analysis," Ph.D., Intelligence, Agents, Multimedia

Group, Department of Electronics and Computer Science,

University of Southampton, Southampton, UK, 2004.


Published Papers

1. A. M. Aly and W. A. El-Badry, "Design and Implementation of

Flexible Manufacturing Cell for Quality Inspection of Garnished Ceramic

Wall Plates," in 19th Conference of French Congress of Mechanics

Marseille, France, 2009.

2. F. A. Tolbah, A. M. Aly, and W. A. El-Badry, "Automated grading

system for garnished wall plates: A mechatronic approach," presented at

the 8th International Conference on Production Engineering and Design

for Development, Cairo, Egypt, 2010.

3. F. A. Tolba, A. M. Aly, and W. A. El-Badry, "An Enhanced Vision

System for Sorting Ceramic Plates Based on Hybrid Algorithm and USB

Interfacing Circuitry," presented at the 27th National Radio Science

Conference, Menoufia, Egypt, 2010.


Thesis Summary

The research effort expended to reach an objective solution to the problem of inspecting, analysing and characterising garnished ceramic wall tiles is easily justified by the commercial and operational-safety gains that the industry obtains when obsolete manual inspection approaches are converted into automated methods. Automation yields the highest possible homogeneity among the sorted grades of the produced tiles held in stock, which ultimately leads to the stability of the overall processing and inspection operations.

Among the general problems of manual inspection are fatigue and weak human concentration, which lead to errors in classifying the garnished ceramic tiles in the factory. Gradual changes in colour are known to be difficult for a human inspector to detect, and slight, progressive and continuous changes may not be noticeable at early stages. In addition, it is particularly difficult for the human eye to sort tiles accurately according to shades of the same colour when the illumination conditions change during work at the sorting location (the factory or its facilities). Human judgment is, as usual, affected by the level of expectation and prior knowledge.

Since the early 1990s, research has been published whose focus was to find new methods for detecting fine cracks and colour spots in ceramic tiles. However, this research concentrated on reaching a suitable inspection algorithm and neglected the other aspects related to the machine vision system (such as the variation of illumination conditions, the reflectivity of the inspected tile surface, the image acquisition system, the mechanical tile-handling system and the electrical interfacing circuit). At the same time, no system that measures the drawn geometric shapes and colour quality had existed, nor had the subject even been discussed, before undertaking this research work, which we consider indispensable for the inspection of garnished ceramic wall tiles.

The present thesis proposes a new approach to solving the problem of inspecting garnished ceramic wall tiles with respect to geometry and the resulting colours, based on a developed hybrid system that relies on both fuzzy logic and fuzzy c-mean clustering to detect fine cracks and colour spots. The proposed system also provides assessment scores for the geometric shapes, the colour shade differences and the final colour features.

The parameters of the image acquisition system under discussion were examined in detail, varying those parameters that affect the quality of the acquired images, such as gamma correction and the considered frame rate. The detailed steps of the image acquisition system were also discussed.

The proposed image calibration algorithm was presented in order to reduce the distortion caused by the lens used. The proposed algorithm proved its validity and its ability to correct the lens distortion and to perform perspective correction.

Fuzzy c-mean clustering was chosen to perform the colour-based grouping, where this clustering process is visualised to give a visual interpretation of the isolation of the drawn objects instead of using pseudo clustering nodes.

Geometric feature extraction is carried out by edge detection using a Prewitt filter, together with ray casting and line fitting to evaluate the edge lengths. Moreover, the scores corresponding to the geometric shapes and the colour scores are calculated based on the recorded measurements.

A classifier based on fuzzy logic was built to distinguish and sort the garnished ceramic wall tiles; it has two inputs (the geometric shape and the colour score) and three outputs (acceptable, colour spotted, or colour mismatched).

A test rig was also designed, and the proposed assessment system was built and manufactured. Testing and evaluating the system proved that the results of its use, in terms of detection accuracy and colour discrimination of the garnished ceramic wall tiles, are promising. The assessment procedure was also studied on ceramic tiles whose axes were not aligned (tilted away from the original axes of the system under development and testing) in order to demonstrate the efficiency of this system and of the proposed calibration approach under non-ideal conditions. Meanwhile, the results of this system were compared against the proposed benchmark so that a comprehensive evaluation of the performance of the proposed system could be made.