
  • Computer Vision

    Through Experiment using Python and OpenCV

    e-Yantra

  • Copyright 2015 e-Yantra

    SOMETHING SOMETHING

    e-yantra.org

    Example licence statement - Licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported License (the License). You may not use this file except in compliance with the License. You may obtain a copy of the License at http://creativecommons.org/licenses/by-nc/3.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an AS IS BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

    First edition - 2015

  • Contents

    I Part One : Basics

    1 Introduction . . . . . . . . . . . . . . . . . . . . 7

    2 Vision . . . . . . . . . . . . . . . . . . . . . . . 9
    2.1 Sense - Analyze - Control  9
    2.2 Eyes vs Webcam  10
    2.3 Models of Vision  10
    2.4 Gestalt Principles  11
    2.4.1 Proximity  11
    2.4.2 Similarity  11
    2.4.3 Closure  11
    2.4.4 Common Motion  11
    2.4.5 Symmetry  12
    2.4.6 Continuity  12

    3 Software . . . . . . . . . . . . . . . . . . . . . . 15
    3.1 Introduction  15
    3.2 Installation  15
    3.3 Python  16
    3.4 Numpy  16
    3.5 OpenCV  16
    3.6 Debugging  17

    4 Images . . . . . . . . . . . . . . . . . . . . . . . 19
    4.1 Representation  19
    4.2 Properties  19

    II Part Two : Hands On

    5 Fundamental Programs . . . . . . . . . . . . . . . . 23
    5.1 Structure of a Program  23
    5.2 Image from file  23
    5.3 Image from Camera  25
    5.4 Video from Camera  26
    5.5 Experiments  26
    5.6 Debugging  27

    6 Basic Image Processing . . . . . . . . . . . . . . . 29
    6.1 Colourspaces  29
    6.2 Thresholding  30
    6.3 Zooming, Rotating and Panning  31
    6.4 Experiments  31
    6.5 Debugging  31

    7 User Interfaces . . . . . . . . . . . . . . . . . . . 33
    7.1 Shapes  33
    7.2 Buttons  34
    7.3 Experiments  34
    7.4 Debugging  34

    8 Drawing on air . . . . . . . . . . . . . . . . . . . 35
    8.1 System  35
    8.2 Light Pen  35
    8.3 Steps  36
    8.3.1 Step 1  36
    8.3.2 Step 2  37
    8.3.3 Step 3  38
    8.3.4 Step 4  39
    8.3.5 Step 5  40
    8.3.6 Step 6  41
    8.3.7 Step 7  43
    8.3.8 Step 8  45
    8.4 Experiments  46
    8.5 Debugging  46

    Bibliography . . . . . . . . . . . . . . . . . . . . . 47
    Books  47
    Articles  47

    Index . . . . . . . . . . . . . . . . . . . . . . . . . 49

  • I

    1 Introduction  7

    2 Vision  9
    2.1 Sense - Analyze - Control
    2.2 Eyes vs Webcam
    2.3 Models of Vision
    2.4 Gestalt Principles

    3 Software  15
    3.1 Introduction
    3.2 Installation
    3.3 Python
    3.4 Numpy
    3.5 OpenCV
    3.6 Debugging

    4 Images  19
    4.1 Representation
    4.2 Properties

    Part One : Basics

  • 1. Introduction

    Computer Vision is, simply put, the pursuit of teaching a machine to see as humans do. The task seems trivial at the outset, given that we have made so many advances in computing. The problem, however, lies in the fact that while we humans certainly use our vision to understand the world around us, the exact way in which we make inferences from the images we see is still not very clear. For example, let me ask you to take the trouble to pick up a pencil and draw a cube.

    Figure 1.1: A Symbolic Cube

    If the above figure is what you drew, then I shall ask the following questions: Are the opposite edges parallel? Are all the edges of the same length? From which angle would you have to view a cube to see exactly what you have drawn? You may be surprised to learn that when we try to remember what something looks like, we usually associate it with a symbol, which has certain properties that relate to the object, rather than remembering an image of the object itself. Here, we can see a glimpse of the problem I spoke about


    earlier. Since we do not completely understand how we ourselves see, we can.....

    Figure 1.2: True Cube

    Towards this end, we will try to define a certain task to be completed, or an inference to be made, and accordingly process the information that we have, for example, an image.

  • 2. Vision

    2.1 Sense - Analyze - Control

    A system is a model of almost anything we can imagine. A system may be a device, a robot, a person, a unicorn, a university or even the universe. Once we have understood what system we are describing, we can break it down into three main parts:

    Sense Analyze Control

    Figure 2.1: Sense-Analyze-Control

    Let us take the example of a person as the system. The Sense part then corresponds to our five senses. The Analyze part corresponds to how our brain interprets the information obtained by our senses. The Control part is our muscles, which help the system respond to the stimulus that we sensed in the first place. Suppose we instead take the example of a program as a system; then we can define the Sense part as any input that is provided, for example, an image from a camera. The Analyze part is the algorithm that we run on the input that is provided, and finally, the Control part is the output that we give to the user.
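    To make the mapping concrete, here is a minimal sketch (not one of the book's listings) of a tiny program organised along the Sense-Analyze-Control model; the function names sense, analyze and control are hypothetical placeholders chosen for this illustration.

# Minimal Sense-Analyze-Control skeleton (hypothetical example)

def sense():
    # Sense: obtain an input, here simply a number typed by the user
    return float(raw_input('Enter a temperature in Celsius: '))

def analyze(celsius):
    # Analyze: run an algorithm on the input, here a unit conversion
    return celsius * 9.0 / 5.0 + 32.0

def control(fahrenheit):
    # Control: present the result to the user
    print 'That is %.1f Fahrenheit' % fahrenheit

control(analyze(sense()))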


    2.2 Eyes vs Webcam

    Since our goal is to emulate human vision using machines, we must understand the differences between a machine and a human. Therefore, let us take our two systems to be a human and a machine, and compare the three parts of each.

    From the previous description, it is clear that the sensor in the case of a human is the human eye, and in the case of a machine it is the camera or, as it is colloquially called, the webcam.

    The two sensors differ in several respects: resolution, the kind of receptors, the way they focus, and binocular/stereo vision.

    The difference in algorithms is harder to state, as we do not completely know how humans see, but computer vision is progressing along many paths, a few broad fields of which are listed here. The classification is not mutually exclusive, and many of these fields overlap:

    Mapping, Localization, Depth maps; Object detection; Object identification; Image/scene retrieval; Augmented Reality; Image segmentation; Scene Understanding; Action segmentation and recognition; Feature detectors.

    We can define the output loosely as the kinds of inferences that can be drawn from an image or video: for example, given a classroom, to count the number of students attending, to construct a 3D map of the seen world, or to identify an object in the scene. In such a comparison, if you will excuse blatant arrogance, we may say that the outputs of machine vision are often good for specific applications, but no general-purpose system exists that makes all the above inferences in a manner comparable to human vision. However, it is not only a guess but also a hope that the future might tip the scale.

    2.3 Models of Vision

    In order to understand how we have arrived at present-day algorithms for computer vision, we must first know how human vision works. The model that we have today is vastly different from those we constructed earlier, and a reading of these will help us understand both the strengths and weaknesses that these models offer.

    Emission theory - The eye emits stuff that interacts with the outer world, perhaps modeled on the sense of touch.

    Intromission theory - Stuff representative of objects enters the eye, perhaps modeled on the sense of smell.

    Unconscious inference (Helmholtz) - Vision is learned from past experience.

    Gestalt theory - The visual system automatically groups elements into patterns:

    Proximity, Similarity, Closure, Symmetry, Common Motion, Continuity.


    Computational models - Based upon the study of brain functions; uses methods such as machine learning, neural networks, etc.

    2.4 Gestalt Principles

    2.4.1 Proximity

    This image indicates the Law of Proximity. The birds flock closely together, causing viewers to perceive them as a group.

    Figure 2.2: Proximity

    2.4.2 Similarity

    This image indicates the Law of Similarity. Even though the blue shapes are arranged uniformly, the triangle made up of blue circles stands out from the rest of the figure.

    Figure 2.3: Similarity

    2.4.3 Closure

    This image indicates the Law of Closure. In this very famous logo of WWF, we can see the panda clearly, even though there are no outlines to specify the head or the body.

    2.4.4 Common Motion

    This image indicates the Law of Common Motion.


    Figure 2.4: Closure

    Figure 2.5: Common Motion

    2.4.5 Symmetry

    This image indicates the Law of Symmetry. Earlier, we discussed that objects that are close by are grouped together, but here we see three sets of parentheses, even though the different types of brackets are closer to each other than the matching sets.

    Figure 2.6: Symmetry

    2.4.6 Continuity

    This image indicates the Law of Continuity. Even though, by the law of similarity, we should see two bent curves touching, we instead see two smooth curves intersecting.

    Therefore, we can see that these Gestalt laws can be used as a foundation upon which we can build ways to see and understand objects in an image. However, note that they do not always point to the same inference; instead, these laws sometimes compete to form different interpretations of the same image.


    Figure 2.7: Continuity

  • 3. Software

    3.1 Introduction

    In the previous chapter, we saw how the webcam is the Sense part of the system, and that it retrieves an image for us to analyze. This does not define what exactly an image is. In order to define an image, we must first learn how the image is represented. There are many ways to represent an image; for example, the representation of an image taken by an MRI may be different from the one taken by a webcam. Since we will be using the webcam, we will use a representation that is intuitive, widely used and pliable: a matrix of numbers. In order to elaborate on this representation, let us first inspect how the webcam senses/captures the image. A webcam has an array of cells which are arranged in sets of 3 (refer to fig. xx) [1]. This choice of three is modeled on the human eye, which can loosely be said to be sensitive to 3 colours: red, green and blue. Therefore, each cell on the array of the webcam has a receptor for blue light, green light and red light. These are then given a value from 0 to 255 based on the amount of that particular light falling on the cell [2]. Therefore, the matrix of numbers that we spoke about is an n*m*3 dimensioned array, with n being the height, m being the width, and 3 for an 8-bit value corresponding to blue, green and red (BGR). Please refer to fig. xx in order to get a better picture. We will be using the library called OpenCV and the language Python in order to obtain and process the above matrix in the rest of the tutorial.
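    As a quick illustration of this representation (a minimal sketch, not one of the book's listings), we can build a tiny image directly as a Numpy array and read back the blue, green and red values of a single pixel:

import numpy as np

# A 2 (height) x 3 (width) x 3 (channels) image, 8-bit values in BGR order
img = np.zeros((2, 3, 3), np.uint8)

# Make the pixel in row 0, column 1 pure red: B = 0, G = 0, R = 255
img[0, 1] = [0, 0, 255]

print img.shape   # (2, 3, 3) -> height, width, channels
print img[0, 1]   # [  0   0 255] -> the BGR values of that pixel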

    3.2 Installation

    In this part, we will install the following three pieces of software: Python, Numpy and OpenCV.

    [1] This is a simplification. In commercial CCDs, each group consists of four pixels: one red, one blue and two green (the human eye is more sensitive to green than either red or blue).

    [2] Assuming the colour is represented in 8 bits.


    3.3 Python

    Please follow the steps given below:
    Download Python from the following link: https://www.python.org/ftp/python/2.7.6/python-2.7.6.msi
    Double-click the downloaded file to commence installation.
    In order to configure the environment variables:
    Right-click on My Computer
    Click on Properties
    Click on Advanced System Settings to open the System Properties dialog box
    Under System Properties, select the Advanced tab
    Click on Environment Variables
    Under System Variables, search for the variable Path
    Add C:/Python27;C:/Python27/Scripts; at the start of the textbox
    Click on OK
    In order to verify your installation:
    Open Command Prompt, type python and press Enter
    You should see the following prompt (fig 3.1):

    Figure 3.1: Python shell

    Python as a scripting language... print, assignments, if/else, for loops, modules (see Wikipedia), lists.
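    The notes above are placeholders in this draft; as a rough sketch of the constructs they refer to (print, assignment, if/else, for loops, lists and modules), consider the following short example:

import math                    # modules: bring in extra functionality

radii = [1, 2.5, 4]            # lists: an ordered collection of values

for r in radii:                # for loop: repeat once per element
    area = math.pi * r * r     # assignment: store a computed value
    if area > 10:              # if/else: choose between two branches
        print 'radius', r, 'gives a large area of', area
    else:
        print 'radius', r, 'gives a small area of', area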

    Python vs C++

    3.4 Numpy

    Download Numpy from the following link: http://sourceforge.net/projects/numpy/files/NumPy/1.7.1/numpy-1.7.1-win32-superpack-python2.7.exe/download
    Double-click the downloaded file to commence installation.
    In order to verify your installation:
    Open Command Prompt, type python and press Enter
    At the Python prompt, type import numpy and press Enter
    You should see the following prompt (fig 3.2):

    3.5 OpenCV

    Download OpenCV from the following link: http://sourceforge.net/projects/opencvlibrary/files/opencv-win/2.4.9/opencv-2.4.9.exe/download
    Double-click the downloaded file to commence installation.
    Navigate to the folder opencv/build/python/2.7
    Copy the file cv2.pyd to C:/Python27/lib/site-packages
    In order to verify your installation:


    Figure 3.2: Check Numpy

    Open Command Prompt, type python and press Enter
    At the Python prompt, type import cv2 and press Enter
    You should see the following prompt (fig 3.3):

    Figure 3.3: Check OpenCV

    Suppose that we want to build a complex application using Computer Vision. This would imply that we may use some algorithms that are commonly used to operate upon images, points, etc. However, if each person who was to write a program had to start from the basics, then we would not make much progress. Therefore, we turn to libraries. Libraries are collections of algorithms that can be shared, reused and improved. They make developing an application very simple. OpenCV is a library that implements many common algorithms used in Computer Vision, e.g. zooming, shifting, thresholding, etc.

    3.6 Debugging

  • 4. Images

    4.1 Representation

    From the earlier section, we learned that one way to represent images is using an n*m*3 dimensioned matrix to represent the colours blue, green and red. However, there are other representations as well, which are defined in OpenCV. A partial list is as follows and will be elaborated upon in chapter xx:

    BGR, RGB, Grayscale, Binary, HSV, YUV

    4.2 Properties

    The properties of such representations of images are as follows:

    Width - This is the number of columns in the image matrix.

    Height - This is the number of rows in the image matrix.

    Channels - This determines the kind of information stored in each pixel of the image matrix. For example, a BGR image has 3 channels (blue, green and red), whereas a grayscale image has only one channel, the grayness value.

    Depth - This is the type of information of each pixel of the image matrix. For example, it could be 8 bits, and therefore have a value from 0 to 255, or it could be 16 bits, which would cover the numbers from 0 to (2^16 - 1).
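    As a minimal sketch (assuming an image file such as the 'gandalf.jpeg' used in the listings later), these properties can be read back from a loaded image:

import cv2

img = cv2.imread('gandalf.jpeg')        # BGR image (assumes the file exists)
gray = cv2.imread('gandalf.jpeg', 0)    # same file read as grayscale

h, w, channels = img.shape
print 'height =', h, 'width =', w, 'channels =', channels
print 'grayscale shape =', gray.shape   # only (height, width), one channel
print 'depth =', img.dtype              # uint8 -> 8 bits per value, 0 to 255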

  • II

    5 Fundamental Programs  23
    5.1 Structure of a Program
    5.2 Image from file
    5.3 Image from Camera
    5.4 Video from Camera
    5.5 Experiments
    5.6 Debugging

    6 Basic Image Processing  29
    6.1 Colourspaces
    6.2 Thresholding
    6.3 Zooming, Rotating and Panning
    6.4 Experiments
    6.5 Debugging

    7 User Interfaces  33
    7.1 Shapes
    7.2 Buttons
    7.3 Experiments
    7.4 Debugging

    8 Drawing on air  35
    8.1 System
    8.2 Light Pen
    8.3 Steps
    8.4 Experiments
    8.5 Debugging

    Bibliography  47
    Books
    Articles

    Index  49

    Part Two : Hands On

  • 5. Fundamental Programs

    5.1 Structure of a Program

    The structure of a program can be derived from the Sense-Analyze-Control model of the program. Every program that we henceforth write will largely have the same structure, as represented in fig. xx below.

    Figure 5.1: Structure of a Program

    Therefore, we will now write 3 types of programs, as detailed below, that can be modified later to suit any purpose.

    5.2 Image from file

    The first type of program is one that does not use a camera, but instead takes the input from a file stored on the computer.


#############################################
# Import OpenCV
import numpy
import cv2
#############################################

#############################################
# Read the image
img = cv2.imread('gandalf.jpeg')
#############################################

#############################################
# Do the processing

# Nothing

#############################################

#############################################
# Show the image
cv2.imshow('image', img)
#############################################

#############################################
# Close and exit
cv2.waitKey(0)
cv2.destroyAllWindows()
#############################################

    - - - - - - - - - - - - - - - explanation of statements - - - - - - - - - - - - - - - - - - - - -

    F import numpy - This command asks Python to use the Numpy library to manipulate matrices. Since the images we use are stored as matrices, we can then use Numpy functions as numpy.<function>. Alternately, we can use the command import numpy as np to shorten the name of the module, so that we can call the Numpy functions as np.<function>.

    F import cv2 - This command asks Python to use the OpenCV library (also called a module). Once we have imported cv2, Python understands that the functions we use are defined in the OpenCV library, and we can proceed to use functions from this library as cv2.<function>.

    F cv2.imread(<filename>, [flag]) - This command loads an image as a matrix. If we do not use any flags, then the image is read and returned as an HxWxC matrix, where H is the height, W the width and C the number of channels. For example, if our image is a 640x480 image (as is the case when we read from the camera), then the dimensions of the matrix are 480x640x3, where 480 is the height, 640 is the width, and 3 refers to the number of channels, i.e. Blue, Green, Red. If we specify 0 as our flag, then the image is read as a grayscale image, whose matrix has dimensions HxW, as there is only one channel now (a short sketch after this list shows the effect of the flag on the matrix dimensions).

    F cv2.imshow(<window_name>, <image_matrix>) - This command creates a window with the name <window_name> and displays the image represented by <image_matrix>. Note that the function will always interpret an n*m*3 matrix as if it were a BGR image, irrespective of what representation we are actually storing in <image_matrix>.


    F cv2.waitKey(t) - This command waits for a period of t milliseconds for the user to press a key, and if the user does press a key, it returns the ASCII code of the key pressed. In case we pass 0 as our parameter for t, it waits indefinitely until the user presses a key.

    F cv2.destroyAllWindows() - This command asks Python to close all the open windows. This is how we exit the program.

    You can find the source code of this program at pathtofileImage.py
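    As a small aside (a sketch, assuming 'gandalf.jpeg' is present in the working directory), the effect of the grayscale flag on the matrix dimensions can be checked directly:

import cv2

colour = cv2.imread('gandalf.jpeg')    # default: H x W x 3, BGR
gray = cv2.imread('gandalf.jpeg', 0)   # flag 0: H x W, single channel

print colour.shape
print gray.shape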

    5.3 Image from Camera

    The second type of program is one that processes a single image taken by a webcam connected to the computer.

#############################################
# Import OpenCV
import numpy
import cv2
# Initialize camera
cap = cv2.VideoCapture(0)
#############################################

#############################################
# Read the image
ret, frame = cap.read()
#############################################

#############################################
# Do the processing

# Nothing

#############################################

#############################################
# Show the image
cv2.imshow('image', frame)
cv2.waitKey(0)
#############################################

#############################################
# Close and exit
# close camera
cap.release()
cv2.destroyAllWindows()
#############################################

    - - - - - - - - - - - - - - - explanation of statements - - - - - - - - - - - - - - - - - - - - -

    F cv2.VideoCapture(i) -> cap - This command tells Python to set up an instance of a VideoCapture class and assigns it to the variable cap. In other words, we need to tell Python where we are getting our images from, in this case the number assigned to the camera we need to use. For example, cap = cv2.VideoCapture(0) will make Python use the first camera it finds (usually the laptop camera) whenever we read images using cap.read().

    F cap.read() -> ret, frame - This is the command we use to retrieve images from the source we have named as the capture. For example, if we have used the command cap = cv2.VideoCapture(0), then our capture is named cap, and cap.read() will return frame, a matrix that stores an image taken from the camera, and ret, which tells us whether the capture was successful.

    F cap.release() - This command releases the camera that we initialized when we used the cv2.VideoCapture command. In the absence of this command, we will get errors when we run the program again and try to initialize the same camera without first releasing it.

    You can find the source code of this program at pathtocamImage.py

    5.4 Video from Camera

    The third type of program is one that processes a video stream from a webcam connected to the computer.

#############################################
# Import OpenCV
import numpy
import cv2
# Initialize camera
cap = cv2.VideoCapture(1)
#############################################

#############################################
# Video Loop
while(1):

    ## Read the image
    ret, frame = cap.read()

    ## Do the processing
    # Nothing

    ## Show the image
    cv2.imshow('image', frame)

    ## End the video loop
    if cv2.waitKey(1) == 27:  ## 27 is ASCII for the escape key
        break
#############################################

#############################################
# Close and exit
# close camera
cap.release()
cv2.destroyAllWindows()
#############################################

    - - - - - - - - - - - - - - - explanation of statements - - - - - - - - - - - - - - - - - - - - -

    F while(1) - This is an infinite loop, equivalent to saying while(True).

    F break - This statement breaks out of the loop in which it occurs.

    You can find the source code of this program at pathtocamVideo.py

    5.5 Experiments

    Show multiple files, types of files and windows; try different waitKey times; catch the key index; use different cameras.


    5.6 Debugging

  • 6. Basic Image Processing

    6.1 Colourspaces

#############################################
# Import OpenCV
import numpy
import cv2
#############################################

#############################################
# Read the image
img = cv2.imread('gandalf.jpeg')
#############################################

#############################################
# Do the processing
print img.shape
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print gray.shape
#############################################

#############################################
# Show the image
cv2.imshow('image', gray)
#############################################

#############################################
# Close and exit
cv2.waitKey(0)
cv2.destroyAllWindows()
#############################################

    F cv2.cvtColor(image, code) - This command converts an image from one colourspace to another; the code argument names the conversion, e.g. cv2.COLOR_BGR2GRAY or cv2.COLOR_BGR2HSV.


    6.2 Thresholding

    Thresholding is a technique used to create a binary image from an existing image by assigning either of two values based on the comparison between the intended threshold value and the pixel value. For example, suppose the image we are thresholding is a grayscale image. Then each of the pixels has a value ranging from 0 to 255. Let us assume that the intended threshold value is 150. Then those pixels in the image that have a value equal to or below 150 are set to 0, and those above are set to 255. Thus the resulting image has pixels that are either black (0) or white (255).
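    Before the full program below, here is a minimal sketch of the idea on a tiny artificial 'image' (the pixel values are made up for the example):

import numpy as np
import cv2

# A 2x3 'grayscale image' with made-up pixel values
tiny = np.array([[10, 150, 200],
                 [151, 0, 255]], np.uint8)

ret, binary = cv2.threshold(tiny, 150, 255, cv2.THRESH_BINARY)
print binary
# Pixels with value <= 150 become 0, pixels above 150 become 255:
# [[  0   0 255]
#  [255   0 255]]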

#############################################
# Import OpenCV
import numpy
import cv2
#############################################

#############################################
# Read the image
img = cv2.imread('lion.jpg')
#############################################

#############################################
# Do the processing
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # We need a grayscale image to do the thresholding

ret, thresh1 = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
ret, thresh2 = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
ret, thresh3 = cv2.threshold(gray, 127, 255, cv2.THRESH_TRUNC)
ret, thresh4 = cv2.threshold(gray, 127, 255, cv2.THRESH_TOZERO)
ret, thresh5 = cv2.threshold(gray, 127, 255, cv2.THRESH_TOZERO_INV)
#############################################

#############################################
# Show the images
cv2.imshow('image thresh1', thresh1)
cv2.imshow('image thresh2', thresh2)
cv2.imshow('image thresh3', thresh3)
cv2.imshow('image thresh4', thresh4)
cv2.imshow('image thresh5', thresh5)
cv2.imshow('original', img)
#############################################

#############################################
# Close and exit
cv2.waitKey(0)
cv2.destroyAllWindows()
#############################################

    F cv2.threshold(src, thresh, maxval, type) - Takes src as input and returns the threshold value used together with the thresholded image. The types of thresholding include THRESH_BINARY, THRESH_BINARY_INV, THRESH_TRUNC, THRESH_TOZERO and THRESH_TOZERO_INV.

    # inRange program

    F cv2.inRange - This command is used to apply thresholding when there is more than one channel, e.g. for HSV or BGR images, by giving a lower and an upper bound for every channel.
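    The '# inRange program' above is left as a placeholder in this draft; as a minimal sketch of how cv2.inRange is typically used on an HSV image (the bounds chosen here are arbitrary examples, not values from the book), it could look like this:

import numpy as np
import cv2

img = cv2.imread('lion.jpg')                 # any BGR image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep only pixels whose H, S and V all fall inside these example bounds
lower = np.array([100, 50, 50])
upper = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)        # binary image: 255 inside the range, 0 outside

cv2.imshow('mask', mask)
cv2.waitKey(0)
cv2.destroyAllWindows()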


    6.3 Zooming, Rotating and Panning

#############################################
# Import OpenCV
import numpy
import cv2
#############################################

#############################################
# Read the image
img = cv2.imread('gandalf.jpeg')
#############################################

#############################################
# Do the processing
print img.shape
h, w, c = img.shape
res = cv2.resize(img, (w * 2, h * 2), interpolation=cv2.INTER_CUBIC)
# Note the order of height and width;
# it is different from the one we used for img.shape
# cv2.INTER_AREA -> shrinking
# cv2.INTER_CUBIC, cv2.INTER_LINEAR -> zooming
print res.shape
#############################################

#############################################
# Show the image
cv2.imshow('image', res)
#############################################

#############################################
# Close and exit
cv2.waitKey(0)
cv2.destroyAllWindows()
#############################################

    F cv2.resize - This is a command used to change the size of an image.
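    The listing above only covers zooming; as a hedged sketch of the rotating and panning mentioned in the section title (the angle and offsets are arbitrary examples), the usual approach in OpenCV is an affine warp:

import numpy as np
import cv2

img = cv2.imread('gandalf.jpeg')
h, w, c = img.shape

# Rotate by 45 degrees about the image centre, keeping the same scale
M_rot = cv2.getRotationMatrix2D((w / 2, h / 2), 45, 1.0)
rotated = cv2.warpAffine(img, M_rot, (w, h))

# Pan (translate) 100 pixels to the right and 50 pixels down
M_pan = np.float32([[1, 0, 100],
                    [0, 1, 50]])
panned = cv2.warpAffine(img, M_pan, (w, h))

cv2.imshow('rotated', rotated)
cv2.imshow('panned', panned)
cv2.waitKey(0)
cv2.destroyAllWindows()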

    6.4 Experiments

    6.5 Debugging



  • 7. User Interfaces

    7.1 Shapes

#############################################
# Import OpenCV
import numpy as np
import cv2
#############################################

#############################################
# Create the image
img = np.zeros((500, 500, 3), np.uint8)
#############################################

#############################################
# Do the processing

# Draw a line
cv2.line(img, (10, 10), (490, 10), (255, 0, 0), 5)

# Draw a rectangle
cv2.rectangle(img, (20, 20), (480, 80), (0, 255, 0), 3)

# Draw a circle
cv2.circle(img, (150, 150), 50, (0, 0, 255), -1)  # Filled
cv2.circle(img, (350, 150), 50, (0, 0, 255), 3)   # Outline

# Draw an ellipse
cv2.ellipse(img, (250, 200), (200, 100), 0, 0, 180, (100, 100, 0), 5)

# Draw a polygon
pts = np.array([[200, 400], [300, 400], [250, 450]], np.int32)
print pts
pts = pts.reshape((-1, 1, 2))
print pts
cv2.polylines(img, [pts], True, (0, 255, 255), 2)
#############################################

#############################################
# Show the image
cv2.imshow('image', img)
#############################################

#############################################
# Close and exit
cv2.waitKey(0)
cv2.destroyAllWindows()
#############################################

    F cv2.line - This command draws a straight line between two points.

    F cv2.circle - This command draws a circle, given a centre and a radius.

    F cv2.rectangle - This command draws a rectangle, given two opposite corners.

    F cv2.ellipse - This command draws an ellipse or an elliptic arc.

    7.2 Buttons

    F cv2.setMouseCallback - This command registers a function that OpenCV calls whenever a mouse event (movement or click) occurs inside a named window.
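    This entry is a stub in the draft; as a minimal sketch of a clickable 'button' built with cv2.setMouseCallback (the window name, button rectangle and colours are arbitrary choices for this sketch), one might write:

import numpy as np
import cv2

img = np.zeros((300, 300, 3), np.uint8)
cv2.rectangle(img, (100, 120), (200, 180), (0, 255, 0), -1)   # the 'button'

def onMouse(event, x, y, flags, param):
    # Called by OpenCV for every mouse event inside the window
    if event == cv2.EVENT_LBUTTONDOWN:
        if 100 <= x <= 200 and 120 <= y <= 180:
            print 'button pressed at', (x, y)

cv2.namedWindow('ui')
cv2.setMouseCallback('ui', onMouse)

while True:
    cv2.imshow('ui', img)
    if cv2.waitKey(20) == 27:   # Esc to quit
        break
cv2.destroyAllWindows()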

    7.3 Experiments

    7.4 Debugging

  • 8. Drawing on air

    8.1 System

    8.2 Light Pen

    Figure 8.1: The Pen


    (a) On (b) Off

    Figure 8.2: Working of the pen

    8.3 Steps

    8.3.1 Step 1

    As a first step, we need to get the video feed from the camera. Therefore, we will use our camVideo.py as the template. As usual, please choose the correct video channel when using the VideoCapture function.

#############################################
# Import OpenCV
import numpy
import cv2
# Initialize camera
cap = cv2.VideoCapture(1)
#############################################

#############################################
# Video Loop
while(1):

    ## Read the image
    ret, frame = cap.read()

    ## Do the processing

    ## Show the image
    cv2.imshow('image', frame)

    ## End the video loop
    if cv2.waitKey(1) == 27:  ## 27 is ASCII for the escape key
        break
#############################################

#############################################
# Close and exit
# close camera
cap.release()
cv2.destroyAllWindows()
#############################################


    8.3.2 Step 2

    Next, we need to choose the colourspace that is appropriate for our application. We can expect changes in lighting conditions, and we are tracking a certain colour; therefore, we choose the HSV colourspace.

#############################################
# Import OpenCV
import numpy
import cv2
# Initialize camera
cap = cv2.VideoCapture(1)
#############################################

#############################################
# Video Loop
while(1):

    ## Read the image
    ret, frame = cap.read()

    ## Do the processing
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    ## Show the image
    cv2.imshow('image', img)

    ## End the video loop
    if cv2.waitKey(1) == 27:  ## 27 is ASCII for the escape key
        break
#############################################

#############################################
# Close and exit
# close camera
cap.release()
cv2.destroyAllWindows()
#############################################


    8.3.3 Step 3

    In this step, we introduce the concept of functions in Python. We simply call the function findPoint to convert the image from BGR to HSV and return the converted image. Therefore, the output we shall see will be the same as in Step 2.
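    For readers new to Python functions, a minimal generic sketch (unrelated to the camera code) of defining and calling a function is:

# def introduces a function; return hands a value back to the caller
def add(a, b):
    return a + b

result = add(2, 3)
print result   # 5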

#############################################
# Import OpenCV
import numpy
import cv2
# Initialize camera
cap = cv2.VideoCapture(1)
#############################################

#############################################
# Finding the point (LED)
def findPoint(img):

    ## Convert to HSV
    img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    return img

#############################################
# Video Loop
while(1):

    ## Read the image
    ret, frame = cap.read()

    ## Do the processing
    output = findPoint(frame)

    ## Show the image
    cv2.imshow('image', output)

    ## End the video loop
    if cv2.waitKey(1) == 27:  ## 27 is ASCII for the escape key
        break
#############################################

#############################################
# Close and exit
# close camera
cap.release()
cv2.destroyAllWindows()
#############################################


    8.3.4 Step 4

    Now that we have a suitable image to work with, we need to find the position of the LED. In order to do that, we first try to reduce the image to show only the LED, by thresholding. Please use filterFind.py to find the appropriate threshold for the colour of the LED that you are using. The values used here are meant for a red LED. Note that we are now returning the mask as the output of the function findPoint.
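    filterFind.py is not reproduced in this draft; as a rough sketch of one way to see roughly which HSV values to threshold (assuming you know the approximate BGR colour of the LED), a single colour can be converted by hand:

import numpy as np
import cv2

# A 1x1 'image' holding one pure red pixel in BGR order
red = np.uint8([[[0, 0, 255]]])
hsv_red = cv2.cvtColor(red, cv2.COLOR_BGR2HSV)
print hsv_red   # [[[  0 255 255]]] -> hue 0, full saturation and value

# The thresholds are then chosen as a band around this hue,
# with a high V lower bound because the LED is bright.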

#############################################
# Import OpenCV
import numpy
import cv2
# Initialize camera
cap = cv2.VideoCapture(1)
#############################################

#############################################
# Finding the point (LED)
def findPoint(img):

    ## Convert to HSV
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    ## Define thresholds
    lower = numpy.array([0, 0, 230])
    upper = numpy.array([20, 10, 255])

    ## Threshold the image
    mask = cv2.inRange(hsv, lower, upper)

    return mask

#############################################
# Video Loop
while(1):

    ## Read the image
    ret, frame = cap.read()

    ## Do the processing
    output = findPoint(frame)

    ## Show the image
    cv2.imshow('image', output)

    ## End the video loop
    if cv2.waitKey(1) == 27:  ## 27 is ASCII for the escape key
        break
#############################################

#############################################
# Close and exit
# close camera
cap.release()
cv2.destroyAllWindows()
#############################################


    8.3.5 Step 5

    Now we have a binary image in which the LED portion of the image appears as a blob. We are looking for a position (x, y) to tell us which point on the screen the LED indicates. Therefore, we can use contours to find the central point of the blob. In this step, let us display all the possible contours, so that we can later eliminate those contours that result from noise.

#############################################
# Import OpenCV
import numpy
import cv2
# Initialize camera
cap = cv2.VideoCapture(1)
#############################################

#############################################
# Finding the point (LED)
def findPoint(img):

    ## Convert to HSV
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    ## Define thresholds
    lower = numpy.array([0, 0, 200])
    upper = numpy.array([30, 10, 255])

    ## Threshold the image
    mask = cv2.inRange(hsv, lower, upper)

    ## Find the blob of red
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(img, contours, -1, (0, 0, 255), 3)  # index -1 draws all the contours

    return img

#############################################
# Video Loop
while(1):

    ## Read the image
    ret, frame = cap.read()

    ## Do the processing
    output = findPoint(frame)

    ## Show the image
    cv2.imshow('image', output)

    ## End the video loop
    if cv2.waitKey(1) == 27:  ## 27 is ASCII for the escape key
        break
#############################################

#############################################
# Close and exit
# close camera
cap.release()
cv2.destroyAllWindows()
#############################################


    8.3.6 Step 6

    We see that in the earlier step there are many contours returned. We must now choose the contour that indicates the LED. In order to do so, we can use the contour properties of area and moments. Firstly, we can dismiss those contours that are very small, or have a low area. Then we can pick the biggest of those contours, using the assumption that it will be bigger than the contours that result from noise.
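    The listing below keeps a running maximum inside a loop; as a side note, a more compact and equivalent way to pick the largest contour (assuming at least one contour was found) is shown in this self-contained sketch on a synthetic mask:

import numpy as np
import cv2

# A synthetic binary mask with two white blobs of different sizes
mask = np.zeros((100, 100), np.uint8)
cv2.circle(mask, (30, 30), 5, 255, -1)    # small blob
cv2.circle(mask, (70, 70), 15, 255, -1)   # large blob

contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)   # equivalent to the running-maximum loop
print cv2.contourArea(largest)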

#############################################
# Import OpenCV
import numpy
import cv2
# Initialize camera
cap = cv2.VideoCapture(1)
#############################################

#############################################
# Finding the point (LED)
def findPoint(img):

    ## Convert to HSV
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    ## Define thresholds
    lower = numpy.array([0, 0, 200])
    upper = numpy.array([30, 10, 255])

    ## Threshold the image
    mask = cv2.inRange(hsv, lower, upper)

    ## Find the blob of red
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    ## Find the blob with the biggest area
    if len(contours) > 0:
        maxA = 0
        maxC = []
        for cnt in contours:
            area = cv2.contourArea(cnt)
            if area > maxA:
                maxC = cnt
                maxA = area

        ## Draw only the biggest blob
        if len(maxC) > 0:
            cv2.drawContours(img, [maxC], -1, (0, 0, 255), 3)

    return img

#############################################
# Video Loop
while(1):

    ## Read the image
    ret, frame = cap.read()

    ## Do the processing
    output = findPoint(frame)

    ## Show the image
    cv2.imshow('image', output)

    ## End the video loop
    if cv2.waitKey(1) == 27:  ## 27 is ASCII for the escape key
        break
#############################################

#############################################
# Close and exit
# close camera
cap.release()
cv2.destroyAllWindows()
#############################################


    8.3.7 Step 7

    Now that we have found the blob that we need, we need to find its centre. This is done in OpenCV by the use of contour moments. Once we find the centre, we return it using the same function, findPoint. We then draw a circle around it to show us where this point is.
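    As a small self-contained sketch of the centroid computation used below (the filled square here is synthetic), the centre (cx, cy) comes from the spatial moments m10/m00 and m01/m00:

import numpy as np
import cv2

mask = np.zeros((100, 100), np.uint8)
cv2.rectangle(mask, (40, 20), (60, 40), 255, -1)   # a filled square blob

contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
M = cv2.moments(contours[0])
if M['m00'] != 0:
    cx = int(M['m10'] / M['m00'])
    cy = int(M['m01'] / M['m00'])
    print (cx, cy)   # roughly the centre of the square, about (50, 30)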

#############################################
# Import OpenCV
import numpy
import cv2
# Initialize camera
cap = cv2.VideoCapture(1)

canvas = numpy.zeros((480, 640, 3), numpy.uint8)
#############################################

#############################################
# Finding the point (LED)
def findPoint(img):

    ## Convert to HSV
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    ## Define thresholds
    lower = numpy.array([0, 0, 200])
    upper = numpy.array([30, 10, 255])

    ## Threshold the image
    mask = cv2.inRange(hsv, lower, upper)

    ## Find the blob of red
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # contours = []
    ## Default return value
    (cx, cy) = (0, 0)

    ## Find the blob with the biggest area
    if len(contours) > 0:
        maxA = 0
        maxC = []
        for cnt in contours:
            area = cv2.contourArea(cnt)
            if area > maxA:
                maxC = cnt
                maxA = area

        ## Find the center of that blob
        M = cv2.moments(maxC)
        if M['m00'] != 0:
            cx = int(M['m10'] / M['m00'])
            cy = int(M['m01'] / M['m00'])

    return mask, (cx, cy)

#############################################
# Video Loop
while(1):

    ## Read the image
    ret, frame = cap.read()

    ## Do the processing
    mask, outxy = findPoint(frame)

    ## Draw the point returned
    cv2.circle(canvas, outxy, 5, (0, 0, 255), 2)
    cv2.circle(canvas, outxy, 20, (255, 0, 255), 2)
    cv2.circle(canvas, outxy, 40, (0, 255, 255), 2)
    # cv2.line(canvas, oldPos, pos, colour, t)

    ## Show the images
    cv2.imshow('orig', frame)
    cv2.imshow('image', canvas)

    ## End the video loop
    if cv2.waitKey(1) == 27:  ## 27 is ASCII for the escape key
        break
#############################################

#############################################
# Close and exit
# close camera
cap.release()
cv2.destroyAllWindows()
#############################################


    8.3.8 Step 8

    Finally, we need to draw the points that we found earlier. This completes our sample application of using Computer Vision to draw on air.

#############################################
# Import OpenCV
import numpy
import cv2
# Initialize camera
cap = cv2.VideoCapture(1)

# canvas = numpy.zeros((480, 640, 3), numpy.uint8)
#############################################

#############################################
# Finding the point (LED)
def findPoint(img):

    global oldPos

    ## Convert to HSV
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    ## Define thresholds
    lower = numpy.array([0, 0, 200])
    upper = numpy.array([30, 10, 255])

    ## Threshold the image
    mask = cv2.inRange(hsv, lower, upper)

    ## Find the blob of red
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # contours = []
    ## Default return value
    (cx, cy) = (0, 0)

    ## Find the blob with the biggest area
    if len(contours) > 0:
        maxA = 400  # ignore small blobs that result from noise
        maxC = []
        for cnt in contours:
            area = cv2.contourArea(cnt)
            if area > maxA:
                maxC = cnt
                maxA = area

        if len(maxC) > 0:
            ## Find the center of that blob
            M = cv2.moments(maxC)
            if M['m00'] != 0:
                cx = int(M['m10'] / M['m00'])
                cy = int(M['m01'] / M['m00'])

    return mask, (cx, cy)

#############################################
# Video Loop
while(1):

    ## Read the image
    ret, frame = cap.read()

    ## Do the processing
    mask, outxy = findPoint(frame)

    ## Draw the point returned
    cv2.circle(frame, outxy, 5, (0, 0, 255), 3)
    cv2.circle(frame, outxy, 20, (255, 0, 255), 3)
    cv2.circle(frame, outxy, 40, (0, 255, 255), 3)

    ## Show the image
    cv2.imshow('image', frame)

    ## End the video loop
    if cv2.waitKey(1) == 27:  ## 27 is ASCII for the escape key
        break
#############################################

#############################################
# Close and exit
# close camera
cap.release()
cv2.destroyAllWindows()
#############################################

    8.4 Experiments

    8.5 Debugging

  • Bibliography

    Books

    Articles

  • Index

    B

    Buttons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

    C

    Closure . . . . . . . . . . . . . . . . . . . . . . . 11
    Colourspaces . . . . . . . . . . . . . . . . . . . . . 29
    Common Motion . . . . . . . . . . . . . . . . . . . . 11
    Continuity . . . . . . . . . . . . . . . . . . . . . . 12
    Contour Properties . . . . . . . . . . . . . . . . . . 41
    Contours . . . . . . . . . . . . . . . . . . . . . . . 40

    D

    Debugging . . . . . . . . . . . . . . 17, 27, 31, 34, 46
    Drawing . . . . . . . . . . . . . . . . . . . . . . . 43

    E

    Experiments . . . . . . . . . . . . . . . . . . 26, 31, 34, 46

    F

    Functions in Python . . . . . . . . . . . . . . . . . . . . . 38

    G

    Gestalt Principles . . . . . . . . . . . . . . . . . . . . . . . 11

    H

    HSV colourspace . . . . . . . . . . . . . . . . . . . . . . . 37

    I

    Image from Camera . . . . . . . . . . . . . . . . . . 25
    Image from file . . . . . . . . . . . . . . . . . . . 23
    Installation . . . . . . . . . . . . . . . . . . . . . 15
    Introduction . . . . . . . . . . . . . . . . . . . . . 15

    L

    Light Pen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

    M

    Models of Vision . . . . . . . . . . . . . . . . . . . 10


    N

    Numpy . . . . . . . . . . . . . . . . . . . . . . . . . 16

    O

    OpenCV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

    P

    Properties . . . . . . . . . . . . . . . . . . . . . . 19
    Proximity . . . . . . . . . . . . . . . . . . . . . . 11
    Python . . . . . . . . . . . . . . . . . . . . . . . . 16

    R

    Representation . . . . . . . . . . . . . . . . . . . . . . . . . . 19

    S

    Sensors . . . . . . . . . . . . . . . . . . . . . . . 10
    Shapes . . . . . . . . . . . . . . . . . . . . . . . . 33
    Similarity . . . . . . . . . . . . . . . . . . . . . . 11
    Steps . . . . . . . . . . . . . . . . . . . . . . . . 36
    Structure . . . . . . . . . . . . . . . . . . . . . . 23
    Symmetry . . . . . . . . . . . . . . . . . . . . . . . 12
    System . . . . . . . . . . . . . . . . . . . . . . . . 35
    Systems . . . . . . . . . . . . . . . . . . . . . . . . 9

    T

    Thresholding . . . . . . . . . . . . . . . . . . . 30, 39
    Tying up loose ends . . . . . . . . . . . . . . . . . 45

    V

    Video feed . . . . . . . . . . . . . . . . . . . . . . 36
    Video from Camera . . . . . . . . . . . . . . . . . . 26

    Z

    Zooming, Rotating and Panning . . . . . . . . . . 31
