[FRC 2011] Vision Targets Tutorial


Vision Targets

This document describes the visual properties of the material used to mark the field scoring targets and techniques for processing images to make measurements that may be useful for playing the game.

Retro-reflective Tape: This material should be relatively familiar as it is often used to enhance nighttime visibility of road signs, traffic personnel, and pedestrians. The way it behaves is often confused with reflective materials, so a brief comparison is provided.

Reflective materials bounce the majority of light back at a supplementary angle. That is, the blue and red angles shown below-left sum to 180 degrees. This is equivalent to saying that the light reflects about the reflective surface’s normal – the green line.

Retro-reflective materials, however, return the majority of light back at the entry angle. Initially, this doesn’t seem like a useful safety property, but when the light and eye are near one another, as shown below, the reflected light enters the eye, and the material shines brightly. Note that the shine is largely independent of the orientation of the material, and apparent color is a combination of light source and surface coating on the material.


Demonstration: To further study the effect, place the retro-reflective material on a wall or vertical surface, stand 10-20 feet away, and shine a small flashlight towards the material. Start with the light held at your belly button, and raise it until it is even with your nose. As the light gets near your eyes, the intensity of the returned light will increase rapidly. Repeat at various locations in the room. The tape shine should work over a wide range of viewing angles. Experiment with different light sources. For example, a red bicycle light is sufficiently bright, and will demonstrate that the color of the light determines the color of the shine. Place several people with different lights at different locations in the room. Note that the light returned to an individual viewer is determined by the light coming from the individual’s source. Flash the light source. If the light source is not overly bright, the primary difference will be the appearance and disappearance of the shine. For fun, identify retro-reflective materials already present around you, on clothing, backpacks, shoes, etc.

Application: The retro-reflective tape will be applied to several surfaces on the field in order to mark them and aid with robot navigation. Each robot may use a light source and any combination of color and flashing to help identify marked areas.

Note that most flashlights act as spotlights, using lenses or shaped mirrors to project bright light onto a region or spot much smaller than what the camera views. If this is the type of light source used, only the objects that are visible to the camera and also illuminated by the light source will shine. It may be possible to move the flashlight around to locate objects that will shine.

An alternate approach would be to locate or design the light source so it illuminates the camera’s entire viewing area, instead of just a small spot. This floodlight approach will allow all marked areas visible to the camera to shine at the same time.

A very useful type of light source to research is the ring flash, or ring light. It places the light source very close to the camera lens/sensor and provides very even lighting. Because of their size, LEDs are particularly useful for constructing this type of device. The low-tech ring light shown to the right was constructed using battery-powered LED Christmas lights mounted at various angles by poking holes into a rigid foam ring. The foam is fragile, but otherwise, the light functions well, flooding the entire scene and shining all markers simultaneously. As a bonus, Christmas lights also come in a wide variety of colors.


Image Processing: Calibration:

If the color of the light shine is being used to identify the marker, it is important to control the camera settings that can affect the colors within an image. The most important factor is the camera white balance. This controls how the camera blends the component colors of the sensor in order to produce an image that matches the color processing of the human brain. The camera has five or six named presets, an auto setting that constantly adapts the output colors, and a hold setting for custom calibration. The easiest approach is to select from the named presets, looking for a setting that minimizes the saturation of other objects that might be confused for the light shine. To custom-calibrate the white balance, place a known subject in front of the camera, set it to auto white balance, wait for the camera to update its filters (ten seconds or so), and switch the white balance to hold.

The brightness or exposure of the image also has an impact on the colors being reported. The issue is that as overall brightness increases, color saturation will start to drop. Let’s look at an example to see how this occurs. A saturated red object placed in front of the camera will return an RGB measurement high in red and low in the other two – e.g. (220, 20, 30). As overall white lighting increases, the RGB value increases to (240, 40, 50), then (255, 80, 90), then (255, 120, 130), and then (255, 160, 170). Once the red component is maximized, additional light can only increase the blue and green, and acts to dilute the measured color and lower the saturation. If the point is to identify the red object, it is useful to adjust the exposure to avoid diluting your principal color. The desired image will often look somewhat dark except for the colored shine.
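To make the dilution concrete, here is a minimal Python sketch (assuming the usual HSV-style definition of saturation, (max - min) / max) applied to the RGB values from the example above:

    # Saturation drops as overall brightness rises, even though the
    # object itself has not changed color.
    def saturation(r, g, b):
        """HSV-style saturation: (max - min) / max, in [0, 1]."""
        hi, lo = max(r, g, b), min(r, g, b)
        return 0.0 if hi == 0 else (hi - lo) / hi

    for rgb in [(220, 20, 30), (240, 40, 50), (255, 80, 90),
                (255, 120, 130), (255, 160, 170)]:
        print(rgb, round(saturation(*rgb), 2))
    # Prints saturations of 0.91, 0.83, 0.69, 0.53, and 0.37 -- the
    # measured color washes out as the lighting gets brighter.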

There are two approaches to correct for bright lighting. One is to allow the camera to compute the exposure settings automatically, based on its sensors, and then adjust the camera’s brightness setting to a small number to lower the exposure time. The brightness setting acts similarly to the exposure compensation setting on typical cameras. The other approach is to calibrate the camera to use a custom exposure setting – change the exposure setting to auto, expose the camera to bright lights so that it computes a short exposure, and then change the exposure setting to hold.

When determining the approach to use, be sure to consider the effect of the black curtain often found on the scoring side of the typical FIRST field. It will cause a camera using auto exposure to overexpose when it faces in that direction. It may be useful to include a dark wall and brightly lit foreground objects in your vision testing scenarios.

Note that the light shine is almost entirely based on the light source carried by the robots. Changes to ambient lighting, such as moving from a room with fluorescent lights to one with incandescent lights, should have very little impact on the colors visible in the shine. Once the light source and white balance are selected, exposure adjustments to maintain saturation should be sufficient when changing locations.

Algorithms: The image below resembles how an ideal row of markers would look after you use color thresholding or image subtraction to remove all background information. The dark gray columns were added to assist with visualizing where the markers are located. Notice that the circles and rectangles do not line up when the robot is not directly in front of the column. This parallax effect occurs because the circles are closer to the camera than the rectangles.
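Before returning to the parallax effect, here is what the color-thresholding step might look like. This is a minimal sketch using OpenCV in Python (a library choice of mine, not the paper’s; the HSV bounds are hypothetical values for a red ring light and would need tuning to your own light and calibration):

    import cv2
    import numpy as np

    # Load a frame and convert to HSV, where a colored shine is easier
    # to isolate than in RGB.
    frame = cv2.imread("targets.jpg")
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Keep only pixels near the ring light's hue. Note that red wraps
    # around the hue axis, so a second range near 180 may also be needed.
    lower = np.array([0, 120, 80])
    upper = np.array([12, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)

    # Everything outside the threshold is black; the markers remain as
    # white blobs ("particles") for later measurement.
    cv2.imwrite("mask.png", mask)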

The next image shows this effect as the camera is lowered below the row of circular markers.

Initially, this parallax effect may seem unfortunate. However, with some processing, and using information we already know about the markers, it can be used to your advantage. It can be used to reconstruct 3D information from a 2D image. Here is how.

Remember that the camera flattens all objects in the scene into a single plane. Looking at the same portion of the field from above, the actual column markers are shown in black, and in red, we see how the circular markers on the end of the pegs will project away from the camera and onto the backplane. Notice that the areas highlighted in yellow are similar triangles. This will allow us to use trigonometry and the known measurements of the field to compute additional values.

As an example, let’s move the camera to an unknown location on the field. The 320x240 image below is taken on a practice setup using a red ring light. It is an unknown distance and location from the target. The green lines mark a few important locations such as the center of the circle and the vertical center of the rectangular strips. The horizontal distance between these lines is labeled as d. Also, the width and height of the top strip are labeled w and h. In field units, we know that w is 1” and h is 4”. In pixels, w is about 11 or 12 pixels and h is about 47 pixels. This coincides well with the expected ratio and can be used to calculate the distance from the camera to the strip.

Distance Calculations: Knowing the physical dimensions of a known object in a photo, and knowing the lens and sensor information, you can calculate the distance from the camera to the object. In this case, a 1” object is about 12 pixels. The width of the image is 320 pixels, meaning that in the plane of the vertical strip, the image is 320/12 or 26.6 inches wide. From the Axis 206 datasheet, the horizontal angle of view is given as 54 degrees – this is measured from edge to edge, so 27 degrees gives us the right triangle shown below. The value of distance can be found from 13.33/tan(27deg), or 26.2 inches. Note that the datasheet for the other Axis camera, the M1011, lists the angle of view as 47 degrees, and obviously a different lens will change the angle of view.

Note that you can compute the distance in many other ways, and it may be useful to photograph a ruler from various known distances and practice a bit using different methods.
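The same arithmetic is easy to script. Here is a minimal Python sketch using the numbers from the example above (the function name and parameters are mine):

    import math

    def distance_to_plane(image_width_px, object_px, object_inches, hfov_deg):
        """Distance from the camera to the plane of a known-size object."""
        # Scale the object's known size up to the full image width...
        plane_width_in = image_width_px / object_px * object_inches
        # ...then use half the field of view as a right triangle.
        return (plane_width_in / 2) / math.tan(math.radians(hfov_deg / 2))

    # Axis 206: 54-degree horizontal angle of view, 320-pixel-wide image,
    # 1" strip width measured as about 12 pixels -> about 26.2 inches.
    print(distance_to_plane(320, 12, 1.0, 54))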

To continue, the distance labeled d measures 39 pixels, giving 39/12 or 3.25 inches. From the similar triangles shown below, we can determine that the camera is 26.2” minus 14” or 12.2” from the circular marker, and that 3.25/14 = x/12.2, which means that x is equal to 2.8”.

So, from this image, and knowing a few field measurements, we know that the robot would need to move 2.8” to the left and 12.2” forward in order for the camera to touch the circular marker.
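Continuing the sketch, the similar-triangle step from this page might look like the following (the 14” peg length and 26.2” distance come from the text; the variable names are mine):

    strip_distance_in = 26.2        # camera to the plane of the strips
    peg_length_in = 14.0            # circle sits 14" in front of the strips
    d_in = 39 / 12                  # measured offset: 39 pixels -> 3.25"

    # Camera-to-circle distance along the viewing axis.
    circle_distance_in = strip_distance_in - peg_length_in   # 12.2"

    # Similar triangles: d / peg_length = x / circle_distance.
    x_in = d_in / peg_length_in * circle_distance_in
    print(round(x_in, 1))           # about 2.8" of sideways offset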

To extend this method, you can measure from the center of the circular marker to the center point between the two vertical markers and calculate the amount you would need to raise or lower the camera in order to touch the marker.

Note that this approach assumes that the camera is perfectly aligned on the field – perpendicular to the markers – which is not quite true for this photo, but the results are pretty accurate anyway. Also, note that optical distortion was ignored. This will have a more noticeable impact near the corners of the image, but its effect will generally be small. If you decide to develop a more general solution, remember that all objects in the image are projected into a single plane – a plane that is parallel to the camera sensor. You can then use known dimensions of the markers to compute the camera orientation and location relative to those markers and calculate the remaining 3D measurements.

Also notice that it is possible to use the circle and vertical markers purely for qualitative feedback. An algorithm that only outputs simple commands (left-a-little, straight-ahead, right-a-little) is quite easy to build and may be sufficient for driving the robot to a specific marker.
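As a sketch of that qualitative idea, assuming the marker’s horizontal position has already been extracted from the thresholded image (the dead-band width is an arbitrary choice):

    def steering_command(marker_x_px, image_width_px=320, deadband_px=15):
        """Map the marker's horizontal position to a coarse drive command."""
        error = marker_x_px - image_width_px / 2
        if error < -deadband_px:
            return "left-a-little"
        if error > deadband_px:
            return "right-a-little"
        return "straight-ahead"

    print(steering_command(120))   # marker left of center -> "left-a-little"
    print(steering_command(165))   # within the dead-band -> "straight-ahead"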

Marker Clusters: Viewing the markers from a greater distance introduces a new challenge. The 640x480 image shown below was produced using a ring light and color threshold. Notice that the individual shapes are too small and JPEG compression noise is too strong to be able to make accurate measurements on individual particles. Also note how the particles can blend together at certain viewing angles. All is not lost, however. The particles are still very distinct and relatively easy to identify as belonging to a particular scoring grid element such as (left, top) or (center, center). Even if JPEG noise causes an individual particle to break up or if they merge, particles still help to identify the column and row locations in a statistical fashion.

There are many approaches to this problem, and a large set of tradeoffs to consider. To keep things simple, this paper will review a few approaches that may be effective.

Statistical Method:

Given the centers of each particle above, notice how the histogram of the X positions does a good job of identifying the marker columns. Absolute Y positions are more challenging, but within each column, distances are pretty well grouped.
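A sketch of that histogram idea with numpy (the particle centers, bin count, and peak-picking rule are all assumptions for illustration):

    import numpy as np

    # Hypothetical particle centers (x, y) in pixels, e.g. from a
    # connected-component pass over the thresholded image.
    centers = np.array([(92, 40), (95, 180), (90, 310),
                        (312, 45), (316, 185), (310, 308),
                        (540, 50), (544, 182), (538, 305)])

    # Histogram the X positions; bins with several hits mark the columns.
    counts, edges = np.histogram(centers[:, 0], bins=16, range=(0, 640))
    column_bins = np.nonzero(counts >= 3)[0]
    column_x = (edges[column_bins] + edges[column_bins + 1]) / 2
    print(column_x)   # approximate X position of each marker column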


Clustering algorithms such as K-means may also be effective at identifying the grid locations – as shown below.
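For reference, a plain-numpy K-means sketch over the same hypothetical centers, clustering only the X coordinates with k=3 to mirror the column-finding idea above (on a real field image, k would come from the number of visible columns):

    import numpy as np

    def kmeans(points, k, iters=20):
        """Plain Lloyd's algorithm: alternate assignment and mean update."""
        # Deterministic init: pick k points spread across the input.
        centroids = points[np.linspace(0, len(points) - 1, k).astype(int)]
        for _ in range(iters):
            # Assign each point to its nearest centroid.
            dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
            labels = dists.argmin(axis=1)
            # Move each centroid to the mean of its assigned points.
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = points[labels == j].mean(axis=0)
        return centroids, labels

    # Same hypothetical particle centers as above; cluster X positions only.
    centers = np.array([(92, 40), (95, 180), (90, 310),
                        (312, 45), (316, 185), (310, 308),
                        (540, 50), (544, 182), (538, 305)], dtype=float)
    columns, labels = kmeans(centers[:, :1], k=3)
    print(np.sort(columns.ravel()))   # approximate column X positions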

Finally, additional information such as estimated distance to target, starting location, and position information from previous images can be used to enhance the algorithm’s success by speeding up where processing begins, filtering results, and validating assumptions.
