
ENGR3390: COGNITION LAB

J. GORASIA

TEAM BRAVO - 11/15/2009


CONTENTS

Table of Figures
Executive Summary
Summary of Readings
Software Systems Overview
    Architecture
    Laser Positioning
    Left Hand Wall Following Exploration
    AUV/Webcam Exploration
    Intermediary Dialog Box
    Tree Pruning Path Finding Algorithm
    Stepper Actuation
Testing and Results
Conclusions
Revised Lab Writeup
Appendix

TABLE OF FIGURES

Figure 1: The laser positioning sub-VI
Figure 2: Sense sub-VI for the left wall follower
Figure 3: Think sub-VI for the left wall follower
Figure 4: Waypoint handling in the Think block
Figure 5: Head to target sub-VI
Figure 6: Overview of the AUV code
Figure 7: Capture of the two camera images
Figure 8: Building the world map from Grid Calib, Find Barns, Find Laser and Triangulate
Figure 9: The grid calibration (Grid Calib) sub-VI
Figure 10: Find Laser sub-VI
Figure 11: Find Barns sub-VI
Figure 12: Triangulate sub-VI
Figure 13: The optimal path finding algorithm
Figure 14: Main VI for the tree pruning algorithm
Figure 15: Are we there yet? sub-VI
Figure 16: Been There sub-VI
Figure 17: Make New Paths sub-VI
Figure 18: Connectors for the Stepper Driver sub-VI
Figure 19: Stepper Driver sub-VI
Figure 20: Single Step sub-VI


EXECUTIVE SUMMARY

The objective was to have a “robot tractor,” indicated by a laser pointer, create a map of an orchard and then move optimally through it to mow the orchard. We built the map using a left wall follower, with waypoints marking the ends of the orchard. In addition, we implemented a vision system that could capture what the world looked like, which creates the possibility of modifying the world to make the lab more interesting. Finally, a tree pruning algorithm finds the optimal path for the robot to traverse the orchard and mow its rows. The entire lab used a “robot” consisting of a laser pointer mounted on two stepper motors, allowing it to pan and tilt.


SUMMARY OF READINGS

In this cognition lab we have to develop exploration and path planning algorithms for a robot tractor to get through a maze.

For exploration, there are a few possible choices of algorithm. On the simple end, there is the left hand wall follower, which tries to follow a left wall continuously. The algorithms then get progressively more complex, with breadth first searches, depth first searches and A* algorithms. These are graph search algorithms that begin at a root node and explore all the neighboring nodes; then, for each of those nearest nodes, they explore the unexplored neighbor nodes, and so on, until the goal is found.
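As a point of reference, a breadth first search over a grid map can be sketched in a few lines of Python. This is a generic illustration rather than our LabVIEW code, and it assumes the map convention used later in this report (1 = wall, 0 = free space).

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth first search on a 2D grid of 0s (free) and 1s (walls).

    Returns the shortest list of (row, col) cells from start to goal,
    or None if the goal cannot be reached.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([[start]])      # each entry is a partial path
    visited = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-connected moves
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None
```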

For finding an optimal path, similar algorithms can be used. The crucial difference is that you need to keep track of how long the different paths are and choose the optimal path to the target.

This lab makes use of the concept of an occupancy grid, which is the perception the robot has of its world. While populating an occupancy grid is trivial in this lab, it is nontrivial in other applications. Making occupancy grids is an art in itself, and errors in an occupancy grid easily translate into errors in cognition and actuation.
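For reference, the occupancy grids in this report store a 1 for a wall and a 0 for free space, with the top-left cell at (0,0). A small illustrative grid (values made up for this example) looks like this:

```python
# 5x5 occupancy grid: 1 = wall/tree, 0 = driveable space.
# Row 0, column 0 is the top-left cell.
occupancy_grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
```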

SOFTWARE SYSTEMS OVERVIEW

ARCHITECTURE

We decided to use a flat sequence to arrange our robot code. The first step is to choose which exploration algorithm to use. There were two techniques: a left wall follower and a vision based system. The vision based system used a USB webcam which we added to the lab set-up to enhance it. Using the webcam, we are able to have feedback in the system, which gives the tractor robot AUV capabilities that help enhance its performance.

Once a map is generated, we prompt a dialog box to indicate that the robot has returned to the start barn and is ready to carry out its mission. The robot then uses a tree pruning algorithm to find optimal paths through the orchard. This takes the robot through the orchard, mowing every row, and then returns it to the end barn.

(Flowchart: Choose exploration algorithm — cloudy = left wall follow, clear = use camera/AUV. Make robot return to home. Mow the orchard using an optimal path finding tree pruning algorithm.)

LASER POSITIONING


Before the robot can navigate through the orchard, it needs to know where the laser is in the world. Since there is no feedback without the camera, manual positioning is used, with the operator’s eyes as feedback. Using the keyboard keys W, A, S and D, the laser either pans or tilts. In addition, basic speed control was added using the keys Q and E, making it quicker for the operator to position the laser.

Figure 1: The laser positioning sub-VI piggybacks on the stepper control sub-VI to make the stepper pan or tilt to move the laser to the desired position. Having speed control allows the operator to have finer control of the robot position, since if it moves too fast it is difficult to position precisely. Finally, we used keyboard control to make the positioning process faster.
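A minimal sketch of this keyboard mapping, written in Python rather than LabVIEW, is shown below. The function names step_pan and step_tilt are placeholders for the stepper driver calls, and the step sizes are illustrative.

```python
def handle_key(key, state, step_pan, step_tilt):
    """Map one keypress to a pan/tilt move, mirroring the manual positioning VI.

    state["speed"] is the number of motor steps per keypress; Q/E adjust it,
    W/S tilt the laser, and A/D pan it.  step_pan and step_tilt stand in for
    the stepper-driver calls.
    """
    speed = state["speed"]
    if key == "q":
        state["speed"] = max(1, speed - 1)   # slower, finer positioning
    elif key == "e":
        state["speed"] = speed + 1           # faster, coarser positioning
    elif key == "w":
        step_tilt(+speed)
    elif key == "s":
        step_tilt(-speed)
    elif key == "a":
        step_pan(-speed)
    elif key == "d":
        step_pan(+speed)
```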

LEFT HAND WALL FOLLOWING EXPLORATION

The exploration uses a simple left hand wall following algorithm.


The code consists of five main sections:

• Sense

• Think

• Act

• Mode checking

• Stepper actuation

The sense block returns information about the area surrounding the robot as a 3x3 array. In addition, it updates the occupancy grid with the new information it is receiving about the world.


Figure 2: Sense sub-VI for the left wall follower. This uses simple array indexing to determine the 3 by 3 array surrounding the robot. In addition, it stores that information into an occupancy grid, which is updated at every time step.
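The following Python sketch shows the idea behind the Sense block under the same conventions as before; it is not the LabVIEW code, and the choice to report off-map cells as walls is an assumption.

```python
def sense(world, occupancy, row, col):
    """Return the 3x3 neighborhood around (row, col) and copy it into the
    occupancy grid, in the spirit of the Sense sub-VI.

    Cells outside the map are reported as walls (1) -- an assumption.
    """
    local = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if 0 <= r < len(world) and 0 <= c < len(world[0]):
                local[dr + 1][dc + 1] = world[r][c]
                occupancy[r][c] = world[r][c]   # remember what has been seen
    return local
```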

The Think block has two different behaviors depending on the mode. If the robot has not reached home and is not in waypoint mode, the robot carries out the wall following algorithm.

Figure 3: The think sub-VI for the left wall follower. The robot tries to go left, forward, right, then back, in sequence, which is essentially the algorithm.

The logic of the left wall follower is: can I go left? If not, can I go forward? If not, can I go right? If all of these are false, the robot moves backwards. Since the robot is a left wall follower, it is necessary to keep track of the robot’s direction so that it knows which way is left.
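A compact sketch of that decision logic is below. It assumes headings are encoded 0–3 (up, right, down, left), matching the four headings used by the path finding code later; the 3x3 local array is the output of the Sense block.

```python
def think_left_wall(local, heading):
    """Choose the next heading for a left wall follower.

    local is the 3x3 sense array (0 = free, 1 = wall) centered on the robot;
    heading is 0 = up, 1 = right, 2 = down, 3 = left.  Preference order is
    left, forward, right, back -- the essence of the algorithm.
    """
    # Offsets into the 3x3 array for each absolute heading.
    neighbor = {0: (0, 1), 1: (1, 2), 2: (2, 1), 3: (1, 0)}

    for turn in (-1, 0, 1, 2):            # left, straight, right, back
        candidate = (heading + turn) % 4
        r, c = neighbor[candidate]
        if local[r][c] == 0:              # that adjacent cell is free
            return candidate
    return heading                        # completely boxed in
```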

If the robot is in waypoint mode, it follows a simpler algorithm.


Figure 4: If there are waypoints to head to, the robot will unbundle the desired waypoints and feed the current position and desired position to another sub-VI, Head to target.

It will unbundle the waypoint array and feed the coordinates it wants to head towards into Head to target.vi.

Figure 5: Head to target sub-VI compares two points and determines the heading required to go from one to the other. The problem is that the algorithm does not take obstacles into account, which means it is only useful with a lot of waypoints or in an empty map.

This VI determines the relative position of the robot and the target and returns the appropriate heading for the robot to head towards. The algorithm is simplistic, as it cannot round corners or avoid obstacles. While it was mildly appropriate to use in this case, it would have been better to use the tree pruning algorithm described later on.
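A sketch of that comparison, using the same 0–3 heading encoding assumed above and (row, col) coordinates, might look like this; the tie-breaking order is an assumption.

```python
def head_to_target(current, target):
    """Pick a heading (0 = up, 1 = right, 2 = down, 3 = left) that moves
    `current` toward `target`, ignoring obstacles, as the Head to target
    sub-VI does.  Points are (row, col) with row 0 at the top.
    """
    d_row = target[0] - current[0]
    d_col = target[1] - current[1]
    if abs(d_row) >= abs(d_col):          # close the larger gap first
        return 2 if d_row > 0 else 0      # down if target is below, else up
    return 1 if d_col > 0 else 3          # right if target is right, else left
```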


The Act block moves the robot in the fashion described by the heading.

A crucial component of the system is the Modechecker VI, which determines whether the robot has reached certain locations and, if so, changes the robot’s mode.

The stepper actuation code will be explained separately.

AUV/WEBCAM EXPLORATION


Figure 6: Overview of the AUV code. The sub-VIs will be described later on.

As an overview, the code grabs two images from the camera, one with and one without the laser, does some image processing, and produces a world map and a current laser position.

Figure 7: Capture two camera images. It prompts the user before taking each image. The case statements can be reconfigured via a T/F constant to load from a file instead [not shown].


The first VI captures the images from the camera. A case structure makes it possible to load an image from a file instead, for debugging purposes.

Figure 8: Continued from the right edge of Figure 1 above. Grid Calib gives the grid location and the pixel (x,y) location of every square in the blank (no-laser) image. Squares in the barn are detected as 0s, i.e. driveable squares. The “Find Barns” VI outputs the bounding rectangle (in image pixels) of each of the two barns. The Find Laser block looks at the blank image and the laser-containing image and outputs the (x,y) pixel position of the laser in the image. The Triangulate sub-VI takes the calibration data produced by Grid Calib and an (x,y) point on the image, and outputs the grid position that it corresponds to. The two for loops at the top build a world map and use it to make the barns’ locations driveable. The sequence at the right cleans up image memory.

The VI then processes the image to give the grid position and the pixel location of every square.

The next VI produces the calibration data which is used to map image points onto locations on a grid.


Figure 9: The first two frames in the sequence are largely inconsequential, useful only when running the VI separately for testing purposes. When the VI is called externally, the second frame uses a known-good crop to cut out background parts of the image and make everything a little easier to process. The top-right of the second frame starts collecting image buffers so they can be deleted later (so that LabVIEW does not take hundreds of MB more memory each time you run it). The third frame is the real meat. First, the incoming image is fed through some processing, converted to binary, processed some more with morphologies to isolate the squares from everything else, and then fed into a pattern recognizer (“Square finder”) that locates all of the blocks in the image and outputs them in a big array of clusters. “Locate Spaces in Grid” then goes through each square, looks at where it is in the image, and deduces where that square is in the grid. The top-left square in the grid is assumed to be (0,0). Finally, in the last frame, some cleanup is done and the calibration data packed up for output.

The next VI takes in two images and uses them to locate where the laser is.

Figure 10: It subtracts the blank image from the laser image, leaving a bright spot and some background noise. There’s then a long string of IMAQ image processing to clean up the noise, better-isolate the laser, and then threshold it to pinpoint its location. A particle analysis is done to find the largest dot; some noise shows up as dots much smaller than the laser.
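The IMAQ chain itself cannot be reproduced here, but the same idea can be sketched with OpenCV and NumPy: subtract the blank image, threshold, and keep the largest bright blob. The threshold and kernel size are illustrative guesses, not the calibrated values used in the VI.

```python
import cv2
import numpy as np

def find_laser(blank_gray, laser_gray, threshold=60):
    """Approximate the Find Laser idea: subtract the blank image from the
    laser image, threshold the difference, and return the centroid of the
    largest bright blob as an (x, y) pixel position, or None.
    """
    diff = cv2.subtract(laser_gray, blank_gray)            # bright spot + noise
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)           # noise blobs are smaller
    m = cv2.moments(biggest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])      # centroid (x, y)
```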

Finally, a VI looks for the barns, which helps establish waypoints for the robot to tend to.


Figure 11: This locates the red and green barns. The basic principle is that the red squares look much brighter in the red color plane than they do in the green color plane (much less green light than red light), but almost nothing else in the grid gets brighter in red. Likewise, the green squares look much brighter in the green color plane than they do in the red color plane. So, the easiest thing to do is (carefully making sure not to overwrite buffers!) get the red and green planes, subtract them appropriately, and get a gray box against a black background where the barn is. The Find Barn sub-VI (which I won’t cover) does some processing to locate that square and output its position. The second frame in the sequence just cleans up image buffers – again, to keep memory from getting used up.
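In NumPy terms, the plane-subtraction trick amounts to something like the sketch below; the margin is an illustrative threshold, not a tuned value from the lab.

```python
import numpy as np

def barn_mask(image_rgb, color="red", margin=40):
    """Rough equivalent of the barn-finding idea: a red barn is much brighter
    in the red plane than in the green plane, so subtracting the planes
    leaves a bright patch where the barn is.  image_rgb is an HxWx3 uint8
    array.
    """
    red = image_rgb[:, :, 0].astype(np.int16)
    green = image_rgb[:, :, 1].astype(np.int16)
    diff = red - green if color == "red" else green - red
    return diff > margin            # boolean mask of barn-colored pixels
```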

In addition, the Triangulate sub-VI takes the grid and pixel-in-image coordinates of all the squares in the map. Using linear interpolation, it tries to determine where in the occupancy grid a given point is.

Figure 12: The leftmost for loop extracts only those squares within a given distance (px) from the point of inquiry. This is done because the distortion we’re trying to compensate for differs across the image. The second for loop builds an array of (pixel x, grid x) and (pixel y, grid y), which is then sorted. When sorting clusters, LabVIEW sorts by the first element, and then the second elements if the first are the same, and so on. The 1D Interpolation block then does the heavy lifting and produces an (x,y) position in the grid.
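The steps described in the caption — keep only nearby calibration squares, sort the (pixel, grid) pairs, and interpolate — can be sketched as follows. The cut-off radius and the data layout of calib are assumptions for illustration.

```python
import numpy as np

def triangulate(calib, point, radius=150.0):
    """Map a pixel position to a grid position using only nearby calibration
    squares, in the spirit of the Triangulate sub-VI.

    calib is a list of (pixel_x, pixel_y, grid_x, grid_y) tuples from the
    calibration step; point is the (x, y) pixel of interest; radius is the
    cut-off distance in pixels.
    """
    px, py = point
    near = [(cx, cy, gx, gy) for cx, cy, gx, gy in calib
            if (cx - px) ** 2 + (cy - py) ** 2 <= radius ** 2]
    if len(near) < 2:
        raise ValueError("not enough calibration squares near the point")

    # Sort by pixel coordinate, then interpolate pixel -> grid separately
    # in x and y, mirroring the (pixel, grid) cluster sort + 1D interpolation.
    xs = sorted((cx, gx) for cx, _, gx, _ in near)
    ys = sorted((cy, gy) for _, cy, _, gy in near)
    grid_x = np.interp(px, [c for c, _ in xs], [g for _, g in xs])
    grid_y = np.interp(py, [c for c, _ in ys], [g for _, g in ys])
    return grid_x, grid_y
```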

INTERMEDIARY DIALOG BOX


Next is a simple dialog box to give a perceptible delay between the exploration and path finding algorithms.

TREE PRUNING PATH FINDING ALGORITHM


Figure 13: The optimal path finding algorithm.

The algorithm takes in a list of waypoints and produces an array of paths that the robot should move along to get from one point to the next. The stepper driver takes those path arrays and follows them, moving through them incrementally.

Figure 14: Main VI for the tree pruning algorithm.

The tree pruning algorithm requires the coordinates of the starting point and the destination as inputs. It gives the starting point a weight of 0 and the destination a weight of -1. The essential idea is that the array of possible paths is constantly being added to as the robot proceeds through the maze. At the same time, the array of possible paths is being pruned as dead ends and paths that lead to the same place are removed. It is important to note that the map is stored as 1s and 0s, where 1 is a wall and 0 is free space.


A crucial sub-VI is the path find sub-VI. It takes the map and the starting location, then generates a list of points for the robot to travel through to get to the destination. There are two cycle limits in the code to keep it from running indefinitely. One stops the loop from running continuously if the destination is impossible to reach; its value is somewhat arbitrary, but it is based on the maximum number of steps it could take to traverse a map of a given size.

An array stores all the possible paths that the robot can choose to pursue. It is initialized as a blank array and is progressively added to every time the loop finds a new possible path. Each iteration of the loop looks at one path and takes it through a series of sub-VIs.

First, it uses the current position of the robot to find the index in the paths array that corresponds to the last entry in the array.

Figure 15: Are we there yet? sub-VI. It determines whether the robot has reached its destination. It knows that it has hit the destination because the destination has a weight of -1.

If the current position matches the destination, it ends the loop and returns that this path is the shortest path. If not, the loop continues.


Figure 16: Been There sub-VI.

Next, the Been There sub-VI marks the current position of the robot as a 1. This erects an imaginary wall in the map, which prevents the algorithm from choosing to move backwards.

Subsequently, the Where To sub-VI checks which directions the robot can move in, i.e. where there are no 1s on the map. We have limited the robot to moving only horizontally or vertically, so only four numbers (0, 1, 2 and 3) are required to denote the possible headings. Each free neighboring region opens up a new possibility for the tree to branch into. However, if there are no open paths, the returned array will be empty.

We use a simple check in the Is Dead sub-VI to determine if the path is a dead end. If the path is dead, it is terminated by setting its next entry to zeros.

Figure 17: Make new paths sub-VI

However, if the path is not dead, a new path is generated for each of the possible ways that the robot can branch. The code makes an array for each new path and appends it to the all-paths array.


Finally, the Kill Paths sub-VI checks whether two paths reach the same point in the same number of steps. If such a pair exists, one of the paths is killed, as they are effectively duplicates.

In the end, the code returns the shortest path as a list of coordinates, one for each step of the path.
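Putting the pieces above together, a Python sketch of the tree pruning loop is shown below. It follows the same structure (check for the destination, wall off visited cells, branch into open directions, drop dead or duplicate paths, and respect a cycle limit), but it compares coordinates rather than using the -1 destination weight, and the cycle limit formula is an assumption.

```python
def prune_paths(grid, start, destination, max_cycles=None):
    """Sketch of the tree pruning path finder described above.

    grid holds 1 for walls and 0 for free space; start and destination are
    (row, col) cells.  Headings 0-3 mean up, right, down, left.  Returns the
    shortest path as a list of cells, or None if nothing is found within the
    cycle limit.
    """
    rows, cols = len(grid), len(grid[0])
    if max_cycles is None:
        max_cycles = rows * cols              # crude bound on path length
    moves = [(-1, 0), (0, 1), (1, 0), (0, -1)]
    visited = [row[:] for row in grid]        # working copy; we add "walls"
    visited[start[0]][start[1]] = 1           # Been There: no moving backwards
    paths = [[start]]

    for _ in range(max_cycles):
        new_paths = []
        for path in paths:
            if path[-1] == destination:       # Are we there yet?
                return path
            r, c = path[-1]
            for dr, dc in moves:              # Where To: open neighbors only
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and visited[nr][nc] == 0:
                    visited[nr][nc] = 1       # Kill Paths: later arrivals lose
                    new_paths.append(path + [(nr, nc)])
        if not new_paths:                     # Is Dead: every branch died out
            return None
        paths = new_paths
    return None
```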

STEPPER ACTUATION

The stepper driver takes in the current stepper position in steps and the intended row and column to head to, moves the stepper motors to reach that destination, and returns the new stepper position in steps.

Figure 18: Connectors for the Stepper Driver sub-VI. It keeps the absolute position of the stepper in spherical coordinates and thus does not accumulate error.

In addition, the stepper driver takes in an initialization cluster which contains a DAQ “Task”, which is a set of Input-Output settings. By initializing the DAQ only once and not making a new “Task” at every step, the maximum speed of the stepper motor was increased.


Figure 19: Stepper driver sub-VI.

The “Row/Col->Steps” block performs a spherical coordinate transform from the Row/Col position to a number of steps. The “Step or Not?” block checks where the stepper is and where it needs to be, and determines which direction the stepper needs to step in. Finally, the “Single Step” block communicates with the DAQ to send signals to the stepper motors.

Figure 20: Single step sub-VI.

The Single Step sub-VI contains a sequence structure that sends three commands to the DAQ. The first sets the new direction. The second enables the stepper. The third disables the stepper. If the stepper is not disabled, you will be unable to change the direction of the stepper motor.
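As a sketch of that three-command sequence (not the actual DAQmx calls), the logic looks like the following; write_digital is a hypothetical stand-in for the DAQ write used in the VI, and the pulse width is illustrative.

```python
import time

def single_step(write_digital, direction_cw, pulse_s=0.002):
    """Sketch of the Single Step sequence: set the direction, enable the
    stepper so it takes one step, then disable it so the direction line can
    be changed again before the next step.
    """
    write_digital("direction", direction_cw)   # 1. set the new direction
    write_digital("enable", True)              # 2. enable -> motor steps once
    time.sleep(pulse_s)                        # hold the pulse briefly
    write_digital("enable", False)             # 3. disable before the next change
```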

TESTING AND RESULTS

The left wall following algorithm proved less than optimal in terms of speed. While it was robust, it was incapable of finding the rows of the orchard. The workaround we used was unsatisfactory, as it relied heavily on our knowledge of the world.

Our optimal path finding algorithm seemed to work very well for our purposes. Only once we had integrated all the code and run it did we realize the importance of limiting the number of trees the algorithm could produce.

The camera code worked like a charm. There were some issues with map generation when the laser pointer was over the red barn, but that was the only bug. When it happened, the walls around the barn would not be generated, causing the robot to generate paths that went outside the map, which was not acceptable.


Stepper motor control worked very well, as we maintained absolute position control of the pan and tilt of the stepper. That meant we did not have to compensate for error in the motor position, which other teams had to do because they controlled the robot’s position as projected on the grid. While this did mean the laser was not always directly in the center of a cell, it was a worthwhile tradeoff to reduce the complexity of accounting for error. Since the pan and tilt steppers had different rotational inertias and axes of rotation, we had to deal with strange dynamics and jerky motion from the stepper. We resolved this by trial and error, changing the pulse width of the PWM controlling the steppers until the motion became bearable.

CONCLUSIONS

While it was easier to design a control system for this robot because it only had to move in a two-dimensional grid, it was still challenging. Robust algorithms were required, and if memory of previous movements was not used, inefficient paths would be created. In conclusion, this was a good lab for introducing the concepts of path planning and navigation.

REVISED LAB WRITEUP

The instructions were clear enough for us to execute the lab. Notwithstanding that, I feel slightly more information on control of the stepper is required for teams who have yet to experience controlling the stepper motor in other labs.

APPENDIX

Code is attached at the end of this document.

The attached code listing (codelisting/index.html, captured 11/13/2009) contains the connector pane, front panel and block diagram of each of the following VIs: Main.vi, UAV.vi, laser_find.vi, point_in_rectangle.vi, image_new_buffer.vi, uid_hex.vi, process_crop.vi, barn_find.vi, barn_subtracted_processing.vi, process_prep_filter.vi, localize.vi, image_control_get_clicked_point.vi, MAIN_calibrate.vi, mark_seen.vi, xy_to_ROI_point.vi, zero_grid.vi, calibrate_rect2grid.vi, record_locations.vi, relative_position.vi, array_push_multiple.vi, array_pop.vi, rescale_points.vi, recenter_points.vi, find_neighbours.vi, file_relative_path.vi, process_binary.vi, process_squarefinder.vi, Pathfind_MAIN.vi, triangulate.vi, camera_singleimage.vi, camera_setgamma.vi, Single Step 2.vi, Stepper Initialization Cluster.ctl, ConvertMap.vi, Step_or_Not.vi, Not_Dijkstra_v3_algorithm.vi, 1d_to_2d_array.vi, arstartarstart.vi, create_new_paths.vi, is_this_the_destination.vi, been_there.vi, am_i_dead.vi, where_can_i_go.vi, where_am_i.vi, Stepper_to_cell.vi, RowCol_to_Step.vi, Stepper Initialization.vi, Mode checker.vi, Sense.vi, KatieThink.vi, Initialize OC.vi, Head to target.vi and Act.vi.