The 2004 Mobile Robot Competition and Exhibition

William D. Smart, Sheila Tejada, Bruce Maxwell, Ashley Stroupe, Jennifer Casper, Adam Jacoff, Holly Yanco, and Magda Bugajska

AI Magazine Volume 26 Number 2 (2005) (© AAAI)



■ The thirteenth AAAI Mobile Robot Competition and Exhibition was once again collocated with AAAI-2004, in San Jose, California. As in previous years, the robot events drew competitors from both academia and industry to showcase state-of-the-art mobile robot software and systems in four organized events.

The thirteenth AAAI Mobile Robot Competition and Exhibition was held last July, in conjunction with the Nineteenth National Conference on Artificial Intelligence (AAAI 2004), in San Jose, California. The primary purpose of the Mobile Robot Competition and Exhibition is to bring together researchers and students from academe and industry to showcase the latest state-of-the-art mobile robot capabilities. This year saw the return of the Rescue Robot Competition, the Mobile Robot Exhibition, and the Robot Challenge, and the addition of a new event, the Open Interaction Event.

The Rescue Robot Competition

For the fifth time, the Rescue Robot Competition was run at AAAI, helping raise awareness of the unique challenges involved in urban search and rescue (USAR) operations. This competition gives researchers the opportunity to test their systems on the Reference Test Arenas for Urban Search and Rescue Robots, developed by the National Institute of Standards and Technology (NIST) (Jacoff, Weiss, and Messina 2003).1 The arena is shown in figure 1 and is described more completely in the RoboCup Rescue article in this issue of AI Magazine, along with the scoring metrics used to evaluate the participant robots.

The Rescue Robot event requires the participants to deploy robots that demonstrate the ability to perform USAR-relevant tasks, such as dealing with cluttered, highly unstructured environments, mapping these environments, and providing a useful interface for human operators. In 2004, seven competing teams developed unique systems with very diverse characteristics. Three place awards and an innovation award were presented at this year's competition. The awardees are shown in figure 2. The place awards were based solely on the teams' performances during the competition missions. The award for innovation was given to the team exhibiting a particularly novel implementation or technical advancement.

First Place Awarded to Swarthmore College

The Swarthmore College team, from Swarthmore, Pennsylvania, deployed two robots controlled by a single operator to explore the arenas. The Swarthmore team's human-robot interface allowed the operator to adjust the level of autonomy of each robot to effectively manage concurrent exploration of the yellow and orange arenas.


Once a robot found a victim, its mapping system enabled the operator to record the necessary victim data in an online form, include images of the victim's situation and surrounding environment, and effectively note the location on the map. The Swarthmore team used this system to locate and map victims more quickly than the other teams, and so scored very well in each of its missions.

Second Place Awarded to MITRE

The MITRE team, from McLean, Virginia, deployed three semiautonomous robots controlled by a single operator. The team demonstrated control of multiple robots by integrating semiautonomous teaming behaviors with an effective custom interface to explore the yellow and orange arenas. The three robots were similarly equipped with sensors for laser ranging, infrared proximity, sonar, and bumpers for navigation and mapping. To find victims, they used custom pyroelectric sensors to detect body heat, along with a color pan-tilt-zoom camera. They were particularly noted for their approach to simultaneous localization and mapping (SLAM), with fused sensor data from all three robots displayed on a single map accurately showing the locations of victims and obstacles. This team's mapping implementation earned an award for innovation as well.
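The article does not detail MITRE's SLAM pipeline, so the sketch below shows only one common ingredient of such fused displays: combining per-robot occupancy grids by summing evidence in log-odds space. It assumes the grids are already registered to a shared coordinate frame, which in practice is the hard part that SLAM solves.

```python
# Illustrative sketch only: log-odds fusion of per-robot occupancy grids.
# Values in (0, 1) are occupancy probabilities; 0.5 means "unknown."
import numpy as np

def fuse_occupancy_grids(grids):
    """Combine independent occupancy evidence by summing in log-odds space."""
    fused_log_odds = np.zeros_like(grids[0], dtype=float)
    for grid in grids:
        p = np.clip(grid, 1e-6, 1.0 - 1e-6)        # avoid log(0)
        fused_log_odds += np.log(p / (1.0 - p))    # 0.5 contributes nothing
    return 1.0 / (1.0 + np.exp(-fused_log_odds))   # back to probabilities

# Three robots' partial maps of a 100 x 100 cell arena.
maps = [np.full((100, 100), 0.5) for _ in range(3)]
maps[0][10, 20] = 0.9  # robot 1 saw an obstacle here
maps[1][10, 20] = 0.8  # robot 2 agrees, so the fused cell grows more confident
fused = fuse_occupancy_grids(maps)  # fused[10, 20] is about 0.97
```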

Third Place Awarded to PARC

The Palo Alto Research Center (PARC) team, from Palo Alto, California, competed with custom-built robotic systems predominantly developed by high school students associated with PARC's Institute for Educational Advancement. This team implemented two different modular serpentine designs, each teleoperated via tether, and an innovative localization approach. Equipped with cameras and microphones, the PARC team's robots were the only robots to explore the extremely confined spaces within the rubble-strewn red arena. The system also included a tanklike robot that deployed sensor motes for map creation.

Other Participants

Other participating teams demonstrated a wide variety of implementations (see figure 3). The team from the University of New Orleans used a small, field-deployable, tracked vehicle equipped with a color camera, thermal camera, and two-way audio. Its tether transmitted power along with video and audio to a hardened control unit used by the operator, while a second operator performed tether management from the point of entry into the collapse.


Figure 1. The NIST Reference Test Arena.

Teams from Utah State University and the University of Manitoba, in Canada, developed fully autonomous, custom robot implementations for the competition. These teams attempted to deploy inexpensive technologies that could be easily replicated, deployed in swarms, and considered expendable during search operations. The Scarabs team, from Los Angeles, California, implemented a custom-built tracked robot.

Since the Eighteenth National Conference on Artificial Intelligence (held in 2002 in Edmonton, Alberta), the competitors in the Robot Rescue Competition have participated in a study of human-robot interaction (HRI) (Maxwell et al. 2004; Scholtz et al. 2004; Yanco, Drury, and Scholtz 2004). Performance data was captured and archived for teams taking part in the study, including video of robots moving within the arenas, video of the operator interfaces, and video of the operator workload and tendencies during each mission. Over these three competitions, we have observed four trends in the development of the systems: (1) increasing video area; (2) more teams generating maps; (3) improving interfaces; and (4) an increasing number of robots.

Increasing video area. We have seen a trend toward a larger portion of the screen area being devoted to showing the robot's-eye view of the world. To successfully find victims, the operator must have a good view of the environment. We have also found that operators rely heavily upon the video screen for navigation. Larger video windows make it easier to accomplish these tasks.

More teams generating maps. Over the years, we have seen more robot entries create maps of the environment. This trend can be attributed to rules changes that encourage machine-created maps. It might also be attributed to the fact that maps can help to provide an awareness of where the robot is in the environment.

Improving interfaces. Interaction with the robot systems has, in general, become much easier. Text-based interfaces are now rare. There is reduced window occlusion in the interfaces. In general, teams have moved to using a single monitor for a display; however, this year we saw two teams introduce interfaces that included multiple monitors. While this increases the amount of useful information that can be displayed, it also increases the cognitive load on the operator.

Increasing numbers of robots. Five of this year's seven competitors had multirobot entries. In contrast to prior years, most of the entries had robots that operated concurrently.


Figure 2. The AAAI 2004 Rescue Robot Award Recipients.

Top: Swarthmore College. Middle: MITRE. Bottom: PARC.

There have also been changes in the difficulty of the victim placement. In the past two years, victims can be on the surface, trapped, in a void, or entombed. The ratio of victims in each of these categories changes to make the task more difficult as the rounds progress, with no surface victims likely to be present by the final round. Additionally, the number of victims is reduced as the rounds progress, making it more difficult to find victims in the later rounds.

This year's competition featured the recently developed robot localization and tracking system used to capture quantitative robot performance data. NIST deployed this tracking system, which uses golf-ball-sized radio frequency tags affixed to the robots (actively chirping ultra-wideband signals), to capture time-stamped location data and produce graphical overview displays of robot position within the arena for each mission. This tracking data is helpful to reconstruct missions and identify both successes and failures but is not yet incorporated into the performance metric used for scoring. However, measures such as "percent of arena searched," "average search speed," and other such metrics can be determined and will be considered for inclusion in future events.
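To make the suggested measures concrete, here is a minimal sketch of how "percent of arena searched" and "average search speed" could be derived from time-stamped tag positions. The sample format, cell size, and cell count are illustrative assumptions, not NIST's actual data format or scoring code.

```python
# Hypothetical derivation of coverage and speed metrics from tracking data.
import math

def coverage_and_speed(track, arena_cells, cell_size=0.5):
    """track: list of (t_seconds, x_meters, y_meters) samples for one robot."""
    # Percent searched: fraction of grid cells the robot passed through.
    visited = {(int(x // cell_size), int(y // cell_size)) for _, x, y in track}
    percent_searched = 100.0 * len(visited) / arena_cells

    # Average speed: path length divided by elapsed mission time.
    distance = sum(math.hypot(x2 - x1, y2 - y1)
                   for (_, x1, y1), (_, x2, y2) in zip(track, track[1:]))
    elapsed = track[-1][0] - track[0][0]
    return percent_searched, (distance / elapsed if elapsed > 0 else 0.0)

track = [(0.0, 0.0, 0.0), (10.0, 1.0, 0.0), (20.0, 1.0, 1.0)]
print(coverage_and_speed(track, arena_cells=400))  # (0.75, 0.1)
```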

Also on display during the competition were recently developed virtual robot/arena environments called USARsim.2 These high-fidelity simulation tools, developed principally by the University of Pittsburgh, allow robot programmers and mechanical designers access to virtual collapsed structures and USAR-specific robots. They enable researchers to develop robotic behaviors, demonstrate advanced capabilities, and iterate mechanical designs without the difficulties associated with physical robot hardware.


Figure 3. Other Teams Included (top) the University of New Orleans, (bottom left) the University of Manitoba, and (bottom right) the Scarabs.

Currently, the NIST arenas and some more realistic testing facilities are modeled, along with a variety of actual robots. Plans to include entire building-collapse models are underway for demonstration next year. The goal of future competitions is to combine development efforts from both the simulated and physical environments, quicken the pace of advancement of practical robot capabilities, and demonstrate comprehensive emergency response scenarios.

Overall, the competition goals were achieved this year by evaluating state-of-the-art technologies, methods, and algorithms applied to search and rescue robots through objective testing in relevant environments, statistically significant repetitions, and comprehensive data collection. Although several teams demonstrated advances in certain key capabilities, more collaboration between teams is needed to produce ultimately effective systems for deployment. When viewed as a stepping-stone between the laboratory and the real world, this competition provided an important opportunity to foster such collaborative efforts and further raised expectations for next year's implementations. It also enticed many new researchers into the field of USAR robots.

The Mobile Robot Exhibition

The purpose of the Robot Exhibition is to give researchers an opportunity to present their latest robotics and embodied AI research in a noncompetitive environment.

Due to various hardware failures that plagued the exhibitors, only three teams were able to participate in the event this year: the MU Mites from the University of Missouri at Columbia, the Stony Brook Robot Design Team from the State University of New York at Stony Brook, and a team from Washington University in St. Louis. The research presented spanned human-robot interaction, multimodal interfaces, and robotic architectures.

The University of Missouri, Columbia, presented a group of small Lego robots built using Botball kits (figure 4). The MU Mites robots perform group behaviors by moving in formations based on recognized gestures sketched on a personal digital assistant (PDA). The user sketches a configuration of robot icons with an arrow to designate the relative direction of travel. Hidden Markov models were trained to recognize sketched symbols, such as arrows, rectangles, ellipses, and lines. The sketch interface also supports editing commands, such as making an x to delete a robot from the formation or moving the robot icons to form a new formation. Commands are sent from the PDA to the HandyBoard controlling the robots using the infrared (IR) port; the PDA is programmed to send Sony remote control signals that are received and interpreted by the HandyBoard. Formations are accomplished by using distributed control with no explicit communication between robots. Each robot has brightly colored panels on the rear and sides, as well as a color camera (the CMUcam) mounted on a servomotor. The robots stay in formation by tracking the appropriate color.
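A minimal sketch of the color-tracking formation behavior just described, with blob extraction abstracted away (the MU Mites used a CMUcam) and with gains and setpoints that are illustrative assumptions rather than the team's actual values:

```python
# Hypothetical follower step: keep a neighbor's colored panel centered in the
# image and at a fixed apparent size, with no robot-to-robot communication.

def follow_step(blob_x, blob_area, image_width=80,
                target_area=300, k_turn=0.02, k_drive=0.005):
    """Return (turn, drive) commands from one camera frame.

    blob_x: horizontal pixel position of the tracked color blob.
    blob_area: blob size in pixels, a rough proxy for distance to the leader.
    """
    turn = k_turn * (blob_x - image_width / 2)   # steer to re-center the blob
    drive = k_drive * (target_area - blob_area)  # close or open the gap
    return turn, drive

# Blob right of center and small (leader far ahead): turn right and speed up.
print(follow_step(blob_x=60, blob_area=150))  # (0.4, 0.75)
```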

The Stony Brook Robot Design Team has focused on three main areas of research in the creation of JESTER (joke entertainment system with trivia-emitted responses; figure 5): navigation, computer vision, and user interface and human interaction.

JESTER's navigation system uses both infrared sensors and computer vision to avoid obstacles and distinguish between people and other objects. JESTER has six infrared sensors; when any of them detects an object, the robot base stops moving and lets the computer vision system determine the next step. A motion-detection system, modified from Konrad Rzeszutek's Motion Detection Toolkit,3 determines whether there is motion in front of the robot and thus distinguishes between facing into a crowd and facing a wall.
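The crowd-versus-wall test boils down to frame differencing. Below is a minimal sketch of the idea, not the actual toolkit code, with thresholds that are illustrative assumptions:

```python
# Hypothetical frame-differencing check: a static scene (a wall) changes few
# pixels between frames; a crowd of moving people changes many.
import numpy as np

def motion_detected(prev_frame, frame, pixel_thresh=25, fraction_thresh=0.01):
    """Return True if enough pixels changed between two grayscale frames."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    changed = np.count_nonzero(diff > pixel_thresh)
    return changed / diff.size > fraction_thresh

rng = np.random.default_rng(0)
wall = np.full((120, 160), 128, dtype=np.uint8)            # static scene
crowd = rng.integers(0, 256, (120, 160), dtype=np.uint8)   # changing scene
print(motion_detected(wall, wall))    # False: no motion, probably a wall
print(motion_detected(wall, crowd))   # True: motion, probably people
```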



Figure 4. The University of Missouri Mites.

The interface and human interaction components were designed keeping in mind the principle that humanlike avatars help ease the interaction between robots and human users. Most avatars do not have a sense of emotion or a means to visually portray their emotions. Thus, by integrating a realistic avatar, built with software created by Haptek Technology, with an intelligent conversation system, the interface is able both to respond to human inputs appropriately and to react with a display of appropriate emotions. The end result is that users spend an average of approximately 40 percent more time interacting with the system, with 92 percent of users reporting that they enjoyed their experience, compared to 84 percent of users interacting with the nonemotional interface.

One of the most challenging aspects of mobile robotics is the task of integration. Researchers in fields as diverse as motion planning, machine learning, and computer vision typically write components, since mobile robots depend on all of these technologies. Combining these heterogeneous components into a stable system is more than just a middleware challenge, since each part of the system has its own failure modes, which may be exacerbated by placement in a complex system. Correct handling of faults is not just a quality-of-service issue but also a safety issue, since the robots interact with humans, often in crowded environments.

To address these problems, Washington University in St. Louis has developed a framework that allows implementation of numerous perceptual and control tasks as individual processes, or services, which interact according to explicitly defined interfaces.


Figure 5. JESTER with AAAI-04 Attendees.

These interfaces allow the information provided by each component in the system to be abstracted in order to ease integration. Running services in many small processes improves fault tolerance, since any number of services can fail due to programming faults without affecting the rest of the system.

While it is clearly important to be able to handle a wide range of failures, application authors should not be required to implement routines to test and react to every known mode of failure for every application, even if the failures are abstracted to a common interface. Thus, the framework also provides transparent fault tolerance to users of system services. Errors in software and hardware are detected, and corrective action is taken. Services can be restarted or removed from the system, and clients are reconnected to the same service or to another service implementing the same interface without intervention from the application programmer. The Washington University team successfully demonstrated its failure-tolerant framework on its robot, Lewis (figure 6).
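The Washington University framework itself is not reproduced in the article, but the restart-on-failure half of the idea can be sketched in a few lines: a supervisor runs each service in its own process and revives it when it dies, so a crash in one component never takes down the rest. Everything below (names, the backoff interval, the simulated fault) is an illustrative assumption:

```python
# Hypothetical supervisor: one process per service, restarted on failure.
import multiprocessing as mp
import time

def supervise(name, target):
    """Run `target` in its own process; restart it whenever it exits."""
    while True:
        proc = mp.Process(target=target, name=name)
        proc.start()
        proc.join()                 # blocks until the service process dies
        print(f"service {name!r} exited; restarting")
        time.sleep(1.0)             # brief backoff before the restart

def flaky_vision_service():
    """Stand-in for a perceptual task that occasionally crashes."""
    time.sleep(2.0)
    raise RuntimeError("simulated camera fault")

if __name__ == "__main__":
    supervise("vision", flaky_vision_service)  # restarts every ~3 seconds
```

Reconnecting clients to a restarted (or substitute) service behind the same interface, as the team's framework does, would sit on top of a supervisor like this.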

The Robot Challenge

The Robot Challenge is an event designed to encourage and showcase research in robot-human interaction and operations in typical human environments. The goal of the Robot Challenge is to design a robot system that can fully participate in the AAAI conference by registering, navigating around the conference hall using natural landmarks and human assistance, providing information on the conference and local area, making casual conversation with other conference attendees, and giving a talk and taking questions. The common thread among all of these subtasks is interaction with humans, who are often not intimately familiar with mobile robot systems.

One of the key challenges for this event remains reliable perception in a dynamic, uncertain environment. With people and other robots moving about, doors opening and closing, and the occasional rearrangement of (possibly previously mapped) furniture, matching sensor readings to a previously learned map is difficult. This problem can be addressed through the use of probabilistic reasoning based on the robot's likely position, but as the uncertainty grows, it is possible for the robot to become irrecoverably lost. Currently, becoming lost requires intervention by an operator to reinitialize the system.
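With a particle-filter localizer, one common symptom of becoming lost is that even the best hypotheses explain the current sensor readings poorly. A minimal sketch of such a check, assuming a hypothetical sensor_likelihood measurement model and an illustrative threshold:

```python
# Hypothetical "am I lost?" check for a particle-filter localizer.
import random

def reweight_and_check(particles, weights, sensor_likelihood, lost_ratio=1e-2):
    """Reweight particles by the current observation; flag a lost robot."""
    weights = [w * sensor_likelihood(p) for p, w in zip(particles, weights)]
    total = sum(weights)
    if total < lost_ratio / len(particles):
        return particles, weights, True          # operator must reinitialize
    weights = [w / total for w in weights]       # normalize and carry on
    return particles, weights, False

particles = [random.uniform(0.0, 10.0) for _ in range(100)]   # 1-D poses
weights = [1.0 / len(particles)] * len(particles)
near_door = lambda pose: 1.0 if abs(pose - 5.0) < 1.0 else 1e-6
_, _, lost = reweight_and_check(particles, weights, near_door)
print(lost)  # False while some particles still explain the reading
```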

The single entrant in the 2004 Robot Challenge was Lewis, developed by a team at Washington University in St. Louis (see figure 6). Lewis is an iRobot B21r robot with a pair of cameras, a laser range finder, sonars, and bump sensors to aid in navigation. Lewis uses the CMU CARMEN mapping package4 to build maps and navigate in dynamic environments. People interact with Lewis through an onboard keyboard and touchscreen interface. The main improvement over the highly successful 2003 entry (at the International Joint Conference on Artificial Intelligence in Acapulco, Mexico) was the addition of a highly fault-tolerant autonomic control framework. This framework consists of two elements: (1) a finite-state-machine-based controller that sequences individual task-achieving behaviors, and (2) a service-based middleware layer that provides robustness in the face of hardware and software failures.
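A minimal sketch of the finite-state-machine idea: each state runs one task-achieving behavior, and sensor predicates trigger transitions. The states mirror the Challenge steps narrated below, but the predicate names are placeholder assumptions, not the team's code:

```python
# Hypothetical FSM sequencing Lewis's Robot Challenge behaviors.
TRANSITIONS = {
    # state: [(sensor predicate, next state), ...]
    "wait_for_elevator": [("door_open", "enter_elevator")],
    "enter_elevator":    [("inside_elevator", "ride_elevator")],
    "ride_elevator":     [("door_open", "exit_elevator")],
    "exit_elevator":     [("outside_elevator", "find_sign")],
    "find_sign":         [("sign_seen", "navigate_to_booth")],
    "navigate_to_booth": [("at_booth", "register")],
    "register":          [("registered", "follow_guide")],
}

def step(state, sensors):
    """Advance one tick given a dict of boolean sensor predicates."""
    for predicate, next_state in TRANSITIONS.get(state, []):
        if sensors.get(predicate, False):
            return next_state
    return state  # no transition fired; keep running the current behavior

state = "wait_for_elevator"
state = step(state, {"door_open": True})
print(state)  # enter_elevator
```

Note that a noisy predicate such as door_open firing at the wrong moment switches the machine into the wrong mode, which is exactly the failure the team reports below.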

Lewis was intended to begin the Robot Challenge at the elevator on the first floor of the conference center. After requesting help with the elevator button, Lewis sensed when the elevator door opened, entered the elevator, and asked for the second-floor button to be pressed.


Figure 6. Lewis, from Washington University in St. Louis.

On sensing the elevator door opening for a second time, Lewis exited the elevator and looked for a special sign indicating the proper registration booth. Lewis then navigated to the appropriate booth, continually avoiding obstacles in its way. Once at the booth, Lewis waited in line, and once it was at the front of the line, it interacted with the registrar using preprogrammed speech scripts and keyboard input. Finally, Lewis followed a guide, either another (red) robot or a human wearing a red shirt, to the exhibition hall.


Figure 7. Grace (left, behind desk), from Carnegie Mellon University, Interacting with Lewis.

An additional obstacle to success in the Robot Challenge event continues to be the robustness of the complete robot system and the ability to gracefully recover from failures in both hardware and software. Particularly difficult to deal with are failures induced by noisy sensing. While dealing with the elevator, Lewis had mixed success in determining the open/closed status of the elevator door, and performance was unreliable. Inaccurate sensor information caused Lewis to switch operational modes at the wrong time, and the robot sometimes believed it was entering the elevator when it was exiting. Success in locating and navigating to the registration booth was also mixed; confusion over status related to riding in the elevator did not allow Lewis to properly switch from elevator mode to sign-finding mode.

One of the most anticipated parts of the demonstration was an intended natural robot-robot interaction between Lewis and Grace (Carnegie Mellon University's roboceptionist, described later) at registration. Repeated failures to produce a conversation that was recognized by both robots were exacerbated by the need to turn off much of Grace's language-processing ability, again showing the difficulty of integrating complete systems in a real-world setting.

The difficulties in getting robots out of the lab and into the real world are slowly being addressed but remain largely unsolved. Unreliable sensing in dynamic environments can confuse robots and lead to failures; these failures are not yet adequately identified or recovered from autonomously, due to the complex and dynamic nature of the environment. Overemphasis on autonomy also limits failure recovery: a person who becomes confused would ask for help to become reoriented, but the robots in the Robot Challenge event do not yet use this recovery mode. The area of natural language and gesture understanding remains a particularly open area of research. The facility with which humans interpret language, drawing from a huge vocabulary and from rules of syntax learned over a long period of time, cannot yet be matched by a robot system. Hand-designed dictionaries and keyword matching without syntax cannot approximate this level of detail, and learning algorithms have not yet been successfully applied to learning language. Entries in the Robot Challenge explicitly addressing these issues would be very welcome.

The Open Interaction Event

The Open Interaction Event was a new addition to the AAAI robot program for 2004, aimed at showcasing the latest results in the area of robot-human interaction. The only requirement for entry in this event was that the robot systems have some significant interaction with humans. Four teams entered the Open Interaction Event in 2004: B-dog (University of New Orleans), Grace and George (Carnegie Mellon University and the Naval Research Laboratory), the University of Missouri Mites, and JESTER (State University of New York at Stony Brook). The MU Mites and JESTER were described earlier, since they were also entered in other events.

Grace (figure 7) and George (figure 8), both iRobot B21 robots, acted as a robot receptionist (Grace) and robot guide (George) for conference attendees. Both robots displayed an expressive computer-animated face on a flat-panel LCD monitor as the central element of their robot-human interaction. Rather than using error-prone speech-recognition systems, both robots accepted human input via an attached keyboard.

Grace spent the conference behind a desk close to the registration area, interacting with the attendees. Most of the interactions involved "small talk" about her origin and capabilities, although Grace was also capable of providing information on local restaurants and how to find them. This information was stored in an onboard database and accessed through a natural language interface based on the NRL NAUTILUS system (Wauchope 1990). Unfortunately, George was not able to perform its full set of guiding duties, due to some last-minute integration problems. However, George was successfully stationed outside of the exhibit hall and interacted with conference attendees in a manner similar to Grace.

Conclusions and the Future

Mobile robots are complex systems, and deploying them in a public venue is challenging. As always, hardware failures and systems integration issues reared their heads, forcing some teams to demonstrate only a subset of their whole systems. However, despite these problems, the robot events once again proved to be a success, showcasing the state of the art in mobile robotics research and exposing members of the more general AI community to the field.

The 2005 Mobile Robot Competition and Exhibition will be cochaired by Sheila Tejada (University of New Orleans) and Paul Rybski (Carnegie Mellon University) and will mark a departure from the traditional format. In the past, the conference with which the event is collocated has been held in a convention center, with an exhibition hall in which the robots can operate. In 2005, the Twentieth National Conference on Artificial Intelligence (AAAI-05) and the AAAI mobile robot events will be held in a hotel. This means that all of the entrants will be forced to deal with a less structured environment as they compete in the events. This change reflects the growing maturity of the field and the robustness of our systems. In the past decade, the competition has gone from specially designed robot-friendly environments to the clutter and uncertainty of the real world, and the events at next year's competition are sure to showcase this change.


Acknowledgements

We would like to thank all of the competitors who came to San Jose in 2004. Once again, despite the ever-present hardware failures and integration problems, they successfully demonstrated the state of the art in mobile robotics research. The competitors would also like to thank the San Jose Convention Center for keeping the air conditioning and plumbing running for the duration of the conference.

The AAAI Mobile Robot Competition and Exhibition would not have been possible without the generous support of the Defense Advanced Research Projects Agency, NASA Ames Research Center, the Naval Research Laboratory, and the American Association for Artificial Intelligence. The Robot Rescue event would not have been possible without the Reference Test Arenas for Urban Search and Rescue Robots, provided and staffed by the National Institute of Standards and Technology.

The general cochairs for the 2004 event were Bill Smart and Sheila Tejada. The event chairs were Ashley Stroupe (Robot Challenge), Bruce Maxwell (Open Interaction), Magda Bugajska (Robot Exhibition), and Jenn Casper and Adam Jacoff (Robot Rescue). The robot workshop was organized by Ric Crabbe.

Notes

1. See http://robotarenas.nist.gov/competitions.htm.

2. See http://usl.sis.pitt.edu/ulab.

3. See K. Rzeszutek's Motion Detection Toolkit: http://darnok.com/programming/motion-detection/.

4. See the Carnegie Mellon Robot Navigation Toolkit (CARMEN), developed by M. Montemerlo, N. Roy, and S. Thrun, http://www-2.cs.cmu.edu/~carmen.


Figure 8. Left: George, from Carnegie Mellon University. Right: George’s Face.


References

Jacoff, A.; Weiss, B.; and Messina, E. 2003. Evolution of a Performance Metric for Urban Search and Rescue Robots. Paper presented at the Performance Metrics for Intelligent Systems Workshop, Gaithersburg, MD, August.

Maxwell, B.; Smart, W.; Jacoff, A.; Casper, J.; Weiss, B.; Scholtz, J.; Yanco, H. A.; Micire, M.; Stroupe, A.; Stormont, D.; and Lauwers, T. 2004. 2003 AAAI Robot Competition and Exhibition. AI Magazine 25(2): 68–80.

Scholtz, J.; Young, J.; Drury, J. L.; and Yanco, H. A. 2004. Evaluation of Human-Robot Interaction Awareness in Search and Rescue. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Los Alamitos, CA: IEEE Computer Society.

Wauchope, K. 1990. A Tandem Semantic Interpreter for Incremental Parse Selection. Technical Report NRL-9288. Washington, DC: Naval Research Laboratory.

Yanco, H. A.; Drury, J. L.; and Scholtz, J. 2004. Beyond Usability Evaluation: Analysis of Human-Robot Interaction at a Major Robotics Competition. Journal of Human-Computer Interaction 19(1–2): 117–149.

William D. Smart is an assistant professor of computer science and engineering at Washington University in St. Louis. His research covers the areas of machine learning and mobile robotics. He served as the cochair of the 2003 and 2004 AAAI Mobile Robot Competition and Exhibition. He can be reached at [email protected].

Sheila Tejada is currently an assistant professor in the Computer Science Department at the University of New Orleans. She was awarded her master's and doctoral degrees in computer science from the University of Southern California in 1998 and 2002, respectively. Tejada has developed award-winning robots, such as the robot YODA, which took the silver medal at the AAAI office navigation robot competition, held in Portland, Oregon, and the robot soccer team DreamTeam, the first world champions at the RoboCup International Robot Soccer Competition in Nagoya, Japan. Most recently, the UNO Robotics Team won a technical award for research on human-agent-robot interfaces at the AAAI/IJCAI Urban Search and Rescue Competition in Acapulco, Mexico, and an Open Interaction award at AAAI 2004 as the audience's favorite for the AiBee interactive robotic art project. Tejada serves as cochair for both the AAAI-04 and AAAI-05 robot competitions.

Bruce Maxwell is an associate professor of engineering at Swarthmore College. He has degrees from Cambridge University and the Carnegie Mellon Robotics Institute. He has been bringing teams of undergraduate students to the AAAI robot competition each year since 1998 and cochaired the event in 2003.

Ashley Stroupe is a staff engineer at the Jet Propulsion Laboratory. Her research interests include autonomous rovers and cooperative multirobot systems. She is also a member of the Mars Exploration Rover flight operations team. She received her Ph.D. in robotics from Carnegie Mellon University in 2003.

Jennifer Casper is the marketing and sales manager and research director for American Standard Robotics, Inc. Casper is a nationally registered emergency medicine technician, serves as a special operations volunteer for Hillsborough County Fire Rescue, is a technical search specialist for Florida Task Force 3, and serves on committees for the Florida Fire Chief's Association. Her research is focused on human-robot interactions in urban search and rescue environments.

Adam Jacoff is a robotics research engineer in the Intelligent Systems Division of the National Institute of Standards and Technology (NIST). His research projects have included development of innovative robots for applications in machine vision, large-scale manufacturing, and hazardous environments. He designed and fabricated NIST's Reference Test Arenas for Urban Search and Rescue Robots as a representative test environment to objectively evaluate robot performance and support development of advanced mobile robot capabilities. Arenas such as these have hosted annual AAAI and RoboCup Rescue Robot League competitions since 2000. His email address is [email protected].

Holly Yanco is an assistant professor in the Computer Science Department at the University of Massachusetts Lowell. Her research interests include human-robot interaction, adjustable autonomy, assistive technology, and multiple robot teams. She graduated from the Massachusetts Institute of Technology with her Ph.D. in 2000. Before joining the University of Massachusetts Lowell, she taught at Wellesley College, Boston College, and ArsDigita University. Yanco is the AAAI Symposium Committee chair and served as the AAAI Robot Competition and Exhibition cochair in 2001 and 2002. She can be reached at [email protected].

Magda Bugajska is a computer scientist at the Navy Center for Applied Research in Artificial Intelligence. She received a B.S. in mathematics and computer science with a minor in AI and robotics from the Colorado School of Mines and an M.S. in computer science from George Mason University. Her research interests include artificial cognitive systems, robotics, and machine learning. She can be contacted at [email protected].
