

GARDS: Generalized Autonomous Robotic Delivery System

Jade Zsiros, Brian Blalock, Darien Craig, Sudharsan Vaidhun, Alexander Wang†, Zhishan Guo
{jadezsiros, bblalock}@knights.ucf.edu, [email protected]

[email protected], [email protected], [email protected] of Electric and Computer Engineering, University of Central Florida, Florida

†Trinity Preparatory School, Florida

Abstract—In this demonstration, we present a generalized platform customized to suit the needs of a fast, power-efficient, and autonomous delivery system. As an application demonstration, we deployed a mapping and localization system based on a combination of sensor sources. An online navigation algorithm uses the map information to deliver to a destination within the mapped area.

Index Terms—autonomous robots, GPU, embedded system

I. INTRODUCTION

The past decade has seen significant progress in energy-efficient computational hardware and machine learning techniques. In this work, we demonstrate a platform that combines these technological advancements for applied research in autonomous delivery vehicles. The predominant sensory inputs for autonomous vehicles are, in one way or another, visual: the surroundings can be sampled using a combination of camera, LiDAR, and radar sensors. The parallel nature of processing these sensor data makes them well suited to implementation on a GPU. While GPU operations are typically power-hungry, embedded low-power compute modules such as the NVIDIA Jetson TX2, with a peak power consumption of 15 W, are suitable for our needs. Details of the hardware setup are described in Section II-A.

Besides offering a good trade-off between power consumption and computational capability, a key feature of the platform is the general nature of its implementation. The platform benefits delivery robots as follows:

• Capability to run machine learning inference algorithms. The machine learning algorithms need no modification from a desktop implementation, thanks to the general-purpose GPU onboard.

• Offline computation for global path planning. Offline computation eliminates the need for constant communication with a database server.

• Low power consumption. The efficient computing platform saves power, and regenerative braking improves battery longevity.

Fig. 1. The GARDS platform with a LiDAR, a stereo camera, and a Jetson module mounted on an AWD chassis, along with the supporting power system.

II. DEMONSTRATION

A. Hardware

GARDS is built on the F1/10 platform [6]. The mechanical chassis models an all-wheel-drive (AWD) system; unlike differential-drive platforms, which are suited to smooth indoor terrain, GARDS includes a suspension system for navigation on minimally structured terrain. The computational core of the platform is the NVIDIA Jetson TX2 [5]. The Jetson module has a quad-core ARM CPU, a dual-core NVIDIA Denver CPU, and a 256-core Pascal GPU capable of CUDA computing. The sensor system includes a 270-degree LiDAR, a stereoscopic ZED camera [8], and inertial measurement units. A snapshot of the platform integrated with the sensor system is shown in Figure 1.

B. Software

The software architecture of the platform is based on the Robot Operating System (ROS) [7], installed on a Linux-based Ubuntu distribution with a kernel modified for Tegra devices such as the Jetson TX2. The ROS framework follows a publisher-subscriber model for inter-process communication among the various processes in the system. The stereo camera publishes two video streams and internal IMU data for a frame of reference.

Fig. 2. Occupancy grid, with dark regions showing obstacles and inaccessible areas. Ten pixels of the occupancy grid correspond to roughly 1 foot.

Fig. 3. Constructed map of the cost function showing the cost to the destination. The destination is marked with a red star.
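ROS itself provides the transport, but the publisher-subscriber pattern is easy to illustrate in plain Python. The sketch below is illustrative only: the `Broker` class and the "scan" topic are invented names, not part of the ROS API.

```python
from collections import defaultdict
from typing import Any, Callable

class Broker:
    """Routes messages from publishers to every subscriber of a topic.

    Illustrative stand-in for the ROS master/transport layer; not ROS code.
    """
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        # A process registers interest in a topic by name.
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        # The publisher never references subscribers directly; the broker
        # fans the message out to whoever registered for the topic.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("scan", received.append)       # e.g. a mapping node
broker.subscribe("scan", lambda m: None)        # e.g. a logging node
broker.publish("scan", {"ranges": [1.2, 0.8]})  # e.g. the LiDAR driver
```

The decoupling shown here is what lets the camera driver, the mapping node, and the navigation node run as independent processes that only agree on topic names and message formats.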

C. Navigation

The sensory information from the two sources is integrated for mapping, localization, and navigation. First, the video streams from the camera are interpreted by the ZED software package into a depth image. The RTAB-Map package [4] combines the information from the stereo camera and the LiDAR to construct an occupancy grid. A sample occupancy grid is shown in Figure 2.
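As a rough illustration of how range returns can be rasterized into such a grid, consider the sketch below. This is a simplified stand-in, not the RTAB-Map implementation: `build_grid` and its parameters are hypothetical, and the 0.1 ft/cell resolution is assumed from the Figure 2 caption (10 pixels per foot).

```python
import math

# Assumed from the Fig. 2 caption: 10 cells per foot.
RESOLUTION = 0.1  # feet per cell

def build_grid(readings, pose, width, height):
    """Rasterize polar range returns into an occupancy grid.

    readings: (angle_rad, range_ft) pairs, e.g. from a LiDAR scan;
    pose: (x_ft, y_ft) position of the sensor in the map frame.
    Returns a height x width grid with 0 = free, 1 = occupied.
    """
    grid = [[0] * width for _ in range(height)]
    x0, y0 = pose
    for angle, rng in readings:
        # Convert each polar return to Cartesian coordinates, then to the
        # nearest grid cell, and mark that cell as occupied.
        gx = round((x0 + rng * math.cos(angle)) / RESOLUTION)
        gy = round((y0 + rng * math.sin(angle)) / RESOLUTION)
        if 0 <= gx < width and 0 <= gy < height:
            grid[gy][gx] = 1
    return grid

# Two returns: one obstacle 1 ft ahead, one 0.5 ft to the side.
grid = build_grid([(0.0, 1.0), (math.pi / 2, 0.5)], (0.0, 0.0), 20, 20)
```

A real pipeline also ray-traces the free space between the sensor and each hit and fuses many scans probabilistically; this sketch only shows the geometry of the cell mapping.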

With the occupancy grid, a graph search algorithm such as D* is applied to create a cost map similar to the one shown in Figure 3. The navigation software is custom-written for this platform, using a graph search algorithm to create a routing gradient over the occupancy grid for later vector navigation. Since in-building delivery only needs to navigate to the doors inside the building, the navigation system generates one cost map for each endpoint and stores the map for later use, cutting down on compute time while driving. With the occupancy grid relatively stable and unchanging, the construction of the cost map takes only a few seconds onboard; for example, for an area of 5000 sq. ft., cost map construction takes less than 15 seconds. The robot loads the map into memory at the beginning of navigation and then computes the drive vector from the map gradient at its current location, as reported by RTAB-Map's localization system. The localization system runs continuously while driving to keep the algorithm updated with the current position and orientation of the vehicle. Local obstacle avoidance during navigation is handled by the LiDAR.
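The cost-map flood and the gradient-following drive step can be sketched as follows. This minimal example uses Dijkstra's algorithm as a stand-in for D*, and the helper names (`cost_map`, `drive_step`) are invented for illustration; it conveys the idea rather than the platform's actual code.

```python
from heapq import heappush, heappop

INF = float("inf")

def cost_map(grid, goal):
    """Dijkstra flood from the goal: cost (in steps) from every free cell
    to the goal. Grid values: 0 = free, 1 = obstacle (cost stays INF)."""
    h, w = len(grid), len(grid[0])
    cost = [[INF] * w for _ in range(h)]
    gy, gx = goal
    cost[gy][gx] = 0
    heap = [(0, gy, gx)]
    while heap:
        c, y, x = heappop(heap)
        if c > cost[y][x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 0 \
                    and c + 1 < cost[ny][nx]:
                cost[ny][nx] = c + 1
                heappush(heap, (c + 1, ny, nx))
    return cost

def drive_step(cost, pos):
    """Descend the cost gradient: step toward the cheapest neighbor."""
    y, x = pos
    moves = [(y + dy, x + dx)
             for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= y + dy < len(cost) and 0 <= x + dx < len(cost[0])]
    return min(moves, key=lambda p: cost[p[0]][p[1]])

grid = [[0, 0, 0],
        [0, 1, 0],   # one obstacle in the center
        [0, 0, 0]]
cmap = cost_map(grid, (0, 2))   # destination at the top-right cell
step = drive_step(cmap, (2, 0)) # robot at the bottom-left cell
```

Because the cost map is computed once per destination and stored, the per-step work while driving reduces to the cheap neighbor lookup in `drive_step`, which matches the paper's point about cutting compute time during navigation.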

III. FUTURE WORK

Currently, we have exploited the general-purpose computing units of the GPU only for processing video streams. Since the platform also has a variable-speed multi-core CPU cluster, we plan to implement energy-efficient multi-core scheduling algorithms [1]–[3]. Such scheduling algorithms bound latency while optimizing resource utilization and energy consumption. We expect to quantitatively measure the benefit of variable-speed processors for applications in autonomous mobile robots.

ACKNOWLEDGMENT

This work is supported by NSF grant CNS-1850851.

REFERENCES

[1] A. Bhuiyan, S. Sruti, Z. Guo, and K. Yang. Precise scheduling of mixed-criticality tasks by varying processor speed. In Proceedings of the 27th International Conference on Real-Time Networks and Systems, RTNS '19, pages 123–132, Toulouse, France, Nov. 2019. Association for Computing Machinery.

[2] A. Bhuiyan, K. Yang, S. Arefin, A. Saifullah, N. Guan, and Z. Guo. Mixed-criticality multicore scheduling of real-time gang task systems. In 2019 IEEE Real-Time Systems Symposium (RTSS), 2019.

[3] Z. Guo, A. Bhuiyan, D. Liu, A. Khan, A. Saifullah, and N. Guan. Energy-Efficient Real-Time Scheduling of DAGs on Clustered Multi-Core Platforms. In 2019 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), pages 156–168, Apr. 2019. ISSN: 1545-3421.

[4] M. Labbe and F. Michaud. RTAB-Map as an Open-source LiDAR and Visual SLAM Library for Large-scale and Long-term Online Operation. Journal of Field Robotics, 36(2):416–446, 2019.

[5] NVIDIA Jetson TX2. developer.nvidia.com/embedded/jetson-tx2.

[6] M. O'Kelly, V. Sukhil, H. Abbas, J. Harkins, C. Kao, Y. V. Pant, R. Mangharam, D. Agarwal, M. Behl, P. Burgio, and M. Bertogna. F1/10: An Open-Source Autonomous Cyber-Physical Platform. arXiv:1901.08567 [cs], Jan. 2019.

[7] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng. ROS: An Open-source Robot Operating System. In ICRA Workshop on Open Source Software, 2009.

[8] ZED Camera from Stereo Labs. www.stereolabs.com/zed/.