
Page 1: Vision Based Cloud Robotics - TECHLAV (techlav.ncat.edu/Presentations/Techlav_Presentation_Mohan6_10_15.pdf)

Vision Based Cloud Robotics

Mohan Muppidi, Joaquin Labrado

ACE Laboratory, UTSA

Page 2:

Overview (cont'd)

[Diagram: workflow between the robot and the cloud, beginning at "Start"]

Page 3:

Introduction

Two questions help humans navigate their environment:
1. "What does the world around me look like?"
2. "Where am I in this world?"
This problem of navigation is often solved with Simultaneous Localization and Mapping (SLAM).

If odometry and localization are based on image data, the technique is called Visual SLAM, or VSLAM for short.


Page 4:

Introduction (cont'd): Applications

[Images: Asimo robot, Nao robot, Jibo robot, Dyson 360 Eye, and Dyson 360 Eye navigation]

1. Personal robots

2. Vacuum-cleaning robots

3. Assistive robots

4. Robotic guides in museums

5. Assistive robots in libraries

6. Unmanned aerial vehicles


Page 5:

Container-based SLAM

1. Humans easily remember complex objects (such as trees, signboards, or shops) as landmarks.
2. For a robot, remembering or storing such complex objects is not only computationally expensive but also time-consuming.
3. Similar to how humans navigate their environment, this research uses images to identify landmarks.
4. The world is divided into a grid of 1×1 blocks. Each grid cell is treated as a container with landmark images.
5. For ease of localization, these blocks are grouped to follow a naming convention.
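The grid-and-container step above can be sketched in a few lines. The function name and the default 1×1 cell size are illustrative assumptions, not the authors' implementation:

```python
import math

def container_for(x, y, cell_size=1.0):
    """Return the (column, row) index of the grid container that
    holds world position (x, y), assuming square cells."""
    return (math.floor(x / cell_size), math.floor(y / cell_size))
```

For example, a robot at world position (2.4, 0.7) falls in container (2, 0); `math.floor` keeps the mapping consistent for negative coordinates as well.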


Page 6:

Container-based SLAM (cont'd)

The odometry of the robot can be determined by:
1. IMU
2. Encoders
3. Visual odometry, etc.
Images are treated as landmarks. Each container holds 8 landmark images covering the robot's 360° view. A naming convention is followed so that the coordinates of a container can be determined from its name alone.

Input image → odometry stamp and direction stamp → save it in the database based on the convention
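The slides do not spell out the exact naming convention, so the following is one plausible sketch: the robot's heading is quantized into one of eight 45° segments, and the container coordinates plus view direction are encoded directly in the database key so the location is recoverable from the name:

```python
def direction_segment(heading_deg):
    """Map a heading in degrees to one of 8 45-degree segments (0-7),
    with segment 0 centered on 0 degrees."""
    return int(((heading_deg + 22.5) % 360) // 45)

def landmark_name(x, y, heading_deg):
    """Build a database key encoding the 1x1 container coordinates
    and the view direction of the landmark image."""
    col, row = int(x // 1), int(y // 1)
    return f"X{col}_Y{row}_D{direction_segment(heading_deg)}"
```

A landmark image taken at (2.4, 0.7) while facing 90° would be stored as `X2_Y0_D2`; parsing the key recovers both the container and the viewing direction.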


Page 7:

Container-based SLAM (cont'd)

[Illustrations: levels in the landmark database (Level 0 shown); the 360° robot view divided into 8 segments]

Page 8:

Localization

• "Where am I in this world?"
• Localization lets the robot know its current location in an environment with respect to a base frame
• An advantage of the localization algorithm is that the landmark database created using container-based SLAM can be used to localize any robot
• Localization is based on image feature matching
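Feature-matching localization can be illustrated with a toy nearest-neighbor search. In a real system the descriptors would come from an image feature detector (e.g. ORB or SURF); here they are stand-in numeric vectors, and the function names are assumptions for illustration only:

```python
def match_score(desc_a, desc_b):
    """Toy similarity: negative squared distance between feature
    vectors (higher is better)."""
    return -sum((a - b) ** 2 for a, b in zip(desc_a, desc_b))

def localize(query_desc, database):
    """Return the container whose stored landmark descriptor best
    matches the query image's descriptor."""
    return max(database, key=lambda name: match_score(query_desc, database[name]))
```

Because the container name encodes its coordinates, the best-matching key directly yields the robot's estimated location.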


Page 9:

Experiment

• The experiment was conducted in the 2nd-floor hallway of the UTSA Bio Sciences and Engineering building. The experiment started and ended at the same location in the hallway.


Page 10:

Landmark Database created using SLAM


Page 11:

Localization of the robot based on the previously built database


Page 12:

Cloud-Based Localization

Limitations of the localization algorithm:
1. The robot has to be in the immediate or adjacent neighborhood of its previous location before the next sample is taken.
2. If it is not in this area, the full database has to be searched, which is time-consuming and slows down the robot.

Overcoming the limitations:
1. Performing a parallel search
2. Offloading some computing from the on-board computer
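The neighborhood-first parallel search can be sketched as below. The slides distribute this work over cloud nodes via ROS; this toy uses a local thread pool instead, and `search_cell` is a hypothetical per-container matcher, not part of the original system:

```python
from concurrent.futures import ThreadPoolExecutor

def neighborhood(col, row):
    """The 3x3 block of containers around (col, row)."""
    return [(col + dc, row + dr) for dc in (-1, 0, 1) for dr in (-1, 0, 1)]

def parallel_search(query, cells, search_cell):
    """Search many containers concurrently and keep the best match.
    search_cell(query, cell) is assumed to return a (score, cell) pair."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda c: search_cell(query, c), cells)
    return max(results)
```

Searching the 9-cell neighborhood first keeps the common case fast; only on a miss would the same routine be run over the full database.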


Page 13:

Illustration of cloud-based parallel processing using ROS

The current container (X,Y) and its 8 neighbors:

(X-1,Y+1)  (X,Y+1)  (X+1,Y+1)
(X-1,Y)    (X,Y)    (X+1,Y)
(X-1,Y-1)  (X,Y-1)  (X+1,Y-1)


Page 14:

Time comparison between the serial and parallel algorithms

Serial algorithm: 8.23 seconds per 100 frames, without any manipulation (such as resizing or cropping) of the database images.

Parallel algorithm: 1.84 seconds per 100 frames, under the same conditions.

The parallel algorithm reduces processing time by about 77%.

Page 15:

Quadrotors

• Fly through the use of 4 motors

• Many different applications:

• Swarms

• Eye in the sky

Page 16:

Quadrotors

• In the lab:

• ARDrone 2.0

• Parrot Bebop

• Erle Copter

Page 17:

Formation Control

• Multiple UASs (unmanned aerial systems) flying and moving as a group

• Useful for localization

• SAR applications

Page 18:

Formation Control in the Lab

• 4 total quadrotors in a formation

• Use of cameras to keep formations

• Implement SLAM

Page 19:

3D Depth Cameras

• LiDAR

• RGB-D data

Page 20:

3D Depth Cameras

• Xbox Kinect

• Asus XTION PRO

Page 21:

Use of the Xbox Kinect

• Point clouds
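Turning a Kinect depth frame into a point cloud uses standard pinhole back-projection: X=(u-cx)·z/fx, Y=(v-cy)·z/fy, Z=z. The intrinsics (fx, fy, cx, cy) are camera-specific values assumed as parameters here; this is a sketch, not the lab's implementation:

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (list of rows, metres) into 3D
    points using the pinhole camera model."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip invalid (zero) depth readings
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

The resulting point list is the raw material for the point-cloud processing mentioned above; invalid (zero) depth pixels, common in Kinect frames, are simply dropped.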

Page 22:

Use of the Xbox Kinect

• Depth data for controllers