

Obstacle Detection through Forward Movement
Qiuwei Shou and Alan Wu, CSCI-B657 Computer Vision, Indiana University

Motivation
• Obstacle detection and avoidance are essential capabilities for autonomous robots (land and air)

• While the traditional multi-sensor approach addresses obstacle detection, the use of a single (monocular) camera enables a more affordable solution

• Computational requirements must be manageable by embedded controllers

• The project is inspired by the paper of Mori and Scherer [1]

Dataset
• We manually assembled our image dataset, which is primarily derived from drone hobbyist videos taken from a first-person view (FPV), downloaded from YouTube

• Mostly forest scenes with various levels of luminance and tilt angles throughout flight

• Two datasets totaling ~500 previous/current frame combinations (frame-pair sampling sketched after this list)

• Set 1: ~400 dense forest
• Set 2: ~100 open field (no obstacles)
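The poster does not describe the frame-sampling procedure in code; as a rough illustration, previous/current frame pairs could be sampled from a downloaded FPV video with OpenCV as follows. The function name and the gap and stride values are placeholders, not the poster's settings.

```python
import cv2

def extract_frame_pairs(video_path, gap=10, stride=30):
    """Sample (previous, current) frame pairs from an FPV video.
    gap and stride are illustrative values, not taken from the poster."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    pairs = []
    for i in range(0, total - gap, stride):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok_prev, prev = cap.read()
        cap.set(cv2.CAP_PROP_POS_FRAMES, i + gap)
        ok_curr, curr = cap.read()
        if ok_prev and ok_curr:
            pairs.append((prev, curr))  # (previous frame, current frame)
    cap.release()
    return pairs
```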

[Diagram: Obstacle Detection Algorithm. The Previous Image and Current Image are processed through Feature Detection, Feature Filtering, Feature Matching, and Template Extraction stages to produce the decision Obstacle: Yes or No.]

Methodology
• We divide each image into 4x4 regions and use SURF for fast feature detection (sketched in code after this list)
• Only certain features in the “previous” frame are selected for matching with the “current” frame:
  • Features must be of a minimum size
  • Features must be within certain regions of the image
• We then perform template matching [1] to detect approaching object(s) that may be obstacles; see the Template Matching section for details
• Initial template window size W affects detection accuracy, as seen in the plots in the Results section
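Below is a minimal sketch of the detection and filtering steps using OpenCV's SURF (which requires opencv-contrib built with the nonfree modules). The Hessian threshold, the minimum feature size, and the particular 4x4 cells that are kept are our assumptions; the poster does not give exact values.

```python
import cv2

# SURF lives in opencv-contrib and needs a build with the nonfree modules enabled.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # threshold is illustrative

def detect_and_filter(prev_gray, min_size=10):
    """Detect SURF features in the 'previous' frame and keep only those that are
    large enough and fall inside selected cells of the 4x4 grid.  min_size and
    the choice of which cells to keep are assumptions, not the poster's exact
    filtering rules."""
    keypoints, descriptors = surf.detectAndCompute(prev_gray, None)
    if descriptors is None:
        return []
    h, w = prev_gray.shape
    kept = []
    for kp, desc in zip(keypoints, descriptors):
        if kp.size < min_size:                    # discard small features
            continue
        col = min(int(4 * kp.pt[0] / w), 3)       # which 4x4 cell the feature is in
        row = min(int(4 * kp.pt[1] / h), 3)
        if not (1 <= col <= 2 and row <= 2):      # keep features near the flight path,
            continue                              # away from the ground at the bottom
        kept.append((kp, desc))
    return kept
```

The surviving (keypoint, descriptor) pairs would then be matched against the “current” frame, e.g. with a standard descriptor matcher, before the template-matching step described later.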


[Figures: Sample Obstacle Dataset and Sample Non-obstacle Dataset (previous image / current image pairs), with examples of Feature Filtering and Template Extraction.]

Challenges
• Dataset is largely “uncalibrated”: frame captures are estimated to contain obstacles, but this is not guaranteed
• SURF is not optimal with a noisy background
• Often unable to differentiate features on the ground from protruding objects (such as trees), especially when the camera is tilted downward

References:
[1] Mori, Tomoyuki, and Sebastian Scherer. “First Results in Detecting and Avoiding Frontal Obstacles from a Monocular Camera for Micro Unmanned Aerial Vehicles.” ICRA, 2013.
[2] http://www.ingeniousingenuity.com
[3] http://clipartion.com/


Conclusion and Future Work
• Able to detect oncoming obstacles using a low-computation algorithm based on SURF features and template matching
• Future work: access a texture library (e.g., tree bark) for more targeted detection and matching; possibly combine with Visual SLAM to achieve higher fidelity in estimating obstacle depth

Template Matching
• Perform template matching [1] to determine the scale increase of an approaching object
• For each remaining feature after filtering in the “previous” frame, a small window is drawn around the feature
• A similar window is drawn around the matching feature in the “current” frame, with window size scaled up by K; the window in the “previous” frame is also resized by K
• Scalar K is determined such that the correlation distance (SSD) between the “previous” and “current” windows is minimized; discard any matches with distance D > Dmax, as well as those with K < Kmin (a code sketch of this search follows the diagram below)

[Diagram: a feature window in the Previous Frame and its match in the Current Frame. 1. Scale the window size by K; 2. Resize the feature window by K; 3. Calculate the distance for various K.]
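As a minimal sketch of this scale search, assuming grayscale frames and a discrete set of candidate K values: the initial window size of 30 matches the Results section, while the K range, Dmax, and Kmin used here are placeholders the poster does not specify.

```python
import cv2
import numpy as np

def scale_of_approach(prev_gray, curr_gray, prev_pt, curr_pt,
                      window=30, k_values=np.arange(1.0, 2.01, 0.1),
                      d_max=2000.0, k_min=1.2):
    """Estimate the scale increase K of a matched feature via SSD template matching.
    window matches the Results section; the K range, d_max, and k_min are
    illustrative placeholders."""

    def crop(img, pt, half):
        x, y = int(pt[0]), int(pt[1])
        h, w = img.shape
        if x - half < 0 or y - half < 0 or x + half >= w or y + half >= h:
            return None
        return img[y - half:y + half, x - half:x + half]

    prev_win = crop(prev_gray, prev_pt, window // 2)
    if prev_win is None:
        return None

    best_k, best_d = None, np.inf
    for k in k_values:
        half = int(round(window * k / 2))
        curr_win = crop(curr_gray, curr_pt, half)             # current window scaled up by K
        if curr_win is None:
            continue
        prev_up = cv2.resize(prev_win, curr_win.shape[::-1])  # resize previous window by K
        d = float(np.mean((prev_up.astype(np.float32) -
                           curr_win.astype(np.float32)) ** 2))  # SSD, normalized per pixel
        if d < best_d:
            best_d, best_k = d, k
    if best_k is None or best_d > d_max or best_k < k_min:
        return None                                           # discard: poor match or not approaching
    return best_k
```

Features whose best K survives these checks correspond to objects growing in apparent size between the two frames, i.e., potential obstacles.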

Results
• We achieved an accuracy of 84% (i.e., a False Neg rate of 16%) in detecting obstacles in Set 1 (dense forest) and a False Pos rate of 33% in falsely detecting obstacles in Set 2 (open field), for an initial window size of 30 and ¼ resolution relative to the original (72 ppi)
• Plots below show the effects of initial window size W and image resolution

[Plots: (1) Template Window Size vs. Detection Accuracy on Baseline Dataset: False Neg and False Pos rates over template window sizes 10 to 50; (2) Template Window Size vs. Detection Accuracy, 0.25x ppi: False Neg and False Pos rates over window sizes 20 to 40; (3) Resolution vs. Detection Accuracy, W = 30: False Neg and False Pos rates over resolution relative to original, 0 to 0.5.]