Optimal Camera Placement of Large Scale Volume Localization System for Mobile Robot


<ul><li><p>Optimal Camera Placement of Large Scale Volume Localization System for Mobile Robot</p><p>Yingfeng WU 1,a, Gangyan LI 1,b, Huan YAN 1,c</p><p>1 School of Mechanical and Electric Engineering, Wuhan University of Technology, Wuhan 430070, China</p><p>a wuyf1997@whut.edu.cn, b ganyanl@whut.edu.cn, c ahyanhuan@163.com</p><p>Keywords: large scale volume localization system, camera network, relative position algorithm</p><p>Abstract: Large Scale Volume Localization Systems (LSVLS) are widely applied in industry. An LSVLS built on a camera network offers suitable precision at reasonable cost, and is a promising approach to metrology and localization in industry and daily life. Optimal camera placement is significant for lowering cost and enabling automatic control of mobile robots in a large workspace. The authors optimize camera placement with a relative position algorithm (RPA). The resulting placement greatly improves the efficiency of camera deployment in an LSVLS and is verified on a model of a field-winding mobile vehicle.</p><p>Introduction</p><p>Mobile robots are used in daily life and in industry: they sweep floors, weld workpieces, and transport goods, with or without full autonomy. Some mobile robots must work automatically in a large workspace, and a Large Scale Volume Localization System can be used to track them across it.</p><p>Large Scale Volume Localization Systems (LSVLS) are widely applied in industry, including aircraft and ship manufacturing, robot guidance and motion analysis, for accurate 3D coordinate metrology and the tracking of moving objects [1]. An LSVLS can be built from several technologies, including laser trackers, theodolites, iGPS and high-density CCD cameras.
A camera network can enlarge the field of tracking while keeping high (millimetre-level) measurement accuracy, and CCD cameras are cheaper than the alternative technologies. The mobile spatial coordinate measuring system-II (MScMS-II) [2] is a representative system built from multiple CCD cameras and a promising system for metrology and localization in industry. We use multiple CCD cameras (a camera network) to build an LSVLS that tracks mobile robots in a large workspace: wherever a robot is, its location can be detected precisely. Every point in the large-scale volume must be seen by three or more cameras, so dozens or even hundreds of cameras may be needed to observe the mobile robot precisely in such a workspace. The question is how to arrange so many cameras so as to reduce the overlap between adjacent cameras while leaving no point unobserved. An optimized camera arrangement is therefore significant for lowering cost and assuring automatic control of the mobile robots, and automated camera placement is essential for setting the cameras in the right places and reducing the time spent assembling the LSVLS. Some LSVLS [3,4] have encountered the same problem, and some do not take it into account at all [2].</p><p>Advanced Materials Research Vols. 945-949 (2014), pp. 1390-1395. Online available since 2014/Jun/06 at www.scientific.net. (2014) Trans Tech Publications, Switzerland. doi:10.4028/www.scientific.net/AMR.945-949.1390</p><p>Visual coverage is an important quantifiable property of camera networks. Algorithms such as genetic algorithms [5], quality metrics [6], greedy selection heuristics [7] and particle swarm optimization [8] have been used to optimize the placement of cameras. However, existing research mainly addresses camera placement in 2D [5,9], camera placement over a subset of 3D space [6], or takes only a single
factor into account [6,7]. We will explore automated camera placement of dozens or even hundreds of cameras for an LSVLS, based on integer linear programming (ILP) [9].</p><p>Problem Definition</p><p>Modeling a camera's field-of-view in 3-D space</p><p>The field-of-view of a camera can be described as a rectangular pyramid, as in Fig.1(a). The parameters of this pyramid can easily be calculated from the intrinsic camera parameters.</p><p>Fig.1 Model of a camera's field-of-view</p><p>A camera can be modeled as a matrix FOV, in which each row is the coordinate of a vertex of the rectangular pyramid: the first row is the apex and the remaining rows are the corners of the base. The camera in Fig.1(a), whose projection centre is at O(0,0,0) and whose optical axis points along Z, is modeled as FOV(0,0,0,(0,0,0)):</p><p>FOV(0,0,0,(0,0,0)) = [ 0, 0, 0; tan(θx/2)d, tan(θy/2)d, d; −tan(θx/2)d, tan(θy/2)d, d; −tan(θx/2)d, −tan(θy/2)d, d; tan(θx/2)d, −tan(θy/2)d, d ]</p><p>A camera in a general pose, as in Fig.1(b), can be regarded as the camera of Fig.1(a) translated to a point S and rotated by rz, rx and ry around the Z, X and Y axes respectively. Such a camera is modeled as FOV(rz, rx, ry, S).
</p><p>FOV(rz, rx, ry, S) = FOV(0, 0, 0, (0, 0, 0)) · Rz · Rx · Ry + S    (1)</p><p>in which</p><p>Rz = [ cos(rz), sin(rz), 0; −sin(rz), cos(rz), 0; 0, 0, 1 ],  Rx = [ 1, 0, 0; 0, cos(rx), sin(rx); 0, −sin(rx), cos(rx) ],  Ry = [ cos(ry), 0, −sin(ry); 0, 1, 0; sin(ry), 0, cos(ry) ],  and S is the 5 × 3 matrix each of whose rows is (Sx, Sy, Sz)    (2)</p><p>(a) The green circle marks a camera; the rectangular pyramid is its field-of-view; d is the depth of the field-of-view; θx is the angle between planes ODA and OBC, and θy is the angle between planes OAB and OCD. (b) A camera translated and rotated to S from the camera in Fig.1(a).</p><p>Modeling Workspace</p><p>In the ideal case cameras could be placed continuously in space, but the continuous problem cannot be solved directly, so the workspace is divided into grids; as the grid spacing tends to 0, the approximate solution converges to the continuous-case solution. The workspace is gridded separately in the observation area (a plane near the ground, spacing Δobser) and in the camera placement area (a plane near the ceiling, spacing Δplace). The grid points of the camera placement area are the points where cameras can be placed, and the grid points of the observation area are the points where the target can stay, as in Fig.2.
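The field-of-view model and the pose transform of Eqs. 1-2 can be sketched numerically. The NumPy snippet below is an illustrative reading of them, not the paper's MATLAB program; the function names, the sign pattern of the base corners, and the rotation sign conventions are assumptions, since the printed matrices are only partly legible:

```python
import numpy as np

def fov_at_origin(theta_x, theta_y, d):
    """FOV matrix of the camera in Fig.1(a): projection centre at O(0,0,0),
    optical axis along +Z. Rows are the apex and the four base corners of
    the viewing pyramid; the corner sign pattern is an assumed symmetric one."""
    tx = np.tan(theta_x / 2) * d
    ty = np.tan(theta_y / 2) * d
    return np.array([[0.0, 0.0, 0.0],   # apex O
                     [ tx,  ty, d],     # corner A
                     [-tx,  ty, d],     # corner B
                     [-tx, -ty, d],     # corner C
                     [ tx, -ty, d]])    # corner D

def fov_pose(fov0, rz, rx, ry, s):
    """Eq.1: FOV(rz, rx, ry, S) = FOV(0,0,0,(0,0,0)) . Rz . Rx . Ry + S.
    Vertices are row vectors, so they are right-multiplied by the rotations;
    the sign conventions below are standard ones, assumed from context."""
    cz, sz = np.cos(rz), np.sin(rz)
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    Rz = np.array([[cz, sz, 0.0], [-sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, sx], [0.0, -sx, cx]])
    Ry = np.array([[cy, 0.0, -sy], [0.0, 1.0, 0.0], [sy, 0.0, cy]])
    return fov0 @ Rz @ Rx @ Ry + np.asarray(s)
```

With rz = rx = ry = 0 the transform reduces to a pure translation by S, which is a quick sanity check on any chosen sign convention.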
Assume that the numbers of grid points in the observation area and in the camera placement area are n_ConP and n_CamP respectively. The 3D coordinates of these grid points are stored in the matrices ConP (n_ConP × 3) and CamP (n_CamP × 3). Obviously, the smaller Δobser and Δplace are, the more grid points there are and the more accurate the visual coverage becomes, but the more time and computing resources are needed.</p><p>Fig.2 The workspace is divided into grids, separately in the observation area and the camera placement area; a camera is placed on grid point S, and its optical axis points toward the observation area.</p><p>Solution of Optimal Camera Placement</p><p>Optimal camera placement has two sides: a camera placed on a grid point of the camera placement area should see as many grid points of the observation area as possible, and every grid point of the observation area must be seen by no fewer than 3 cameras. First, assume a camera is placed on grid point CP of the camera placement area, as in Fig.2, with its optical axis pointed toward the observation area. According to Eq.1, the camera on CP can be expressed as FOV(rz, rx, ry, CP), where rz, rx and ry range from −π to π. We discretize rz, rx and ry over this range in steps of 0.1; all discretized postures are listed in a matrix PosM (n_PosM × 3), which enumerates all postures of the camera on CP, where n_PosM is the number of postures. We propose a relative position algorithm (RPA) to find the optimal camera placement on each grid point of the camera placement area, and solve the selection based on the ILP algorithm [9].</p><p>A program in MATLAB is designed to compute the optimal camera placement:</p><p>Grid the workspace and build the matrices ConP (n_ConP × 3) and CamP (n_CamP × 3) automatically.</p><p>Input the posture matrix PosM (n_PosM × 3), which contains all postures of the camera.</p><p>Define a matrix CAM of size n_CamP × n_ConP.
</p><p>for i = 1 to n_CamP</p><p>  Define a matrix CovP_i of size n_PosM × n_ConP</p><p>  for j = 1 to n_PosM</p><p>    for k = 1 to n_ConP</p><p>      if f(PosM(j,:), CamP(i,:), ConP(k,:)) = 1 — the visibility test, solved with the RPA, of whether a camera placed at CamP(i,:) with posture PosM(j,:) can see ConP(k,:)</p><p>        CovP_i(j,k) = 1 — the camera on placement point i with posture j covers grid point k</p><p>      else</p><p>        CovP_i(j,k) = 0</p><p>      end</p><p>    end</p><p>  end</p><p>  In CovP_i, each row corresponds to a posture. The row containing the maximum number of 1s is found and its index is assigned to n_BP_i, so posture n_BP_i is the best posture of the camera on CamP(i,:). Set CAM(i,:) = CovP_i(n_BP_i,:).</p><p>end</p><p>In the matrix CAM, the minimum number of rows is then found such that every column of the selected rows sums to no less than 3, so that every grid point of the observation area is seen by at least 3 cameras. This minimum number is n_min, and a linear array SerN holds the serial numbers of the selected rows. A matrix OPTI_CAM of size n_min × 6 is defined, whose m-th row holds the 3D coordinates and the best posture (rz, rx, ry) of camera SerN(m).</p><p>Simulation and Experimental Results</p><p>A large workpiece, 25 m in diameter, is manufactured by field-winding composite material. A winding mobile vehicle winds the composite material onto the surface of the workpiece. The winding mobile vehicle consists of two parts: the mobile vehicle and the winding equipment on its platform. The mobile vehicle moves around the workpiece at a constant speed on a trajectory of 30 m.
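The selection step above, choosing the fewest placements so that every observation grid point is covered at least 3 times, can be sketched in code. The paper solves this step with ILP [9]; the snippet below substitutes a plain greedy loop as an illustrative stand-in, takes the 0/1 coverage matrix as given, and its function name is hypothetical:

```python
import numpy as np

def select_cameras(cam, k=3):
    """Pick rows of the 0/1 coverage matrix `cam` (candidate placements x
    observation grid points) until every grid point is covered by at least
    k cameras. The paper formulates this selection as ILP [9]; this greedy
    loop is only an illustrative approximation, not the exact optimum."""
    cam = np.asarray(cam, dtype=bool)
    need = np.full(cam.shape[1], k, dtype=int)   # outstanding demand per point
    chosen, remaining = [], set(range(cam.shape[0]))
    while need.any():
        if not remaining:
            raise ValueError("candidates cannot cover every point k times")
        # pick the candidate that satisfies the most still-unmet demand
        best = max(remaining, key=lambda i: int((cam[i] & (need > 0)).sum()))
        if not (cam[best] & (need > 0)).any():
            raise ValueError("candidates cannot cover every point k times")
        chosen.append(best)
        remaining.discard(best)
        need = np.maximum(need - cam[best].astype(int), 0)
    return chosen
```

In the paper this selection is what reduces the candidate grid to the 26 cameras of Table 1; here it merely illustrates the at-least-3-coverage constraint.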
A Large Scale Volume Localization System with a camera network is used to navigate the mobile vehicle. The workspace is divided into grids, as in Fig.3, and camera placement in the LSVLS is computed with our algorithm: 26 cameras are needed to ensure that the mobile vehicle can be seen anywhere along its trajectory. The 3D coordinates and postures of the cameras are calculated and listed in Table 1.</p><p>Fig.3 The workspace for field-winding a large workpiece is divided into grids, in the observation area (blue) and the camera placement area (yellow); the green circles in the camera placement area mark the coordinates of the cameras. (a) 3D drawing of the optimal camera placement. (b) Projection drawing of the optimal camera placement.</p><p>Table 1: The 3D coordinates and the postures of the optimal camera placement</p><p>camera x y z rz rx ry | camera x y z rz rx ry</p><p>1 1.3 -4.3 5.5 1.8 0.1 -0.1 | 14 -3.6 5.4 5.5 1.3 0.2 0.1</p><p>2 5.4 1 5.5 0.1 -0.1 -0.1 | 15 -4.7 4.5 5.5 1.2 0.2 0.2</p><p>3 3 4.6 5.5 2.5 0 -0.1 | 16 -6.1 2.4 5.5 0.7 0.1 0.3</p><p>4 -1.3 5.3 5.5 1.6 0.1 -0.1 | 17 -6.4 0.9 5.5 0.4 0.1 0.2</p><p>5 -3.9 3.9 5.5 1.3 0 0.1 | 18 -6.5 -0.1 5.5 2.7 0 0.3</p><p>6 -3.6 -4.2 5.5 1.9 -0.1 0 | 19 -6.4 -1.1 5.5 2.5 0 0.3</p><p>7 2 -5.1 5.5 1.5 -0.1 0.1 | 20 -5 -4.2 5.5 2.1 -0.2 0.2</p><p>8 6.3 1.5 5.5 2.5 0.1 -0.2 | 21 -3 -5.8 5.5 2.4 -0.2 0.1</p><p>9 5.6 3.3 5.5 3.1 0.2 -0.2 | 22 -1.6 -6.3 5.5 2.2 -0.2 0.1</p><p>10 4.3 4.9 5.5 1.9 0.2 -0.2 | 23 1.8 -6.2 5.5 1.8 -0.2 -0.1</p><p>11 2.6 5.9 5.5 2.4 0.2 -0.1 | 24 5.4 -3.6 5.5 1 -0.1 -0.2</p><p>12 0.7 6.5 5.5 2.1 0.3 0 | 25 5.7 -3.2 5.5 3.1 -0.1 -0.2</p><p>13 -1.3 6.4 5.5 0.9 0.3 0 | 26 6.2 -1.8 5.5 0.6 -0.1 -0.2</p><p>Conclusion and Future Work</p><p>Optimal camera placement is significant for lowering cost and facilitating automatic target control in a Large Scale Volume Localization System
(LSVLS). The authors propose a relative position algorithm (RPA) to find the optimal camera placement, and the result greatly improves the efficiency of camera placement in an LSVLS. The RPA can be used to solve camera placement in general, but as the workspace grid is refined, its computing time also increases substantially.</p><p>References</p><p>[1] Estler W T, Edmundson K L, Peggs G N, et al. Large-scale metrology - an update. CIRP Annals - Manufacturing Technology, 2002, 51(2): 587-609.</p><p>[2] Franceschini F, Galetto M, Maisano D, et al. Distributed Large-Scale Dimensional Metrology. London: Springer, 2011.</p><p>[3] Zhou Hu. Study on the Vision-based Target Tracking and Spatial Coordinates Positioning System. Ph.D. Dissertation, Tianjin University, 2011.</p><p>[4] Xiong Zhi. Research on Network Deployment Optimization of Workspace Measurement and Positioning System. Ph.D. Dissertation, Tianjin University, 2012.</p><p>[5] Nikolaidis S, Arai T. Optimal arrangement of ceiling cameras for home service robots using genetic algorithms. RO-MAN 2009: The 18th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 2009: 573-580.</p><p>[6] Xing Chen. Design of Many-Camera Tracking Systems for Scalability and Efficient Resource Allocation. Ph.D. Dissertation, Stanford University, June 2002.</p><p>[7] Ercan A O. Object Tracking via a Collaborative Camera Network. Ph.D. Dissertation, Stanford University, June 2007.</p><p>[8] Hu X, Eberhart R. Solving constrained nonlinear optimization problems with particle swarm optimization. Proceedings of the Sixth World Multiconference on Systemics, Cybernetics and Informatics. 2002, 5: 203-206.</p><p>[9] Chakrabarty K, Iyengar S S, Qi H, et al.
Grid coverage for surveillance and target location in distributed sensor networks. IEEE Transactions on Computers, 2002, 51(12): 1448-1453.</p><p>DOI references: [1] http://dx.doi.org/10.1016/S0007-8506(07)61702-8; [9] http://dx.doi.org/10.1109/TC.2002.1146711</p></li></ul>