Diver/UUV underwater measurement system · 2014. 5. 21.




Diver/UUV underwater measurement system
D.A. Pilgrim 1, A. Duke 2, G. Symes 2

Abstract

Sea mines laid by an aggressor pose a deadly threat to both military operations and commercial shipping. Advances in technology have transformed relatively simple devices into a diverse range of complex and intelligent weapon systems.

An important requirement when investigating a contact is to confirm that it is indeed a mine and to derive features such as overall size, protrusions, graphical descriptors and the like. This acquisition of visual intelligence assists the subsequent process of deciding what action needs to be taken.

Video images obtained by sub-sea cameras have variable perspective and scale, but this problem may be solved by the use of structured lighting comprising an array of diode lasers. A five-spot laser system has been developed jointly between the University of Plymouth and QinetiQ Bincleaves for use by divers and unmanned underwater vehicles (UUVs). An important aspect of this research is the ongoing development and optimisation of verifiable measurement and sampling techniques, which may then be employed with confidence in hostile waters.

Introduction

The UK Royal Navy uses remotely operated vehicles (ROVs) as the principal means of sea-mine disposal, the need being to remove the diver from this hazardous task wherever possible and to increase the depth at which mine disposal can be conducted beyond that achievable by non-saturation diving methods. The vehicle is first deployed to conduct a video survey to confirm the target identity. It is then used to place a demolition charge close to the mine before returning to the deploying platform, where it is recovered prior to remote detonation of the charge.

The maturity of current vehicle technology, compared to that available to the RN in the 1980s, offers the prospect of remote solutions for a wider variety of military objectives. These objectives encompass every ocean environment, from operations at 200-300 m depth to those in shallow and turbulent waters, including the surf zone, with typical water depths of less than 10 m. Central to any operation is the need to survey and identify target features, and a variety of sensors are being considered for this purpose. Image acquisition using video cameras remains a highly attractive option, since the technology to produce good-quality pictures is readily available. These cameras are robust, inexpensive and easily interfaced with existing systems. A method that would allow dimensional data to be extracted from captured images would therefore be very welcome and of significant benefit to the RN.

The analysis of underwater photographs to study and record seabed phenomena has been a technique in common use since the 1940s (e.g. Ewing et al., 1946). Early quantitative methods were usually limited to the use of downward-looking cameras, in which a known altitude and fixed angular field of view were used to calculate image dimensions. A major problem associated with the analysis of ROV photographs of the seabed is that camera height is unknown and the use of zoom lenses results in a variable field of view. Also, oblique camera angles introduce perspective, whereby the scale changes from the bottom to the top of the image. It is possible to find these unknowns through the use of structured lighting.

Technical solution

The simplest system of structured lighting comprises a single pair of parallel lasers that project two spots onto the seabed in a camera's fixed field of view (i.e. no zoom). Since the separation of the two spots on the image represents a known distance, the image can be scaled approximately. This has been the most widely used laser-spot system for scaling photographs, but it suffers two significant disadvantages: it works for the whole image only if the camera is set at right angles (in both planes) to the seabed or similar target; and it cannot be used with a zoom lens (variable field of view) unless there is a separate input of camera-target distance.
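As an illustration of this two-spot scaling, the following Python sketch derives a single scale factor from the measured spot separation. The 8.0 cm spacing and all names are hypothetical values chosen for the example, and the result is valid only under the camera-normal, fixed-zoom conditions just described.

```python
# Illustrative sketch of two-spot laser scaling (hypothetical names/values).
# Valid only when the camera is normal to the target and the zoom is fixed,
# so that one scale factor applies to the whole image.

LASER_SEPARATION_CM = 8.0  # assumed real-world spacing of the two lasers

def scale_factor(spot_a_px, spot_b_px):
    """Return cm per pixel from the image coordinates of the two laser spots."""
    dx = spot_b_px[0] - spot_a_px[0]
    dy = spot_b_px[1] - spot_a_px[1]
    pixel_separation = (dx * dx + dy * dy) ** 0.5
    return LASER_SEPARATION_CM / pixel_separation

def measure_cm(length_px, spot_a_px, spot_b_px):
    """Approximate real length of an image feature spanning length_px pixels."""
    return length_px * scale_factor(spot_a_px, spot_b_px)
```

Note that the scale factor is strictly correct only in the plane of the two spots; the disadvantages listed above apply unchanged.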

1 Institute of Marine Studies, University of Plymouth, Plymouth PL4 8AA
2 QinetiQ Bincleaves, Newtons Road, Weymouth, Dorset DT4 8UR


Two improved systems, following different approaches, have been used in work at the Monterey Bay Aquarium Research Institute (MBARI). In the first (Davis and Pilskaln, 1992), the camera provides real-time readouts of focus distance and zoom (field width). In the second approach (Davis, 1999; personal communication), an arrangement of four lasers is used: three parallel to the optical axis to give perspective angle, and one crossing two of the others to give a measure of range and hence scale. The trigonometry involved is solved using the Laser Measure program (Davis, 1998), which runs in the Optimus image measurement and processing system.

The approach taken here is to fit a camera with an array of five diode lasers (Figs 2 and 3).

Fig.2 Abiss laser scaling array mounted on the camera of a Phantom XTL

Fig.3 Schematic of the Abiss laser scaling array

Four of these lasers are set parallel and 8.0 cm apart; they project a pattern of four dots onto the seabed. A fifth laser is set at an angle to, but in the same plane as, the bottom pair, so that a dot appears between these two at a horizontal position which depends upon the camera-target range (Figs 4 and 5).
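The ranging principle can be sketched as follows. This fragment is not the paper's implementation: the convergence angle, and the assumption that the dot's fractional position between the bottom pair maps linearly to its lateral travel, are illustrative.

```python
import math

# Hedged sketch of the fifth (ranging) laser: angled in the plane of the
# bottom pair, its dot drifts across the 8.0 cm gap as camera-target range
# increases. The convergence angle below is an assumed value, not taken
# from the paper.

PAIR_SEPARATION_CM = 8.0       # spacing of the bottom parallel lasers
THETA_RAD = math.radians(4.0)  # assumed angle between ranging laser and the pair

def camera_target_range_cm(fraction_across):
    """Range from the ranging dot's fractional position between the bottom dots.

    fraction_across is 0.0 where the angled beam starts (beside one bottom
    laser) and 1.0 once it has crossed to the other bottom dot.
    """
    lateral_travel_cm = fraction_across * PAIR_SEPARATION_CM
    return lateral_travel_cm / math.tan(THETA_RAD)
```

With these assumed values, a dot midway between the bottom pair would indicate a range of roughly half a metre; a real array would be calibrated against measured ranges rather than nominal mounting geometry.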

[Fig.3 annotations: parallel laser array, camera faceplate, ranging laser, ROV lamp; dimensions as marked on the drawing]

Page 3: Diver/UUV underwater measurement system · 2014. 5. 21. · where it is recovered prior to remote detonation of the charge. The maturity of current vehicle technology, compared to

Fig.4 Screen display of Benthic Imager software showing 5 laser spots on the seabed

In Figure 5, distance CB is calculated in this way during image analysis, and further calculation gives the required distance CP between the camera and the principal point.

Fig.5 Conversion of virtual to real, and real to virtual distances in the vertical plane

The four parallel lasers form a square only if the camera is normal to the plane of projection (seabed or whatever); usually the dots form a trapezium (Fig.5). The camera's perspective angle (φ), measured downwards from a plane parallel to the seabed, is calculated from the ratio of the lengths of the upper (smaller) to lower (longer) sides of the trapezium.
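Under simple pinhole assumptions (lasers truly parallel to the optical axis, spot pairs 8.0 cm apart, and the camera-to-principal-point distance already recovered from the ranging laser), that ratio can be turned into φ as in this hypothetical sketch. The closed form is our own derivation from those assumptions, not a formula quoted from the paper.

```python
import math

# Hedged pinhole-model sketch: recover the perspective angle phi from the
# trapezium of four parallel-laser spots. With the lasers parallel to the
# optical axis and s apart, the near/far spot pairs sit at axial distances
# D -/+ (s/2)/tan(phi), so the image-length ratio r = upper/lower equals
# (D - k)/(D + k) with k = (s/2)/tan(phi). Solving for phi gives the
# formula below (our derivation, not the paper's).

S_CM = 8.0  # spacing of the parallel lasers

def perspective_angle_rad(upper_px, lower_px, d_cm):
    """phi from the upper (smaller) and lower (longer) trapezium sides.

    d_cm is the camera-to-principal-point range; upper_px == lower_px
    (a square, r = 1) corresponds to a camera normal to the seabed.
    """
    r = upper_px / lower_px  # < 1 for an oblique, downward-looking camera
    if r >= 1.0:
        return math.pi / 2   # normal incidence (or degenerate input)
    return math.atan(S_CM * (1 + r) / (2 * d_cm * (1 - r)))
```

As a sanity check of the derivation, a camera 100 cm from the principal point at φ = 45° would image the spot pairs at 96 and 104 cm along the axis, and the formula recovers 45° from that 96:104 ratio.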

The varying scale of the image may be ascertained by further analysis such as that illustrated in Figure 5, in which the camera is at C and has field of view ACB in the vertical plane. P is the principal point on the optical axis. A′P′B′ is the projection of seabed APB onto the image plane. For trigonometrical calculations, it is convenient to place the image plane at Bd (clearly triangles BCd and A′C′B′ are similar). L(1,2) and L(3,4) represent pairs of lasers set equidistantly above and below the optical axis and parallel to it. Projected spots S(1,2) and S(3,4) on the seabed appear on the image at s(1,2) and s(3,4). Any point Y on the seabed appears at point y on the image. The problem is to convert the virtual distance py, measured in pixels, into a real distance PY measured in,


say, centimetres. The trigonometrical solution to this problem is fairly straightforward. Essentially, the connection between PY and py is the common angle α in triangles PCY and pCy, and our approach has been to use angles at C to convert between virtual (pixel) measurements in plane Bd and real (cm) measurements in the equivalent plane BA. For example, it is easy to show that since:

$\alpha = \tan^{-1}[py / Cp]$

where Cp, calculated separately, is a virtual (pixel) distance, then:

$PY = \tan[\alpha + \pi/2 - \varphi] \times alt - RP$

where φ is the perspective angle of the camera, alt is the altitude of the camera above the bed, and RP is the distance measured along the seabed between the ROV and the principal point, P. To convert from real to virtual distances in the vertical plane, the equivalent equations are:

$\alpha = \tan^{-1}[(RP + PY)/alt] - \pi/2 + \varphi$

and

$py = \tan[\alpha] \times Cp$

A similar treatment resolves the real-to-virtual and virtual-to-real conversions in the horizontal plane. (The terms 'horizontal' and 'vertical' as used here refer only to the up-down and left-right planes apparent to the camera; for example, seabed plane AR need not be horizontal in the gravimetric sense.)
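Transcribed directly into code (with Cp in pixels, alt and RP in the same real units, and φ in radians), the two directions of the vertical-plane conversion should invert one another. This is a minimal sketch using the paper's symbols; the numeric values in the self-check are hypothetical.

```python
import math

# Direct transcription of the vertical-plane conversions: py is a virtual
# (pixel) distance from the principal point p on the image, PY the real
# distance from P along the seabed, Cp the virtual camera-image distance
# (pixels), phi the perspective angle, alt the camera altitude, and RP the
# seabed distance from the ROV to the principal point P.

def real_from_virtual(py, cp, phi, alt, rp):
    alpha = math.atan(py / cp)                       # alpha = tan^-1[py/Cp]
    return math.tan(alpha + math.pi / 2 - phi) * alt - rp

def virtual_from_real(py_real, cp, phi, alt, rp):
    alpha = math.atan((rp + py_real) / alt) - math.pi / 2 + phi
    return math.tan(alpha) * cp
```

Because each direction is the algebraic inverse of the other, converting a pixel distance to the seabed and back should return it unchanged, which gives a quick self-check of any transcription of these equations.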

Conclusions

Video imaging techniques such as those described here offer significant benefits in terms of increasing the capability to confidently gather accurate underwater target intelligence. These will both speed and enhance the identification process.

Further, it has been demonstrated that modestly priced equipment and robust, low-complexity hardware designs can be utilised to deliver excellent results, though it is apparent that there is scope for further development and optimisation.

It is also recognised that successful target identification has much to do with well-developed, tested and practised techniques. It is in shallow-water work, such as that described here, that remote techniques can be critically assessed and measurements can be verified by more direct methods. In this way, techniques will be perfected and subsequently used with confidence as remotely operated machines operate in hostile waters, or descend to otherwise inaccessible depths.

References

Crone, D.R. (1963). Elementary photogrammetry. Frederick Ungar, New York, 197pp.
Davis, D.L. (1998). Laser Measure: Users' Guide Version 2.0. Monterey Bay Aquarium Research Institute, Moss Landing, California.
Davis, D.L. and Pilskaln, C.H. (1992). Measurements with underwater video: camera field width calibration and structured lighting. Marine Technology Society Journal, 26(4), 13-19.
Earle, S.A. (1996). Sea change: a message from the oceans. Constable, London, 361pp.
Ewing, M., Vine, A.C. and Worzel, J.L. (1946). Photography of the ocean bottom. Journal of the Optical Society of America, 36(4), 5-12.
Pilgrim, D.A. (1998). The observation of underwater light - part 1. The Hydrographic Journal, 90, 23-27.
Pilgrim, D.A. (1999). The observation of underwater light - part 2. The Hydrographic Journal, 91, 13-18.
Pilgrim, D.A., Parry, D.M., Jones, M.B. and Kendall, M.A. (2000). ROV image scaling with laser spot patterns. Submitted to Underwater Technology.
Saikku, R.M. (2000). Calibration of lens distortion of an ROV camera. BSc honours project, Institute of Marine Studies, University of Plymouth.
Tusting, R.F. and Davis, D.L. (1992). Laser systems and structured illumination for quantitative undersea imaging. Marine Technology Society Journal, 26(4), 5-12.
Wakefield, W.W. and Genin, A. (1987). The use of a Canadian (perspective) grid in deep-sea photography. Deep-Sea Research, 34, 469-478.