
[IEEE 2012 International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2012) - Istanbul (2012.08.26-2012.08.29)]

Robot-Assisted Medical Visualization with Floating Images

Sandor Markon
Kobe Institute of Computing
Kobe, Japan
Email: [email protected]

Satoshi Maekawa
National Institute of Communication and Information Technology
Kyoto, Japan
Email: [email protected]

Ahmet Onat
Sabanci University
Istanbul, Turkey
Email: [email protected]

Abstract—Accessing volumetric medical data requires advanced visualization techniques. One approach is to show slices in situ, by projecting images into space at the proper position. We propose using floating images, that is, undistorted real images of displays, appearing at the desired position and orientation. Such floating images are made possible by the new optical device ‘DCRA’ invented at NICT in Japan. To enhance the freedom of the user, we have developed a robot-assisted interactive visualization system, where the user can ‘hold’ the image slice in her/his hand and freely change its position and orientation, thus inspecting any part of the volumetric data set as desired. The hand position is detected by a stereo camera and image processing. We describe the structure and operation of our system, and show samples of its usage.

I. INTRODUCTION

Visualization of volumetric data such as those generated by CT (Computed Tomography) [3] or MRI (Magnetic Resonance Imaging) [2] is an important but difficult task. Modern visualization systems provide many options and assistance for the users, but they cannot eliminate the fundamental conflict between the flat, static nature of images shown on conventional displays and the spatially distributed nature of the data. Overcoming this difficulty requires intense mental work by the user, and remains one obstacle to the wider application of these important medical tools.

There are already some new display technologies that allow more direct visualization of volumetric data. For instance, the Perspecta system [4] can display each ‘voxel’ at the proper location in space, by projecting the data at the appropriate timings onto a rotating screen. The capability to show data at spatially correct positions comes at the price of having no means of direct interaction, as it is not possible to reach into the image, which is shown on a screen rotating at high speed.

Here we propose a new interactive visualization system for presenting volumetric data, based on our new optical device called ‘DCRA’. This system is in a line of development including the ‘Volume Slicing Display’ of Cassinelli et al. [5] and our previously proposed interactive volume slicing system [6], and shares with them the interactivity provided by being able to ‘touch’ the planes of the floating images.

Fig. 1. Light Rays in the DCRA Device

II. INTERACTIVE VOLUME SLICING DISPLAY SYSTEM

A. The ‘DCRA’ Optical Device

The National Institute of Communication and Information Technology (NICT) in Japan [1] has developed a new optical device called ‘DCRA’ (Dihedral Corner Reflector Array). This device is constructed by arranging an array of micro-mirrors along a plane, in such a way that the two mirrors of each pair are perpendicular to each other and to the plane. Fig. 1 shows schematically the principle of operation. Light rays originating from a light source on one side of the DCRA are reflected by two micromirrors in sequence, and on the other side they converge at the mirror image of the original light source. This image is therefore a real image, and it can be observed exactly as if the light source were there in the air.

We should note some differences between the real image projected by the DCRA and real images obtained by classical

2012 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining

978-0-7695-4799-2/12 $26.00 © 2012 IEEE

DOI 10.1109/ASONAM.2012.147



Fig. 2. Floating Touch Screen with DCRA

optical devices, such as convex lenses or concave mirrors. With the DCRA, it became possible to create undistorted real images, which stay static and constant in the air, even when observed from different directions. In the following we assume that the image source is a conventional display, typically an LCD, placed under the DCRA.

Although the floating images of the DCRA are seen in the air, they are also different from stereoscopic (‘3D’) images. While stereoscopic images depend on binocular parallax (binocular disparity) as the depth cue to the viewer, the real image of the DCRA also has true focal depth (for the accommodation cue), thus eliminating one reason for the viewing fatigue sometimes reported with 3D images. It also supports motion parallax, can be observed simultaneously by several persons, and needs no glasses etc. for viewing. For this reason, we call such images “floating images” to distinguish them from 3D.
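The imaging rule described above amounts to a reflection through the device plane: a point source below the DCRA produces a real image at its mirror position above it. A minimal geometric sketch in Python (the function name and the choice of a horizontal device plane at z = 0 are our illustrative assumptions, not from the paper):

```python
import numpy as np

def dcra_image_point(source, plane_z=0.0):
    """Real-image position of a point source, mirrored through the DCRA plane.

    Each ray is reflected by two perpendicular micromirrors, so the rays
    reconverge at the reflection of the source through the device plane
    (assumed here to be the horizontal plane z = plane_z).
    """
    image = np.asarray(source, dtype=float).copy()
    image[2] = 2.0 * plane_z - image[2]  # reflect the height coordinate
    return image

# A display pixel 5 cm below the device images to a floating point 5 cm above it.
floating = dcra_image_point([0.10, 0.20, -0.05])
```

Because the mapping is a rigid reflection, the image is undistorted and stays fixed in space regardless of the viewing direction, which matches the properties noted above.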

B. Development of Display Systems with DCRA

Since the DCRA became available for application in interface systems, there have been several research results showing various possibilities for making use of its unusual imaging properties.

One of the first applications was the ‘Floating Touch Screen’ [7], shown in Fig. 2. Using an infrared touch screen without a glass panel, it was possible to create a system where the user could ‘touch’ images in the air and interact with them.

Another example is the ‘Airflow Interaction System’ [8], shown in Fig. 3. Here we made use of the fact that, as the

Fig. 3. Airflow Interaction System with DCRA

Fig. 4. The Simple Volume Slicing Display

images appear in air, we can blow air through the image. Actually, users of the previous ‘Floating Touch Screen’ had often tried to blow into the floating image, especially when it was showing realistic-looking flames or similar content. By using a laser sensor to detect the small perturbations in air density when the user blows into the image, it became possible to interact with floating images without touching them.

We have also developed a simple volume slicing display with DCRA, as shown in Fig. 4. For constructing this volume slicing display, we observe that when the display under the DCRA moves, the floating image moves together with it. Here the display is placed on a linear bearing, and it has a handle for the user to pull it to any location. The position of the display is detected by a precision position sensor, and the displayed image slice is selected according to this position. This manual handling method was found to be quite satisfactory.

The system of the present paper was developed on the basis of our experience with the above systems.
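The slice-selection step of the manual display can be sketched as a linear mapping from the position sensor reading to a slice index. The travel length, the function name, and the linear mapping are our assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def select_slice(volume, display_pos, travel=0.3):
    """Map a linear position sensor reading to an axial slice of the volume.

    `display_pos` is the sensor reading in metres along the bearing and
    `travel` its usable length (hypothetical values). The reading is
    clamped to the travel range, then scaled to a slice index.
    """
    n_slices = volume.shape[0]
    frac = np.clip(display_pos / travel, 0.0, 1.0)
    return volume[int(round(frac * (n_slices - 1)))]
```

As the user pulls the display along the bearing, each new sensor reading selects the slice whose in-volume depth matches the display's physical position, so the floating image always shows the data at the place where it appears.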

III. IMPROVED VOLUME SLICING DISPLAY

Although our previous ‘Interactive Volume Slicing Display’ is capable of showing floating image slices in the correct position, it has only limited choices for image selection. Since the display moves along a one-dimensional guide, the user



Fig. 5. System Structure (infrared LED illumination; stereo infrared cameras feeding PC1, a Windows Vista machine, over FireWire for hand image processing; hand coordinates passed over a LAN to PC2, an Ubuntu Linux machine, for servo control through the robot arm controller and for volume slice image generation on the display)

cannot select different image orientations, or different planes for slicing. On the basis of our experience with this system, we realized the need to enhance the freedom of the user.

However, simply extending the principle of manual operation to full three-dimensional interaction would not work. While the one-dimensional positioning of the display along a horizontal guide needs negligible force, and it is easy to keep it steady, if we wanted to do the same in three dimensions, the accurate positioning of the display to a desired position would be infeasible.

Therefore, we have decided to take a different approach, as described below. The three main components of the system are the robot-assisted display, the hand position recognition sensor, and the image generating algorithm. The overall structure of the system is shown in Fig. 5.

The user’s hand is illuminated by an infrared LED light source. Stereoscopic video images are captured and fed to the image processing subsystem. The recognized hand coordinates are fed to the servo subsystem to position the servo mechanism, mirroring the hand coordinates. They are also fed to the image generation subsystem, to select the required volume slice of the volumetric data to be displayed at the hand position.

The main components of the system are described in some detail in the following sections.
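The servo subsystem's mirroring step can be sketched as reflecting the recognized hand pose through the DCRA plane: since the device projects the mirror image of the display, placing the floating slice in the user's hand means driving the display to the mirror of the hand position, with a mirrored orientation. A sketch under the assumption of a horizontal device plane at z = 0; the names are ours, not the paper's:

```python
import numpy as np

def servo_target(hand_center, hand_normal, dcra_z=0.0):
    """Command pose for the display so the floating slice lands in the hand.

    Reflects the hand position and the hand-plane normal through the
    DCRA plane (assumed horizontal at z = dcra_z). The returned pose is
    what the robot arm controller would be commanded to track.
    """
    center = np.asarray(hand_center, dtype=float).copy()
    normal = np.asarray(hand_normal, dtype=float).copy()
    center[2] = 2.0 * dcra_z - center[2]  # mirror the position
    normal[2] = -normal[2]                # mirror the orientation
    return center, normal
```

Running this on every recognized hand pose keeps the display continuously at the mirror of the hand, so the image appears to follow the hand itself.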

A. Robot-Assisted Volume Slicing Display

For direct interaction, the important element is not the manual handling of the display, but the freedom of moving it at will. By introducing a servo mechanism, we can keep this intimate connection, but release the user from providing accurate and sustained force and coordination.

The mechanism that we have chosen is a standard industrial robot arm with the specifications shown in Table I. The general appearance of the robot arm is shown in Fig. 6. This type of robot arm is usually used in factories, providing high precision and reliability.

We have attached an LCD display to the end effector of the arm, as shown in Fig. 7.

B. Hand Position Recognition

The second component of the volume slicing system is a stereo video camera, as shown in Fig. 8, with an image processing algorithm for hand position recognition.

TABLE I
SPECIFICATIONS OF THE ROBOT ARM

Type: EPSON ProSix C3 Manipulator
Weight: 27 kg
Number of joints: 6
Payload (max / rated): 3 kg / 1 kg
Repeatability: ±0.020 mm
Horizontal reach (to wrist center): 600 mm
Vertical reach (to wrist center): 820 mm
Cycle time (1 kg workload): 0.37 s

Fig. 6. The Robot Arm

Fig. 7. Display on the Robot Arm

We can use a simple gesture for interacting: the user forms a circle with his/her index finger and thumb. As shown in Fig. 9, we can extract the contour of the hand in the left and right camera images, then fit an ellipse to each contour. We can consider the two ellipses as projections of a single circle in three-dimensional space. Thus we can use simple geometrical manipulations to recover the central position, radius, and orientation of that circle from the two ellipses.

By continuously extracting the hand circle geometry from each video camera frame, and commanding the robot to the



Fig. 8. Stereo camera for hand position recognition

Fig. 9. Processing for hand position recognition

corresponding mirrored position below the DCRA frame, it becomes possible to freely ‘move’ the display, and therefore the floating image, to any desired place. In an alternative gesture, shown in Fig. 10, the user forms a right angle with the thumb and the index finger; with this, the image can be rotated in the image plane.
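The geometric relation behind the ellipse fit can be illustrated for a single view: under an orthographic approximation, a circle of radius r viewed at tilt angle t projects to an ellipse with major axis r and minor axis r·cos(t), so the tilt is recoverable from the axis ratio. This sketch covers only the single-camera relation; the stereo setup described above additionally resolves depth and the sign of the tilt. The function name is illustrative:

```python
import numpy as np

def circle_tilt_from_ellipse(major_axis, minor_axis):
    """Tilt of the finger circle relative to the camera axis, in radians.

    Assumes orthographic projection: the fitted ellipse's minor/major
    axis ratio equals the cosine of the tilt angle. The ratio is clamped
    to [0, 1] to guard against noisy fits.
    """
    ratio = np.clip(minor_axis / major_axis, 0.0, 1.0)
    return np.arccos(ratio)

# A fitted ellipse with axes 2:1 corresponds to a circle tilted 60 degrees.
tilt = circle_tilt_from_ellipse(2.0, 1.0)
```

Combining this per-view tilt with the ellipse centres triangulated across the two cameras yields the circle's centre, radius, and orientation, which is the pose information the robot is commanded with.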

C. Image Generation

As the user ‘moves’ the floating image throughout the range of the target volumetric dataset, we need to generate the corresponding image slices with the correct geometry. In this work we have used the free version of the OsiriX system for Macintosh computers [9]. Initially, it was used to pre-process the datasets into the suitable slices. In the subsequent development, it was modified to accept the geometrical information from the hand position sensor system, and to generate the corresponding image slice.

We have also used free sample DICOM data sets available from the web site of the OsiriX visualization project.
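Generating a slice with the correct geometry amounts to resampling the volume on an oriented plane defined by the hand pose. A generic reslicing sketch follows; it is not OsiriX's internal method, and the names and parameters are our assumptions. Nearest-neighbour sampling keeps it dependency-free, whereas a real system would interpolate:

```python
import numpy as np

def oblique_slice(volume, center, u, v, size=64):
    """Resample an oriented plane out of a volumetric data set.

    `center` is the slice centre in voxel coordinates; `u` and `v` are
    orthonormal in-plane direction vectors derived from the hand pose.
    Sample positions outside the volume are clamped to its boundary.
    """
    s = np.arange(size) - size / 2.0
    gu, gv = np.meshgrid(s, s, indexing="ij")
    coords = (np.asarray(center, float)[:, None, None]
              + np.asarray(u, float)[:, None, None] * gu
              + np.asarray(v, float)[:, None, None] * gv)
    idx = np.rint(coords).astype(int)  # nearest-neighbour sampling
    for axis in range(3):              # clamp to the volume bounds
        idx[axis] = np.clip(idx[axis], 0, volume.shape[axis] - 1)
    return volume[idx[0], idx[1], idx[2]]
```

With an axis-aligned `u` and `v` this reduces to picking an ordinary slice; with tilted direction vectors it produces the oblique sections that the hand gesture selects.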

Fig. 10. Images in the Volume Slicing Display

In Fig. 10 we show a sample of the images seen with the prototype system.

IV. CONCLUSION

We have developed a robot-assisted volume slicing display using a new optical device. The new volume slicing display is capable of showing arbitrary sections of a volumetric data set as floating images, in the correct position and orientation. Users can command the system with natural gestures, without wearing any equipment.

In the future, we intend to compare the usability of our system with traditional visualization systems, and to use feedback from professionals in order to improve it.

ACKNOWLEDGMENT

This research was partly supported by the MEXT Grant-in-aid No. 23500161.



REFERENCES

[1] S. Maekawa, K. Nitta and O. Matoba, “Transmissive optical imaging device with micromirror array”, Proc. SPIE Vol. 6392, 63920E (2006).

[2] D. Weishaupt, V.D. Kochli and B. Marincek, How does MRI work?: an introduction to the physics and function of magnetic resonance imaging, Springer, 2006.

[3] P. Schoenhagen and A.E. Stillman, Cardiac CT Made Easy: An Introduction to Cardiovascular Multidetector Computed Tomography, Taylor & Francis, 2006.

[4] Actuality Systems: Perspecta Display, http://actuality-medical.com/site/content/perspecta display1-9.html, last accessed: Mar. 28, 2012.

[5] A. Cassinelli and M. Ishikawa, “Volume Slicing Display”, SIGGRAPH ASIA 2009, Emerging Technologies, Yokohama (2009). Emerging Technologies Catalog, p. 88.

[6] S. Markon, S. Maekawa, A. Onat and H. Furukawa, “Interactive Medical Visualization with Floating Images”, accepted for 2012 ICME International Conference on Complex Medical Engineering (2012).

[7] S. Maekawa, “Floating touch display”, in 2nd International Symposium on Universal Communication, Osaka, Dec. 2008.

[8] S. Maekawa and S. Markon, “Airflow interaction with floating images”, SIGGRAPH ASIA 2009 Art Gallery & Emerging Technologies (2009).

[9] OsiriX Imaging Software, http://www.osirix-viewer.com/, last accessed: Mar. 28, 2012.

[10] Sensable: Phantom Haptic Device, http://www.sensable.com/haptic-phantom-desktop.htm, last accessed: Mar. 28, 2012.
