An Application Platform for the Development and Experimental Validation of Mobile Robots for Health Care Purposes


Journal of Intelligent and Robotic Systems 22: 331–350, 1998. © 1998 Kluwer Academic Publishers. Printed in the Netherlands.

An Application Platform for the Development and Experimental Validation of Mobile Robots for Health Care Purposes

OLAF BUCKMANN and MATHIAS KRÖMKER
BIBA Bremen Institute of Industrial Technology and Applied Work Science at the University of Bremen, Hochschulring 20, 28359 Bremen, Germany; e-mail: {buc;kro}@biba.uni-bremen.de

ULRICH BERGER
IWT Foundation Institute of Material Science, Badgasteiner Str. 3, 28359 Bremen, Germany; e-mail: berger@iwt.uni-bremen.de

(Received: 7 July 1997; in final form: 18 October 1997)

Abstract. This paper describes an Application Platform for the development and testing of mobile robot units. Within this platform, various applications addressing different aspects of robot development are composed into an experimental environment. The Application Platform comprises modules such as a Neural Networks Simulator, a simulation and off-line programming system, optical sensor components, a rapid prototyping system, and an experimental workcell. Each of these modules is described in detail, including its integration with the other modules. In conclusion, the potential use of this platform for health care tasks is indicated.

Key words: health care, mobile robots, experimental platform, simulation techniques, optical sensing, neural networks, rapid prototyping, robotics.

    1. Introduction

It is a common understanding that mobile robots will first be introduced in large numbers in buildings providing the necessary technical requirements, such as hospitals, geriatric or rehabilitation clinics, or homes for disabled and elderly people. A survey of the needs for the assistance of disabled and elderly people, undertaken in 1988 by the Finnish National Board of Social Welfare, revealed that the major requests for support in household tasks (55%) concern heavier work, e.g., house cleaning. Furthermore, the forecast increase of people over 60 indicates a need for mobile robots in these areas. It is estimated that in 2005 about 22% of the population, and in 2020 about 25%, will be aged 60 or older [8, 16]. It is therefore highly desirable for these people to have a mobile device which can assist them in performing daily household tasks or support them with respect to health care. But before the installation and application of mobile robot

    JI1423TZ.tex; 21/07/1998; 13:06; p.1VTEX(P) PIPS No.: 157028 (jintkap:mathfam) v.1.15

  • 332 O. BUCKMANN ET AL.

technology can be realised in individual homes or hospitals, substantial testing and approval have to be performed.

Mainly, two fields of activity are of interest: first, the execution of cleaning and home services and, second, the performance of logistical functions. These items are further specified in the following list:

Mobile robotic platforms for health care services can ensure adequate hygienic standards with regard to dry/wet cleaning of ground floors, sanitary chambers and meal preparation areas.

Food service from the main kitchen to rooms, suites or apartments (transportation service of trays, dishes, etc.).

Mail, parcel, and magazine service, or delivery of pharmaceutical products under supervision of medical staff.

Maintenance of electronic or computer-programmed devices like televisions, telephones and information service communication networks by downloading of storage data into individually programmable systems.

It is tempting to envisage that mobile robots developed for industrial applications can be used for this purpose. Industrial mobile robots, however, normally operate in a clearly defined and static environment, or they require intensive man-machine interaction, like mobile telerobots. They further need sophisticated sensor and computer platforms as well as skilled operators. These conditions cannot be met when a mobile robot operates within a common, changing environment which consists of stationary (e.g., furniture) and moving (e.g., people) obstacles. This makes traditional industrial mobile robots unlikely to be suitable for applications such as health care tasks.

The design of such a mobile robot for health care tasks must also take into account the fact that manual control of any mobile device is generally difficult, even for healthy and young people. Therefore, the household mobile robot should have a certain degree of autonomy, so that it can be operated easily and in a user-friendly way. This is of special importance for disabled and elderly people, who have greater difficulty handling and controlling complex devices due to physical or mental disabilities. Given only a destination point for a specific task, the device must be able to find the goal by itself without colliding with the environment.

The design and development of such a mobile robot is quite a complex task that requires the co-operation of a multidisciplinary team of skilled developers. To qualify all members of the development team to an adequate level, as well as to provide a sophisticated testing and training environment, an Application Platform has been specified within the European Research Project Mobile Robotics Technology for Health Care Services Research Network (MobiNet). The project is funded by the European Commission under the programme Training and Mobility of Researchers.


The following techniques have been identified to enable a general-purpose mobile robotic platform:

actoric components such as drives and drive control units, handling and gripping devices, and finally signal emission systems (optical and acoustical);

sensoric components like navigation and manoeuvring systems, including collision avoidance and obstacle bypassing technology;

human interaction devices, such as ergonomic user interfaces (handles, joysticks), folding and unfolding systems for house-to-house, house-to-car or house-to-environment movement, and man-machine communication interfaces;

supervisory and maintenance functions, such as emergency recovery (battery charging), order receiving, a diagnostic communication system, and error message and emergency handling.

For an efficient and cost-minimising strategy, utilisation of the existing industrial robot technology was envisaged.

    The existing technology can encompass:

A 6-axis industrial robot platform, including a 2-axis extra payload handler, to realise the behaviour of the actoric components.

Serial 2D- or 3D-optical sensor systems, including elementary/binary sensors (inductive, switch functions), to perform sensor fusion strategies, enhanced by the integration of Neural Networks and Fuzzy Algorithms.

A rapid prototyping studio for the conceptualisation and design of complex free-form surfaces by high-performance computing, stereolithography apparatuses for building functional parts, and subsequent processes like die casting, forming and EDM technologies. These advanced technologies provide best-fit ergonomic handling, treatment and maintenance devices for human interaction.

    Figure 1. System components of the Application Platform.


A telecommunication network for internal and external communication, including GPS (Global Positioning System) and the use of up-to-date communication technologies for meeting the communication needs.

A simulation and off-line programming system for design and functionality testing by building a digital mock-up of a robot device.

The components of the platform are intended to be integrated as depicted in Figure 1.

Various components of the Experimental Platform are described in the following sections. Not all components are precisely specified at this stage. The inclusion of telecommunication facilities will allow the replacement of the conventional robot by a mobile unit.

    2. Simulation and Off-line Programming Station

Simulation systems allow us to test the feasibility and functionality of a robot system: they save expensive equipment from crashes and destruction, minimise the programming of highly complex motion paths, and enhance the understanding of functions through a graphical virtual display of motion sequences. Failures like design bottlenecks of robots, other kinematic components or tools of a shop floor cell, programming errors or reachability errors can be detected and corrected during the design process. The off-line programming functionality saves time and money, since the production process does not have to be interrupted: programming tasks can be developed independently of the robot system itself.

Such a station has been foreseen in the experimental platform. IGRIP (Interactive Graphical Robot Instruction Program), developed by DENEB Robotics*, has been selected as a well-suited system.

IGRIP follows a two-world concept, supported by a powerful graphical user interface (as depicted in Figure 2). The first world is an integrated CAD system. All parts of the workcell components can be modelled here. It is also possible to import data from external CAD systems via standard interfaces like IGES** or VDA-FS. The second world is the workcell world. All parts are assembled here into devices and arranged into a layout of the real shop floor cell. Every device is given an identifier, colours, auxiliary frames, data fields, etc. Kinematics for the appropriate devices can be assigned here. A kinematic is described through its degrees of freedom (dof), the motion axis of each moving part, and the type of movement, i.e., linear, rotary or both. If necessary, user-defined functions and program parts, like special kinematics equations or the path planning tasks needed for mobile units, can be added through UNIX shared libraries.

* Auburn Hills, MI, USA. Further information: http://www.deneb.com
** IGES: Initial Graphics Exchange Specification. VDA-FS: Verband Deutscher Automobilhersteller Flächenschnittstelle.
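The kinematic description given above (degrees of freedom, one motion axis per moving part, linear or rotary movement) can be captured in a small data structure. The following Python sketch is purely illustrative; the names `Joint` and `Device` are our own, not IGRIP's:

```python
from dataclasses import dataclass, field

@dataclass
class Joint:
    axis: tuple   # unit vector of the motion axis in the device frame
    kind: str     # "linear" or "rotary" (a device may mix both)
    limits: tuple # travel range: mm for linear joints, degrees for rotary

@dataclass
class Device:
    name: str
    joints: list = field(default_factory=list)

    @property
    def dof(self) -> int:
        # the degree of freedom is simply the number of driven axes
        return len(self.joints)

# A hypothetical mobile base: one translation along the hall, one heading axis.
cart = Device("mobile_base", [
    Joint((1, 0, 0), "linear", (0.0, 5000.0)),
    Joint((0, 0, 1), "rotary", (-180.0, 180.0)),
])
```

Such a description is what a simulator needs before it can animate a device: each joint contributes one axis to the motion model.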


    Figure 2. The IGRIP simulation system (with experimental environment).

Working points in the workspace of a device are stored in so-called Tag Points. Position and orientation values are stored with each point relative to a user-selected base part, e.g., a robot base. Additional information (e.g., the robot configuration) or user-defined information can also be stored at the Tag Points. A program for a device is written in the IGRIP-internal language GSL (Graphical Simulation Language). This language is structured like Pascal, with additional commands for process simulation. Macros can be written in GSL for, e.g., frequently used tasks, automatic Tag Point generation, or the simulation of a user interface [6]. A device can also be controlled via a native language interface provided by the simulation system manufacturer, with self-written interpreters linked into external C programs through the shared libraries of the simulation system. Extensive experience with the IGRIP simulation system has been gained within the ESPRIT project 8338 NEUROBOT (Neural Network based Robots for Disassembly and Recycling of Automotive Products) [17].
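The Tag Point concept — position and orientation stored relative to a user-selected base part, plus free-form extra data — can be illustrated with a minimal Python sketch. The field names are assumptions for illustration, not GSL syntax:

```python
from dataclasses import dataclass, field

@dataclass
class TagPoint:
    name: str
    base: str                  # the part the values are stored relative to
    xyz: tuple                 # position in mm
    rpy: tuple                 # orientation (roll, pitch, yaw) in degrees
    extra: dict = field(default_factory=dict)  # e.g. robot configuration flags

def path_names(tags):
    """Order in which a device program would visit the tag points."""
    return [t.name for t in tags]

program = [
    TagPoint("approach", "robot_base", (400.0, 0.0, 300.0), (0.0, 180.0, 0.0),
             {"config": "NOFLIP"}),
    TagPoint("grasp", "robot_base", (400.0, 0.0, 120.0), (0.0, 180.0, 0.0)),
]
```

A GSL program then reduces largely to motion statements that visit such points in sequence, which is why automatic Tag Point generation is a natural macro task.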

2.1. INTEGRATION OF THE SIMULATION AND OFF-LINE STATION AND THE EXPERIMENTAL ENVIRONMENT

The connection to the robot controller is realised with a postprocessor that translates the GSL program and the information of the Tag Points into the native


    Figure 3. Robot cell and its simulation.

robot controller language, and vice versa from the controller to IGRIP. The postprocessor for the Cloos robot is provided by Deneb [6].

A user can also implement data exchange with the experimental environment using GSL macros or self-written translators. Data between the workstation and the robot controller are transferred over a Local Area Network (LAN). A PC connects the controller to the LAN via the Cloos CarolaEdi software.

A download can be performed after the workcell is calibrated. IGRIP provides several methods for the calibration of the robot position, the workpiece position, or the robot model [6].

    When the program is transferred into the controller, the robot can execute theprogram as depicted in Figure 3.

This connection will later be replaced by the program and command transfer interface of the mobile robot unit.

    3. Rapid Prototyping Module

Rapid Prototyping (RP) techniques are methods that allow us to quickly produce physical prototypes, with the important benefit of reducing the time to market. By use of these techniques, prototypes can be built that need the skill of individual craftsmen for no more than finishing the part. Furthermore, the resulting design cost is decreased considerably. Within a rapid prototyping process, the object is first designed on a computer screen and then created based on the computer data. This eliminates the inevitable errors which usually appear when a model-maker interprets a set of drawings. An essential prerequisite is the computer representation:


    Figure 4. Steps from CAD model to real model.

it is usually produced with a 3D geometrical modelling system like a CAD system, a 3D scanner, a computer tomograph, etc. Its precision is the key parameter controlling the tolerances of the future model (different techniques allow an average accuracy of approx. 0.1 mm).

Though for various RP systems there are no restrictions as to complexity and geometrical features, the physical objects are limited in size. An advantage is the fact that the same data used for prototype creation can be used to go directly from prototype to production, eliminating further sources of human error.

Several RP techniques are available. The first commercial process, Stereolithography (SL), was brought to the market in 1987. Nowadays, there exist more than 30 different processes (not all commercialised) of high accuracy with a large choice of materials. The most successfully developed techniques are Stereolithography, Selective Laser Sintering, Laminated Object Manufacturing, Ballistic Particle Manufacturing, and Fused Deposition Modelling. Information on these systems can be found in [4, 9, 11].

In our Application Platform, a Stereolithography system was integrated. It is a three-dimensional printing process which uses a laser beam, directed by computer onto the surface of a photocurable liquid plastic (resin), to produce copies of solid or surface models. The basic steps of the overall process are depicted in Figure 4.

In the process planning, a vector scheme is calculated with a so-called slicing program, based on a 3D solid model of the workpiece in STL format. The output describes the workpiece in layered form. A supporting structure is added to the 3D model during process planning in order to ensure that the produced workpiece stays connected. In the manufacturing process, the workpiece is built layer by layer out of a liquid photopolymer resin which is partially hardened by a monochromatic ultraviolet laser, a helium-cadmium laser, or an argon-ion laser.
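The slicing step can be sketched in a few lines of Python: the layer heights are derived from the layer thickness, and each STL triangle is cut by a horizontal plane to obtain one contour segment. This is an illustration of the principle only, not of any commercial slicing program (degenerate cases such as a vertex lying exactly on the plane are ignored here):

```python
def layer_heights(z_min, z_max, thickness):
    """Z-levels at which the part is sliced, bottom to top."""
    n = int(round((z_max - z_min) / thickness))
    return [z_min + i * thickness for i in range(1, n + 1)]

def slice_triangle(tri, z):
    """Intersect one STL triangle (three (x, y, z) vertices) with the
    plane Z = z. Returns the 2D segment endpoints, or None if the plane
    misses the triangle."""
    pts = []
    for p, q in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        if (p[2] - z) * (q[2] - z) < 0:  # this edge crosses the plane
            t = (z - p[2]) / (q[2] - p[2])
            pts.append((p[0] + t * (q[0] - p[0]),
                        p[1] + t * (q[1] - p[1])))
    return tuple(pts) if len(pts) == 2 else None
```

Collecting all such segments per layer and chaining them yields the closed contours the laser traces.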

The complete system consists of a process PC, which controls the building process of the workpiece, and a process chamber, as depicted in Figure 5. The process chamber mainly consists of a vat filled with liquid photopolymer resin. In it is a carrier platform mounted on an elevator for vertical movement. The ultraviolet laser beam is moved over the surface of the liquid by use of an x-y-scanner optical mirror device.

    An additional helium-neon laser measures the level of the photopolymer resinsurface. A recoater smoothes the surface before a new layer is built.


Figure 5. Schematic layout of the stereolithography process chamber.

At the beginning of a building process, the carrier platform is positioned below the surface of the photopolymer resin by the thickness of one layer, e.g., 0.25 mm. The x-y-scanning system moves the ultraviolet laser over the surface of the resin according to the vector scheme calculated by the process planning PC.

The photopolymer resin is hardened permanently at each point where the laser hits the surface. After the first layer is finished, the carrier platform is lowered by the layer thickness and the next layer is produced, ensuring its connection with the layer below. In this way, the workpiece is created layer by layer from bottom to top.

This method was developed in 1986 by the American company 3D Systems and is still distributed by them. With our system, an SLA-250 from 3D Systems*, workpieces with maximum dimensions of 250 mm × 250 mm × 250 mm can be built.

3.1. INTEGRATION OF THE STEREOLITHOGRAPHY SYSTEM AND THE SIMULATION AND OFF-LINE STATION

The simulation and off-line programming system IGRIP comprises a stereolithography interface. Data from the IGRIP CAD world can be directly exported to the stereolithography system and sliced there. Using this system, a fast transfer of a model, e.g., a joystick, to the real world can be realised. Functionality tests can be made, or the model can be used to make a mould for the part.

* Further information: http://www.3dsystems.com


    Figure 6. Structure of NeuralWorks professional II/plus.

    4. Neural Networks Simulator

In order to allow a mobile robot to operate and navigate autonomously, control systems based on artificial intelligence have to be integrated. One objective of artificial intelligence is the reproduction of human skills with neural networks. Neural networks are based on principles intended to correspond to the human brain, whose functionality basically depends on the work of neurones and on the processing of incoming impulses by a neurone.

As the neural networks simulation tool, NeuralWorks Professional II/plus, developed by NeuralWare, Inc.*, has been selected, since it is a well-known simulation system for neural networks. Figure 6 depicts the main structure of the system. With this tool it is possible to build, train, refine, and deploy neural network solutions. The basic capabilities are: a back-propagation builder (up to 3 hidden layers), a modular neural network (adaptive mixture of experts as proposed by Jacobs, Jordan, Nowlan and Hinton), radial basis function networks (self-organising category layer), a self-organising map, reinforcement learning, Perceptron, Hopfield networks, brain-state-in-a-box, enhanced uni-flow counter-propagation, a category learning network, learning vector quantization I/II (LVQI, LVQII), extended LVQ, delta-bar-delta, extended delta-bar-delta, directed random search, cascade correlation, ART1, Hamming nets, probabilistic neural networks, general regression, fuzzy ARTMAP,

* Pittsburgh, PA, USA. Further information: http://neuralware.com


and historic networks (Adaline, Boltzmann, etc.). An introduction to and a description of the fundamental principles of neural networks are given in [10, 12, 13, 15].

In NeuralWorks, a tool named Flash Code is available which converts a trained network into ANSI C code, so that integration with popular commercial packages is possible. Comprehensive manuals and tutorials are included. NeuralWorks allows monitoring of network performance, an automatic train/test cycle, and explanation of network decisions. Furthermore, an embedded IDL (InstaNet Definition Language) can be used for customising and controlling neural networks [12].

4.1. INTEGRATION OF THE NEURAL NETWORKS SIMULATOR AND THE OFF-LINE PROGRAMMING STATION

The integration of NeuralWorks into IGRIP, e.g., to calculate inverse kinematics, is done through socket and shared library communication. A C program controls the neural network (the user control program) and contains all the routines needed for communication with the NeuralWorks Professional II/plus package. Another C program controls the simulation environment and contains all the routines used for the shared library communication of the IGRIP simulation system. Both systems are installed on the same platform and communicate via TCP/IP sockets (Figure 7). The platform is a Hewlett-Packard HP 9000 Series 700 workstation running the HP-UX (UNIX) operating system.

With the C program in NeuralWorks, it is possible to control the loading, saving, training, and recalling of previously created networks. The user control program provides all the information and functions needed to instruct NeuralWorks to carry out the desired task. In its structure, a user control program is a subset of a user I/O program. For UNIX workstations, the user I/O mechanism involves setting up the user program as a separate process and communicating with that process by means of shared memory and semaphores.

With the C program for the IGRIP simulation, the user is able to define a kinematics routine which executes the movement of a robot device. The set of functions used for this definition is referred to as the motion pipeline. The motion pipeline is a series of procedure calls which are executed sequentially during the simulation of a motion statement in IGRIP. Through this mechanism the user can incorporate self-written software for inverse kinematics, motion planning, motion execution, dynamics analysis and dynamics simulation. The user is also able to write his own functions to work as components of the motion pipeline.

When the two systems are connected via TCP/IP sockets, IGRIP sends position information to NeuralWorks Professional II and, after the calculation by an appropriate neural network, receives the joint values for this position. The device is then moved to this position in the simulation. This handshake procedure continues until the desired position is reached (in the simulation) [14].
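One step of this position-in / joints-out handshake can be sketched with ordinary TCP sockets. The following Python toy replaces both C programs: a server thread stands in for the NeuralWorks side (its "inverse kinematics" is a placeholder multiplication, not a trained network), and the client function stands in for the IGRIP side. Message format and port handling are our assumptions:

```python
import json
import socket
import threading

def serve_once(srv):
    """Stand-in for the NeuralWorks side: receive a Cartesian position,
    answer with (dummy) joint values, then shut down."""
    conn, _ = srv.accept()
    with conn:
        pos = json.loads(conn.recv(1024).decode())
        joints = [0.1 * v for v in pos["xyz"]]  # placeholder for the network
        conn.sendall(json.dumps({"joints": joints}).encode())
    srv.close()

def request_joints(port, xyz):
    """Stand-in for the IGRIP side: one exchange of the handshake."""
    with socket.socket() as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(json.dumps({"xyz": xyz}).encode())
        return json.loads(cli.recv(1024).decode())["joints"]

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
srv.listen(1)
threading.Thread(target=serve_once, args=(srv,)).start()
joints = request_joints(srv.getsockname()[1], [400.0, 0.0, 120.0])
```

In the real setup this exchange is repeated until the simulated device reaches the desired position; here a single round trip illustrates the protocol.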

It is also intended that the NeuralWorks system will later be connected to a sensor system and used for learning decision tasks (e.g., object recognition).


    Figure 7. Communication between NeuralWorks and IGRIP.

The results of this training task can later be used for controlling the mobile unit.

    5. Sensor Systems

The sensor equipment to be provided for an autonomous mobile robot system can be divided into several fields, as depicted in Figure 8. The most important sensors, and therefore the ones selected for this Application Platform, are optical sensor arrays with topometric and range/distance measurement facilities.

Figure 9 shows the principal information flow for 2D- and 3D-optoelectronic systems, based mainly on image processing sequences. The ITRAN multifunctional sensor system has been chosen as the basic experimental platform for 2D sensing. It is described below.

    5.1. 2D-SENSOR SYSTEM

For image processing and image analysis, the system ITRAN Multifunction Sensor 41 is used (Figure 10). The image processing unit receives a two-dimensional image from the CCD camera. The 2D system uses an image of 480 × 320


    Figure 8. Active Methods for vision systems.

    Figure 9. Information flow of the image analysis.

pixels with grey values. In the continuous inspection mode, images are processed every 16.67 msec. During this time the result of each image analysis is calculated, unaffected by the complexity of the image processing task.

In the inspect-on-trigger mode, the system starts the inspection cycles after receiving a trigger signal. In this mode the system communicates with programmable logic controllers, personal computers or other controlling devices by using RS-232, discrete I/O, VME I/O or high-speed link I/O. The result of an inspection can also be given as a simple screen message. The programming of the image analysis is done with the Windows-based software SensorEdit Version 2.3e on a separate PC. The PC is equipped with a special link card and connected via a link cable to the image processing unit. After the application is programmed and downloaded, the image processing unit can be disconnected from the PC.

For the image inspection, two different groups of tools can be used: path and area tools. Up to 32 path tools can take measurements along any line or contour


    Figure 10. Layout of the Image Processing Unit and PC with SensorEdit.

in order to calculate a width, a centre location or the distance from a fixed point to a feature, or to count edges. Up to 64 area inspections can be made within non-overlapping areas of any shape. Area inspection methods include grey-level pixel counting, edge pixel counting, grey-level intensity summing and edge intensity summing.
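Two of these tools are simple enough to sketch directly: edge counting along a path (a dark/bright transition count on one scan line) and grey-level pixel counting in an area. The Python below illustrates the idea only; the fixed threshold of 128 and the list-of-rows image representation are our assumptions, not the ITRAN implementation:

```python
def count_edges(scanline, threshold=128):
    """Path tool: count dark/bright transitions along one row of grey values."""
    binary = [v >= threshold for v in scanline]
    return sum(1 for a, b in zip(binary, binary[1:]) if a != b)

def grey_pixel_count(region, threshold=128):
    """Area tool: number of pixels at or above a grey-level threshold."""
    return sum(1 for row in region for v in row if v >= threshold)

# A bright bar on a dark background crosses two edges on this scan line.
edges = count_edges([10, 10, 200, 200, 10])
bright = grey_pixel_count([[10, 200], [200, 200]])
```

Measures like width or centre location follow from the positions of the counted transitions rather than from their number.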

In addition, there are two other tools that can be used. The lightmeter can be utilised to guarantee that the right illumination has been established, while the locator determines whether the part to be inspected is in the right position. Finally, there are two different kinds of powerful worksheets (the math worksheet and the control worksheet) which make it possible to solve complex mathematical and logical functions on the actual data of the visual measurements and tests (similar to MS Excel) [3].

The math worksheet executes only after a picture has been taken and the inspection tools have been processed. The control worksheet executes every frame time, regardless of whether or not an inspection occurs. The flexibility of the image processing unit enables us to find applications for various visual inspection tasks.

5.1.1. Link to Universal Controller Interfaces

First of all, the image processing unit can be connected physically to the CLOOS industrial robot control unit via the expert system. The robot gives a signal to the image processing unit every time it should take an image and inspect it. Thus, an application that is triggered by the industrial robot is realised. This means


    the image processing unit will wait for a binary signal from the robot before it takesand inspects a new image.

The image processing unit is not able to communicate directly with the machine tool controller. A system that is supposed to communicate with this specific machine tool controller has to create a complete file with all the commands to be executed. The ITRAN MS 41 is able to send its up-to-date status via an RS-232 interface, but cannot create the special file the machine tool controller needs.

Therefore, a PC with the installed software Procomm is used as a link between the image processing unit and the machine tool controller. This PC acquires the up-to-date status as an ASCII value. The ASCII value indicates whether a workpiece has been found or not. To receive the actual status transmitted by the image processing unit, the chat mode of Procomm is used.

Nevertheless, it is also possible to transmit very complex files with all the commands the machine tool controller needs to detect a new target without stopping the machine. In this case the machine tool controller only needs the commands to change the tool. As the image processing unit is able to detect the reason for a non-detected target, it can transmit a special ASCII value to the PC that causes the transmission of a specific file to the machine tool controller.

    5.2. 3D-SENSOR SYSTEM

Advanced capability is provided by the installation of a 3D sensing system, which consists of a line projector, a camera, and analysing software. This system utilises an active optical 3D-measurement process: the Coded Light approach.

The main feature of the Coded Light process, in comparison with triangulation processes, is the mathematical connection between space and time in a topometric scene, which is realised by the sequential projection of n Gray-coded stripe patterns into the selected field of view. The stripe patterns afford a differentiation of 2^n distinct projection directions x_p, which can be securely identified by a characteristic bright and dark sequence and the specific code of the projection. A sequence of n = 7 stripe patterns therefore affords a differentiation of 2^7 = 128 directions of projection.
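The decoding at a single pixel can be sketched as follows: the bright/dark sequence observed over the n images is a Gray code, and a running XOR converts it back to the stripe index x_p. A toy Python version (the real system of course processes whole bit planes at once):

```python
def gray_to_index(bits):
    """Decode one pixel's bright(1)/dark(0) sequence, most significant
    pattern first, from Gray code into the projection direction index."""
    index, acc = 0, 0
    for g in bits:
        acc ^= g                 # binary bit i is the XOR of Gray bits 0..i
        index = (index << 1) | acc
    return index

# n = 7 patterns distinguish 2**7 = 128 projection directions: every
# possible 7-bit Gray sequence decodes to a distinct index.
codes = {gray_to_index([(i ^ (i >> 1)) >> k & 1 for k in reversed(range(7))])
         for i in range(128)}
```

The Gray coding matters because adjacent stripes differ in exactly one pattern, so a small localisation error at a stripe border shifts the index by at most one.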

    Figure 11 depicts the physical principle of topometric measurement by use ofthe Coded Light Process.

The stripe patterns on the object of interest (Figure 12) are generated by a transparent Liquid Crystal Display (LCD) in an ABW LCD-320 projector.

The images are received and collected by a CCD camera, which is positioned in a defined direction relative to the projector. The point coordinates of a three-dimensional object can then be calculated by intersecting the plane generated by the stripe pattern with a line through the centre of the CCD camera. This line is determined by the image plane of the camera and the location of the object point in the camera coordinates {k}. The stripe projector defines a specific plane for each of the different stripes.


    Figure 11. Principle of 3D sensing by Coded Light approach.

    Figure 12. Line pattern being projected on workpiece by the ABW line projector.

This plane can be calculated from the projector centre and each line which is projected into the scene. The existing planes, according to the constraint (λ and μ being free parameters), are then:

P_E = λ (0, 1, 0)^T + μ (x_p, 0, 1)^T.


For each image point, a specific code can be defined through binarisation of the images and a check, for each of the projections, of whether the image point is illuminated or not (value 1 or 0).

The available binary images can be collected into a bit plane stack. For each specific image point, this stack contains an n-bit sequence which describes the projector direction to this scene point. This code defines the projection direction x_p and hence the constraint of the plane. Therefore the measurement result of this process is unique and absolute. With the bit plane stack and the defined locations of camera and projector, the three-dimensional coordinates of the object can be calculated. The main advantage of the Coded Light process in comparison with other techniques is the fast processing of topometric data. This is related to the fact that, relative to the resolution of the CCD camera, only a small number of patterns has to be projected into the scene. In addition, the analysis of the bit plane stack as well as the calculation of the three-dimensional coordinates can be done with simple image analysis operations. So the whole topometric scene can be measured within seconds. The Coded Light process also offers advantages under changing illumination conditions and varying surface reflections in comparison with other techniques.
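The final step — intersecting the camera's line of sight with the stripe plane identified by the decoded code — reduces to a standard ray/plane intersection. A generic Python sketch (camera centre c, ray direction d, plane through point p0 with normal n; the numbers in the example are made up for illustration):

```python
def intersect_ray_plane(c, d, p0, n):
    """Point where the camera ray c + t*d meets the stripe plane through
    p0 with normal n, or None if the ray is parallel to the plane."""
    denom = sum(ni * di for ni, di in zip(n, d))
    if abs(denom) < 1e-12:
        return None
    t = sum(ni * (pi - ci) for ni, pi, ci in zip(n, p0, c)) / denom
    return tuple(ci + t * di for ci, di in zip(c, d))

# Example: camera at the origin, a slanted viewing ray, stripe plane x = 2.
point = intersect_ray_plane((0, 0, 0), (1, 0, 0.5), (2, 0, 0), (1, 0, 0))
```

Repeating this for every pixel of the bit plane stack yields the 3D point cloud of the scene; this is why only simple per-pixel operations are needed.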

The optical system itself is based on three main optical devices: the ABW line projector and two CCD cameras. The line projector and camera are fixed on a robot tool with a calibrated configuration. This tool can be picked up and released by the gantry robot with an automatic End-Of-Arm (E.O.A.) system. It is also possible to install the projector and camera fixed in the cell, out of the reach of the robot. This tool is used for the acquisition of accurate 3D clusters (approx. 1:1000 of the projection field width) from the detail area (< 0.3 × 0.3 m).

The ABW line projector incorporates a microprocessor which controls the LCD field. For controlling and programming tasks the projector is linked to the RS-232 serial interface of a PC. The CCD-cameras are connected to an image-processing board in this PC. For the 3D-cluster acquisition with the coded light approach, the program HolonView-PC 2.87 is used [1].
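Once the stripe code fixes the projector light plane and the pixel fixes a camera ray, the 3D coordinate follows from a ray–plane intersection. The sketch below uses invented calibration values and is not the HolonView implementation:

```python
# Minimal ray-plane triangulation sketch (illustrative calibration, not HolonView).

def intersect_ray_plane(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect the camera ray (origin + t * dir) with the projector light plane."""
    denom = sum(d * n for d, n in zip(ray_dir, plane_normal))
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the light plane")
    t = sum((p - o) * n
            for p, o, n in zip(plane_point, ray_origin, plane_normal)) / denom
    return tuple(o + t * d for o, d in zip(ray_origin, ray_dir))

# Assumed setup: camera at the origin looking along +z; the decoded stripe code
# selects the light plane x = 0.5 (normal along x).
point = intersect_ray_plane((0.0, 0.0, 0.0), (0.5, 0.0, 1.0),
                            (0.5, 0.0, 0.0), (1.0, 0.0, 0.0))
print(point)  # (0.5, 0.0, 1.0)
```

Repeating this per pixel yields the 3D-cluster; in practice the calibrated projector/camera poses replace the hard-coded plane and ray above.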

5.3. INTEGRATION BETWEEN SENSOR SYSTEM AND OFF-LINE PROGRAMMING STATION

The integration of the 3D-sensor systems into IGRIP is realised by a self-developed interface. The IGRIP system is used to determine where the 3D-sensor system has to gather information about the environment. The real camera image has a resolution of 512 × 512 pixels. For each pixel, the x, y, z values were calculated and written into an ASCII file for analysis. However, this file contained too much data to be analysed in an acceptable time, so for data reduction the image was divided into 64 fields. These fields were also modelled in the IGRIP workcell of the experimental cell. By simulating the camera image, it was possible to determine which field should be scanned for a specific point to be corrected.
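The 64-field data reduction can be sketched as a grid over the 512 × 512 image; the 8 × 8 layout assumed here is not stated in the paper, which only mentions 64 fields:

```python
# Sketch of the 64-field image subdivision (8 x 8 grid is an assumed layout).
IMAGE_SIZE = 512
GRID = 8                      # 8 x 8 = 64 fields
FIELD = IMAGE_SIZE // GRID    # 64 pixels per field edge

def field_index(x, y):
    """Map a pixel (x, y) of the 512 x 512 camera image to its field number 0..63."""
    return (y // FIELD) * GRID + (x // FIELD)

print(field_index(0, 0))      # 0  (top-left field)
print(field_index(511, 511))  # 63 (bottom-right field)
print(field_index(100, 300))  # 33
```

With this mapping, only the roughly 4000 points of one field have to be scanned and analysed instead of the full 262 144-point cloud.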


Additional information was delivered specifying the method of point-cloud analysis for each field. Five methods were developed to find a specific point:

– minimal/maximal operator,
– intersection of three planes,
– intersection of a plane and a straight line,
– intersection of two straight lines,
– intersection of two straight lines and a plane.
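The "intersection of three planes" method, for instance, reduces to solving the 3 × 3 linear system n_i · x = d_i for the planes fitted to the point cloud. A minimal sketch using Cramer's rule, with illustrative plane data:

```python
# Sketch of the three-plane intersection method (illustrative plane data).

def det3(m):
    """Determinant of a 3x3 matrix given as three rows."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def three_plane_point(normals, dists):
    """Solve n_i . x = d_i for the common point of three planes via Cramer's rule."""
    d = det3(normals)
    if abs(d) < 1e-9:
        raise ValueError("planes do not meet in a single point")
    point = []
    for j in range(3):
        m = [row[:] for row in normals]
        for i in range(3):
            m[i][j] = dists[i]       # replace column j with the distances
        point.append(det3(m) / d)
    return tuple(point)

# Three mutually perpendicular planes x = 1, y = 2, z = 3 meet at (1, 2, 3).
print(three_plane_point([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [1, 2, 3]))
```

The other intersection methods in the list are set up the same way, with line directions replacing one or two of the plane normals.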

After the scanning process and the analysis were finished, a data set was passed back to the IGRIP system. All selected points were corrected and a download for the robot was made [2]. A connection for the 2D system has not been developed as yet.

    6. Telecommunication System

Within this module, the integration of mobile communications for the control of mobile units and for telepresence will be addressed. The objective of this task is to evaluate, specify and demonstrate a future extension of an autonomous robot scenario, using existing wireless communication technologies and computer interfaces to connect mobile units.

The intended system should allow us to communicate robot commands and programmes, to transmit sensor data, and to realise telepresence solutions by multimedia data transmission.

The communication platform to be adapted to the robot cell is oriented to the Local Data Radio Link (LDRL) developed in the MOEBIUS project [7]. The LDRL prototype system is used to connect multimedia equipment with a portable station and a base-station terminal for satellite communication. This affords remote operation and improved small-scale mobility for users around a satellite communication platform via the radio link.

The integration of mobile communication for the control of mobile units and for telepresence should include information about the robot position, orientation and speed as well as audio and video data, e.g., for applications like video conferencing. The telecommunication could also be used for navigation purposes, linkage to CAD-based orientation, etc.

The sensor system should include scanner technologies, a CCD-camera, and ultrasonic sensors. The main communication channels are radio modems for wireless communication and infra-red for communication with the infrastructure. This also includes the specification and assessment of methods and algorithms such as digital coding or compression techniques.
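As a trivial instance of the compression techniques mentioned, a run-length encoder for binary sensor data; the choice of algorithm is purely illustrative, since the paper does not specify one:

```python
# Illustrative run-length coding of binary sensor data (algorithm not from the paper).

def rle_encode(bits):
    """Encode a sequence of 0/1 sensor values as (value, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]

data = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
encoded = rle_encode(data)
print(encoded)                        # [(0, 3), (1, 2), (0, 1), (1, 4)]
assert rle_decode(encoded) == data    # lossless round trip
```

Whether such a simple scheme suffices depends on the sensor; video links would need the dedicated multimedia coding assessed in the project.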

For position detection a GPS module can be used. This system should have a working range of 250 meters to cover the measurement of a hospital building. With this system the robot always knows where it is in its world, so a robot unit knows


where the docking stations and all the rooms are located in relation to its actual position.
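Given a position fix and a stored building map, locating the nearest docking station is then a simple nearest-neighbour query. The station names and coordinates below are invented for illustration:

```python
import math

# Hypothetical building map: docking stations in metres (illustrative data).
DOCKING_STATIONS = {
    "ward A": (12.0, 4.5),
    "ward B": (40.0, 18.0),
    "service room": (25.0, 60.0),
}

def nearest_station(robot_pos):
    """Return (name, distance) of the docking station closest to the robot."""
    name = min(DOCKING_STATIONS,
               key=lambda s: math.dist(robot_pos, DOCKING_STATIONS[s]))
    return name, math.dist(robot_pos, DOCKING_STATIONS[name])

name, dist = nearest_station((10.0, 6.0))
print(name)  # ward A
```

A real implementation would also account for walls and corridors, i.e. path distance rather than straight-line distance.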

7. Experimental Environment (Robot Cell)

In the experimental platform, a six-axis joint-coordinate industrial robot (see Figure 13), the CLOOS Romat 76 with an integrated 2-axis positioning table, will be used until the development of the mobile robot is finished. This robot is normally used for welding tasks, but the E.O.A. system is mounted so that the tool can be changed for other tasks.

The CLOOS Romat 76 in the upright position is specified as follows [5]:

Configuration: revolving joints, 6 axes.
Drive: DC motors excited by permanent magnets.
Resolution: 1024 impulses/degree.
Load capacity: 10 kg.
Positioning error: 0.2 mm.
Working space: hemispherical, diameter approx. 3600 mm, height 3200 mm.
Rotation angle: axes 1, 4: 320°; axis 2: 240°; axis 3: 270°; axis 5: 180°; axis 6: 450°.
Controller: separate 32-bit microcontroller for each axis with 8 MB RAM.
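The listed rotation angles allow a simple joint-limit pre-check before a programme is downloaded to the robot. The sketch below interprets each listed value as the total travel centred on zero, which is an assumption, not something stated in the manual:

```python
# Joint-limit pre-check sketch for the Romat 76 (symmetric-about-zero travel
# is an assumed interpretation of the listed rotation angles).
AXIS_TRAVEL = {1: 320, 2: 240, 3: 270, 4: 320, 5: 180, 6: 450}  # degrees total

def within_limits(joint_angles):
    """joint_angles: dict axis -> commanded angle in degrees.
    True if every angle lies within +/- half the listed travel."""
    return all(abs(a) <= AXIS_TRAVEL[axis] / 2
               for axis, a in joint_angles.items())

print(within_limits({1: 150, 2: -100, 3: 0, 4: 0, 5: 80, 6: 200}))  # True
print(within_limits({1: 170, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0}))        # False
```

Such a check only catches out-of-range joint commands; reachability within the hemispherical workspace still requires the full kinematic model in IGRIP.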

The robot can be programmed conventionally by teach-in methods or, as described in Section 2, by use of the simulation and off-line programming system IGRIP.

    8. Potential Use of the Application Platform in Health Care Tasks

The main use of this Application Platform for health care systems lies in the combination of the simulation/CAD subsystem with the rapid prototyping system. With these subsystems, various ideas can be realised quickly and serve as a basis for further tests, such as ergonomic tests of joysticks or other peripherals for handicapped people.

Another advantage is the use of the simulation system for fast testing and design checking of the kinematic parts of the robot system. The algorithms of the controller can be emulated and checked for bugs, the behaviour in relation to the environment can be tested, and a 3D model can be viewed at an early stage of development in order to improve both the quality and the performance of the final product.


    Figure 13. The Cloos Romat 76 robot.

A new approach to the use in health care tasks may be the use of the STABIL project, developed by the Bayerisches Forschungszentrum für Wissensbasierte Systeme (FORWISS) in Erlangen, Germany.* The goal of the project is to develop methods and algorithms for recognising, modelling and analysing dynamic objects, in particular human beings. A first demonstrator was installed in Munich in 1994. The system consists of several cameras and is able to recognise movements, calculate the 3D position of persons, and roughly display the actual joint positions of an observed person [18]. For health care purposes, this system could be used during the task planning of a mobile unit or for technical aids like shifting handicapped people from their beds into their wheelchairs.

    9. Conclusion

The experimental platform specified in this paper provides a promising basis for training and experimenting in the development of an intelligent and autonomous mobile robot with high manoeuvring and manipulation capabilities. The main goal of this platform should be the formulation of requirement specifications for an autonomous mobile robot in a clinical environment. It comprises powerful systems such as a robot cell, a simulation and off-line programming station, a stereolithography system, a Neural Networks Simulator, 2D- and 3D-sensor systems, and telecommunication systems.

* FORWISS, Forschungsgruppe Wissensverarbeitung, Am Weichselgarten 7, 91058 Erlangen, e-mail: weierich@forwiss.uni-erlangen.de


The experiments undertaken and training courses organised within the following phase of the MOBINET project will show whether the current specification of the platform remains unchanged or whether technical updates have to be implemented.

References

1. Berger, U. and Schmidt, A.: Active vision system for planning and programming of industrial robots in one-of-a-kind manufacturing, in: D. P. Casasent and E. L. Hall (eds), Intelligent Robots and Computer Vision, Proc. SPIE, Vol. 2588, SPIE, Bellingham, WA, USA, 1995.
2. Berger, U.: Development of a sensor-based planning and programming system for industrial robot manufacturing of one-of-a-kind and small batch products (Entwicklung eines sensorgestützten Planungs- und Programmiersystems für den Industrierobotereinsatz in der Unikat-, Einzel- und Kleinserienfertigung), PhD Thesis, University of Bremen, Germany, 1995.
3. Berger, U.: Final report for the Craft Project: High precision safe grinding system for single-step hard finishing operation of gears, Project No. BRE2.CT94.13411.3, 1996.
4. Burns, M.: Automated Fabrication: Improving Productivity in Manufacturing, Prentice-Hall, Englewood Cliffs, NJ, USA.
5. Carl Cloos Schweißtechnik: Cloos Robot Manual, Haiger, Germany, 1994.
6. Deneb Robotics, Inc.: IGRIP 3.0 Manuals, Auburn Hills, Michigan, USA, 1996.
7. Eren, E.: Marin-ABC; MOEBIUS – Mobile Experimental Broadband Interconnection Using Satellites (EURACE II), BIBA, Bremen, 1995.
8. European Community: Europe in Numbers (Europa in Zahlen), Council of Publication for the EC, EGKS-EWG-EAG, Luxembourg, 1995.
9. Gebhardt, A.: Rapid Prototyping: Tools for Fast Product Development (Rapid Prototyping: Werkzeuge für die schnelle Produktentwicklung), Germany, 1996.
10. Hoffmann, N.: Simulation of Neural Networks (Simulation Neuronaler Netze), Germany, 1991.
11. Jacobs, P. F.: Rapid Prototyping and Manufacturing: Fundamentals of Stereolithography, Society of Manufacturing Engineers, 1992.
12. NeuralWare, Inc.: Neural Computing – A Technology Handbook for Professional II/Plus and NeuralWorks Explorer, Pittsburgh, USA, 1995.
13. Rojas, R.: Theory of Neural Networks – A Systematic Introduction (Theorie der neuronalen Netze: Eine systematische Einführung), Germany, 1993.
14. Schmidt, T.: Investigation of neural network based inverse kinematics formulations of a 6-axis articulated joint robot, ESPRIT Project No. 8338 NEUROBOT, Workpackage 4.6, BIBA, Bremen, 1995.
15. Schöneburg, E.: Neural Networks: Introduction, Overview and Application Feasibility (Neuronale Netzwerke: Einführung, Überblick und Anwendungsmöglichkeiten), Germany, 1990.
16. Social indicators, Statistics Finland, http://www.eva.fi/finland/8.htm.
17. Tuominen, J., Autere, A., Berger, U., and Meier, I. R.: Autonomous robot-based disassembly of automotive components, in: Proc. of the Conf. on Integration in Manufacturing, Vienna, Austria, 13–15 September, 1995.
18. Weierich, P.: Intelligent image tracking – The key for new applications of service robots and rationalisation in public traffic (Intelligente Bildverfolgung – Der Schlüssel zu neuen Anwendungen von Servicerobotern und zur Rationalisierung im ÖPNV), in: Symposium Aktuelle Entwicklungen und industrieller Einsatz der Bildverarbeitung, 5.–6. September, Aachen, Germany, 1996.

