

Evidence of Innovation
Steven E. Seltzer, MD, Editor

Three-Dimensional Imaging Techniques: A Current Perspective

Jayaram K. Udupa

To visualize human internal organs in their true form, shape, and function; to manipulate them to alter their structural form imaginatively; and to analyze them quantitatively for understanding their function have been primary human quests in medicine for centuries. Although the invention of X-rays gave birth to radiology, the advent of computerized tomography (CT) in the 1970s, magnetic resonance (MR) imaging in the 1980s, and various functional imaging modalities in the 1980s and 1990s brought us closer to fulfilling these quests. The development of three-dimensional (3D) imaging technology in the 1980s and its rapid evolution have brought us even closer to the realization of these dreams.

The main purpose of this article is to examine the current developments in the 3D imaging discipline. In so doing, I also briefly trace the history of past developments and outline future directions. I use the phrase "3D imaging" to refer collectively to three classes of processes called visualization, manipulation, and analysis. Here, the term "visualization" refers to the class of processes that aid in creating mental images of structures inclusive of their form, shape, and dynamics, if any. The term "manipulation" refers to the class of processes that allow interactively altering the form and shape of virtual models of structures. The term "analysis" refers to the class of processes that enable the extraction of quantitative information about the form, shape, and function of structures. Because of space constraints, I provide only sample references relating to the various aspects of 3D imaging. For more information, consult Udupa and Herman [1].

The suitability of technologies such as holography, varifocal mirrors, and rotating assemblies of light-emitting devices to 3D imaging has been investigated. However, these are not as advanced technologically and in usability as computer display technology. Therefore, I concentrate only on the latter in this article. I refer to the 3D and higher dimensional image data sets generated by imaging devices as "scenes."

HISTORY

Three-dimensional imaging began as a subject of scientific curiosity at a couple of centers in the 1970s [2-4]. Early works in 3D imaging methodology concentrated on detecting surfaces in scenes, forming surfaces from contour descriptions, and rendering them on a display screen [3-6]. Subsequently, methods for directly rendering surfaces from binarized scenes emerged [7, 8]. Simultaneously, the development of specialized hardware for fast rendering also was undertaken [9]. Although this hardware achieved impressive speeds, it soon became possible to achieve interactive rendering speed on common workstations via purely algorithmic improvement and software implementation [10]. In the early part of the 1980s, when 3D imaging options became available on scanner display consoles, they quickly became popular for skeletal imaging applications [11].

In the latter part of the 1980s, the notion of volume rendering was introduced [12], which was different from surface rendering principles in that it allowed specifying object membership in a fuzzy way and retaining this fuzziness in the rendition. Its applications

From the Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA. This research was supported by National Institutes of Health grant CA 56071. Address reprint requests to J. K. Udupa, Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 418 Service Dr., Blockley Hall, 4th Fl., Philadelphia, PA 19104-6021. Received May 31, 1994, and accepted for publication after revision August 25, 1994. Acad Radiol 1995;2:335-340. © 1995, Association of University Radiologists.



UDUPA Vol. 2, No. 4, April 1995

to hard- and soft-tissue imaging subsequently have been widely explored [13]. Although volume rendering introduced significantly increased computational complexity, new algorithm developments, combined with the rapid speed increase of workstations, have made it possible to overcome this extra burden via software [14]. In the latter part of the 1980s, soft-tissue and multimodality functional imaging were brought into the realm of 3D imaging, which continues to be the subject of much investigation [15]. This required the development of cross-modality registration techniques [16]. In the 1990s, the scope of 3D imaging seems to have expanded significantly from seeking direct clinical diagnostic and therapeutic applications to contributing to the generation of new knowledge in a variety of areas, including the study of the physiologic, metabolic, biomechanical, kinematic, anatomic, and growth-related functions of the internal organ systems.

To my knowledge, the first 3D imaging software package was the one developed by the Medical Image Processing Group, Department of Radiology, University of Pennsylvania; it is called DISPLAY [17]. This package was implemented on the Physician Display Console of the General Electric CT 8800 scanner (General Electric Medical Systems, Milwaukee, WI) in 1980, which may be the first attempt to industrialize 3D imaging. From about 1984, most medical scanner manufacturers started providing a 3D imaging option on their scanner consoles. Other types of 3D imaging systems that subsequently became available include dedicated workstations that incorporated 3D imaging algorithms in dedicated hardware and software packages designed to run on specific brands of graphics workstations.

CURRENT TRENDS

The current trends in 3D imaging research and system development are best explained via a generic schematic representation of what may be considered to be a complete 3D imaging system (Fig. 1). The workstation with the 3D imaging software forms the core of this system. The input devices provide image data (via scanners) and other forms of data related to the structures that are being visualized, manipulated, and analyzed. In an intraoperative situation, the scanners provide real-time data. Other input devices are used to point at and manipulate virtual models of structures and the actual structures themselves. The output devices are used to enhance output from the workstation in hard- and soft-copy forms and to create real models of the virtual computer models.

[Figure 1 diagram: input devices (scanners; pointing devices such as wands, robotic arms, and dials/knobs) feed a workstation running visualization, manipulation, and analysis software, with which the user interacts; output devices include video displays, printers, stereo/head-mounted displays, robotic arms, and model generators.]

FIGURE 1. A schematic representation of a complete three-dimensional imaging system.

User interaction facilities are an integral part of the complete system. In describing current developments in 3D imaging, one needs to consider, in addition to visualization, manipulation, and analysis, two other classes of operations on data (viz., porting data and preprocessing). The term "porting data" refers to the operations of accessing and sending data from and to data-generating and data-recording devices. The term "preprocessing" refers to operations that allow forming computer representations of the structural information that is captured in scenes or that is provided by some other means. In the rest of this section, I discuss the recent developments in 3D imaging as they pertain to these five processes.

In porting data, the primary issues are the format of representing data and the protocols used for their transmission between devices. The efforts behind the American College of Radiology-National Electrical Manufacturers Association standards in the 1980s and the Digital Imaging and Communications in Medicine standards in the 1990s have been addressing these issues commendably. From the 3D imaging perspective, several areas need considerable work. First, it should be possible to handle 3D and higher dimensional scenes as a whole. Second, as the need for exchange of structure information derived from scenes increases, the standards also should be able to handle structures of all kinds--rigid, nonrigid, static, and dynamic--as well as their collections as a whole. Such a system of structures for the whole human body will soon be a reality as image data from the Visible Human Project [18] become available. We and others have made major efforts to generalize the existing standards so that these needs are adequately met [19]. The main difficulty in doing so is anticipating needs in a complete and consistent way as the discipline evolves. Finally, standards for certain basic aspects of the user interface to imaging systems should be established so that the training effort required in the use of new systems can be minimized and different systems can be networked to operate consistently.

The main preprocessing operations used in 3D imaging are filtering, interpolation, and segmentation. Filtering converts a given scene into another scene of the same resolution after suppressing or enhancing certain intensity details. The details may be noise that needs to be suppressed or edges or regions of a structure such as a tumor that need to be enhanced. Interpolation converts a given scene into a scene of different (higher or lower) resolution. Its purpose in the 3D case is to create scenes of isotropic resolution. In the four-dimensional (4D) case of dynamic scenes, interpolation may be used to create finer time sampling so that the dynamics are portrayed more smoothly. The purpose of segmentation is to identify the structures of interest in the given scene and to derive a proper computer representation of them. Automatic segmentation is perhaps the most difficult of all 3D imaging research problems at present. The main issues here are how knowledge about an object can be properly expressed and how this expression can be translated into computable entities. Broadly, two classes of approaches to segmentation are currently being pursued: automatic and user controlled. Automatic methods attempt to produce an object description either as a boundary or as the region occupied by the object. Both fuzzy and binary forms are possible for boundary and region representation. Effective automatic methods exist only for special situations. In user-controlled methods, the user controls the segmentation process while it is in progress. Both binary and fuzzy output for both boundary and region segmentation are possible with these methods. The aim of the research efforts here is to minimize the time the user needs to spend in the process. User-controlled methods that can segment an object in a couple of minutes currently exist only for special situations. I think that this class of methods will bring tangible solutions to this difficult problem in the near future. For a good description of recent developments in preprocessing operations, see Robb [20].

The main visualization methods used in 3D imaging [1] are slice display, surface rendering, and volume rendering. In slice display, two-dimensional (2D) "slices" of the given scene are extracted and displayed on the computer screen. The slices may be the natural (tomographic) slices of the scene or those computed on a plane or a curved surface that is specified within the scene domain. Although the computation of "orthogonal" slices (axial, coronal, and sagittal) within the scene is straightforward, the extraction of oblique and curved slices requires high computational power and large random-access storage space. For 4D scenes with time constituting the fourth dimension, time can be frozen to carry out the slicing operation.
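As an illustration of slice display and interpolation working together, the following sketch (Python with NumPy and SciPy; the function name and the synthetic scene are my own illustrative choices, not code from any system discussed here) samples an arbitrary oblique slice from a 3D scene by trilinear interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(scene, origin, u, v, shape=(64, 64)):
    """Sample a 2D oblique slice from a 3D scene.

    origin: a point on the slice plane, in voxel coordinates (z, y, x).
    u, v: orthonormal in-plane direction vectors (z, y, x).
    Values falling between voxels are estimated by trilinear
    interpolation (order=1).
    """
    u = np.asarray(u, float)
    v = np.asarray(v, float)
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    # Voxel coordinates of every pixel on the slice plane.
    pts = (np.asarray(origin, float)[:, None, None]
           + u[:, None, None] * rows + v[:, None, None] * cols)
    return map_coordinates(scene, pts, order=1, mode='nearest')

# A synthetic scene whose intensity equals the x index of the voxel.
scene = np.tile(np.arange(32, dtype=float), (32, 32, 1))
# An axial slice (u along y, v along x) reproduces the intensity ramp.
sl = oblique_slice(scene, origin=(5, 0, 0), u=(0, 1, 0), v=(0, 0, 1),
                   shape=(32, 32))
```

Choosing non-axis-aligned u and v yields an oblique slice through the same scene; the cost of the interpolation at every output pixel is the reason oblique and curved slicing is noted above as computationally demanding.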

Surface rendering consists of two main operations: constructing a virtual (computer) model of the structures to be visualized and the subsequent rendition of this model on the computer screen. Model construction is facilitated by one or more of the preprocessing operations. The model may constitute just the surfaces of the structures or may represent their entire volume. In the case of the dynamic (4D) scenes representing dynamic structures, the model also can represent the dynamics of the structures. Rendering of the model essentially consists of creating a 2D picture that depicts the appearance of the model for a given viewpoint and for assumed conditions of the viewing environment, such as of the light source, ambient brightness, and various optical properties of the model.

Analogous to surface rendering, volume rendering

ious optical properties of the model. Analogous to surface rendering, volume rendering

also consists of two operations: constructing a virtual

model of the structures to be visualized and the subse- quent rendition of the model. Model construction here differs significantly from that used in surface rendering

in that fuzziness is allowed in indicating structure pres- ence. The basic elements used in model construction have a value associated with them that indicates the

extent of presence of the structure or its boundary. Because it is difficult to segment scenes unequivocally into binary scenes that indicate the presence or absence of structures, the flexibility afforded by fuzzy-model construction allows the uncertainty inherent in segmen-

tation to be retained in the constructed model. This flexibility by itself, however, does not necessarily guar- antee higher accuracy in segmentation. Most of the cur- rent methods of fuzzy-model construction are ad hoc in

nature, and this aspect of retention of inaccuracies accurately in the models requires considerable study.

The operation to render fuzzy models follows princi- ples similar to those underlying surface rendering except that, generally, it is computationally more inten-

sive than the latter. For model construction and render- ing in surface and volume rendering, numerous processing paths are possible, considering the variety of ways in which preprocessing operations can be

combined (e.g., interpolation, filtering, and segmenta- tion can be applied to a scene in totally independent orders) and the variety of available model construction

337

Page 4: Three-dimensional imaging techniques: A current perspective

U D U P A Vol. 2, No. 4, April 1995

and rendering methods. Although this richness of oper- ations is desirable, it brings the onus of carefully sorting through the myriad possible visualization methods to determine which among them are the best for a given medical application. Little scientific work has been done in this direction; research by Vannier et al. [21] is perhaps the only exception. Udupa and Goncalves [22] attempted to develop an algebra for visualization oper- ations and a methodology for their categorization and grading on the basis of task-specific mathematical phantoms. Task-based sifting of methods in this fashion becomes essential because the same method can pro- duce extremely different results in different situations, as demonstrated in Figure 2; see also Udupa and Gon- canes [22].
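To make the distinction between the two rendering families concrete, the following sketch (Python with NumPy; the function names and the synthetic fuzzy ball are my own illustrative choices, not the methods of the references cited above) renders the same scene along the z axis in both styles: a binary surface method that shades the first occupied voxel on each ray by its depth, and a fuzzy volume method that composites per-voxel opacities front to back.

```python
import numpy as np

def surface_render(binary_scene):
    """Binary surface rendering: orthographic z-buffer with depth shading."""
    nz = binary_scene.shape[0]
    hit = binary_scene.any(axis=0)                         # rays that meet the object
    depth = np.argmax(binary_scene, axis=0).astype(float)  # first occupied voxel per ray
    image = np.zeros(hit.shape)
    image[hit] = 1.0 - depth[hit] / nz                     # nearer surface points are brighter
    return image

def volume_render(opacity_scene, value_scene):
    """Fuzzy volume rendering: front-to-back compositing along each z ray."""
    image = np.zeros(opacity_scene.shape[1:])
    transparency = np.ones_like(image)
    for a, c in zip(opacity_scene, value_scene):  # march through slices, front first
        image += transparency * a * c             # contribution weighted by what is left
        transparency *= (1.0 - a)                 # each sample occludes those behind it
    return image

# A fuzzy ball in a 32^3 scene: full membership inside, half membership in a shell.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
r2 = (z - 16)**2 + (y - 16)**2 + (x - 16)**2
opacity = np.where(r2 <= 6**2, 1.0, np.where(r2 <= 8**2, 0.5, 0.0))
surf = surface_render(opacity >= 0.5)  # binarize first, then render the surface
vol = volume_render(opacity, np.ones_like(opacity))
```

Note how the surface method must commit to a binary membership before rendering, discarding the shell's uncertainty, whereas the compositing step carries the fuzzy membership values all the way into the rendition.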

The speed of computation is an issue in volume rendering and in dealing with massive data sets such as those generated by the Visible Human Project [18]. Current approaches addressing this issue include the development of specialized hardware [23] and the development of fast algorithms [14] for general-purpose workstations of both the single and the multiprocessing kind. The latter approach seems to be more attractive because the computational power of general-purpose workstations is increasing rapidly, and desktop multiprocessing systems are becoming more common and affordable. The greatest advantage of this approach is the widespread availability of its solutions to those other than the developers and the ease of portability of their implementation from one platform to another.

The activity of manipulation in 3D imaging is not as well developed as visualization. Algorithms for manipulating models of rigid structures (fragmenting, moving fragments, reflecting about a plane of symmetry to assess unilateral deformities) have been developed [10].

This ability has been used mainly in aiding craniomaxillofacial surgery planning [24]. However, the development of manipulation methods that will aid in the study of hard-tissue structures taking into consideration their functional aspects (e.g., motion, growth) is vastly more difficult. The development of theory and algorithms to manipulate soft-tissue structures also is challenging. Generic soft-tissue models that are based on local deformations that are independent of the underlying bone and muscle but that deal with only the skin surface are used as an aid in facial plastic surgery [25]. The use of multiple layers--skin, fatty tissue, muscle, and bone, where bone is stationary--has been attempted to model facial expression animation [26]. Attempts also have been made to capture soft-tissue mechanical properties in the virtual model, for example, how a musculotendon actuator functions or what the force-length function is [27]. However, no attempt seems to have been made to integrate hard-tissue (bone) changes with the soft-tissue modifications in a model. I envisage that much of the medical utility of 3D imaging will come from future ability to simulate dynamically the functioning of hard- and soft-tissue assemblies on the basis of patient-specific scene data.
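The reflection operation used in assessing unilateral deformities can be sketched in a few lines (Python with NumPy; this is a generic geometric illustration of mine, not code from the systems cited above): the vertices of the unaffected side are mirrored across the plane of symmetry and compared with the affected side.

```python
import numpy as np

def reflect_about_plane(points, point_on_plane, normal):
    """Mirror model vertices about an arbitrary plane.

    points: (n, 3) array of vertex coordinates.
    point_on_plane: any point lying on the mirror plane.
    normal: the plane's normal vector (need not be unit length).
    """
    p = np.asarray(points, float)
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    # Signed distance of each vertex to the plane ...
    d = (p - np.asarray(point_on_plane, float)) @ n
    # ... then move each vertex twice that distance to its mirror image.
    return p - 2.0 * d[:, None] * n

pts = np.array([[3.0, 1.0, 0.0], [-2.0, 5.0, 1.0]])
# Reflect about the plane x = 0 (a midsagittal plane if x runs left-right).
mirrored = reflect_about_plane(pts, point_on_plane=(0.0, 0.0, 0.0),
                               normal=(1.0, 0.0, 0.0))
```

Because the operation is rigid, applying it twice restores the original model, which makes it a convenient sanity check in an interactive manipulation session.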

Advances made in the activity of analysis in 3D imaging lag far behind those made in visualization. Currently, capabilities exist to derive a variety of structure geometry measures from the virtual models, such as distances, widths, heights, thicknesses, lengths, curvatures, areas, volumes, and how these parameters change with time in the case of time-varying structures [1]. When functional and anatomic information are obtained for a given organ system from multiple imaging modalities (e.g., positron emission tomography and MR imaging), capabilities also exist to represent the scenes so obtained in a common coordinate system of reference. This usually requires the identification and modeling of some common structure or substructure in each modality and matching of the models. Once the scenes are registered in this fashion, it is then possible to compute various measures of the function (e.g., metabolic activity) within specific structures or substructures using the tools of visualization, manipulation, and analysis.

FIGURE 2. The same method of rendering produced the high-quality rendition of a child's skull from computed tomography data (A) and the poor rendition of the talus of a live foot from magnetic resonance imaging data (B). A better rendition of the same talus by a different method is also shown for comparison (C).
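Once corresponding landmarks on the common structure have been identified in the two scenes, the matching step reduces to a classical least-squares rigid alignment. The sketch below (Python with NumPy) shows the standard Procrustes/SVD solution; it is a generic illustration of the matching step, not the specific method of reference [16].

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid registration of two matched landmark sets.

    Returns a rotation R and translation t minimizing the sum of
    squared distances ||R @ s_i + t - g_i||^2 over corresponding
    landmarks (the Kabsch/Procrustes solution via SVD).
    """
    s = np.asarray(source, float)
    g = np.asarray(target, float)
    sc, gc = s.mean(axis=0), g.mean(axis=0)
    H = (s - sc).T @ (g - gc)            # cross-covariance of the centered sets
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper rotation (reflection).
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = gc - R @ sc
    return R, t

# Recover a known pose: rotate a point set by 0.5 rad about z and translate.
rng = np.random.default_rng(0)
src = rng.standard_normal((5, 3))
th = 0.5
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
tgt = src @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_register(src, tgt)
```

With the recovered R and t, every voxel of one scene can be mapped into the coordinate system of the other, after which functional measures can be read out within anatomically defined substructures.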

Three-dimensional dynamics of rigid and nonrigid organ systems that are based on acquired scene data are just beginning to be investigated [28, 29]. Although these studies fall under "analysis operations" according to my classification, they pose some of the most difficult challenges in all aspects of 3D imaging. One of the major drawbacks in addressing these problems is the inability to handle the rigid and nonrigid components of an organ system in an integrated fashion, especially in dynamic systems, not just in the analysis process but also in visualization and manipulation.

FUTURE DIRECTIONS

Future developments in 3D imaging are likely to be dictated by the requirements of an encompassing 3D imaging system such as the one schematically represented in Figure 1. Standards will need to be developed for interfacing to devices other than imaging scanners, for handling scenes of dimensionality greater than two as well as structure systems (to facilitate the exchange of digital atlases), and for addressing some of the common and basic user-interface issues.

Three-dimensional imaging software systems that are open, machine-independent, object-type (rigid, nonrigid, static, and dynamic) independent, and user-expandable can greatly facilitate cooperative research and minimize repeated efforts. (My research group has attempted to emulate these characteristics in a system, called 3DVIEWNIX, which we developed with the scheme expressed in Figure 1 as its long-term goal. This system has been installed at more than 100 sites worldwide. Most of the views expressed in this article resulted from the lessons learned during the development of this system.) Software systems whose design fulfills these characteristics as well as the requirements of a complete system as shown in Figure 1 will facilitate significantly the future development of applications.

Lack of effective solutions to the segmentation problem is hindering the progress of 3D imaging applications. I think that interactive paradigms have the potential to yield tangible solutions for segmentation. The use of stereoscopic and immersive displays combined with advanced input devices opens up numerous avenues for interactive segmentation strategies. Newer display technologies allowing immersive displays and newer interactive techniques will have numerous consequences on the design, development, and implementation of algorithms for visualization and manipulation.

Considerable work is needed before researchers will be able to model soft tissues and to analyze organ systems composed of only soft tissues or of soft and hard tissues. Vastly more sophisticated analysis will become possible once the modeling of soft tissues that takes into account their mechanical properties becomes practical.

Validation of visualization and other operations in which subjectivity invariably enters into the assessment of their accuracy has been largely neglected. Statements of accuracy and relative superiority relating to these operations reported in the literature are mostly expressions of superficial observations. Much insight into these phenomena can be obtained through mathematical phantom simulations, although scientific observer studies should be the final arbiters for closely competing techniques.
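A minimal example of such a phantom study follows (Python with NumPy; the phantom, the noise level, and the operation being graded are all illustrative choices of mine): an object of analytically known volume is embedded in a noisy synthetic scene, the operation under test is applied, and its output is scored against the known truth.

```python
import numpy as np

def sphere_phantom(size, radius, noise_sd, rng):
    """A mathematical phantom: a ball of known volume plus Gaussian noise."""
    z, y, x = np.mgrid[0:size, 0:size, 0:size]
    c = (size - 1) / 2.0
    truth = ((z - c)**2 + (y - c)**2 + (x - c)**2) <= radius**2
    scene = truth.astype(float) + rng.normal(0.0, noise_sd, truth.shape)
    return scene, truth

def volume_error(segmentation, truth):
    """Relative volume error of a binary segmentation against ground truth."""
    return abs(segmentation.sum() - truth.sum()) / truth.sum()

rng = np.random.default_rng(1)
scene, truth = sphere_phantom(size=48, radius=12, noise_sd=0.1, rng=rng)
seg = scene > 0.5   # the operation under evaluation (here, a simple threshold)
err = volume_error(seg, truth)
```

Because the truth is known exactly, the same harness can grade competing preprocessing or visualization chains objectively; observer studies would then arbitrate only among the methods that survive this sifting.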

In conclusion, outstanding progress has been made during the past 15 years in 3D imaging--it has moved from imaging research laboratories to the clinic. During the next 10 years, "virtual humans" complete with anatomic, histologic, functional, mechanical, and perhaps even psychological information may become a reality. In the process, the field will face exciting multidisciplinary challenges, but the outcome will be a 3D imaging system that will serve as a personal virtual laboratory to the practicing physician as well as to the imaging scientist.

ACKNOWLEDGMENTS

The help of Brenda Stone and Mary A. Blue in preparing the manuscript is appreciated.

REFERENCES

1. Udupa J, Herman G, eds. 3D imaging in medicine. Boca Raton, FL: CRC Press, 1991.

2. Greenleaf JF, Tu TS, Wood E. Computer-generated three-dimensional oscilloscopic images and associated techniques for display and study of the spatial distribution of pulmonary blood flow. IEEE Trans Nucl Sci 1970;17:353-359.

3. Huang H, Ledley R. Three-dimensional image construction from in vivo consecutive transverse axial sections. Comput Biol Med 1975;5:165-170.

4. Herman G, Liu H. Three-dimensional display of human organs from computed tomograms. Comput Graph Image Process 1979;9:1-29.




5. Udupa J. Interactive segmentation and boundary surface formation for 3D digital images. Comput Graph Image Process 1982;18:213-235.

6. Cook P. A study of three-dimensional reconstruction algorithms. Automedica 1981;4:3-12.

7. Farrell E, Zappulla R, Yang W. Color 3D imaging of normal and pathologic intracranial structures. IEEE Comput Graph Appl 1984;4:5-17.

8. Frieder G, Gordon D, Reynolds R. Back-to-front display of voxel-based objects. IEEE Comput Graph Appl 1985;5:52-60.

9. Goldwasser S, Reynolds R. Real-time display and manipulation of 3D medical objects: the voxel architecture. Comput Graph Image Process 1987;39:1-27.

10. Udupa J, Odhner D. Fast visualization, manipulation and analysis of binary volumetric objects. IEEE Comput Graph Appl 1991;11:53-62.

11. Vannier M, Marsh J, Warren J. Three-dimensional computer graphics for craniofacial surgical planning and evaluation. Comput Graph 1983;17:263-274.

12. Drebin R, Carpenter L, Hanrahan P. Volume rendering. Comput Graph 1988;22:65-74.

13. Fishman EK, Drebin RA, Magid D, et al. Volumetric rendering techniques: applications for 3-dimensional imaging of the hip. Radiology 1987;16:56-61.

14. Udupa J, Odhner D. Shell rendering. IEEE Comput Graph Appl 1993;13:58-67.

15. Levin D, Pelizzari C, Chen G, Chen C, Cooper M. Retrospective geomet- ric correlation of MR, CT and PET images. Radiology 1988;169:817-823.

16. Maguire GQ, Noz M, Lee E, Schimpf J. Correlation methods for tomo- graphic images using two and three dimensional techniques. In: Bacharach S, ed., Information processing in medical imaging. Dordrecht, the Netherlands: Martinus Nijhoff, 1986:266-279.

17. Udupa J. DISPLAY: a system of programs for two- and three-dimensional display of medical objects from CT data (technical report no. MIPG41). Medical Image Processing Group, Department of Computer Science, State University of New York, Buffalo, NY, 1980.

18. Ackerman M, Masys D. The Visible Human Project of the National Library of Medicine: the need for representation standards for volumetric anatomy. In: Vannier MW, Yates RE, Whitestone J, eds., Proceedings of the workshop on electronic imaging of the human body. Dayton, OH: Wright-Patterson Air Force Base, Human Engineering Division, Armstrong Laboratory, 1992:77-87.

19. Udupa J, Hung H, Odhner D, Goncalves R. Multidimensional data format specification: a generalization of the American College of Radiology-National Electrical Manufacturers Association standards. J Digit Imaging 1992;5:26-46.

20. Robb R, ed. Proceedings of visualization in biomedical computing, 1992. Bellingham, WA: Society of Photo Optical Instrumentation Engineers, 1992.

21. Vannier MW, Hildebolt CF, Marsh JL, et al. Craniosynostosis: diagnostic value of three-dimensional CT reconstruction. Radiology 1989;173:669-673.

22. Udupa J, Goncalves R. Imaging transforms for surface and volume rendering. J Digit Imaging 1993;6:213-236.

23. Fuchs H, Poulton J, Eyles J, et al. Pixel-planes 5: a heterogeneous multiprocessor graphics system using processor-enhanced memories. Comput Graph 1989;23:79-88.

24. Cutting C, Grayson B, Bookstein F, Fellingham L, McCarthy J. Computer-aided planning and evaluation of facial and orthognathic surgery. Comput Plastic Surg 1986;13:449-461.

25. Thalmann NM, Thalmann D. Towards virtual humans in medicine: a prospective view. Comput Med Imaging Graph 1994;18:97-106.

26. Terzopoulos D, Waters K. Techniques for realistic facial modeling and animation. In: Proceedings of Computer Animation. New York: Springer-Verlag, 1991:59-74.

27. Jianhua S, Thalmann D. Muscle-based human body deformations. In: Proceedings of CAD/Graphics. 1993:95-100.

28. Udupa J, Hirsch B, Samarasekera S, Goncalves R. Joint kinematics via 3D MR imaging. In: SPIE proceedings: visualization in biomedical computing, 1992:664-670.

29. Axel L, Goncalves RC, Bloomgarden D. Regional heart wall motion: two-dimensional analysis and functional imaging of regional heart wall motion with magnetic resonance imaging. Radiology 1992;183:745-750.
