[IEEE 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation - Orlando, FL, USA (12-15 Oct. 1997)]

Human - Machine Performance Configuration for Computational Cybernetics

Insook Choi† and Robin Bargar* Beckman Institute, 405 N. Mathews, Urbana, IL 61801, USA

email: [email protected]

ABSTRACT

The term human-machine performance has a precedent in the term human-machine intelligent interaction, with the following two emphases in our project. The first emphasis is on the multi-modal capacity of a performer. This capacity is supported by parallel processing computing power, various input devices, gesture input time scheduling techniques, and the configurations of graphics and sound engines to provide perceptual feedback to a performer. The second is on the method for coupling two spaces: a physical space where an observer acts, and an abstract space where computational states are represented as a solution space or control parameter space. The coupling should be implemented in such a way that the dichotomy of the two spaces is intelligible and composable. Few methods have been tested in VR environments with graphical interfaces.

The circularity in human-machine performance supports cybernetic observation, by which an observer is included in a computing environment as an active player. The paper describes three technical areas we have established in order to support this configuration. 1) A software architecture to accommodate interactive performance, which is crucial to achieving circularity in a computing environment. 2) An implementation of a manifold space to maintain multiple models in parallel for real-time access and control. A graphic interface for manifold space and a method for notating manifold time for time-critical observation will be discussed. 3) A theoretical framework for coupling computational space and physical space given the infrastructure to be discussed in (1). In particular, the core of our project lies in exploring high-dimensional systems with an ecological orientation towards an arbitrarily defined n-dimensional space.

1. INTRODUCTION

This paper presents a task-based definition of computational cybernetics as human-machine performance. Our application is supported by a virtual reality (VR) computing environment where multiple applications can be observed in an interactive mode. We wish to support not only data display but also, more importantly, an active exploration of computational models that are dynamical. The models are explored with n-dimensional interactive parameter variations. The tasks involve a generalized VR software architecture and performance technology integration. The concept of performance includes the performances of computers and human performers. We emphasize that human performers are observers whose performance capacity includes the ability to see, to move, and to listen. Thus the system components, software architecture, and platform configurations aim not only to support this capacity but also to allow observers to successively refine their performance observation skills through multiple investigations conducted in the virtual environment.

† Human-Computer Intelligent Interaction, Beckman Institute, and NCSA National Center for Supercomputing Applications

0-7803-4053-1/97/$10.00 © 1997 IEEE

2. PERFORMANCE METRICS

The concept of human-machine performance draws upon two existing domains of practice. One is engineering practice, where the focus is on system design and integration, signal transmission, and control strategies. The other is performance practice, where human performers, particularly musicians, arrive at a fine ability to interact with their instruments. Usually musicians achieve their skill by exercising historically-validated task repetitions. Due to its particular association with culture, not every aspect of musical performance practice is suitable for our concept. However, with the combination of both practices, supported by an appropriate hardware-software configuration, the concept of human-machine performance intersects with the criteria for cybernetic observation.

2.1 Time-Critical Machine Performance We are concerned with the following metrics for computer performance: (1) the number of CPU cycles required to execute a state change in a computational model; (2) the number of input control devices and the rate at which each permits an observer to effect changes to system states; (3) the number of display signal generators and the rates at which they update the display engines. These metrics correspond to three classes of VR processes: (1) simulation processes, which include ODEs or other equation-based models, and interactions such as collision detection; (2) input processes, which iteratively poll input device drivers; (3) display processes, which update states of graphics and sound synthesis models and continuously render output signals from those models.

2.1.1 Temporal characteristics of machine performance processes The three classes of processes, input, display and simulation, execute at independent time scales, as input rate, display rate and simulation rate. An input rate originates in a peripheral device and is filtered at the rate its device driver is polled. Position information is transmitted at 48 Hz from magnetic tracking devices, including head, body and hand position measurements in three dimensions.¹ Pressure sensors transmit at 96 Hz, limited by the serial baud rate [3]. Signals from mechanical devices such as joystick operations are transmitted at 200 Hz or higher. Software polling of input device drivers is less time-critical than simulation and display processes. It is managed at rates below 60 Hz in our present applications.

Display rates for audio and video signals differ greatly from one another. Real-time display of raster graphics can achieve rates of 30 Hz, but rates of 5-20 Hz are more common. When visual display rates drop below 10 Hz the perception of smooth motion is disrupted. Audio samples are not output to uniform raster arrays but to a ring array. Samples are then transferred to the DAC using variable-sized double-buffered scheduling. The scheduler detects a low water mark on the ring buffer to determine when to block high-level processes to allow sound sample computation. A high water mark limits the number of samples that may be computed in advance of buffering, to avoid excessive latency between input sample computation and display [6]. Audio signals are generated at a 44,100 Hz sampling rate. This produces acceptable latency at buffer lengths of 512 to 2048 sample-increments, a buffer exchange rate of 43-172 Hz for stereo signals. Table 1 indicates different rates and a different prioritization for audio and graphic buffering. These differences demonstrate the prudence of organizing graphics and audio processes independently.

¹ This is the rate of the Ascension Flock of Birds system.
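The low/high water mark scheduling described above can be sketched as a simple ring buffer. This is an illustrative, single-threaded model, not the NCSA Sound Server implementation; the class and method names are hypothetical.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of double-buffered audio scheduling with low/high water marks.
// Synthesis fills the ring up to the high water mark (bounding latency);
// the scheduler triggers refills when the fill level reaches the low mark.
class RingBuffer {
public:
    RingBuffer(std::size_t capacity, std::size_t lowWater, std::size_t highWater)
        : buf_(capacity), low_(lowWater), high_(highWater) {}

    // Synthesis side: append samples unless the high water mark is reached.
    // Returns the number of samples actually written.
    std::size_t write(const std::vector<float>& samples) {
        std::size_t n = 0;
        for (float s : samples) {
            if (count_ >= high_) break;   // cap pre-computed samples to bound latency
            buf_[head_] = s;
            head_ = (head_ + 1) % buf_.size();
            ++count_;
            ++n;
        }
        return n;
    }

    // DAC side: remove one hardware buffer's worth of samples.
    std::vector<float> read(std::size_t n) {
        std::vector<float> out;
        while (n-- && count_ > 0) {
            out.push_back(buf_[tail_]);
            tail_ = (tail_ + 1) % buf_.size();
            --count_;
        }
        return out;
    }

    // The scheduler blocks higher-level processes and runs synthesis
    // whenever the fill level falls to the low water mark.
    bool needsRefill() const { return count_ <= low_; }
    std::size_t fill() const { return count_; }

private:
    std::vector<float> buf_;
    std::size_t low_, high_;
    std::size_t head_ = 0, tail_ = 0, count_ = 0;
};
```

With the buffer lengths cited in the text (512-2048 samples), the high water mark is what keeps the synthesis process from running arbitrarily far ahead of the DAC.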

Table 1. Conflicting protocols for graphics rendering and sound synthesis

                      graphics                  sound synthesis
  display unit        pixels/frame              44,100 samples per second, per channel
  display rate        variable, 5-25 fps        fixed at sample rate
  frame dependency    current frame depends     next frame depends
                      on next                   on current frame
  minimum rate        10-12 fps                 20 Hz = tone / sample rate

2.1.2 Display constraints for simulations Simulation rates vary according to the nature of the model. In physically-based dynamics, numerical accuracy and computational efficiency are inverse to one another. The number of cycles required by an ODE depends upon the accuracy of its integration method and the number of independent motion paths involved in the model. Approximations are introduced into numerical models to allow computations to be sustained in real-time. Approximations in physically-based models present a high risk of real-time accumulation of error. For this reason most of the parameters vary within strict boundaries. For ODEs we apply a high water mark and low water mark boundary condition for choosing the integration timestep. For rendering smooth-motion graphics we apply a minimum Δt of 0.0005 seconds and a maximum Δt of 0.001 seconds. The service rate of physics updating graphics ranges between 0.01 and 0.001 seconds, resulting in several physics integration timesteps for each graphic frame update. Collisions are tested at a rate of 0.15 - 0.3 seconds depending upon the number and velocity range of bodies involved. Control signals for sound events are transmitted every 0.01 - 0.15 seconds, whereas audio signals are computed at a Δt of ~2.25e-5 seconds.
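The bounded-timestep loop above can be sketched as follows. This is a simplified illustration, assuming explicit Euler on a toy decay ODE dx/dt = -kx; it is not the authors' integrator, and the names are hypothetical.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Illustrative integration loop: the ODE timestep dt is clamped between
// low/high water mark bounds, and the simulation takes as many whole steps
// as needed to cover one graphics service interval, so several integration
// timesteps occur per graphic frame update.
struct EulerIntegrator {
    double state;        // current value of x
    double k;            // decay constant of the toy ODE dx/dt = -k x
    long   steps = 0;    // total integration steps taken

    // Advance the model by frameInterval seconds with a timestep clamped to
    // [dtMin, dtMax], mirroring the water mark boundary condition.
    void advance(double frameInterval, double requestedDt,
                 double dtMin = 0.0005, double dtMax = 0.001) {
        double dt = std::clamp(requestedDt, dtMin, dtMax);
        // number of whole steps covering the interval (epsilon guards FP noise)
        long n = std::max(1L, (long)std::ceil(frameInterval / dt - 1e-9));
        double h = frameInterval / n;
        for (long i = 0; i < n; ++i) {
            state += h * (-k * state);   // one explicit Euler step
            ++steps;
        }
    }
};
```

With a 0.01 s graphics service interval and the maximum Δt of 0.001 s, ten integration steps occur per frame, matching the several-steps-per-frame ratio described in the text.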

Interactions between graphical objects create worst-case scenarios for simulating motions and collisions. For efficiency we compute collision detection once per graphical time frame. Detection is based upon the centers of objects, which are assumed to be spherical. Of course the graphical objects do not collide at their centers, and their motions do not constrain them to collide at graphical time steps. We limit the velocities of potentially colliding objects so that they cannot pass through one another completely within one graphical time step. Their degree of overlap at the frame where collision is detected adversely affects the accuracy of the motion vector computed from the collision. The adjustments are required to match integration rates and display rates, in order to reduce aliasing. These adjustments depend on the CPU performance and the complexity of the specific performance space.
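The per-frame sphere test and the velocity cap can be sketched directly. This is a minimal illustration of the scheme described above; the vector type and function names are assumptions.

```cpp
#include <cassert>
#include <cmath>

// Sketch of the per-frame collision test: bodies are treated as spheres,
// tested once per graphical frame, and speeds are capped so that no body
// can pass completely through another between successive tests.
struct Vec3 { double x, y, z; };

double dist(const Vec3& a, const Vec3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Spheres collide when their centers are closer than the sum of their radii.
bool collides(const Vec3& ca, double ra, const Vec3& cb, double rb) {
    return dist(ca, cb) < ra + rb;
}

// Velocity cap: with a frame interval dt, limit the closing speed so two
// bodies cannot tunnel through each other between collision tests.
double maxSpeed(double ra, double rb, double dt) {
    return (ra + rb) / dt;   // at most the combined radii traversed per frame
}
```

The cap is conservative: a slower frame rate (larger dt) forces a lower speed limit, which is one reason the adjustments depend on CPU performance.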

Temporally-independent processes help to maintain real-time response to input signals and real-time delivery of display signals. A temporal description must support the coordination of dissimilar process rates. Sequential process organization risks the introduction of intolerable delays into the system [9]. If processes of dissimilar rates are sequentially organized in a VR application, then the slower processes will tend to block faster processes. Computational performance improves when processes are parallel and asynchronous.

2.1.3 Interactivity and hysteresis Interactivity is characterized by the access to control parameters and the coherence of a state change in simulations. Immediate display supports an observer's recognition of feedback from an interaction. The observable behavior of the models is governed by the coherence law particular to the given models under interaction. This particularity supports an observer's ability to distinguish one interaction from another. Hysteresis is a property whereby a system alters its response to a given input signal as it is repeatedly applied. Hysteresis is characterized by a function describing a change in the relation between an input signal and a corresponding state change [7]. The Necker cube optical illusion presents a classical example of hysteresis. Prolonged attention to the cube results in an increasing sequence of perceived changes in perspective. The longer one looks at the cube, the more rapidly changes take place. Hysteresis may be an emergent property of a numerical simulation, or it may be an explicit mechanism for altering the storage and retrieval of system states.
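A minimal explicit mechanism for hysteresis is a two-threshold (Schmitt-trigger style) element, in which the response to the same input depends on stored state. This is only one narrow instance of the history-dependence discussed above, and the names are illustrative.

```cpp
#include <cassert>

// Minimal hysteresis sketch: the input/state-change relation is
// history-dependent, so repeated application of the same input value can
// produce different outputs depending on the stored state.
class HysteresisGate {
public:
    HysteresisGate(double low, double high) : low_(low), high_(high) {}

    // Apply an input sample; the output flips only when the input crosses
    // the threshold appropriate to the current state.
    bool apply(double input) {
        if (on_ && input < low_)   on_ = false;  // falling threshold
        if (!on_ && input > high_) on_ = true;   // rising threshold
        return on_;
    }

private:
    double low_, high_;   // falling and rising thresholds
    bool on_ = false;     // stored state
};
```

An input of 0.5 yields a different response before and after a large excursion, which is exactly the altered response to a repeated input signal that defines hysteresis here.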

2.2 Human Performance We learn two things from music performance practice. First, the practice encourages performers to develop an active kinesthetic intelligence. Second, the practice encourages performers to develop auditory intelligence. Considering the various dimensions and attributes of musical instruments, we note that musicians tend to acquire a variety of kinesthetic control in their own body movements when interacting with the instruments. Musicians with the ability to play a variety of instruments show the potential of human beings for developing kinesthetic intelligence that is adaptable to various conditions of interacting systems. In this light we wish to overcome the limited use of this intelligence in current technology. The auditory intelligence of performers is a finely integrated intelligence that accounts for: (1) an ecological orientation towards the acoustic space when monitoring the resulting sounds; (2) the sensibility towards the instruments, in other words, the sensory intelligence towards the systems in interaction; (3) the ability to adjust performance strategy with respect to the resulting sound. In particular, the performance strategy involves an adaptive kinesthetic energy control based upon attention to the sound output.

2.2.1 Synchronization and latency Synchronization is an action to apply a uniform temporal scale to manage asynchronous processes. Synchronization represents an automated or a human observer applying a common temporal unit to processes which operate at different timescales. The temporal unit may be applied to a control signal to modify a simulation process, including coupling or phase-locking one process to another. This is to obtain synchronous advance among all process time steps. Alternatively the temporal unit is applied to the signals passing between processes, with no modification to the processes' internal states. For example, we can intercept two or more asynchronous signals and annotate a sample location at each, then align the samples before transmitting the signals to their destination processes. Intercept-synchronization may be performed at multiple sites in a virtual performance system: (1) control signals may be synchronized while passing from device drivers to rendering processes; (2) display signals may be synchronized while passing from rendering processes to display devices. Synchronization performed in simulations avoids latencies, which are a chronic problem of intercept-synchronization.
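The intercept step, annotating samples with times and aligning the nearest ones before forwarding, can be sketched as below. The stream layout and names are hypothetical; a real system would work incrementally rather than scanning whole buffers.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Sketch of intercept-synchronization: two asynchronous signal streams are
// annotated with sample times, and for a chosen reference time the nearest
// sample of each stream is selected before both are forwarded, with no
// modification to the producing processes' internal states.
struct Sample { double t; double value; };

// Index of the sample whose annotated time is closest to tRef.
std::size_t nearest(const std::vector<Sample>& stream, double tRef) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < stream.size(); ++i)
        if (std::fabs(stream[i].t - tRef) < std::fabs(stream[best].t - tRef))
            best = i;
    return best;
}

// Align two streams at tRef: the pair of values a downstream process
// receives as "simultaneous".
std::pair<double, double> align(const std::vector<Sample>& a,
                                const std::vector<Sample>& b, double tRef) {
    return { a[nearest(a, tRef)].value, b[nearest(b, tRef)].value };
}
```

Because alignment happens on intercepted copies of the signals, the source processes continue to run at their own rates, at the cost of the added latency noted above.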

Real-time delivery of control signals and of rendered signals is critical for synchronizing an observer's movements with a system's response. Latencies of greater than 100 ms are intrusive to the perception of synchronization between actions and multi-process output signals [9]. Latency is implicit when a processor's internal cycle time is greater than the external frame rate required by the system to deliver an output signal to another process, thereby delaying the consequences of state changes. Latency is explicit when an action imparted to a system marks an observable time interval before the system responds [8]. Latency tolerances have wider margins in computer graphics than they have in auditory displays. This is due to lower temporal resolution in the visual signal. For example, if a collision is depicted in sound and graphics, the moment of the collision will be more accurately reported by the ears of an observer than by the eyes.

2.2.2 Problem Statement We encounter an experimental technology where optimization includes metrics for calibrating human performance and machine process performance. Human orientation is a highly structured parallel process involving in part kinesthetics, memory and predictions. Compared to a human, a computable orientation is minimally parallel, having lesser memory and little or no predictive capability, therefore encoding only "objective" information. Thus the machine cannot be programmed according to human performance metrics. However, we can calibrate the orientation of a performance space to support the coherent presence of an observer in a computing environment.


3. ARCHITECTURE FOR SUPPORTING TIME- CRITICAL OBSERVATION

Performance practice provides a comprehensive paradigm for cybernetic observation. Computational cybernetics is prototyped by engineering integrated systems to accommodate cybernetic observations. Research is conducted in an integrated experimental system for virtual reality performance. The architecture supporting human-machine performance comprises an image, a sound, and a control subsystem. The system architecture has been designed for modularity to support extensions and new projects. The basic laboratory implementations are in OpenGL for graphics and the NCSA Sound Server for synthesized sound [1].

The base immersive display is a CAVE system, a surround-screen surround-sound virtual reality theater supported by position tracking peripherals and a software library [4]. CAVE functions manage buffering and rendering of monoptic or stereoptic images displayed on multiple screens. The projective geometry includes a model of the physical locations of screens. This allows images to be displayed continuously and synchronously across multiple screens, with the scene orientation determined by the viewing angle of an active observer wearing a head position tracking device. Off-axis projection is applied by distorting the images according to a virtual projection plane that maintains an angle of projection parallel to the observer's viewing angle. The distortion increases with the disparity between the virtual axis of projection and the fixed axis of projection perpendicular to the walls. OpenGL-based applications can be displayed by passing a graphical frame function to the CAVE library [10].

3.1 ScoreGraph We extend the CAVE library with a modular development framework for integrating and exchanging information across parallel processes. We refer to the process of modular performance application development as VR Authoring. ScoreGraph is an application development library for organizing connectivity among processes in a virtual environment. A C++ class library provides tools for defining and connecting processes. ScoreGraph integrates multiple descriptions of time: pre-recorded event sequences, real-time computation of simulated dynamics, and independent process rates for sounds, graphics, interactive control and simulations. In this framework a VR application can be navigated according to time-critical sequences resembling a musical score, while maintaining the freedom to interactively traverse events represented as a directed graph.

3.1.1 A Generative Grammar for Events ScoreGraph provides process management, including i/o with time-based operations, and user-interface management under the same temporal paradigm. ScoreGraph encourages modular application development by assembling tasks, processes that iterate at a quasi-periodic rate. Tasks include graphics, sounds, dynamics and input processes. Tasks inherit temporal structure including onset time, duration, time step or frame rate boundaries for process execution, and integration time steps for simulations.

Table 2. Symbol table for task processes: symbols denote input devices (magnetic tracker, button, joystick), graphics tasks, and sound tasks.

Connectors are a special class of ScoreGraph event enabling data to pass between tasks. Virtual objects in the form of interactive events are created by networks of tasks exchanging control data. Signal flow among tasks is either unidirectional or bidirectional. A task can be connected to multiple tasks following the connectivity rules. Connectivity among tasks is governed by a simple generative grammar (see Tables 2 and 3), organized into inputs, simulations, observable outputs, and paths. Paths are data arrays with temporal attributes generated by hand movements. Inputs include buttons, a joystick, continuous pressure sensors and magnetic position-tracking devices. Outputs include graphics and sounds.
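A connectivity check over such a grammar can be sketched as a set of permitted source/recipient pairs. The specific rule set below is a plausible reading of the garbled Table 3, not a verified transcription, and the class names are illustrative.

```cpp
#include <cassert>
#include <set>
#include <string>
#include <utility>

// Sketch of a connectivity rule check for task networks: a connector is
// legal only if the (source, recipient) pair appears in the grammar.
class Grammar {
public:
    Grammar() {
        allow("input", "simulation");
        allow("input", "path");
        allow("path", "simulation");
        allow("simulation", "display");
        // 3-member rule: an interaction between two simulations
        // (e.g. a collision) may drive a display process.
        allow("sim+sim", "display");
    }
    bool permitted(const std::string& src, const std::string& dst) const {
        return rules_.count({src, dst}) > 0;
    }
private:
    void allow(const std::string& s, const std::string& d) { rules_.insert({s, d}); }
    std::set<std::pair<std::string, std::string>> rules_;
};
```

Checking connectors against a table like this at network-build time is one way to keep an interactively assembled task graph well-formed.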

3.1.2 Topology of Tasks and Interactivity Individual tasks do not correspond to individual virtual objects in a scene. Networks of connected tasks comprise events, which include objects and scenes with temporal attributes. For example, imagine a virtual representation of a child's swing which an observer can grasp and push, producing squeaking sounds as it rocks back and forth. The task network for this swing is described as follows: a control device is one-way connected to a graphical representation of a swing, which is two-way connected to a pendulum simulation, which is further one-way connected to the sound server. The two-way connection of the graphical representation of the swing indicates its dual functionality. It functions as a visual interface for imparting force to the pendulum simulation, while the same graphical representation responds to position information generated by the pendulum simulation when the observer releases the swing (see Figure 1).
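The swing network just described can be written down as a small list of connectors. The structures and task names here are hypothetical stand-ins for the ScoreGraph classes, kept to the connections stated in the text.

```cpp
#include <cassert>
#include <string>
#include <vector>

// The swing example as a task network: one-way control device -> swing
// graphic, two-way swing graphic <-> pendulum simulation, one-way
// simulation -> sound server.
struct Connector {
    std::string from, to;
    bool bidirectional;
};

std::vector<Connector> buildSwingNetwork() {
    return {
        { "wand_input",    "swing_graphic", false },  // impart force via the visual interface
        { "swing_graphic", "pendulum_sim",  true  },  // force in, position feedback out
        { "pendulum_sim",  "sound_server",  false },  // squeak events driven by motion
    };
}
```

The single two-way connector is what gives the swing graphic its dual role as both input surface and display.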

Table 3. Generative grammar for task networks. The 3-member rule enables the control of display processes by collisions and other interactions between multiple simulations.

  Source Process     Recipient Process
  input              Simulation
  input              Path
  Path               Simulation
  Simulation         Display
  sim + sim          Display

Large-scale organization of virtual performance space in a ScoreGraph application does not rely upon a three-dimensional spatial paradigm. Linear travel through space in order to access data or to initiate functionality is relatively inefficient. ScoreGraph provides an alternative structure: a performance space is a topology of traversal rules for accessing functionality according to a position in a directed graph. Space is ordered by designating groups of tasks to receive direct interactive input at the same time. An abstraction is provided for connecting task networks to one or more nodes in a performance space graph. These nodes are referred to as zones. Zones represent access points to networks of tasks (see Figure 2). A zone is defined by (1) a set of pointers to tasks that are accessible from that zone, (2) timing rules for the onset of tasks at a zone, and (3) navigation rules for moving from one zone to another. A zone may access more than one task, and a task may be accessed from more than one zone. Only one zone may be active at a time, and the access from one zone to another is defined as a directed graph. There is no forced relation between zones and scenes, nor between task networks and objects or scenes. Camera positions may be associated with zones; however, the continuity of the performance space is defined by the zone graph, not by arbitrary motion of a camera through a three-dimensional space.
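The zone abstraction can be sketched as a directed graph with a single active node. This is an illustrative data structure, not the ScoreGraph API; the method names are assumptions.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Sketch of zones: each zone names the tasks accessible from it, navigation
// follows directed edges, and only one zone is active at a time. A task
// name may appear in more than one zone.
class ZoneGraph {
public:
    void addZone(const std::string& name, std::vector<std::string> tasks) {
        tasks_[name] = std::move(tasks);
    }
    void addEdge(const std::string& from, const std::string& to) {
        edges_[from].insert(to);
    }
    // Navigation succeeds only along a directed edge from the active zone
    // (or unconditionally for the first activation).
    bool navigate(const std::string& to) {
        if (active_.empty() || edges_[active_].count(to)) { active_ = to; return true; }
        return false;
    }
    const std::vector<std::string>& accessibleTasks() const {
        return tasks_.at(active_);
    }
private:
    std::map<std::string, std::vector<std::string>> tasks_;
    std::map<std::string, std::set<std::string>> edges_;
    std::string active_;
};
```

Because traversal is constrained by the edge set rather than by 3-D proximity, functionality is reached by position in the graph, not by linear travel through the scene.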

3.2 Integration of multiple applications into a single interplay A task network embodies and integrates processes that are building blocks for simulation, visualization and sonification of data or numerical models. Multiple applications with diverse contents may be implemented sharing support and interface modules in common. Computational models are developed externally and linked to ScoreGraph as a task. Once linked, the display of the state changes of the model can be brought into the interaction. The changes of states in models correspond to a performer's actions based upon the internal properties of the models' simulations.

Multiple numerical and graphical models may be engaged and superimposed in a ScoreGraph performance space. Multiple device drivers may be attached to tasks for introducing control signals. To instantiate different applications and their combinations, a score concept is employed. A graph-based configuration file initializes process combinations at start time. The score defines the topology of tasks, topology of zones, temporal attributes such as onsets and durations of tasks, minimum process frame rates, and integration time steps for ODE-based simulations. These are run-time specifications that can be reconfigured in the score file without requiring a recompile of the application.
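A run-time score of this kind can be as simple as a keyword/value text file parsed at start time. The format below is hypothetical, shown only to illustrate reconfiguring onsets, rates, and timesteps without recompiling.

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Sketch of score parsing: each non-comment line holds a specification key
// and a numeric value (onsets, durations, minimum frame rates, integration
// timesteps). The resulting map initializes the process combination.
std::map<std::string, double> parseScore(const std::string& text) {
    std::map<std::string, double> spec;
    std::istringstream in(text);
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#') continue;   // skip comments
        std::istringstream fields(line);
        std::string key;
        double value;
        if (fields >> key >> value) spec[key] = value;
    }
    return spec;
}
```

Editing such a file changes the application's temporal configuration at the next start, which is the point of keeping these as run-time rather than compile-time specifications.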

4. CALIBRATION OF PERFORMANCE SPACE

Observers move in a space that can be measured by machines, but such a measurement does not guarantee compatibility with the structure of space described by the observer. We cannot assume that the observer's space and the space measured by the machine are the same space. We propose a method for structuring the mutual embedding of an observer-movement space and a machine-movement space in a cybernetic performance space. We provide a reference system for calibration founded upon alternative descriptions of three-dimensional space. These descriptions are embodied in interface applications.

Figure 1. Task network for a single swing in a virtual playground, and image of rendered scene.

4.1 Definition of Space The configuration of the spaces in human-machine performance is a crucial part in achieving a coherent rehearsal competence. In this section we describe three spaces for supporting a coherent presence of an observer with respect to a computing environment. In VR, space is an immersive interface to computational processes. Since control, display, and simulated dynamics appear to be occurring in the same space in VR, it is important to maintain distinct definitions of three spaces. We will take the CAVE system as a case for discussing the three spaces.



Figure 2. Topology of Interactivity: zones organizing interactive access to control tasks.

The geometry is projected onto the fixed physical screens as the observer's head position and angle change, resulting in off-axis projection. At the same time the geometry appears stable relative to the observer's movements, which relocate the observer's viewing angle in the virtual space.

4.1.3 Computational Space . . * . * . * Computational space is where dynamical

- .

models are iterated and their states updated. A number n of parameters determines the variety of controls the model affords an observer. Parameters can be visualized with an equivalent number of axes. Since we are bound to three- dimensional perception in space even with a VR display system it is not possible to visualize parameter values in spatial

coordinates when the number of orthogonal axes exceeds three. We apply a method for shadowing down from n-dimensions to two or three dimensions. A set of generating points in two or three-dimensional space are mapped to a set of vector arrays of

defining events.

4.1.1 Physical Space Physical space is a pre-quantized space that is continuous within boundaries. The boundaries are determined by the space affordance that is inherent to the display apparatus. The CAVE is a display apparatus that measures 10 wide x 10 deer, x 9 high in units of feet. Thus physical space iH bound io a 10’ x 10’

actions take place. Spatial coordinates are represented from the point [o, o, 01 (in x, y, z) at the center of the cube, measuring the floor at -5, ol

9, cube within which a Figure 3. Performance space calibration applying a Manifold Control method for mapping physical space to high-dimensional control space, with ecological organization of movement. The white arrow represents a Path event created by a hand gesture. The MC governs numerical projections.

and the left wall at [-5,0, O] as you face into the closed cube (the front screen). Geometry i s represented in CAVE applications at the same scale as the physical space, suggesting that the virtual space is a direct encoding of the physical dimensions.

4.1.2 Numerical Space Numerical space is a quantization of physical space. The space translates position information to computable descriptions. The space displays extended three-dimensional views such in a perspective projection in which the perception of the space may exceed the actual dimensions of physical space. While position and control information is bound to 10’ x 10’ x 9’ in the CAVE the perception of this space is scaled according to the desired degree of geometric detail with respect to the simulated dimensions of the scene, the viewing distance and the angle of view. For example one can view the entire surface of the Earth in the CAVE. But to obtain a high degree of detail only a portion will be visible as a closer viewing angle will then be required.

Numerical space is quantized by digitization performed by measuring devices that move through the space while transmitting position coordinates in feet, and polar angles in radians. Quantization is time-dependent on the sampling rate of the measuring device (48 Hz for the magnetic trackers) and numerical resolution is dependent upon the rate of motion through the physical space. Sensors also have limits of resolution that are more course for ultrasonic devices than for magnetic devices, and finest for mechanical devices. Numerical space may be re- oriented in physical space according to the position changes of an observer. Perspective projection of visualized three-dimensional geometric objects is iteratively relocated against

2,

Data flow

Computational Space

4258


size n. A geometric method creates a continuous, differentiable manifold passing through each value in each vector array. We traverse this manifold from physical space. This method is described in [2], where we also discuss the distinction between window space, where generating points reside, and phase space, the n-dimensional control space of computational models.
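The idea of generating points carrying high-dimensional values can be sketched as follows. This is not the geometric method of [2]; it substitutes a simple Gaussian-weighted blend, and the point positions, vectors, and the width parameter sigma are invented for illustration. Each generating point in a 2-D window space carries a phase-space vector of size n, and a smooth weighting blends them so that traversing window space traverses the control space.

```python
import math

# Three generating points in a 2-D window space, each carrying an
# n-dimensional phase-space vector (here n = 4). Values are illustrative.
generating_points = [
    ((0.0, 0.0), [0.0, 1.0, 0.0, 0.5]),
    ((1.0, 0.0), [1.0, 0.0, 0.5, 0.0]),
    ((0.0, 1.0), [0.5, 0.5, 1.0, 1.0]),
]

def phase_state(x, y, sigma=0.5):
    """Gaussian-weighted blend of phase-space vectors at window point (x, y)."""
    weights = [math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * sigma ** 2))
               for (px, py), _ in generating_points]
    total = sum(weights)
    n = len(generating_points[0][1])
    return [sum(w * vec[i] for w, (_, vec) in zip(weights, generating_points)) / total
            for i in range(n)]
```

A tracked hand moving continuously through window space then produces a continuous trajectory through the n-dimensional phase space, which is the essential property the manifold method provides.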

4.2 Calibration method The use of a magnetic, video or mechanical tracking device to measure a person’s position in 3-space provides an initial link to establish an embedding. This tracking only measures points in 3-space and not the organization of space which is implicit in an observer’s movements. We tend to think of limbs and joints as motion constraints. An alternative is to describe their constraints as a manifold in a high-dimensional vector space. In other words, limbs and joints imply a highly structured movement space. We align human-centric motion hierarchies with the phase space controlling models in a virtual environment. 3-D space can be viewed as a projection surface where movement space and phase space can be coupled. This is an extension of the concept of window space, to allow multiple high-dimensional systems to share projection into the same window space. We calibrate the coordinates of these projections in order to bring the two manifolds into conjunction. We arrive at a multiple-embedding of numerical projections, and movements executed in real-time for traversing the projections (see Figure 3).
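The initial link provided by tracking can be made concrete with a minimal calibration sketch. The raw frame here is an assumption: tracker samples are taken to arrive in feet with the origin at a floor corner of the 10' x 10' x 9' CAVE, and are re-expressed in the CAVE-centered frame described above, where [0, 0, 0] is the center of the cube and the floor lies at y = -5.

```python
# Assumed corner-to-center offset, in feet; the raw corner-origin frame
# is hypothetical, chosen only to illustrate the calibration step.
CORNER_TO_CENTER = (-5.0, -5.0, -5.0)

def to_cave_frame(raw):
    """Map a raw corner-origin tracker sample to CAVE-centered coordinates."""
    return tuple(c + o for c, o in zip(raw, CORNER_TO_CENTER))

# e.g. a point at the middle of the left wall:
# to_cave_frame((0.0, 5.0, 5.0)) -> (-5.0, 0.0, 0.0)
```

Calibration of this kind establishes only the point-measurement link; the alignment of the observer's structured movement space with phase-space manifolds is the further step described above.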

Further notational methods are adopted for visually articulating the ecology of the movement space [5]. The ecology is the positional organization of the shadowing from phase space to window space. Before computing the projection from a phase space to a window space, we arrange the generating points in the window space such that a hand or body movement tracing a path from one point to another supports an efficient performance, according to the natural disposition of the body in the space. Graphical paths and surfaces can be traced in the virtual space serving as a movement notation for sounds or animation events. These notations can be applied in the composition stage and later replaced, or remain as a visual attribute of a performance.
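A traced path can serve as movement notation only if it can be replayed consistently; one common way to regularize a gesture is to resample it at equal arc-length intervals. The sketch below is an illustrative helper, not the notation method of [5]: it takes a recorded polyline of 3-D hand positions and returns n evenly spaced points along it, at which sound or animation events could be scheduled.

```python
import math

def resample_path(points, n):
    """Return n points (n >= 2) spaced evenly by arc length along a polyline."""
    # Cumulative arc length at each recorded point.
    dists = [0.0]
    for a, b in zip(points, points[1:]):
        dists.append(dists[-1] + math.dist(a, b))
    total = dists[-1]
    out = []
    for k in range(n):
        target = total * k / (n - 1)
        # Find the segment containing the target arc length.
        i = 1
        while i < len(dists) - 1 and dists[i] < target:
            i += 1
        span = dists[i] - dists[i - 1] or 1.0
        t = (target - dists[i - 1]) / span
        a, b = points[i - 1], points[i]
        out.append(tuple(a[j] + t * (b[j] - a[j]) for j in range(3)))
    return out
```

Equal arc-length spacing decouples the replayed event positions from the speed at which the gesture was originally traced, which is useful when a notation is composed once and performed many times.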

5. CONCLUSIONS AND FUTURE DIRECTION

An architecture has been outlined for conducting research in cybernetic performance, with the following findings. (1) Interlinked parallel processes can be a general solution for integrating multiple applications in VR; however, temporal management is required to protect asynchronous processes and deliver real-time synchronization upon an observer’s demand. (2) Three-dimensional projection is useful for visualization and interactive gestures, but does not provide an efficient organization for connecting the function modules of the underlying computational models. Virtual objects can be defined as interactive event functions, organized in an abstract topology representing networks of tasks. This topology may then be mapped to spatial coordinates. This provides an independence from the practice of describing functionality in terms of navigating three-dimensional space. (3) Control of computational models can be projected to high-dimensional manifolds. Arbitrary manifolds can be coupled using a manifold controller; however, the optimal mapping of human movement space to computational manifold space requires further study.

Further research is needed regarding the numerical descriptions of transfer functions for the temporal management of networks of asynchronous processes, and the synchronization of signals transmitted from one process to another. We anticipate the refinement of stable properties of time and numerical projection in observable space. The development of a notational system describing movement in virtual space and describing control signals in computational space is being extended to depict the topology of the functionality of networks of processes supporting virtual environments. Our goal in developing these tools is to support a coherent rehearsal competence of an observer in a virtual environment.

6. ACKNOWLEDGMENT

We wish to thank Alex Betts for nurturing the ScoreGraph environment, John Wu for implementing physically-based simulations for real-time rendering, and Camille Goudeseune for his collaboration on the Manifold Control project.

7. BIBLIOGRAPHY

[1] Bargar, R., Das, S., Choi, I., and Goudeseune, C., “Model-based interactive sound for an immersive virtual environment,” Proceedings of the International Computer Music Conference, Aarhus, Denmark: International Computer Music Assn., 1994, pp. 471-474.

[2] Choi, I., and Bargar, R., “Interfacing sound synthesis to movement for exploring high-dimensional systems in a virtual environment,” Proceedings of the 1995 IEEE International Conference on Systems, Man and Cybernetics, 1995, pp. 2772-2777.

[3] Choi, I., and Ricci, C., “Foot-mounted gesture detection and its application in a virtual environment,” This volume, 1997.

[4] DeFanti, T., Cruz-Neira, C., Sandin, D., Kenyon, R., and Hart, J., “The CAVE,” Communications of the ACM, Vol. 35, No. 6, 1992.

[5] Eshkol, N., Moving Writing Reading, Holon, Israel: The Movement Notation Society, 1973.

[6] Freed, A., “Synthesis Platforms, Software Environments and Computing Issues,” in Cook, P., Bargar, R., Freed, A., and Serra, X., Creating and Manipulating Sound to Enhance Computer Graphics, SIGGRAPH 96 Course Notes vols. 17 and 18, 23rd International Conference on Computer Graphics and Interactive Techniques, New Orleans: Association for Computing Machinery, 1996, pp. 95-114.

[7] Haken, H., Synergetic Computers and Cognition, Berlin: Springer-Verlag, 1991.

[8] Harmon, S., Hritz, L., and Solorzano, M., “Interoperability Between Distributed Simulations,” Proceedings of the 1995 IEEE International Conference on Systems, Man and Cybernetics, vol. 5, 1995, pp. 4545-4550.

[9] Helman, J., “Designing Virtual Reality Systems to Meet Physio- and Psychological Requirements,” Applied Virtual Reality, Course 23 Notes, SIGGRAPH 93: 20th International Conference on Computer Graphics and Interactive Techniques, Anaheim, CA: Association for Computing Machinery, 1993, pp. 5-1 - 5-20.

[10] Pape, D., Cruz-Neira, C., and Czernuszenko, M., CAVE User’s Guide, Chicago: University of Illinois Electronic Visualization Laboratory, 1996.