
Performance and Co-presence in Heterogeneous Haptic Collaboration

Margaret McLaughlin, Gaurav Sukhatme, Wei Peng, Weirong Zhu, and Jacob Parks Integrated Media Systems Center, University of Southern California

[email protected]

Abstract

Based on a distributed architecture for real-time collection and broadcast of haptic information to multiple participants, heterogeneous haptic devices (the PHANToM and the CyberGrasp) were used in an experiment to test the performance accuracy and sense of presence of participants engaged in a task involving mutual touch. In the experiment, the hands of CyberGrasp users were modeled for the computer to which the PHANToM was connected. PHANToM users were requested to touch the virtual hands of CyberGrasp users to transmit randomly ordered letters of the alphabet using a pre-set coding system. Performance accuracy achieved by a small sample of six pairs of participants was less than optimal in the strict sense: accurate detection of intended location and frequency of touch combined ranged from .27 to .69. However, participants accurately detected the intended location of touch in 94.7% of the cases.

1. Introduction

In many applications of haptics it will be necessary for users to interact with each other as well as with other objects. We have been working to develop an architecture for haptic collaboration among distributed users. Our focus is on collaboration over a non-dedicated channel (such as an Internet connection) where users experience stochastic, unbounded communication delays [1-5]. Adding haptics to multi-user environments creates additional demands: frequent position sampling for collision detection and fast update [6, 7]. It is also reasonable to assume that in multi-user environments there may be a heterogeneous assortment of haptic devices (e.g., the PHANToM, the CyberGrasp, 2D haptic mice) with which users interact with the system. One of our primary concerns is thus to ensure proper registration of the disparate devices with the 3D environment and with each other. Our technical progress in this work is reported in [8]. The architecture we have developed supports multiple users with heterogeneous haptic devices over a non-dedicated channel, and adapts to network delay. Both objects and users’ hands can be touched. Hosts are assigned to “fast” or “slow” local groups based on the measured network latency between them.
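As a rough illustration of the latency-based grouping, hosts could be classified as in the sketch below. The 30 ms threshold, the sampling scheme, and all names are assumptions for illustration; the paper does not give its policy details.

```python
# Illustrative sketch: assigning peers to "fast" or "slow" local groups by
# measured round-trip latency. The threshold and all names are assumptions,
# not taken from the paper.
import time
import statistics

FAST_THRESHOLD_S = 0.030  # assumed cutoff between "fast" and "slow" peers

def measure_latency(ping_fn, samples=5):
    """Estimate round-trip latency by timing repeated pings."""
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        ping_fn()  # round trip to the remote host
        rtts.append(time.monotonic() - start)
    return statistics.median(rtts)  # median resists outlier spikes

def assign_group(ping_fn):
    """Return 'fast' or 'slow' based on measured latency to a peer."""
    return "fast" if measure_latency(ping_fn) < FAST_THRESHOLD_S else "slow"
```

A peer on the same LAN would typically land in the “fast” group and exchange state at haptic rates, while a high-latency peer would receive less frequent, smoothed updates.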

In our most recent work we have focused on improving our system by re-implementing the synchronization mechanism as a collaboration library. This was done primarily to facilitate pilot studies where a user manipulating a PHANToM could touch a remote user wearing a CyberGrasp. In the previous collaboration program, the hand was represented on the PHANToM computer by a simple cylindrical model. With the cylinder, only the coordinates of the wrist were transmitted to the computer to which the PHANToM was connected. In addition, only tactile data for one finger were transmitted back to the CyberGrasp. To achieve the desired visual and haptic capabilities for the collaboration program, a model of a human hand was developed for the computer to which the PHANToM was connected. Each segment of the hand is a 3D model. Each component of the hand (the palm, the three segments of each finger, and the two segments of the thumb) was imported separately into the haptic environment. The components were arranged and aligned visually to produce the image of the hand. In the code, the components were organized in a tree-like architecture. To maintain the shape of the hand as it moves during simulation, the movements of components more distal to the wrist are described in the reference frames of adjacent components more proximal to the wrist. For example, the tip of a finger moves with respect to the middle segment of the finger, the middle segment moves with respect to the first segment of the finger, and the first segment moves with respect to the palm. This utilization of relative motion reduces the amount of information required to describe the motion of the hand. Only the rotation angles of each component (relative to its ‘parent’ component) are needed. The x-y-z coordinates of each component relative to its parent are fixed, and therefore need not be updated. (The only rectangular coordinates that update are for the wrist.) 
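The relative-motion bookkeeping described above can be sketched as a small forward-kinematics tree. The class layout, the planar (single-angle) simplification, and all names are illustrative assumptions, not the paper’s implementation:

```python
# Minimal sketch of a tree-structured hand model: each segment stores a FIXED
# offset in its parent's frame plus an updatable joint rotation (one angle
# about a single axis here, for brevity). Only the angles (and the wrist
# position, the root) ever need to be transmitted; the offsets never change.
import math

class Segment:
    def __init__(self, name, offset, parent=None):
        self.name = name
        self.offset = offset   # (x, y) position, fixed in the parent's frame
        self.angle = 0.0       # joint rotation, updated each frame
        self.parent = parent

    def world_pose(self):
        """Accumulate rotations/translations from the wrist down to this segment."""
        if self.parent is None:               # the wrist is the only node whose
            return self.offset, self.angle    # rectangular coordinates update
        (px, py), pa = self.parent.world_pose()
        ox, oy = self.offset
        c, s = math.cos(pa), math.sin(pa)
        # rotate the fixed offset into the world frame, then translate
        return (px + c * ox - s * oy, py + s * ox + c * oy), pa + self.angle

# One finger chain: wrist -> palm -> proximal segment -> fingertip
wrist = Segment("wrist", (0.0, 0.0))
palm = Segment("palm", (0.0, 1.0), wrist)
proximal = Segment("proximal", (0.0, 1.0), palm)
tip = Segment("tip", (0.0, 1.0), proximal)

# A remote update only needs new joint angles, never new coordinates:
palm.angle = math.pi / 2   # bend the whole chain 90 degrees at the palm joint
(pos, _) = tip.world_pose()
```

Because each segment is expressed in its parent’s frame, bending one joint automatically carries every more distal segment along with it, which is exactly why only the rotation angles have to cross the network.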
The CyberGrasp computer sends updated transform parameters, which the PHANToM computer uses to generate new coordinate values for the hand. The other major issue in developing the hand model involved contact sensing by the PHANToM. In the haptic environment, regular solid geometric shapes provide the PHANToM with substantial contact force. This was not so with the hand: its two-dimensional triangular components tended to let the PHANToM pass through, with little tactile indication of contact. This problem was solved in a makeshift fashion. A skeleton was constructed out of solid geometric shapes and placed inside the hand model to provide contact sensation for the PHANToM. In this solution, PHANToM users see the VRML hand and feel the skeleton. (We will address this pass-through issue in future work by considering alternative surface representations.)

Currently we have been directing our attention to evaluating our collaborative system. There have been only a few studies of cooperation and collaboration between users of haptic devices. Marsic, Medl, and Flanagan [9] integrated the Rutgers force-feedback glove into a multimodal system to enable collaborative environmental navigation, object manipulation, and object sensing in an Army simulation of asset deployment in a crisis. Sallnas, Rassmus-Grohn, and Sjostrom [10] found that the addition of force feedback to a cube manipulation task significantly improved performance and sense of presence. Designed with CSCW applications in mind, the inTouch system [11] employs the Synchronized Distributed Physical Objects concept to haptically couple sets of rollers so that persons separated by distance can manipulate the rollers collaboratively and passively feel each other’s “touch.” In a study by Basdogan, Ho, and their colleagues [12, 13], partners at remote locations were assigned three cooperative tasks requiring joint manipulation of 3D virtual objects, such as moving a ring back and forth along a wire while minimizing contact with the wire. Experiments were conducted with visual feedback only, and with both visual and haptic feedback. Both performance and feelings of togetherness were enhanced in the dual-modality condition. Performance was best when visual feedback alone was followed by the addition of haptic feedback rather than vice versa.
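The internal-skeleton workaround described earlier works because solid primitives give the PHANToM a well-defined penetration depth to push against, which a thin triangle mesh does not. A minimal point-versus-sphere contact test of that kind might look as follows; the stiffness value and all names are illustrative assumptions:

```python
# Illustrative point-vs-sphere contact test of the kind the internal
# "skeleton" primitives enable: a penetration depth yields a spring-like
# restoring force on the probe tip. A thin triangle mesh offers no such
# depth once the probe slips between triangles. Stiffness is an assumption.
import math

def contact_force(tip, center, radius, stiffness=500.0):
    """Return the force vector pushing the probe tip out of a solid sphere."""
    d = [t - c for t, c in zip(tip, center)]
    dist = math.sqrt(sum(x * x for x in d))
    penetration = radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return (0.0, 0.0, 0.0)          # no contact (or degenerate center hit)
    scale = stiffness * penetration / dist
    return tuple(scale * x for x in d)  # directed along the outward normal
```

Deeper penetration produces a proportionally larger restoring force, giving the PHANToM user the “collision” sensation the triangle-mesh hand surface could not.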
Durlach and Slater [14] note that factors that contribute to a sense of co-presence include being able to observe the effect on the environment of actions by one's interlocutors, and being able to work collaboratively with co-present others to alter the environment. Touching, even virtual touching, is believed to contribute to the sense of co-presence because of its associations with closeness and intimacy. Our work differs from earlier instantiations of collaborative haptic environments in that it is grounded in a distributed architecture for real-time collection and simultaneous broadcast of haptic information to multiple haptic session participants with disparate haptic devices. Our work also allows users to touch each other as well as objects in the environment. Our design maintains stability by addressing the issue of latency; interaction between two hosts is decided dynamically based on the measured network latency between them. And finally, our new interface (see below, Figures 1 and 2) readily enables experiments on users’ ability to discriminate location and frequency of touch in collaborative environments.

Figure 1. PHANToM and CyberGrasp users engaging in mutual touch.

We report here on some preliminary results from a small-scale study of heterogeneous haptic collaboration, in which one participant, wearing the CyberGrasp, interacts over the Internet with another participant at a remote location (another lab within the same building) who is assigned to a computer with the PHANToM haptic stylus attached. The participants are given an experimental task that assesses the ability of the person wearing the CyberGrasp to recognize and discriminate among patterns of passive touch with respect to frequency and location, and the ability of the PHANToM user to communicate information effectively through active touch. We call the participants, respectively, P1 and P2. Participant P1’s monitor (see Figure 2) shows a model of P2’s (the CyberGrasp wearer’s) hand, which is updated in real time to reflect the current location and orientation of P2’s hand and fingers. P1’s task is to use a PHANToM haptic stylus, the tip of which is represented by an enlarged round ball, to communicate information to P2 through a tactile “Morse code” which maps the number of times a particular digit is “touched” onto the 26 letters of the alphabet. A keypad showing this mapping is visible on P1’s display. P2 holds her hand stationary during this process. P2 then enters (or has entered for her) the letters she thinks she has received into a keypad visible on her own display, from which they are logged into a word file. On P2’s monitor, only the keypad is visible; P2 is unable to see the hand display. The two participants can communicate with each other via a text-based messaging system, sending pre-programmed messages such as “Experiment begins,” “New word,” “New letter,” “Ready for the next letter,” and so on. All text messaging between the partners is logged and time-stamped.
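The paper specifies the code only by example (one tap on the thumb sends ‘A’; four taps on the index finger send ‘Q’). Both examples, along with the Figure 2 caption (one tap on the index finger for ‘B’), are consistent with a scheme that cycles through the five digits for each successive tap count; under that assumption the mapping can be sketched as:

```python
# Tactile code consistent with the paper's examples: thumb x1 -> 'A',
# index x4 -> 'Q'. The full table below is an assumption inferred from
# those examples (cycling thumb->pinky for each tap count), not a table
# quoted from the paper.
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def encode(letter):
    """Map a letter to (finger, number of taps)."""
    i = ord(letter.upper()) - ord("A")
    return FINGERS[i % 5], i // 5 + 1

def decode(finger, taps):
    """Map (finger, taps) back to a letter."""
    return chr(ord("A") + (taps - 1) * 5 + FINGERS.index(finger))
```

Under this scheme each finger carries at most five or six tap counts, which also matches the 0-6 tap range seen in the thumb confusion table reported later.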

Figure 2. Screen capture of Participant 1’s view. The ball atop the index finger represents the PHANToM tip as P1 communicates the letter “B” to P2’s index finger.

2. Research Questions

Although a complete account of the research issues raised by a haptics-enhanced collaborative virtual environment is beyond the scope of this paper, we propose that certain relationships among process and performance variables merit more attention than others. Figure 3 is a simple diagram of these relationships as we might expect them to materialize. Success by the PHANToM user in communicating the frequency and location of touch to the CyberGrasp user should be achieved through consistently paced, firmly applied force, which increases predictability and shapes expectations for subsequent activity. Accuracy should be lower if the PHANToM user takes a long time to complete the task, or has frequent periods of apparent inactivity. Thus if there are many sampled points during the task when the PHANToM user appears inactive (there is no measurable force), the CyberGrasp user will be left to his or her own interpretations of the absence of contact or of erratic contact. If force is inconsistently applied, it will be more difficult for the CyberGrasp user to distinguish between an intended touch and incidental vibration. The relationships among performance accuracy, sense of co-presence, and liking for partner are expected to be positive based on previous literature, but they are difficult to order causally and in some cases may behave counterintuitively. For example, the CyberGrasp user may be unable to keep his or her hand still, which convinces the PHANToM user that there is indeed another person in the collaborative environment, increasing the sense of co-presence, but which also makes the task more frustrating and thus negatively impacts liking. Similarly, firmly applied force may improve performance accuracy but may communicate unattractive personal traits such as aggression, thus negatively impacting liking.
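Concretely, the force-related process variables discussed here (points of measurable force, force level, and force variability) could be computed from a stream of sampled force magnitudes as sketched below; the measurability threshold and all names are assumptions:

```python
# Sketch of the process measures discussed above, computed from a stream of
# force magnitudes sampled at fixed intervals: points of measurable force,
# min/max/mean force, and force variability (standard deviation). The
# measurability threshold is an assumed value, not from the paper.
import statistics

MEASURABLE = 1e-4  # assumed floor (N) below which a sample counts as inactive

def force_metrics(samples):
    """Summarize a list of sampled force magnitudes (in Newtons)."""
    active = [f for f in samples if f > MEASURABLE]
    if not active:
        return {"n_points": 0}
    return {
        "n_points": len(active),            # points of measurable force
        "min": min(active),
        "max": max(active),
        "mean": statistics.mean(active),
        "sd": statistics.pstdev(active),    # force variability
    }
```

Long runs of sub-threshold samples correspond to the “apparent inactivity” periods hypothesized above to hurt accuracy.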

Figure 3. Set of proposed relationships among selected performance variables, co-presence, and impression formation.

3. Methods

3.1. Participants

Participants were recruited through notices placed on bulletin boards and circulated through the campus BBS and student mailing lists. The recruitment notice invited participation in a study that used “desktop robotics technology to ‘touch’ the fingertips of another person over the net and exchange information with that person.” Compensation for participation was offered in the form of a US$10.00 gift certificate to the campus bookstore. Prospective volunteers were asked to read the consent form, posted online, prior to signing up for the study, and to contact the senior investigator if they wished to participate. To be eligible, participants had to be right-handed, over 18, and unacquainted with any other person planning to volunteer for the study. Upon acceptance into the participant pool, volunteers were randomly assigned to the “CyberGrasp” condition or the “PHANToM” condition and directed to one of two lab locations on campus. The target N of pairs for the study is 25; we are currently collecting data and report here on a very small sample of six pairs.

3.2. Procedures

Upon arrival at their respective lab locations, participants were provided with a copy of the consent form, and their signatures were secured and countersigned by the experimenters. They were then given a PowerPoint tutorial tailored to the type of haptic device to which they were assigned. The tutorial introduced them to haptics and, in the case of the PHANToM user, provided an opportunity to practice using the device for active exploration of a digital object. The CyberGrasp user’s role in the experiment was to be a passive recipient of touch; as such, s/he did not “practice” but was simply taken through the calibration process upon completion of the tutorial. The experimenters in their respective lab locations communicated with Motorola Talkabout radios to coordinate opening of the collaborative workspace. Participants joined the workspace over an ordinary Internet connection. Next, they were given oral instructions which reinforced the explanations of the experimental task provided in the tutorials. Below are the instructions provided to the PHANToM user; similar instructions were provided to the CyberGrasp user: In this study you are going to be interacting with a partner who is located at a computer in another part of the building. He or she will be wearing the CyberGrasp, which is a device you saw pictured in the tutorial that looks like a glove with tendons. The hand is the graphical representation of your partner’s hand; it tracks the position and movement of his/her fingers precisely. The white ball is a cursor representing the movement of the stylus of the PHANToM device that you are using.
It tracks the movement of your fingertip precisely. When you touch one of the five fingertips you see on the screen, your partner will feel your touch on the corresponding finger. What we have done is to make a kind of Morse code in which the letters of the alphabet are mapped onto your partner’s fingers. For example, when you press your partner’s thumb once, you are transmitting the letter ‘A’. When you press your partner’s index finger four times, you are transmitting the letter ‘Q’. The other 24 letters are transmitted in the same way. You won’t have to remember this code, as we will provide it for you on the keypad and also on a piece of paper. Just in case you are unsure, here are the thumb, index, middle, ring, and pinky fingers.

The PHANToM device has only a single point of contact, so you may have some difficulty at first making contact with the partner’s finger during the task. Be sure to press firmly; you will “feel” the “collision” when you make contact. You should press on the top third of the finger, just above the joint, not the palm or other parts. Otherwise, he or she cannot feel your touch and the information you transmit might be misleading. You are going to send your partner some three-letter words. These words may or may not spell something you recognize as a word. You will be given a list of 8 words, each with three letters, and a final word with two letters. Next to each letter in the word is the code, that is, which of your partner’s fingers you should touch and how many times you should touch it. After you have sent the last letter in the word, I will transmit the message “End word.” Your partner will then need a few seconds to move the letters from the message window where they have been entered into the Words log. When that has been done you will get the message “Ready for new word.” You will repeat this process 8 more times, starting with the second word on your list. When you have sent all the words, I will send your partner a message that says “Experiment ends.” Are you ready to begin? As an added check, to ensure that any obtained errors in location or frequency of touch were attributable only to the haptic interaction and not to participants’ misreading of the code, participants were required to state aloud which finger they intended to touch and how many times they intended to touch it. When the oral instructions were completed, the investigator supervising the PHANToM user started a process for recording the haptic data stream (capturing and time-stamping applied force at 10 ms intervals) [15] and signaled through the keypad that the experiment was to begin.
The investigator supervising the CyberGrasp user minimized the collaborative workspace window so that the participant would respond only to the haptically communicated information. The pair of participants worked their way through a word list that contained all 26 letters of the alphabet, sorted randomly into nine sets of three- and two-letter words.

3.3. Administration of post-task measures

At the conclusion of the trials, participants completed a post-task evaluation with items adapted from the Basdogan et al. measure of co-presence. Among the questions of interest: Did the subject feel present in the haptic environment? Did the subject feel co-present with the remote partner? Did the subject believe that the remote partner was a real person? Subjects were also asked to make attributions about their partners with respect to standard dimensions of person perception (unsocial-social; sensitive-insensitive; impersonal-personal; cold-warm) (Sallnas, Rassmus-Grohn, & Sjostrom, 2001). (Findings with respect to the person perception data will be reported elsewhere.)

4. Results

4.1. Accuracy

Performance for all six pairs was seemingly poor. Accuracy of the best pair was 69.2%. Three pairs got 42.3% of the letters correct, one pair 38.4%, and the least effective pair only 26.9%. However, most of the confusions, with the exception of the thumb, resulted from an overestimate of the number of times the finger was tapped. Only 5.3% of the errors made were egregious, i.e., the recipient was confused about which finger was being tapped. We present two of the confusion matrices below: the matrix for the ring finger (Table 1) and for the thumb (Table 2). For the ring finger, most of the confusions appear above the main diagonal, indicating that the CyberGrasp wearer consistently perceived the number of taps to be greater than intended. There were no confusions with respect to location of touch. The pattern is opposite for the thumb. More than three times as many confusions appear below as above the main diagonal, indicating that the number of taps was consistently underestimated by the CyberGrasp wearer. There were three confusions with respect to location of touch. Confusion data for the index, middle, and pinky fingers follow the pattern of the ring finger in that there were approximately two to three times as many overestimations of the number of taps as underestimations. Table 1. Table of confusions for the ring finger.

# of Taps in Code (rows, 0-5) vs. Reported Number of Taps (columns, 0-5)*
Row 1: (1) (1) (1)   Row 2: (1)   Row 3: (3)   Row 4: (2)   Rows 0 and 5: no confusions
*Number of confusions is in parentheses; the column positions of the individual entries did not survive in this copy.

Table 2. Table of confusions for the thumb.

# of Taps in Code (rows, 0-6) vs. Reported Number of Taps (columns: 0-6, Md, Py)*
Row 1: (2) (1), X X   Row 2: (1) (1)   Row 3: (3) (1) (1)   Row 4: (2) (1) (1), X   Row 5: (2) (1) (2)   Row 6: (1) (1) (2)   Row 0: no confusions
*Number of confusions as to the number of taps is in parentheses. Confusions as to location of touch are represented by capital X’s (columns Md and Py); Md = middle finger; Py = pinky finger. The column positions of the individual entries did not survive in this copy.

4.2. Performance and presence variables

Performance values for the participant pairs for (a) the number of points at which there was measurable force, (b) minimum force (in Newtons), (c) maximum force, (d) mean force, (e) force variability (standard deviation of sampled values), (f) time to task completion, and (g) accuracy (percentage of correctly decoded letters) are presented in Table 3(a)-(b). Co-presence data are also reported in Table 3(b). Comparison of the most accurate pair (Pair 6) and least accurate pair (Pair 1) indicates that the more accurate pair had greater mean force and took far less time to complete the task (although taken over all pairs these relationships are not monotonic with respect to task completion time). Comparison of the most and least accurate pairs also indicated that perceived co-presence was rated higher by both members of the more accurate pair. However, taken over all pairs the relationship between co-presence and accuracy is not monotonic.

Table 3(a)-(b). Performance variables and co-presence ratings for participant pairs.

Pair  N Points Measurable Force  Min Force  Max Force  Mean Force  Force S.D.
1     0494                       .0005      .8530      .2788       .2623
2     6607                       .0005      .8974      .2278       .2501
3     3688                       .0005      .9039      .1240       .1699
4     0467                       .0005      .8558      .2559       .2840
5     0693                       .0003      .8518      .3073       .2907
6     0458                       .0005      .8100      .3578       .3054

(a)

Pair  Task Time  Co-presence, PHANToM User  Co-presence, CyberGrasp User  Accuracy
1     27.40      3.80                       3.60                          .269
2     14.53      1.80                       5.50                          .384
3     19.19      4.60                       5.70                          .423
4     15.79      5.00                       5.60                          .423
5     23.59      5.10                       3.60                          .423
6     12.41      4.90                       4.10                          .692

(b)

5. Discussion

Although the accuracy levels obtained in this small sample appeared to be low, in 94.7% of the instances the passive touch recipient (the CyberGrasp wearer) was able to distinguish which digit was being touched. The bulk of the obtained errors resulted from an overestimate, by one, of the number of times the digit was tapped. There are a number of possible explanations, two of which seem most likely. First, there may have been incidental vibration of the CyberGrasp unrelated to the intended activity of the PHANToM user. If that were the case, however, there would have been more reports by the CyberGrasp user that his or her partner was “trying to touch several fingers” at once. Our experience to date indicates that touch on some of the middle digits can produce vibration in neighboring digits. What we did observe is that the PHANToM user, having only a single point of contact, often “missed” connecting with the partner’s fingers on the first several tries at a new letter, and would frequently follow up a light application of force to the finger with a stronger one, even though the deformation of the finger model on the first occasion of contact was clearly visible on the monitor. Further practice, and more intensive coaching in the tutorial on interpreting the visual indicators of contact, should improve results in future experiments. Incorporation of immediate feedback about performance accuracy on initial trials should also improve performance on subsequent trials.

Poorest performance was obtained for information communicated through taps to the thumb. In addition to the three cases in which the CyberGrasp wearer perceived a tap intended for the thumb to be directed to another digit, the thumb confusion matrix shows clearly that the passive recipient consistently underestimated the number of taps relative to the number intended by the active partner. The most plausible explanation has to do with the comparative difficulty, given the orientation of the hand, of contacting the palmar face of the thumb with the PHANToM. Many of the taps were applied to the dorsal side. The strongest predictor of performance accuracy was task completion time (r = -.72, p = .053). Higher performance accuracy was associated with shorter completion times. We expected that greater mean force would be associated with greater accuracy; however, no significant relationship was obtained in this small sample (r = .448, p = .187), although the trend was in the predicted direction. There may be a force threshold which, once met, is adequate for detection, or it may be that the necessary level of force varies across receivers. And it may be that mean force over multiple trials is not a meaningful measure. We hope to sort out these issues as we collect additional data. Our expectations with respect to number of points of measurable force and force variability were not confirmed. If anything, we obtained a slight trend in the opposite direction, that is, for higher accuracy to be associated with higher force variability and fewer points of measurable force, but the obtained correlations were very small and not statistically significant. Participants in the study reported comparatively strong feelings of co-presence (PHANToM users, M = 4.20, t = 8.123, p < .001; CyberGrasp users, M = 4.68, t = 11.218, p < .001).
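The accuracy/completion-time correlation reported above can be reproduced directly from the Table 3(b) values:

```python
# Reproducing the reported accuracy/completion-time correlation from the
# six pairs' values in Table 3(b) using Pearson's r.
import math

task_time = [27.40, 14.53, 19.19, 15.79, 23.59, 12.41]   # Table 3(b)
accuracy = [0.269, 0.384, 0.423, 0.423, 0.423, 0.692]    # Table 3(b)

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(task_time, accuracy)  # ≈ -0.72, matching the reported value
```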
No evidence was obtained that greater accuracy covaries with sense of co-presence, as predicted, but the PHANToM user’s assessment of co-presence was negatively correlated with the number of sampled points of measurable force (r = -.818, p = .023). The cases contributing to this relationship appeared to be outliers with respect to measurable applications of force and we are unable to account for this finding. The CyberGrasp wearer’s assessment of co-presence varied negatively with the partner’s mean applied force (r = -.730, p = .05). Again, this finding is counterintuitive and may be mediated by some other factor which will be made manifest as we collect additional data. We are far more comfortable looking for trends in performance over multiple trials with a handful of participants than we are in looking for relationships between performance variables and subjective, retrospective assessments of the collaborative environment given only the six pairs.

Acknowledgments

The research reported here was funded by the Integrated Media Systems Center, a National Science Foundation Engineering Research Center, Cooperative Agreement No. EEC-9529152.

References

[1] Hespanha, J., McLaughlin, M. L., & Sukhatme, G. (2002). Haptic collaboration over the Internet. In McLaughlin, M. L., Hespanha, J., & Sukhatme, G. (Eds.), Touch in Virtual Environments: Haptics and the Design of Interactive Systems. Prentice Hall.
[2] Hespanha, J., Sukhatme, G., McLaughlin, M., Akbarian, M., Garg, R., & Zhu, W. (2000). Heterogeneous haptic collaboration over the Internet. Proceedings of the Fifth PHANToM Users’ Group Workshop, Aspen, CO.
[3] McLaughlin, M. L., Sukhatme, G., Hespanha, J., Shahabi, C., Ortega, A., & Medioni, G. (2000). The haptic museum. Proceedings of the EVA 2000 Conference on Electronic Imaging and the Visual Arts, Florence, Italy.
[4] Shahabi, C., Ghoting, A., Kaghazian, L., McLaughlin, M., & Shanbhag, G. (2001). Analysis of haptic data for sign language recognition. Proceedings of the First International Conference on Universal Access in Human-Computer Interaction (UAHCI), New Orleans, LA, 5-10 August 2001. Hillsdale, NJ: Lawrence Erlbaum.
[5] Sukhatme, G., Hespanha, J., McLaughlin, M., & Shahabi, C. (2000). Touch in immersive environments. Proceedings of the EVA 2000 Conference on Electronic Imaging and the Visual Arts, Edinburgh, Scotland.
[6] Buttolo, P., Oboe, R., & Hannaford, B. (1997). Architectures for shared haptic virtual environments. Computers and Graphics, 21(4), 421-429.
[7] Buttolo, P., Oboe, R., Hannaford, B., & McNeely, B. (1996). Force feedback in shared virtual simulations. Paper presented at MICAD, France.
[8] Sukhatme, G., et al. (2002). Distributed haptic environments. Annual report to the National Science Foundation. Integrated Media Systems Center, University of Southern California.
[9] Marsic, I., Medl, A., & Flanagan, J. (2000). Natural communication with information systems. Proceedings of the IEEE, 88(8), 1354-1366.
[10] Sallnas, E. L., Rassmus-Grohn, K., & Sjostrom, C. (2001). Supporting presence in collaborative environments by haptic force feedback. ACM Transactions on Computer-Human Interaction, 7(4), 461-476.
[11] Brave, S., Ishii, H., & Dahley, A. (1998). Tangible interfaces for remote collaboration and communication. Proceedings of CSCW ’98.
[12] Basdogan, C., Ho, C., Srinivasan, M. A., & Slater, M. (2000). An experimental study on the role of touch in shared virtual environments. ACM Transactions on Computer-Human Interaction, 7(4), 443-460.
[13] Ho, C., Basdogan, C., Slater, M., Durlach, N., & Srinivasan, M. A. (1998). The influence of haptic communication on the sense of being together. Paper presented at the BT Workshop on Presence in Shared Virtual Environments, June 1998.
[14] Durlach, N., & Slater, M. (n.d.). Presence in shared virtual environments and virtual togetherness. www.cs.ucl.ac.uk/staff/m.slater/BTWorkshop/durlach.html.
[15] Shahabi, C., Kolahdouzan, M., Barish, G., Zimmermann, R., Yao, D., & Fu, L. (2001). Alternative techniques for the efficient acquisition of haptic data. ACM SIGMETRICS/Performance 2001, Cambridge, MA.