Cross-reality Services for 3D Virtual Environments

Krishna P.C. Madhavan, Member, Jordan Upham, Benjamin Sterrett, John Fisher and Sebastien Goasguen, Senior Member, IEEE

IEEE 2008 Grid Computing Environments Workshop, Austin, TX, USA, November 12-16, 2008


Abstract—In this paper, we present results on the design and testing of new middleware services for cross-reality data/insight transfer between the real world and 3D virtual environments. We highlight the architecture used, along with the implementation of a cyberinfrastructure toolbox containing a set of tools in a virtual environment. The toolbox uses a service-oriented architecture to implement a variety of services, such as a Location Service, a Social Tagging Service, interaction with real-world compute resources, and real-time cross-reality sensor data transfer. The driving concept behind the middleware implementation is the use of proxy services with straightforward REST interfaces. This paper expands on prior work and shows progress towards the integration of education and research in a common environment.

Index Terms—Learning systems, cyber-enabled education, future virtual environments, virtual worlds, informal science education, beyond web 2.0, design, experimentation, theory.

I. INTRODUCTION

In this paper, we report results of a project called RE@L (Research Environments @ssociated with Learning through Social Networks) that serves as a prototype to study how compute, data, and other remote resources can be utilized to place learning within environments familiar to students. The project attempts to include social-behavioral characteristics of learners and researchers as part of the environment within which they work. Our prototype focuses on Second Life™ as the platform for the implementation effort. While initial results are reported based on Second Life™, the findings we report here are generalizable to other virtual worlds and 3D virtual environments. Our second goal is to foster a new paradigm for learning in which cyberinfrastructure (CI) itself is the focus: students learn CI by working actively on CI.

One of the core arguments of this project and the resulting middleware is that the current generation of middleware technologies used for engineering and science must be extended into the realm of social networks and virtual 3D environments. Simultaneously, our goal is to leverage previous middleware work reported in [1] and to explore a new framework for how cyberinfrastructure can enable bridging research and education in forms that fit students' lifestyles, technology choices, and indeed learning styles. We also set out to understand how future virtual 3D environments could be instrumented to gather data about student activities, data that can then be used to make intelligent decisions that guide students in their learning efforts. We presented the theoretical background and the need for this work in [2], but did not report architectural and middleware results there. In this paper, we report results directly addressing the problem of how virtual 3D environments can be used to enable cross-reality scientific and engineering activities.

Manuscript received September 22, 2008. This work was supported by the National Science Foundation under Grants OCI/EHR-0726023 and EEC-0747795. Krishna P.C. Madhavan is with Clemson University, Clemson, SC 29634 (phone: 864-656-5874; fax: 864-656-0145; e-mail: [email protected]). Sebastien Goasguen is with Clemson University, Clemson, SC 29634 (phone: 864-656-6753; fax: 864-656-0145; e-mail: [email protected]).

We adopt the term "cross-reality" from [3] and define it as the interchange of scientific and educational data/insights between virtual environments such as Second Life™ and the real world, without regard to the direction of the transfer. For example, in this paper we report on the ability to transfer sensor data from a real-world watershed sensor test-bed into Second Life; such a service qualifies as a cross-reality data transfer service. Furthermore, we report data transferred from within Second Life, such as tagging data, to the external RE@L middleware. This also qualifies as cross-reality data interchange, since the direction of transfer has no bearing on the definition. We view cross-reality as a key concept for defining a feedback mechanism between virtual environments and the real world. In the long term, this feedback mechanism, realized through instrumentation and monitoring of the virtual environment, should enable us to self-configure the students' and researchers' world based on their behaviors and needs.

The paper is organized as follows: Section II presents some short background on virtual worlds, while Section III describes the middleware architecture and the various services that were developed for cross-reality: a Location Tracking Service (LTS), a Social Tagging Service (STS), a Compute Resource Monitoring Service (CRMS), and finally a Streaming Data Service (SDS).

II. BACKGROUND

Bainbridge [4] states that "Online virtual worlds, electronic environments where people can work and interact in a somewhat realistic manner, have great potential as sites for research in the social, behavioral, and economic sciences, as well as in human-centered computer science." Virtual environments are not only great platforms holding tremendous potential for research and education; on a more macro level, they also have significant capability to serve as environments for fostering creativity. They are, in effect, platforms for "creative production" [5].

There are significant problems that deter the adoption of virtual worlds as serious platforms for research, education, and indeed fostering creativity. Many of these problems are identified in [6]. The availability of appropriate tools is a significant research problem that remains to be addressed. [6] states that "By having a robust set of building tools, the ideal virtual world allows its participants to actively participate in the co-creation of the environment and in their own expression of complex concepts." In effect, this is one of the major challenges that this paper addresses.

There is one other aspect of RE@L (Research Environments @ssociated with Learning through Social Networks) that is very interesting. [7] states that "It's almost unfortunate that we talk and think about virtual worlds as a kind of "technology" [quotes in original] application rather than as an exciting new laboratory, or as a giant sandbox to test new theories, or as a way to step into our collective and individual imaginations in a manner that we have never been able to do before." RE@L applies well-known middleware technology to develop a prototype that not only allows us to study future scientific virtual work environments, but also treats Second Life™ as a sandbox for future technologies.

III. RESULTS AND DISCUSSION

Our approach to RE@L is strategic and covers areas that are of fundamental importance to examining 3D virtual worlds as potential platforms for conducting research while simultaneously providing students an opportunity to learn cyberinfrastructure. In the following sections, we highlight our work in three strategic areas: (1) middleware for utilizing virtual worlds for research and education; (2) new paradigms for instrumenting virtual worlds to evaluate how learning and research can happen; and (3) providing an environment where students learn CI by hands-on work with CI.

A. Middleware for utilizing virtual worlds for research and education

Significant progress has been made in understanding what type of generalizable middleware architectures would be useful for utilizing virtual worlds for research and education. While different virtual worlds have different architectures, we believe that the service-oriented architecture [8] we have identified best fits the needs of future middleware developers and architects. Virtual worlds, particularly Second Life, impose significant demands on the underlying network infrastructure, upon which they depend heavily to move data in and out of the virtual environment. Also, while virtual world environments are fairly good at handling complex 3D scenes, they slow down significantly when handling large amounts of data that need to be visualized. In the case of Second Life, performance degradation under increased load is a well-documented issue [9]. Furthermore, Second Life performance is not only affected by increased load, but also varies significantly depending on the architecture and configuration of the system on which the Second Life viewer runs [10]. Therefore, our approach constructs and deploys most of the services outside the virtual world. The external infrastructures that we use offload the data-processing heavy lifting to more robust server environments, while providing appropriate data feeds back to the virtual environment. Figure 1 shows the overall architecture, which is almost exactly the architecture that was predicted in [2].

The architecture shown in Figure 1 is a straightforward proxy-based system, in which clients in Second Life call REST-based services on the RE@L servers, which in turn call other local or remote services. In Figure 1, the user interacts primarily with a client viewer that provides a view of the virtual environment. Most regular user interactions, such as changing islands/locations and viewing what and who is around and how the environment looks, are handled by the viewer. The state of the viewer itself is a reflection of the parameters stored on the Second Life Server (step 1). The viewer and the server do not offer any additional engineering and science services that would make them suitable for use in research and education; all such services need to be built on top of the 3D Virtual Environment Client and Server.

Fig. 1. Overall RE@L architecture currently implemented (simplified).

As part of the RE@L project, we are starting to explore how new services can be developed and deployed – for example, a location tracking service, a tagging service that will allow users to collaboratively tag and describe objects/locations, a data service that will allow us to interact in real time with servers and sensors deployed in the field, and a scientific molecular simulation service. When the user wants to use any of these services, they initiate them from within the RE@L island. As shown in step (2), the user interacts directly with the RE@L Middleware Servers, which in our implementation run PHP on Apache 2.0 with a MySQL database. This standard setup is used to deploy REST-based services that can be called via HTTP from within Second Life. These services in turn call real-world services. The RE@L middleware, step (3), talks to the appropriate service provider, which responds with the requested data (step 4). The middleware server then formats the data using a simple XML formatter and returns it to the virtual environment (step 5). Once the data reaches the RE@L island, it is formatted by the Second Life viewer into appropriate forms using scripts written in the Linden Scripting Language (LSL) for that particular service. The scripts for the various services can be attached to most objects within Second Life, as long as the script creator has the appropriate permissions. The island owners as well as the Second Life Servers enforce authorization policies through avatar groups and privileges. At this stage of our work, however, most of the real-world services used do not require authentication. In the future, an identity-mapping service will be needed to link avatar identities to real-world ones.
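The proxy flow in steps (2) through (5) can be sketched as follows. The actual RE@L middleware runs PHP on Apache with MySQL; the Python below only illustrates the pattern, and the service registry, stub data, and XML element names are our own assumptions rather than details of the real implementation.

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for a real-world service provider (step 4).
# In the deployed system this would be a remote Condor, sensor, or
# simulation endpoint reached over the network.
def condor_status_stub(params):
    return {"total": "2000", "unclaimed": "50"}

# Hypothetical registry mapping service names to providers (step 3).
SERVICES = {"condor_status": condor_status_stub}

def handle_request(service_name, params):
    """Proxy a request from the virtual world: call the appropriate
    real-world service and format its response as simple XML (step 5)."""
    provider = SERVICES[service_name]   # step 3: pick the provider
    data = provider(params)             # step 4: provider returns the data
    root = ET.Element("response", service=service_name)
    for key, value in data.items():     # step 5: simple XML formatting
        ET.SubElement(root, key).text = value
    return ET.tostring(root, encoding="unicode")

print(handle_request("condor_status", {}))
```

An LSL script inside Second Life would then issue an HTTP request against such an endpoint and parse the small XML payload it receives back.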

In the next sections, we present a set of services that demonstrate how a standard virtual environment can be transformed into one that helps build the next generation of science gateways, which not only provide 3D interactions but also bridge research and education.

B. Location Tracking Service (LTS)

One of the biggest advantages that a virtual environment like Second Life offers is the ability for users to move around in 3D space. Given this mobility, it is very difficult to provision and maintain quality of service. To this end, it is important to provide a mechanism for identifying the location of the user. This mechanism must also behave consistently with the security policies of the Second Life environment. We have therefore designed a Bracelet (shown in Figure 2). Users add the Bracelet to their inventory with a simple click of a button and can then wear it on their body. Once worn, the Bracelet transmits the location of the user every 10 seconds. The transmission interval can be adjusted on the fly with a simple configuration setting. Information is transmitted as (x, y, z) coordinates that correspond to the user's actual x, y, and z coordinates on the Second Life grid. We also translate this to global coordinates so that we can track any user on a standard Google Maps interface. Figure 3 shows the locations of a user as tracked by the Location Tracking Service. The map shows green dots indicating the coordinates the user has visited, along with a numerical value indicating how many times the user has been to each location.
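The translation from region-local to grid-global coordinates can be sketched as below. Second Life regions are 256 m on a side, and a region's global position is given by the offset of its southwest corner on the grid; the function and variable names here are illustrative Python stand-ins, not the LSL calls used in the Bracelet.

```python
REGION_SIZE = 256  # Second Life regions are 256 m x 256 m

def to_global(region_corner, local_pos):
    """Convert a region-local (x, y, z) position to grid-global
    coordinates by adding the region's corner offset. Altitude (z)
    is unchanged by the translation."""
    cx, cy = region_corner
    x, y, z = local_pos
    return (cx + x, cy + y, z)

# Example: an avatar at local (128.0, 64.0, 22.5) inside a region whose
# southwest corner sits at grid offset (256000, 256256).
print(to_global((256000, 256256), (128.0, 64.0, 22.5)))
```

The resulting global coordinates can then be projected onto an external map interface such as Google Maps.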

Fig. 2. Bracelet transmits location and provides location-aware services.

Fig. 3. Location service showing the location of a user, along with a list on the right of the places visited by the user and the number of visits.

Fig. 4. Google Maps overlay showing the locations and the associated tags. Also note the list of locations on the right, which shows collaborative tags as a mouse-over option.

The Location Service can easily be extended to form the basis of other services for location-aware education and research activities, where objects can be placed at locations that enhance various activities. In the next section, we discuss the Social Tagging Service (STS), which allows social tagging within the virtual world.

C. Social Tagging Service (STS)

The stated goal of this project is to build tools that can enable the development and deployment of the next generation of virtual organizations for science and engineering. To this end, we are implementing a prototype social tagging system that allows users to collaboratively tag objects, locations, or other entities within the Second Life environment. The Social Tagging Service relies heavily on the Location Service to accurately relate tags to locations and objects. The STS is also implemented within the Bracelet, which means that once the user wears the Bracelet, the STS becomes active immediately. We are working on enhancing the user experience when using the STS.

In addition to seeing tags from within the Second Life environment, we are also thinking ahead in terms of ease of use for educators using the virtual world. For example, a faculty member could, as a homework assignment, ask students to visit the NOAA island and tag what they think are the salient parts of the Tsunami simulation. The student would need to wear the Bracelet, which would then transmit location data and tags about the homework to the instructor. To make it easy for faculty members to monitor their students, we are allowing instructors to simply go to a website and see which areas students are visiting and what tags they use. Figure 4 shows tags overlaid onto Google Maps; note that in addition to locations, we see the collaborative tags. We are working on showing this information as a tag cloud.

The overall workflow for the Location Service and the Social Tagging Service is presented in Figure 5. Both services are implemented within the Bracelet as explained in the previous section; by wearing the Bracelet, the user activates both services simultaneously. As shown in step 1, the "object wear" action activates the chat channel listener. The location is transmitted to the RE@L database automatically, independent of any other user action. Tags are entered using the action word "tag" in the chat channel followed by comma-separated tags, as illustrated in step 2. The tags are then passed to the RE@L middleware server (step 3) through a simple HTTP request.

Fig. 5. Workflow for the Location and Social Tagging Services. Wearing the Bracelet activates both services and allows users to interact with them immediately.

The database structure for the tags is currently in its prototype stage; the Entity Relationship Diagram is presented in Figure 6. The RE@L middleware not only checks whether the tags already exist, but also tracks which avatar added a specific tag (step 4). We are working on a smarter tagging system that will not only take into account the frequency of term occurrence in order to present tag clouds, but will also include a method that prioritizes the strength or user prioritization of these tags. We will report on a more accurate presentation of tag clouds in the near future.

Fig. 6. Prototype ERD diagram used for the Social Tagging Service.
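As a rough sketch of the frequency-plus-priority weighting described above, the following combines occurrence counts with an optional per-tag priority factor. The combination rule, the function name, and the example tags are our own assumptions; the actual RE@L weighting scheme is still being designed.

```python
from collections import Counter

def tag_cloud(tag_events, priority=None):
    """Weight tags by frequency of occurrence, optionally scaled by a
    user-assigned priority factor, and return (tag, weight) pairs
    sorted heaviest-first for display as a tag cloud."""
    priority = priority or {}
    counts = Counter(tag_events)  # frequency of term occurrence
    weights = {t: n * priority.get(t, 1.0) for t, n in counts.items()}
    return sorted(weights.items(), key=lambda kv: -kv[1])

# Hypothetical tag events collected from several avatars.
events = ["tsunami", "wave", "tsunami", "sensor", "tsunami", "wave"]
print(tag_cloud(events, priority={"sensor": 3.0}))
```

A display layer could then map each weight to a font size when rendering the cloud.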

Retrieving location tags is fairly easy. We use the same open chat channel listener to act on the command "show tags" (step 5). Once this command is received, it is passed to the RE@L middleware along with the location for which the tags are requested (step 6). The middleware server not only retrieves the user's individual tags, but can also show all the tags that have been entered for a specific location (steps 8 and 9). The tags are then formatted appropriately for viewing and presented to the user as a chat response (step 10). The next stage of the project is to be able to tag specific objects within the Second Life environment. These first two services are mostly needed to create new learning environments that present learners with location-aware, socially meaningful content. The next two services represent the effort to incorporate resources from within the fabric of cyberinfrastructure into a virtual environment. Their unifying concept is the monitoring of resources, which simplifies our prototypes by removing the need for authentication procedures.
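The Bracelet's chat-channel commands ("tag ..." and "show tags") can be sketched as a small command parser. In the real system this logic lives in an LSL listen handler; the Python below is an illustrative stand-in, and the exact message grammar is our assumption.

```python
def parse_chat(message):
    """Parse a Bracelet chat command: 'tag a, b, c' yields the list of
    tags to store; 'show tags' requests the tags for the current
    location; anything else is ignored by the listener."""
    message = message.strip()
    if message.lower() == "show tags":
        return ("show_tags", None)
    if message.lower().startswith("tag "):
        tags = [t.strip() for t in message[4:].split(",") if t.strip()]
        return ("add_tags", tags)
    return ("ignore", None)

print(parse_chat("tag tsunami, wave height, NOAA"))
print(parse_chat("show tags"))
```

The parsed command, together with the avatar's current location, would then be forwarded to the RE@L middleware over HTTP.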

D. Compute Resource Monitoring Service (CRMS)

Most virtual organizations for engineering and science depend on significant data and compute resources to achieve their scientific goals. To this end, we are working on RE@L middleware that will allow us to build powerful tools within Second Life that enable interaction with compute resources. Our end goal is to be able to submit, monitor, and retrieve the results of jobs run on grids such as the Open Science Grid or the TeraGrid. The early prototype presented here is a monitoring service that visualizes the status of a Condor pool deployed campus-wide at Clemson University.

Visualizations can take all of the information about the compute resources and turn it into something much more manageable and informative. The initial approach was to create the visualization first, with much of the logic in Linden Scripts. The idea was to make a virtual server rack that grew or shrank based on how many computers were in the pool and which state they were in. Objects/boxes representing the machines in the pool were to make simple server calls, pull in data, parse it, and behave accordingly (moving to the appropriate location and coloring themselves according to their state: claimed, unclaimed, owner, or backfill).

However, Second Life (SL) limits the amount of data that it can receive from external sources during a given time period, which is a major data bottleneck. The maximum that SL can receive per server request is 1K, and there is an embedded delay in the function call used to get data. Given that there were approximately 2,000 computers to describe, this 1K limitation was a crippling factor: it would take minutes to pull in all the data and then parse it natively in SL. The data therefore had to be parsed beforehand, with small bits sent on demand. The solution was to send totals instead of information on each computer. For example, consider the situation where there are 2,000 computers, 50 of them unclaimed by the Condor central manager. When the user interacts with an object in SL to get the number of unclaimed machines, the object queries the server, and the server returns just the number 50. Fifty virtual "unclaimed nodes" are then created and given an index, which is used to move each node into its appropriate position on the virtual server rack to represent the information visually.

One other aspect of this problem is that there are too many nodes in the Condor pool to display each of them individually without losing the value of the visualization. The solution was to simply have each virtual node represent 50 computers in the pool. This keeps the data passed into Second Life low while keeping the visualization manageable and useful.
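The two mitigations above (server-side totals instead of per-machine records, and one virtual node per 50 machines) can be sketched as follows. The state names come from Condor, but the aggregation and grouping code is our own illustration, not the deployed scripts.

```python
from collections import Counter
import math

NODES_PER_BOX = 50  # each virtual node in SL represents 50 pool machines

def summarize_pool(machine_states):
    """Aggregate per-machine Condor states into per-state totals, so that
    only a handful of numbers (well under SL's 1K response limit) need to
    cross into the virtual world."""
    return Counter(machine_states)

def virtual_nodes(total_for_state):
    """Number of virtual nodes to create for a state, one per 50 machines."""
    return math.ceil(total_for_state / NODES_PER_BOX)

# The example from the text: 2,000 machines, 50 of them unclaimed.
pool = ["unclaimed"] * 50 + ["claimed"] * 1950
totals = summarize_pool(pool)
print(totals["unclaimed"], virtual_nodes(totals["unclaimed"]))
print(totals["claimed"], virtual_nodes(totals["claimed"]))
```

Sending only the totals reduces a description of 2,000 machines to a few integers, and the 50-to-1 grouping keeps the virtual rack to a few dozen boxes.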

In Figure 7, we show two ways of displaying real-time statistics from the Clemson University Condor pool. The top picture projects the statistics on a screen, which acts as a projector; the processing of the data is done externally, and the result is passed back to the RE@L environment. In the bottom picture, however, the monitoring information is rendered natively through our visual metaphor. We believe that the ability to render information natively within the 3D virtual environment allows users to interact with the data directly: users can walk around a certain data point and examine it closely. We are working on adding more detail to the rendering within Second Life, so as to allow users to select various depths of data presentation. This is a new paradigm in data presentation within Second Life in particular, and virtual environments in general.

Fig. 7. Graph showing real-time usage statistics of the Clemson University Condor pool. The top shows images on a projection screen that are non-native (rendered outside Second Life); at the bottom, the data visualization is native to Second Life and allows users to walk around and interact with the data in new forms.

E. Streaming Data Service (SDS)

Another example of monitoring is that of sensor networks. Environmental data acquired and streamed through sensors directly to scientists' desktops are key to new discoveries. Indeed, data and data acquisition methodologies are an important part of science, engineering, and social science research. 3D virtual worlds are ideal environments in which to present this data in visually understandable forms. Once we are able to tap into rich data streams, we can plan and execute complex workflows by manipulating objects in a virtual environment instead of running advanced software applications.

Figure 8 represents the middleware architecture that was implemented as a proof of concept. Sensors deployed at watersheds in Myrtle Beach and at Lake Issaqueena (South Carolina) are the data sources. The sensors, developed and deployed in the field by the Clemson University Forestry Department, all send their data to a base station (step 1), which is part of a NaradaBrokering network [11] and runs a publisher service. The RE@L server has a constantly running NaradaBrokering subscriber (step 2) connected to the broker and is supplied with real-time data from the sensors, which it stores locally in a text-based data store (step 3) that is updated frequently. When the Second Life client requests sensor data from within the virtual environment, the Linden Script deployed within the virtual sensor contacts the middleware server (step 4). The RE@L Middleware Server selects the appropriate sensors and requests data from the data store (step 5). This data is then parsed and returned to the client in appropriately viewable formats (step 6). Figure 9 shows the virtual sensors within Second Life receiving real-time data from the field sensors.

Fig. 8. Streaming data cross-reality. Physical sensors deployed at two locations stream their monitoring information through a NaradaBrokering infrastructure. A virtual sensor activated through an SL client calls a service deployed outside SL that runs a NaradaBrokering subscriber node.

Fig. 9. Snapshot of the sensor data being streamed within the virtual environment. The avatar is wearing the sensor Bracelet and has access to an interface that allows him to select which sensor to get data from.
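The text-based data store and the sensor selection in steps (3) and (5) can be sketched as an append-only log of timestamped readings, from which the middleware returns the latest value for a requested sensor. The "timestamp,sensor_id,value" line format and the sensor IDs below are our assumptions; only the site locations come from the paper.

```python
def latest_reading(datastore_lines, sensor_id):
    """Return (timestamp, value) for the most recent reading of a sensor
    from a text-based data store of 'timestamp,sensor_id,value' lines.
    ISO-8601 timestamps compare correctly as strings."""
    latest = None
    for line in datastore_lines:
        ts, sid, value = line.strip().split(",")
        if sid == sensor_id and (latest is None or ts > latest[0]):
            latest = (ts, float(value))
    return latest

# Hypothetical store contents written by the NaradaBrokering subscriber.
store = [
    "2008-09-20T10:00:00,issaqueena-1,4.2",
    "2008-09-20T10:05:00,myrtle-3,1.7",
    "2008-09-20T10:10:00,issaqueena-1,4.5",
]
print(latest_reading(store, "issaqueena-1"))
```

The middleware would then wrap the selected reading in a viewer-friendly format (step 6) before returning it to the virtual sensor's LSL script.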

IV. CONCLUSIONS

We presented various cross-reality services to instrument, monitor, and provision education and research environments within virtual worlds. These services have been implemented through Second Life and make use of a service-oriented architecture. The middleware concept is based on REST services deployed in the real world that serve as proxies for existing middleware services. The ability to program objects in Second Life allows us to create a cyberinfrastructure toolbox made of these services. A Location Tracking Service and a Social Tagging Service have been developed and will be used to design and monitor next-generation educational environments. A Compute Resource Monitoring Service and a Streaming Data Service have also been developed and deployed to bring the fabric of cyberinfrastructure within a virtual environment; together with the educational services, they provide a context-aware education and research world.

ACKNOWLEDGMENT

The authors would like to acknowledge the significant contributions of Kristen Hardwick to the visualization objects, as well as Chris Minor for help with land management within Second Life™.

REFERENCES

[1] S. Goasguen, K.P.C. Madhavan, D. Wolinsky, R. Figueiredo, J. Frey, A. Roy, P. Ruth, and D. Xu, "Middleware integration and deployment strategies for cyberinfrastructures," Proceedings of the 3rd International Conference on Grid and Pervasive Computing, Kunming, China, 2008.
[2] K.P.C. Madhavan and S. Goasguen, "Integrating cutting-edge research into learning through web 2.0 and virtual environments," Proceedings of the Grid Computing Environments Workshop, Reno, NV, 2007.
[3] J. Jonas, "Virtual reality: There is no place like home." Available: http://jeffjonas.typepad.com/jeff_jonas/national_security/index.html (April 25, 2008).
[4] W.S. Bainbridge, "The scientific research potential of virtual worlds," Science, vol. 317, no. 5837, pp. 472-476, July 2007.
[5] A. Balsamo with C. Wallis, "Virtual environments for learning: A report on a summit," special report commissioned by the National Science Foundation. Available: http://www.designingculture.net/resources/VELsummitReport2008 (August 1, 2008).
[6] C. Johnson, "Drawing a roadmap: Barriers and challenges to designing the ideal virtual world for higher education," Educause Review, vol. 43, no. 5, September/October 2008.
[7] C. Collins, "Looking to the future: Higher education in the Metaverse," Educause Review, vol. 43, no. 5, September/October 2008.
[8] Available: http://msdn.microsoft.com/en-us/architecture/aa948857.aspx
[9] Available: https://jira.secondlife.com/browse/VWR-864
[10] Available: http://blog.secondlife.com/2007/11/15/typical-frame-rate-performance-by-graphics-cardgpu/
[11] Available: http://www.naradabrokering.org