Source: folk.uio.no/noruns/thesisFinal.pdf

UNIVERSITY OF OSLO Department of Informatics Network Wide Information Sharing in Rescue and Emergency Situations PhD thesis Norun Christine Sanderson January 2008



UNIVERSITY OF OSLO
Department of Informatics

Network Wide Information Sharing in Rescue and Emergency Situations

PhD thesis

Norun Christine Sanderson

January 2008


Abstract

Characterised by being highly dynamic, heterogeneous and hectic, rescue and emergency operations are cooperative efforts where access to accurate and relevant information at the right time, as well as good communication between participating personnel, is of great importance. Sharing and distributing relevant information between personnel carrying mobile devices in such situations is likely to be very beneficial. This kind of information sharing can take place at the rescue site by utilising mobile ad-hoc network technology to provide an infrastructure for communication during rescue operations. Focusing on sharing knowledge about what information resources are available for sharing in the network and where these resources are found, this thesis addresses issues of metadata and knowledge management for information sharing in rescue and emergency situations using sparse mobile ad-hoc networks as infrastructure.

Time is critical and resources may be limited, so it is important to avoid information overload and to make the best use of available resources. In addition, as several organisations participate, information sharing must be handled across different domains and organisations. Information filtering and personalisation, combined with context awareness and establishing who needs what information, can be utilised, but this is complicated by the characteristics of rescue operations and sparse mobile ad-hoc networks. During the operation, information to be shared is continually added and modified, and these changes have to be propagated through the network, which is made more difficult by the dynamicity and heterogeneity of such environments. When sharing information in resource-weak environments, efficient metadata management is important for utilising available resources in the best possible way. Solutions to these challenges have to take into consideration issues from diverse areas: knowledge management, sparse mobile ad-hoc networks, and information sharing.

In this thesis we have approached the problem through analysis of rescue operation reports and study of literature from the related areas. In the context of the Ad-Hoc InfoWare project, we have analysed an example rescue scenario for challenges and requirements. Based on this requirements analysis we have designed a knowledge management middleware component, the Knowledge Manager, to support the requirements for knowledge and information management in this application scenario. This component has a set of sub-components handling different issues and aspects of our solution. In our solutions for handling information overload we utilise technologies from metadata and knowledge management. Filtering and personalisation are achieved using profiles and context descriptions, linking users to relevant information via context and profiles defined in ontologies. Vocabulary mapping, which supports information sharing across organisations, is achieved through context modelling and ontologies, together with a three-layered data dictionary approach for efficient metadata management. Ontologies are also utilised in a solution for dynamic update in which we use prioritised update messages, supporting both the dissemination of updates and the organisation of the rescue operation. Sharing knowledge about what information is available, and where it is found, is achieved by propagating links containing terms and locations extracted from the metadata descriptions of information registered for sharing in the network.
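As an illustration of the last point, the following Java sketch shows how a compact link of searchable terms plus a location could be extracted from a fuller metadata description, so that only the small link needs to be propagated through the network while the full description stays on the owning node. All class, record and field names here are invented for illustration; this is not the thesis design or API.

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical sketch: a metadata description of a shared item is reduced
// to a compact "link" -- the terms describing the item plus the node that
// stores it -- which is what would be propagated through the network.
public class LinkExtraction {

    // A propagated link: keywords plus the item's location.
    record Link(Set<String> terms, String nodeId, String itemId) {}

    // Full local metadata description of an item registered for sharing;
    // this stays in the local dictionary on the owning node.
    record MetadataDescription(String itemId, String title,
                               List<String> keywords, String nodeId) {}

    // Extract only the searchable terms and the location.
    static Link extractLink(MetadataDescription md) {
        Set<String> terms = new TreeSet<>();
        for (String kw : md.keywords()) terms.add(kw.toLowerCase());
        for (String w : md.title().toLowerCase().split("\\s+")) terms.add(w);
        return new Link(terms, md.nodeId(), md.itemId());
    }

    public static void main(String[] args) {
        MetadataDescription md = new MetadataDescription(
            "item-42", "Casualty List Carriage 3",
            List.of("casualty", "medical"), "node-7");
        Link link = extractLink(md);
        System.out.println(link.terms());   // compact term set
        System.out.println(link.nodeId());  // where the full item lives
    }
}
```

The point of the split is bandwidth economy: in a resource-weak network only the term set and location travel, and the full metadata is fetched from the owning node on demand.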


Acknowledgements

I would like to thank my supervisors, Ellen Munthe-Kaas and Vera Goebel, for all their advice, support and encouragement during the work on this thesis, and I am especially grateful to Ellen for her contribution and advice regarding the ontologies. I would also like to thank all other participants in the Ad-Hoc InfoWare project for discussions and cooperation on joint work in the project, particularly Professor Thomas Plagemann and fellow PhD students Ovidiu Valentin Drugan, Matija Pužar, and Katrine Stemland Skjelsvik.


Contents

Chapter 1 Introduction ..................................................................................................... 1
1.1 Motivation ............................................................................................................. 1
1.2 Problem Description .............................................................................................. 3
1.3 Claims and Contributions ...................................................................................... 5
1.4 Approach and Method ........................................................................................... 6
1.5 Outline ................................................................................................................... 7
1.6 Publications ........................................................................................................... 8

Chapter 2 Application Scenario and Requirements Analysis ..................................... 11
2.1 Application Scenario ........................................................................................... 11
2.1.1 Organisations Involved ................................................................................ 12
2.1.2 Rescue Operation Organisation ................................................................... 12
2.1.3 Example Rescue Scenario – Railway Accident ........................................... 14
2.1.4 Rescue Scenario Phases ............................................................................... 17
2.2 Overall Requirements Analysis ........................................................................... 18
2.2.1 Organisational Intra- and Inter-operability .................................................. 19
2.2.2 Security Aspects .......................................................................................... 20
2.2.3 Network Characteristics .............................................................................. 20
2.2.4 Communication Issues ................................................................................. 22
2.2.5 Development Constraints ............................................................................. 23
2.3 Ad-Hoc InfoWare Middleware ............................................................................ 24
2.3.1 General Middleware Requirements ............................................................. 24
2.3.2 Architecture Components ............................................................................ 26
2.4 Summary .............................................................................................................. 30

Chapter 3 Knowledge Management: Background and Related Work ....................... 31
3.1 Information and Knowledge ................................................................................ 32
3.2 Knowledge Management ..................................................................................... 33
3.2.1 Knowledge Processes ................................................................................... 34
3.2.2 Distributed Approaches ............................................................................... 36
3.2.3 Knowledge Representation .......................................................................... 37
3.3 Metadata Management ........................................................................................ 37
3.3.1 Different Classifications of Metadata .......................................................... 38
3.3.2 Standards for Metadata Management .......................................................... 40
3.3.3 Ontologies .................................................................................................... 41
3.3.4 Profiles and Context ..................................................................................... 43
3.3.5 Modelling Languages and Technologies ..................................................... 45
3.4 Information Integration and Sharing ................................................................... 48
3.4.1 The Interoperability Problem ....................................................................... 49
3.4.2 Solutions for Handling Heterogeneity ......................................................... 51
3.4.3 The Semantic Web ....................................................................................... 54


3.5 Related Work ....................................................................................................... 54
3.5.1 Shark ............................................................................................................ 55
3.5.2 DBGlobe ...................................................................................................... 56
3.5.3 DIANE ......................................................................................................... 57
3.5.4 MoGATU ..................................................................................................... 58
3.5.5 AmbientDB .................................................................................................. 58
3.5.6 Discussion .................................................................................................... 59
3.6 Summary .............................................................................................................. 60

Chapter 4 Knowledge Manager High Level Design ..................................................... 61
4.1 Requirements and Design Overview ................................................................... 62
4.1.1 Knowledge Manager Requirements ............................................................. 62
4.1.2 Design Overview ......................................................................................... 65
4.2 Metadata Handling Components ......................................................................... 66
4.2.1 Data Dictionary Manager ............................................................................. 66
4.2.2 Semantic Metadata and Ontology Framework ............................................ 70
4.2.3 Profile and Context Manager ....................................................................... 73
4.3 Tool Components ................................................................................................ 74
4.3.1 Query Manager ............................................................................................ 74
4.3.2 XML Parser .................................................................................................. 76
4.4 Discussion ........................................................................................................... 77
4.5 Cooperation in the Knowledge Manager ............................................................ 78
4.6 Cooperation with Ad-Hoc InfoWare Middleware Components ......................... 79
4.7 Summary .............................................................................................................. 81

Chapter 5 Metadata Management: A Three-Layered Approach ............................... 83
5.1 Overview of Our Approach ................................................................................. 84
5.1.1 The Three Layers ......................................................................................... 84
5.1.2 Dynamic Update .......................................................................................... 87
5.2 Data Dictionary Manager .................................................................................... 90
5.2.1 DDM Requirements and Functionality ........................................................ 90
5.2.2 The Data Dictionaries .................................................................................. 93
5.2.3 Metadata Exchange ...................................................................................... 99
5.3 Summary ........................................................................................................... 105

Chapter 6 Ontology Based Dynamic Updates ............................................................. 107
6.1 Description of Approach ................................................................................... 108
6.2 Example Rescue Ontology ................................................................................ 111
6.2.1 Profile Descriptions ................................................................................... 112
6.2.2 Dynamic Context ....................................................................................... 118
6.3 Populating the Knowledge Base ....................................................................... 119
6.4 Handling Profile Ontologies in Our Architecture ............................................. 121
6.5 KM Components in Ontology Based Update .................................................... 121
6.6 Summary ........................................................................................................... 125

Chapter 7 Using the Knowledge Manager in a Real Scenario .................................. 127
7.1 Analysis of Railway Accident Scenario – The Phases ...................................... 127
7.2 Using the Ad-Hoc InfoWare Middleware ......................................................... 131
7.3 Using the Knowledge Manager ......................................................................... 133
7.4 Examples from Relevant Scenario Phases ........................................................ 133
7.4.1 Phase 1 – Initial Content of Knowledge Base ........................................... 134
7.4.2 Phase 2 – Briefing Phase ........................................................................... 136
7.4.3 Phase 4 – Running Phase ........................................................................... 139


7.5 Summary ............................................................................................................ 146

Chapter 8 Proof-of-Concept Implementation ............................................................. 147
8.1 Overview of Classes .......................................................................................... 151
8.2 Database Schema ............................................................................................... 154
8.3 Modifications to Derby Code ............................................................................ 154
8.4 Implemented Functionality ................................................................................ 155
8.4.1 Sequence Diagrams .................................................................................... 156
8.4.2 Functionality Implemented in Package km ............................................... 159
8.4.3 Functionality Implemented in Package ddmgr .......................................... 163
8.4.4 Testing ........................................................................................................ 178
8.5 Summary ............................................................................................................ 178

Chapter 9 Evaluation .................................................................................................... 179
9.1 Review of Contributions .................................................................................... 179
9.2 Fulfilment of Requirements ............................................................................... 182
9.3 Critical Evaluation of Claims ............................................................................ 186
9.4 Summary ............................................................................................................ 188

Chapter 10 Conclusion .................................................................................................. 189
10.1 Summary of Contributions .............................................................................. 189
10.2 Critical Assessment ......................................................................................... 190
10.3 Future Work ..................................................................................................... 191

References ....................................................................................................................... 195

Appendices ...................................................................................................................... 203
Appendix A – Modifications to Derby ........................................................................... 203
Appendix B – Example Details Chapter 7 ...................................................................... 210
Appendix C – Example Rescue Ontology in OWL ........................................................ 222


List of Figures

Figure 1.1 The three context areas for this work. .................................................................. 5
Figure 2.1 Organisation and structure in rescue operations. ................................................ 14
Figure 2.2 Middleware areas of consideration. .................................................................... 25
Figure 2.3 Ad-Hoc InfoWare Middleware Architecture. ..................................................... 26
Figure 3.1 Nonaka's SECI model for knowledge creation. .................................................. 35
Figure 3.2 Approaches to ontologies in information integration. ........................................ 53
Figure 4.1 The Knowledge Manager. ................................................................................... 66
Figure 4.2 Nodes and links. .................................................................................................. 68
Figure 4.3 Find registered shared items in SDDD. .............................................................. 69
Figure 4.4 Retrieve metadata descriptions in LDD. ............................................................. 70
Figure 4.5 Find synonyms. ................................................................................................... 73
Figure 4.6 Forward query. .................................................................................................... 76
Figure 4.7 The Knowledge Manager component. Internal dependencies. ........................... 78
Figure 5.1 Three perspectives on our approach. ................................................................... 84
Figure 5.2 Three-layered approach – levels of information. ................................................ 85
Figure 5.3 Three-layered approach – realisation. ................................................................. 86
Figure 5.4 Three-layered approach – data modelling technology. ....................................... 87
Figure 5.5 Overview of kinds of dynamic update. ............................................................... 89
Figure 5.6 Data Dictionary Manager (DDM). ...................................................................... 90
Figure 5.7 Example LDD contents. ...................................................................................... 96
Figure 5.8 Extracting links to SDDD. .................................................................................. 97
Figure 5.9 Example SDDD contents. ................................................................................... 98
Figure 5.10 SDDD after merge. ......................................................................................... 101
Figure 5.11 Contents of Nod-1 LDD and SDDD. .............................................................. 102
Figure 5.12 Nod-2 before merge. ....................................................................................... 103
Figure 5.13 Nod-2 after merge. .......................................................................................... 104
Figure 5.14 Nod-1 after merge. .......................................................................................... 104
Figure 5.15 Nod-2 list of previous exchanges, before and after merge with Nod-1. ......... 105
Figure 6.1 Simple model of rescue operation roles. ........................................................... 110
Figure 6.2 Upper ontology. ................................................................................................ 111
Figure 6.3 User profile ontology. ....................................................................................... 112
Figure 6.4 Information profile ontology. ........................................................................... 114
Figure 6.5 Example information priorities. ........................................................................ 114
Figure 6.6 Rescue scenario profile ontology. ..................................................................... 115
Figure 6.7 Device Profile. .................................................................................................. 117
Figure 6.8 Derived relations to information profile. .......................................................... 118
Figure 6.9 Example context ontology. ............................................................................... 119
Figure 6.10 Rescue scenario timeline – populating the knowledge base. .......................... 120
Figure 6.11 Knowledge base update. ................................................................................. 123
Figure 6.12 Send prioritised update message. .................................................................... 124
Figure 8.1 Derby System from [DerbyGuide]. ................................................................... 150


Figure 8.2 Packages overview. ........................................................................................... 151
Figure 8.3 KM and DDM classes overview. ...................................................................... 151
Figure 8.4 Classes in package km. ..................................................................................... 152
Figure 8.5 Classes in package ddmgr. ................................................................................ 153
Figure 8.6 Register new item in LDD. ............................................................................... 157
Figure 8.7 Search SDDD for related items. ........................................................................ 157
Figure 8.8 Retrieve metadata for item registered in LDD. ................................................. 158
Figure 8.9 Find synonyms and query for related nodes. .................................................... 159


List of Tables

Table 2.1 Overview of participating personnel involved in the Åsta accident. .................... 12
Table 4.1 KM Interface. ........................................................................................................ 79
Table 4.2 Different types of subscriptions. ........................................................................... 80
Table 6.1 Addition to KM Interface. ................................................................................... 122
Table 8.1 How Derby fulfils our wish list for a DBMS. ..................................................... 148
Table 8.2 Correspondence between implementation and design. ....................................... 153


List of Abbreviations

DBMS   Database Management System
DDM    Data Dictionary Manager
DENS   Distributed Event Notification Service
GPS    Global Positioning System
KB     Knowledge Base
KM     Knowledge Manager
LDD    Local Data Dictionary
MANET  Mobile Ad-hoc Network
MTV    Mobile Topic map Viewer
OiC    Officer in Charge
OSC    On-site Commander (rescue site leader)
OWL    Web Ontology Language
PCM    Profile and Context Manager
PDA    Personal Digital Assistant
QM     Query Manager
RCC    Rescue Coordination Centre
RDF    Resource Description Framework
RDFS   RDF Vocabulary Description Language (RDF Schema)
RM     Resource Management
RSC    Rescue Sub-Centre
SAR    Search and Rescue Service
SDDD   Semantic Distributed Data Dictionary
SMOF   Semantic Metadata and Ontology Framework
TM     Topic Maps
XML    eXtensible Markup Language
XML-P  XML Parser
XTM    XML Topic Maps


Chapter 1 Introduction

Rescue and emergency situations are resource and time critical cooperative efforts where timely access to correct information may be mission critical. Efficient sharing of relevant information among participants in such operations is likely to be highly beneficial, provided that the actual discovery and exchange of this information can take place with minimal effort on the part of the rescue personnel.

Mobile computing devices carried by personnel, such as mobile phones, PDAs and laptops, can use wireless networks, e.g., WLAN and Bluetooth, for communication. Research in the area of mobile ad-hoc networks aims to provide technical solutions that enable spontaneous collaboration between mobile devices that come into communication range of each other. Utilising mobile ad-hoc network technology could benefit emergency and rescue personnel by providing an infrastructure for communication and information sharing during rescue operations.

The information to be shared needs to be registered and its availability advertised to users. To avoid information overload, available information needs to be filtered, and on request, relevant information should be delivered and presented to users in an applicable way. To support rescue and emergency operations, these and other tasks have to be handled adequately in a hectic, dynamic, and possibly challenged environment, where resources may be scarce and communication may be frequently interrupted. In addition, to share information efficiently in this environment, users must have knowledge about what information resources exist for sharing in the network and how to obtain these resources. This thesis focuses on how this kind of knowledge can be shared and distributed efficiently among participants in rescue and emergency operations where personnel carry appropriate mobile devices forming a mobile ad-hoc network.
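As a hypothetical illustration of the filtering step just described, the following Java sketch matches advertised items against a user's role so that only relevant information is surfaced on the device. The role names, types and fields are invented for illustration and are not taken from the thesis.

```java
import java.util.List;
import java.util.Set;

// Illustrative sketch of role-based information filtering: each advertised
// item carries the roles it is relevant for, and a user's device surfaces
// only the items matching the user's own role.
public class RoleFilter {

    record SharedItem(String title, Set<String> relevantRoles) {}

    // Keep only the items relevant to the given role, avoiding
    // information overload on the user's device.
    static List<SharedItem> filterForRole(List<SharedItem> advertised,
                                          String role) {
        return advertised.stream()
                .filter(it -> it.relevantRoles().contains(role))
                .toList();
    }

    public static void main(String[] args) {
        List<SharedItem> items = List.of(
            new SharedItem("Casualty list", Set.of("medical")),
            new SharedItem("Track layout", Set.of("fire", "police")));
        // A paramedic's device would show only medically relevant items.
        System.out.println(filterForRole(items, "medical"));
    }
}
```

In a real deployment the role-to-item matching would of course be driven by richer profile and context descriptions rather than a flat role string, as the later chapters of the thesis describe.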

1.1 Motivation

Rescue personnel from several organisations participate in rescue and emergency operations, cooperating to help affected people and save lives, as well as to limit damage to, e.g., infrastructure, buildings, and nature. These operations are characterised by being very hectic and dynamic, and the personnel are assigned to specific tasks appropriate to their expertise and organisational affiliation. To organise such operations, communication between participants is needed: they must inform each other of completed tasks, discuss and announce decisions and priorities, make organisational decisions about the use of the work force, assign personnel to tasks, exchange information, and provide information to the public. The quality of communication and information flow may impact the effectiveness of the rescue operation itself; according to the International Federation of Red Cross and Red Crescent Societies, "The flow of information throughout the disaster cycle is crucial for effective humanitarian operations. ..." [IFRC2005].

Benefits of information sharing in these situations may include improving the quality of, e.g., on-site medical treatment and personnel safety, better utilisation of resources, and support for the organisation of the operation itself. A number of applications could be useful in this scenario, e.g., registering and monitoring casualties for medical treatment, keeping an overview of tasks, personnel and resources, identifying passengers on site, filtering information according to the role of the person carrying the device, and storing and processing data, including computations and data mining, to provide additional information. Applications can have user interfaces tailored to the device and the role of the person carrying it. Sensors may also be deployed in the area, providing additional information to support the decision making process.

The key infrastructure to make this possible is provided by Mobile Ad-hoc Networks (MANETs). Among the characteristics of a MANET are its changing topology and node mobility. As it comprises devices of various capabilities and resources, heterogeneity is another of its characteristics; the devices may range from simple sensors and mobile phones to powerful laptops. The smaller devices have limited resources, e.g., battery, bandwidth, storage space and processor power. Due to the unstable environment and possibly poor infrastructure in the geographical area of a rescue operation, a MANET in this kind of scenario has to cope with a high level of mobility and frequent partitions. As a consequence, the MANET has to be delay tolerant; such a network is termed a Sparse Mobile Ad-hoc Network (Sparse MANET). An infrastructure based on a Sparse MANET, giving applications network-wide access to all available information automatically shared among participants and organisations, would greatly benefit the coordination and organisation of such operations.

The geographical area and environment in which a rescue and emergency operation takes place may vary from central areas, like an inner city with good infrastructure, to remote areas with minimal infrastructure (or possibly none at all). Existing infrastructure may be destroyed or not working properly. Node mobility may be high and the nodes sparsely distributed in the area, causing frequent disconnections, network partitions and merges. Such sparse networks require delay tolerance in the handling of messages. Thus, what is needed to face these challenges and support applications for information sharing in rescue scenarios is a Sparse MANET with middleware services designed to handle network partitions and offer delay-tolerant services [Skjelsvik2006] [Zhao2004].

The work presented in this thesis has been conducted in the context of the Ad-Hoc InfoWare Project¹ [Plagemann2003] [Plagemann2004]. The aim of this project is to research middleware solutions for sharing and integrating information in Sparse MANETs, using rescue and emergency operations as the application scenario. The heterogeneity and dynamicity of this environment make creating applications complicated. Middleware can address heterogeneity by providing location transparency and independence from communication details, operating systems, and computer hardware [Colouris2001]. Middleware services supporting information sharing in this application scenario over a Sparse MANET infrastructure would therefore simplify application development and reduce the time needed to produce new, useful applications. The project focus has been directed at four areas: communication infrastructure, resource management, security and privacy, and knowledge and context management. This is reflected in the Ad-Hoc InfoWare middleware architecture, which consists of the following components: the Distributed Event Notification Service, which decouples subscribers and publishers through mediator nodes; Watchdogs, which monitor changes and issue notifications when changes occur; the Resource Manager, which keeps track of nodes and their resources; the Security and Privacy Manager, which ensures secure communication and access control; and the Knowledge Manager, which handles and integrates ontologies, metadata and information. A figure illustrating the Ad-Hoc InfoWare middleware architecture is found in Chapter 2, Section 2.3.2.

¹ The Ad-Hoc InfoWare project is funded by the Norwegian Research Council (NFR) under the IKT-2010 Program, project no. 152929/431, 2003-2007.
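The decoupling of publishers and subscribers through a mediator, as performed by the Distributed Event Notification Service, can be illustrated with a minimal sketch. The class and method names below are invented for the illustration and are not taken from the actual Ad-Hoc InfoWare implementation.

```python
# Minimal sketch of publish/subscribe decoupling via a mediator node:
# subscribers register interest with the mediator, and publishers send
# events to the mediator rather than directly to subscribers.

class MediatorNode:
    """Hypothetical mediator decoupling publishers from subscribers."""

    def __init__(self):
        self.subscriptions = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.subscriptions.setdefault(topic, []).append(callback)

    def publish(self, topic, event):
        # Deliver the event to every subscriber of the topic.
        for callback in self.subscriptions.get(topic, []):
            callback(event)

received = []
mediator = MediatorNode()
mediator.subscribe("casualty-update", received.append)
mediator.publish("casualty-update", {"id": 42, "triage": "red"})
# received now holds the event, without the publisher knowing the subscriber
```

The point of the mediator is that neither side needs to know the other's identity or location, which matters in a network whose topology changes constantly.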

The work presented in this thesis centres on the Knowledge Manager in the Ad-Hoc InfoWare architecture. As part of the requirements analysis, a six-phase scenario lifecycle for rescue and emergency operations using Sparse MANETs has been developed in the project and used as a common basis in the research work. The Ad-Hoc InfoWare project issues and middleware architecture are described further, together with the application scenario and rescue phases, in Chapter 2 of this thesis.

1.2 Problem Description
Supporting network-wide information sharing is an important problem in rescue and emergency scenarios, as sharing context-relevant information is mission critical: it is crucial that personnel have access to the right information at the right time; it may even be a lifesaver. In addition, information sharing can support the organisation and coordination of the operation. Time and resources are critical factors in rescue operations, so avoiding information overload becomes very important, both with regard to the end user and with regard to limited resources like network bandwidth and energy. Furthermore, each organisation is responsible for handling particular aspects of the rescue operation, which implies that the organisations need different information and possibly also disparate views of the same information. These issues can be handled through filtering, personalisation and context awareness.

The challenges caused by the characteristics of rescue and emergency operations and Sparse MANETs, together with the necessary cross-organisational administration, make automatic sharing and filtering of information among all parties a difficult task. As device resources may be limited, simply propagating all information to all network nodes would be too expensive in terms of resources. By sharing extracts of metadata descriptions rather than the full descriptions, knowledge of available information resources can be propagated through the network while saving resources like bandwidth and energy, allowing more efficient metadata management. Metadata descriptions of the information items to be shared can be enriched with concepts from ontologies and vocabularies, and profiles and contexts can be used for semantic filtering according to user preferences and device capabilities. The dynamic environment of Sparse MANETs and rescue scenarios can cause information sources to become inaccessible in an unpredictable fashion, so it is advantageous to keep track of what information is available at all times. How the shared resources can be discovered, queried and retrieved efficiently are also important questions to address.
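The idea of propagating compact extracts rather than full metadata descriptions can be sketched as follows. The field names and the shape of the description are invented for the example; they do not reflect the thesis's actual metadata schema.

```python
def make_extract(full_description, node_id):
    """Reduce a full metadata description to a compact extract holding
    only the concept terms and the node where the item resides.
    (Field names are illustrative, not an actual schema.)"""
    return {
        "concepts": sorted(full_description["concepts"]),
        "location": node_id,
    }

# A (hypothetical) full description of one shared information item:
full = {
    "title": "Casualty list, carriage 3",
    "format": "text/xml",
    "created": "2008-01-14T10:32:00",
    "concepts": {"casualty", "triage", "carriage"},
    "body_size_bytes": 18432,
}

extract = make_extract(full, node_id="medic-07")
# Only the extract is propagated network-wide; the full description and
# the data itself stay on the owning node until explicitly requested.
```

Propagating the small extract costs far less bandwidth and energy than propagating the full description, while still letting every node learn which concepts are available and where.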

The participants in these operations belong to different organisations that may use different data models, standards, languages and vocabularies, and information sharing can take place both within and across organisational borders. We therefore need solutions supporting both intra- and inter-organisational information exchange, as well as standardised formats for such exchange. This problem can be partly addressed by the organisations exchanging agreed-upon standards, vocabularies and policies prior to the rescue and emergency situation, but mapping between vocabularies may still be needed at run-time, not least because some devices may have limited storage space.
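Run-time mapping between organisational vocabularies can be sketched as a lookup via shared concepts agreed upon a priori. The organisations, terms and concepts below are invented for the example.

```python
# Illustrative sketch: each organisation maps its local terms to shared
# concepts agreed upon before the operation, so a term used by one
# organisation can be translated into another organisation's term.

shared_concepts = {
    "police": {"victim": "casualty", "unit": "team"},
    "medical": {"patient": "casualty", "crew": "team"},
}

def translate(term, from_org, to_org):
    """Map a local term to the shared concept, then to the target
    organisation's local term; returns None if no mapping exists."""
    concept = shared_concepts[from_org].get(term)
    if concept is None:
        return None
    for local_term, c in shared_concepts[to_org].items():
        if c == concept:
            return local_term
    return None

mapped = translate("victim", "police", "medical")
# The police term 'victim' maps, via the shared concept 'casualty',
# to the medical term 'patient'.
```

Keeping only the mapping tables on small devices, rather than full ontologies, is one way to respect their limited storage space.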

The information to be shared is both heterogeneous and dynamic: with the exception of what may have been exchanged among the organisations prior to the operation, the information is continually changed through additions and updates during the operation, making dynamic update an important issue. Together with the unstable availability of information resources, this may cause frequent updates, increasing communication needs and raising issues of consistency. Each node can only keep a partial view of what is available globally; consequently, it is not possible to achieve an accurate, all-inclusive view of all information available through the network, nor can all nodes have identical views of what is available for sharing. By utilising ontologies in dynamic updates, the organisation of the rescue operation can be accommodated, and the dissemination of the most important information can be improved. As several organisations are involved, there typically exists a number of cross-organisational procedures and policies supporting cooperation and integration; these can be described in an ontology and utilised to improve update procedures. Better dissemination of the most important information enables updates of high priority to get through first, which is essential in a rescue operation, where distributing urgent information to participants as fast and efficiently as possible is of vital importance. To support this, update priorities reflecting rescue operation policies and information importance can be used; this may also alleviate some of the limitations of scarce resources in this scenario.
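Priority-ordered dissemination of updates can be sketched with a simple priority queue. The priority values and update texts are invented; in the thesis's setting the priorities would be derived from ontology-described policies.

```python
import heapq

class UpdateQueue:
    """Sketch of priority-ordered update dissemination: updates with
    higher rescue-operation priority are propagated first."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # preserves FIFO order among equal priorities

    def add(self, priority, update):
        # heapq is a min-heap, so negate priority to pop highest first.
        heapq.heappush(self._heap, (-priority, self._counter, update))
        self._counter += 1

    def next_update(self):
        return heapq.heappop(self._heap)[2]

q = UpdateQueue()
q.add(1, "resource inventory changed")
q.add(3, "new casualty registered (Acute/Red)")
q.add(2, "task reassignment")
first = q.next_update()
# The highest-priority (Acute/Red) update is disseminated first.
```

Under bandwidth pressure, a node draining such a queue naturally spends its scarce transmission opportunities on the most urgent information.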

Traditional data management, which mainly targets the storage and distribution of data, does not provide adequate solutions for handling the semantic heterogeneity caused by the presence of different organisations and domains. To solve the problems of sharing knowledge in heterogeneous environments, we have to turn to metadata and knowledge management: ontologies can be used to share vocabularies and semantic contexts between users, domains and organisations, and metadata models can describe the information items and resources to be shared. However, work on related problems relies to a great extent on stationary network servers being available to provide the necessary services, which is not possible given the limitations and conditions of our application scenario. Existing solutions in metadata and knowledge management thus do not meet the challenges of rescue and emergency operations or the limitations of using Sparse MANETs as infrastructure.

From the above description it is clear that to find solutions that can take into consideration all aspects of these issues with respect to our application scenario and infrastructure, we have to look at the intersection of three areas/domains: Knowledge Management and Representation, Sparse Mobile Ad-Hoc Networks, and Information Sharing, as shown in Figure 1.1.


Figure 1.1 The three context areas for this work.

1.3 Claims and Contributions
To achieve network-wide information sharing under the conditions and limitations given in the problem description, this thesis provides solutions supporting the following claims.

Main Claim: Information overload has to be avoided by establishing who needs what information. This thesis shows how this can be achieved by utilising technologies from metadata and knowledge management.

From the main claim, we derive the following sub-claims:

Claim 1: Information overload can be handled through the use of filtering and personalisation. This thesis shows how this can be done using profiles and context descriptions, linking relevant information to users in relation to a given context and appropriate (user and device) profiles defined in an ontology. Claim 1 is addressed in Chapters 4 and 6.

Claim 2: Vocabulary sharing or mapping has to be enabled to allow cross-organisational information sharing. This thesis shows how this can be supported through a combination of context modelling and ontologies, together with our multi-layered data dictionary approach. Claim 2 is addressed in Chapter 4.

Claim 3: Efficient metadata management is essential in a solution for information sharing in resource-limited environments. This thesis shows how this can be achieved through a three-layered architecture for efficient metadata management and our solution for dynamic updates. Claim 3 is addressed in Chapter 5. Note that our use of the term ‘efficient’ refers to making the best use of available resources for information sharing, in accordance with the definition given in the Oxford Dictionary of English (2nd edition): “Efficient [...] applies to someone or something able to produce results with the minimum expense or effort, as a result of good organization or good design and making the best use of available resources.”

Claim 4: Ontologies can be utilised in dynamic update to accommodate both the organisation of the rescue operation and the dissemination of updates. This thesis shows how this is achieved through a combination of profile ontologies, context modelling and priorities in our architecture for metadata management. Claim 4 is addressed in Chapter 6.

Claim 5: Sharing metadata about what information is available and where it can be found is essential for efficient knowledge and information sharing. This implies that sharing information about which nodes hold what information, i.e., ‘who knows what’, is as important for sharing knowledge in a resource-weak environment as sharing the actual information itself. This thesis shows how this can be solved by exchanging concept terms linked to the location of the related information. Claim 5 is addressed in Chapters 4 and 5.

The contributions of this thesis are (1) a three-layered architecture for efficient metadata management, including the design of a Data Dictionary Manager; (2) an approach to ontology-based dynamic update; and (3) a high-level design of a Knowledge Manager component targeted at the challenges of information and knowledge sharing in dynamic and heterogeneous environments.

1.4 Approach and Method
The work in this thesis has been conducted in the context of the Ad-Hoc InfoWare project. The problems have been approached by studying (public) reports from rescue and emergency operations and exercises, background literature in the three relevant areas, and related work on knowledge management and information sharing in MANETs. Based on this background study, we have derived a set of requirements that has guided the design of the Knowledge Manager (KM) and its sub-components presented in this thesis. As proof of concept, part of the KM design has been implemented, and an example shows how the whole KM component would be used in a real rescue and emergency situation. The implementation focuses on metadata management, i.e., the Data Dictionary Manager component and metadata exchange, which is a central function in sharing information in this scenario. To show how ontologies can be utilised in dynamic updates, an example ontology has been developed. The questions asked in relation to resource-efficient metadata management, the main focus of this thesis, are as follows:

• How can we achieve resource-efficient sharing of knowledge about what information/resources are available in the network?

• How can we keep track of the availability of these resources?

• How can dynamic updates be handled in an efficient way?

• What data modelling technology is suitable in our application scenario?

The work has been evaluated with respect to how the requirements have been fulfilled, and in relation to the thesis claims.


1.5 Outline
The thesis is organised as follows.

Chapter 2 – Application Scenario and Requirements Analysis. This chapter presents the context for the work in this thesis: the application scenario and a six-phase scenario for rescue operations, the issues and architecture of the Ad-Hoc InfoWare project, and the overall requirements analysis for information sharing in rescue and emergency situations using Sparse MANETs.

Chapter 3 – Knowledge Management: Background and Related Work. This chapter presents background literature and related work that form the basis for the work in this thesis.

Chapter 4 – Knowledge Manager High Level Design. In this chapter, we present a high-level design of the Knowledge Manager. Although all claims in this thesis are addressed by the Knowledge Manager, the main focus in this chapter is on Claims 1, 2 and 5.

Chapter 5 – Metadata Management: A Three-Layered Approach. In Chapter 5 we present our three-layered approach to efficient metadata management, addressed from three perspectives: levels of information, realisation, and data modelling technology. The Data Dictionary Manager design and our approach to metadata exchange are also presented here. The claims we address are Claims 3 and 5.

Chapter 6 – Ontology Based Dynamic Updates. In this chapter we describe our approach to ontology-based dynamic updates. The motivation for using ontologies in updates is presented, together with our proposed solution and an example ontology for this purpose. The claims addressed are Claims 1 and 4.

Chapter 7 – Using the Knowledge Manager in a Real Scenario. This chapter gives a detailed scenario example of how our design would function at run-time in a rescue and emergency operation. The aim is to show how the solutions presented in Chapters 4, 5 and 6 fit together into a coherent whole supporting personnel in efficient information sharing in a rescue and emergency situation.

Chapter 8 – Implementation. This chapter presents an implementation of the Knowledge Manager demonstrating central parts of our approach to metadata management. The focus has been on the Data Dictionary Manager component.

Chapter 9 – Evaluation. In this chapter, we review the contributions and central concepts, and evaluate the contributions in relation to the requirements and claims.


Chapter 10 – Conclusion. In this chapter we conclude the thesis with a summary of its contributions and a critical assessment of the work, before exploring possible future work.

Appendix A contains information about modifications to the DBMS used in the example implementation in Chapter 8. Appendix B contains further details for the examples given in Chapter 7. Appendix C contains the full ontologies for Chapter 6.

1.6 Publications
In this section, we list the peer-reviewed papers and a technical report published as part of this thesis and the Ad-Hoc InfoWare project.

Refereed Publications
This paper was published early in the project and describes the kind of services a middleware used in a rescue operation should provide:

Middleware Services for Information Sharing in Mobile Ad-hoc Networks
Plagemann, T., Andersson, J., Drugan, O., Goebel, V., Griwodz, C., Halvorsen, P., Munthe-Kaas, E., Puzar, M., Sanderson, N., and Stemland Skjelsvik, K.
IFIP World Computer Congress (WCC2004), Workshop on Challenges of Mobility, August 2004.

In this paper, we introduce the overall Knowledge Manager design:

Knowledge Management in Mobile Ad-hoc Networks for Rescue Scenarios
Sanderson, N., Goebel, V., and Munthe-Kaas, E.
Workshop on Semantic Web Technology for Mobile and Ubiquitous Applications, 3rd International Semantic Web Conference (ISWC2004), November 2004.

In this paper, our layered approach to metadata management is presented together with use cases illustrating information item registration and metadata exchange:

Metadata Management for Ad-Hoc InfoWare - A Rescue and Emergency Use Case for Mobile Ad-Hoc Scenarios
Sanderson, N., Goebel, V., and Munthe-Kaas, E.
International Conference on Ontologies, Databases and Applications of Semantics (ODBASE05), in: Meersman, R. and Tari, Z. (Eds.), CoopIS/DOA/ODBASE 2005, LNCS 3761, pp. 1365-1380, November 2005.

This paper presents ontology based dynamic updates together with an example profile based rescue scenario ontology used for this purpose:

Ontology Based Dynamic Updates in Sparse Mobile Ad-hoc Networks for Rescue Scenarios
Sanderson, N., Goebel, V., and Munthe-Kaas, E.
International Workshop on Managing Context Information and Semantics in Mobile Environments (MCISME 2006), in conjunction with the 7th International Conference on Mobile Data Management (MDM), Nara, Japan, May 2006.


This paper describes how subscription language independence is obtained in DENS and discusses why it is useful in rescue operations. It describes and illustrates how the KM contributes to multiple subscription language support through vocabulary mapping/synonym lookup, relevant for Chapter 4 in this thesis:

Supporting Multiple Subscription Languages by a Single Event Notification Overlay in Sparse MANETs
Skjelsvik, K. S., Lekova, A., Goebel, V., Munthe-Kaas, E., Plagemann, T., and Sanderson, N.
Proceedings of the ACM MobiDE 2006 Workshop, June 2006.

We describe the overall Ad-Hoc InfoWare architecture, motivated by rescue operation requirements, in this chapter of the book Mobile Middleware:

Mobile Middleware for Rescue and Emergency Scenarios (book chapter)
Munthe-Kaas, E., Drugan, O., Goebel, V., Plagemann, T., Puzar, M., Sanderson, N., and Skjelsvik, K.
In: Bellavista, P., Corradi, A. (Eds.), Mobile Middleware, CRC Press, ISBN 0-8493-3833-6, September 2006.

Technical Report
In this report, we describe possible rescue operation scenarios and the middleware requirements we can deduce from them. We also present an overview of the Ad-Hoc InfoWare architecture:

Developing Mobile Middleware - An Analysis of Rescue and Emergency Operations
Sanderson, N. C., Stemland Skjelsvik, K., Drugan, O. V., Puzar, M., Goebel, V., Munthe-Kaas, E., and Plagemann, T.
Research Report #358, ISBN 82-7368-316-8, ISSN 0806-3036, June 2007.


Chapter 2 – Application Scenario and Requirements Analysis

The purpose of this chapter is to describe the application scenario and requirements that form the basis for the work in this thesis. The application scenario is rescue and emergency operations where information is shared using mobile devices connected in a Sparse MANET. The contents of this chapter relate to the overall issues of the Ad-Hoc InfoWare project, although the focus here is on the parts most relevant for this thesis. The content is partly based on joint work in the project; previously published work includes [Sanderson2007], [Munthe-Kaas2006], and [Plagemann2004].

Rescue and emergency operations are characterised by very hectic and dynamic environments, with inherent heterogeneity in the variety of participating organisations, the types of operations, and the mobile devices used. To offer useful solutions in such a scenario, it is important to analyse the requirements. As a basis for the following application scenario description, we have studied public reports from rescue operations in Norway [NOU2000:30] [NOU2001:9] [NOU2001:31] and a presentation of a rescue operation exercise [AHIW2003].

This chapter is structured as follows. Section 2.1 contains the application scenario description: the characteristics of rescue and emergency operations, possible applications, and an example of how search and rescue operations are structured and organised in Norway. We also describe an example scenario of a railway accident, and present six general rescue operation phases for this kind of application scenario. In Section 2.2, an overall requirements analysis for middleware solutions for sharing information in Sparse MANETs is presented. The Ad-Hoc InfoWare project and its middleware solution for information sharing in Sparse MANETs are described in Section 2.3.

2.1 Application Scenario
In this section we describe the characteristics of the chosen application scenario. We first look at the different organisations that may be involved in a rescue and emergency operation and the overall responsibilities each type of organisation has in the operation. We then examine the structure and organisation of the rescue operation itself, the different types of rescue operations, the rescue operation roles and their responsibilities, and the lines of reporting. The latter is significant as it may influence the information flow during the operation. Finally, we present a set of six rescue scenario phases (a result of joint work in the Ad-Hoc InfoWare project), describing the life cycle of a rescue operation in the context of using Sparse MANETs as infrastructure.

2.1.1 Organisations Involved
Typically, several organisations are involved in the operation, some of which may be voluntary organisations. Table 2.1 lists the organisations that typically participate in a rescue operation organised by the Norwegian Search and Rescue Service (SAR) [NorwSAR]. The list is taken from the report on a serious train accident in Norway in January 2000 [NOU2000:30].

Table 2.1 Overview of participating personnel involved in the Åsta accident.

Participating personnel                    Number
Police and bailiffs                            73
Fire department/brigade                        70
Medical and paramedics personnel              150
Air ambulance and helicopter personnel         13
The Norwegian Civil Defence                     5
The Norwegian Armed Forces                    203
The Norwegian Red Cross                        29
Crisis and care teams                          43
The Church of Norway                           12
Norwegian Rescue Dogs                           2
Others (civilian volunteers)                    4
Total                                     ca. 600

Source: Police/Kripos, adapted from [NOU2000:30].

Each organisation has its own set of rescue operation procedures and guidelines which it must follow. The incentive to cooperate in this situation is very strong, since all participants share the same overall goal: to rescue people and to limit the impact of the disaster. Additionally, cross-organisational interaction procedures involving governmental organisations and other authorities have been defined. These procedures can be expected to include rules on how to establish a command and coordination structure for the scene of the accident. The coordination structure to some extent shapes the flow of information on the scene. It is, however, important to note that some deviations in the information flow will always be present, due to cooperation among team members and on-the-spot decision making during the operation. Thus, spontaneous communication and information flow between rescue personnel during the operation should not be limited by a predefined structure, as personnel are likely to perceive this as a hindrance and a source of frustration. It has been found that the feeling of not being able to access (in time) existing and possibly helpful information reduces the state of cognitive absorption and creates a feeling of little effective control. The threat rigidity syndrome – additional stress caused by loss of control of the situation or reduced understanding of reality – has been observed in the emergency field [Carver2007].

2.1.2 Rescue Operation Organisation
Governmental authorities give cross-organisational guidelines regarding the operational structure and organisation of rescue and emergency operations, and the resulting procedures and policies imply rules regarding responsibility and the reporting chains in the organisations. As an example of the operational structure of rescue operations, we give a very brief description of the guidelines issued by the Norwegian authorities for the Norwegian SAR [NorwSAR] for land rescue operations, and present an example model. Both the description and the model are restricted to what is relevant for our example.

Of the organisations typically involved in a Norwegian rescue operation, the police have the main responsibility for rescue site coordination and management, and for calling in other governmental departments and services as necessary. They are also responsible for communication with the county governor, establishing and maintaining mutual understanding and cooperation. The Norwegian military forces are required to assist the police. Fire brigades have special equipment for rescue work in tunnels. The person responsible for handling the accident is also responsible for informing relatives, the media, and central authorities. Rescue operations are coordinated by the (Joint) Rescue Coordination Centre, and a Rescue Sub-Centre is set up to handle regional rescue operation management. Helicopter resources and pre-hospital services are expected to be requested directly by the rescue operation management, and the county is expected to contribute several supplementary special services.

Land operations are usually handled by the Rescue Sub-Centre (RSC), which has regional responsibility and appoints the on-scene coordinator/commander (OSC) when the operation is initiated. For larger operations there is usually an on-scene coordination team consisting of a police officer in charge of public order, a fire officer in charge of fire control, and a medical officer in charge of medical treatment. These officers report directly to the OSC, who leads the on-scene coordination team (OSC-team). There may also be a level of team leaders in each organisation (e.g., medical) reporting to the officers in charge (OiC). Operational command is organised on three levels: the OSC provides operational direction and coordination on the scene; the RSC coordinates local resources; and a (Joint) Rescue Coordination Centre (RCC) provides top-level coordination of the entire operation. In most rescue operations, only the first two levels are needed.

The main role and responsibility of the OSC is to coordinate resources and support at the site, and to accommodate the efforts of personnel from the participating organisations. It is of vital importance that the OSC at all times has a full overview of all available resources contributed by the participating teams and organisations. The main role of the RSC is to assist and relieve the OSC-team, and to coordinate resources, e.g., when there is more than one rescue operation in the area. The role of the RCC is mainly to monitor the operation and give advice. Figure 2.1 shows an example model illustrating the organisational structure of the Norwegian SAR, with the role hierarchy and lines of reporting.
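The reporting lines described above form a simple hierarchy that can be modelled as a data structure. The role names are taken from the text; the representation itself is a sketch, not part of the SAR guidelines.

```python
# Illustrative model of the reporting lines: team leaders report to their
# organisation's officer in charge (OiC), the OiCs report to the OSC,
# the OSC reports to the RSC, and the RSC reports to the RCC.

reports_to = {
    "medical team leader": "medical OiC",
    "medical OiC": "OSC",
    "fire OiC": "OSC",
    "police OiC": "OSC",
    "OSC": "RSC",
    "RSC": "RCC",
}

def chain_of_command(role):
    """Follow the reporting line from a role up to the top level."""
    chain = [role]
    while role in reports_to:
        role = reports_to[role]
        chain.append(role)
    return chain

chain = chain_of_command("medical team leader")
# ['medical team leader', 'medical OiC', 'OSC', 'RSC', 'RCC']
```

Such an explicit model of the reporting structure is one way the information flow of the operation could be reflected in, e.g., update priorities or filtering rules.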


Figure 2.1 Organisation and structure in rescue operations.

Norwegian health personnel follow directives for evaluating patients/casualties and taking appropriate action as given by the Norwegian Index for Medical Aid [NIME1994]. The emergency codes are Acute (Red), Urgent (Yellow), and Regular (Green); it is common to use coloured tags on site during rescue operations. The NACA scale [Vaardal2005] – an eight-level Severity of Injury or Illness Index for grading injuries or diseases – is another way of categorising casualties, ranging from 0 (no injury or disease) to 7 (lethal injuries or diseases).
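As a minimal illustration, the emergency codes and NACA grades above could be represented as follows; the class and field names are hypothetical and not part of the Ad-Hoc InfoWare project:

```python
from dataclasses import dataclass
from enum import Enum

class EmergencyCode(Enum):
    """Emergency codes from the Norwegian Index for Medical Aid."""
    ACUTE = "red"      # immediate treatment required
    URGENT = "yellow"  # treatment needed, but not immediate
    REGULAR = "green"  # no or minor injury

@dataclass
class Casualty:
    """A registered casualty with a triage colour tag and a NACA grade."""
    casualty_id: str
    code: EmergencyCode
    naca: int  # 0 (no injury or disease) .. 7 (lethal injuries or diseases)

    def __post_init__(self):
        if not 0 <= self.naca <= 7:
            raise ValueError("NACA grade must be in the range 0-7")

# Example: a casualty tagged red (acute) with a severe injury
patient = Casualty("P-014", EmergencyCode.ACUTE, naca=5)
```

A registration application on a medic's device could build records of this kind before they are shared in the network.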

2.1.3 Example Rescue Scenario: Railway Accident

The scenario describes a medium-sized railway accident of limited impact, set in a remote area with very poor infrastructure, difficult weather conditions, and difficult access. It is loosely modelled on reports from an actual railway accident that took place in Norway in 2000 [NOU2000:30]. In Chapter 7 we analyse this example scenario in detail according to the six rescue scenario phases (presented in the next section), and this analysis is used as the basis for a large example. The example scenario has previously been published in [Sanderson2007].

Scenario Description

A serious railway accident in inaccessible terrain can be caused by, e.g., landslides/rockslides, technical failure, sabotage, or collisions. If the accident is at a mountain pass, it will normally be in an area with limited infrastructure and scattered buildings. As every train can carry close to 500 passengers, such an incident at an inaccessible mountain pass will cause an acute, extraordinary need for transportation, and an immediate need for transporting injured passengers from the accident site to a collection site for pre-hospital treatment, or directly to a hospital. The rescue service has established routines for this, and there are helicopters for airlifting patients out of inaccessible areas. In addition, there will be a need for transporting other (uninjured) passengers to an appropriate collection place, and alternative transport bypassing the accident site has to be established for a period. There may also be requirements for assistance in delivery of equipment to the accident site after the evacuation phase. Relevant organisations and governmental departments have certain responsibilities and roles in connection with limiting the damage of accidents, as described earlier. The consequences of such an accident will affect both health and the environment; there may be a large number of casualties and local pollution, e.g., oil spills.

The railway accident in this scenario is located on the Bergensbanen railway in Hordaland county, in a tunnel at a mountain pass, e.g., in or close to Raundalen. The area has weak infrastructure and a need for special services. Jernbaneverket (the Norwegian National Rail Administration) is responsible for the national railway network, which is 4087 km long with a total of 704 tunnels, of which the longest is Romeriksporten (14 580 m) [JBV] [Westrheim2006]. The Bergensbanen (Oslo – Bergen) is 482 km long and has 155 tunnels between Hønefoss and Bergen (372 km). Yearly, about 600 000 passengers are transported across the Hardangervidda.

Accident Description

The accident is caused by a rockslide just outside a tunnel; large blocks of stone are lying on the tracks. The rockslide has weakened the ground beneath the tracks, and the tracks are partly broken. The temperature is minus 10 degrees Celsius, and there is deep snow in the area. A nearby mountain lodge can be used for collecting evacuated train passengers. The lodge can be reached by a mountain road, which can be opened for access traffic. The railway tracks cannot be used due to the structural damage, the danger of repeated rockslides, and the fact that they are blocked. A train between Oslo and Bergen, carrying about 400 passengers, runs into the rocks as it comes out of a small tunnel; it derails, and the train engine and some of the carriages end up lying on their sides. The train is partly inside the tunnel. The locomotive has gone off the tracks and is lying on its side, smashed. There are a number of casualties in the train. Some people in shock have walked out of the tunnel themselves. Others are still inside the carriages, some trapped under luggage and train parts due to the crash. One of the train carriages is completely crushed. The train has a diesel locomotive. Diesel cannot catch fire unless it is already preheated above a certain temperature (which depends on the diesel, roughly 60-70 degrees Celsius). Unless exposed to an open flame or spark, it will not ignite until it reaches a temperature of 235-245 degrees Celsius, at which point it lights immediately without any spark or open flame [NOU2000:30]. This means that the diesel can ignite in this scenario either if a nearby fire heats it up to this critical temperature, or if preheated diesel catches fire from an open flame or sparks from a nearby fire or explosion.

After the accident, the train driver follows the appropriate procedure and reports the incident and its location to the train control centre. The person on duty at the control centre contacts the ambulance service and the fire brigade through an emergency call, starts the internal emergency procedures for Jernbaneverket [Westrheim2006], and arranges for other trains on the same track to be kept waiting, stopped, or redirected if possible. The emergency central is alerted, and a rescue operation is initiated. The emergency central and participating organisations start gathering information about the area and available resources, e.g., maps of the area, weather conditions, available personnel and equipment, etc. All personnel get the relevant parts of this information on their devices before or on leaving for the accident (briefing personnel). The incident is reported to the rescue service. The RCC, together with the RSC, launches a rescue operation to evacuate those in need of acute medical treatment. Due to the weather conditions and terrain, the remaining uninjured or slightly injured persons are also in acute danger and need to be taken care of by the rescue services.


The leader of the team arriving first takes the role of temporary rescue operation leader or on-site commander (OSC). Once the police arrive, the highest-ranking police officer takes over this role. The OSC sets up a place of command and tries to get an overview of the situation based on information from the train control centre and the train driver, e.g., the number of carriages, passengers, any goods, etc. The OSC coordinates equipment and personnel as they arrive. When the fire brigade arrives, fire fighters go inside the tunnel; they wear sensors that monitor their heart rate and temperature.

All personnel are involved in evacuating people from the carriages, and they are moved a safe distance away from the wreck. Medical personnel start evaluating the medical state of each person, register all persons, and mark each with a colour tag showing the degree of injury and the need for acute treatment (red: acute/immediate, yellow: not immediate, green: no injury). They are then transported to the mountain lodge for further treatment. Sensors are placed on patients in a stable condition to monitor their state (e.g., heart rate, oxygen flow, blood pressure, temperature, etc.). Team leaders receive information about the location of their subordinates. They may have received a task list and can check off the tasks as they are completed. This information is then sent to the work leader. The police start gathering evidence to investigate the causes of the accident.

Effects on Rescue Operation – Description of Area, Accessibility, Dangers, etc.

The tunnel, rocks, and train carriages hinder communication. The mountain pass is a difficult-to-access area, which puts extra demands and limitations on the rescue operation, personnel, and equipment. The low temperature and the deep snow in the accident area create extreme conditions, which, in addition to their implications for the rescue operation, may also affect how well the devices function. There is also a danger of repeated rockslides in the area. The lack of accessibility by road means that special vehicles – snow scooters or snowmobiles and helicopters – are needed for evacuation and other transport in and out of the actual accident area (both to the collection place and directly to hospital). As noted above, there is a high risk of fire, especially as the crash may have caused diesel from the locomotive to spread outside of the tank; this may have created a kind of "diesel fog", consisting of very fine diesel drops and oxygen, which may ignite very easily. As mentioned, diesel itself needs to reach a certain temperature before it ignites. Preventing fire from spreading close to the locomotive is therefore very important. Sensors can be placed around the diesel tank and locomotive to monitor the temperature, so that personnel can be alerted in case of increased danger of fire or explosion.

Rescue Operation Communication and Tasks

Sharing information in this scenario may be very useful for personnel, and a Sparse MANET can provide the needed infrastructure so that devices can communicate. If a Sparse MANET is to be used, nodes and routing daemons will have to be started up at the site to set up the network. The devices connect on arrival and become nodes in the network. Nodes inside the tunnel cannot communicate with nodes outside the tunnel, and obstacles limit the communication range inside the tunnel. In addition, due to the activity on site and possibly poor infrastructure in the mountain area, there may be frequent network partitions, and a node may be disconnected from the network for periods.

Possible sources of information include mobile devices carried by personnel, stationary devices, PCs in ambulances and rescue helicopters, and sensors. Another possible information source is an Internet gateway. The information from these sources can be shared, but there are cases where sharing sensitive information is not desired. As sensitive information, we can classify medical records of injured persons, environmental data, layouts of buildings and installations, information about dangerous goods, collected evidence, available resources, and status reports.

There are a number of possible communication flows in this scenario; we list three examples here. The first is among team members from the same organisation, e.g., between doctors sharing registration and medical information about patients they share responsibility for, or fire fighters (devices) sharing temperature information from sensors in a monitored area. The second is among members of task-oriented ad-hoc teams (cross-organisational or not), for instance, in a railway accident, a team assigned to go through a certain train carriage and report the situation, consisting of a paramedic, a police officer, a fire fighter, and possibly rescue dogs (to find people trapped or hidden in the wreck). The third is communication between different levels in the rescue operation organisational hierarchy, e.g., between RSC and OSC, team members and team leaders, and team leaders and OiCs.

The landscape where this accident happened has a big impact on the way the topology of the communication network is shaped; specifically, it can create partitioned networks such as:

• Inside the tunnel, formed by devices carried by the rescue personnel and sensors placed in the tunnel.

• At the un-blocked end of the tunnel, where most of the teams may be situated and active. For example, first aid might be given to the injured here, the on-site command centre may be situated here, or resources needed during the rescue operation may be stored here.

• At the blocked end of the tunnel, where personnel try to clear the tunnel opening.

• At the lodge, where a transportation hub is established for taking the injured to the hospital and distributing resources for the accident scene.

In such a network, data and services must be made available to all the possible network partitions. This requires finding the best way of using the available resources on the nodes to run services that help the rescue intervention. In addition, the mobility of the nodes must be used in order to transmit data between the network partitions.
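The partitioning described above can be sketched as computing connected components over whatever links are currently observable; the node names and the snapshot below are invented for illustration:

```python
from collections import defaultdict

def network_partitions(nodes, links):
    """Compute connected components (network partitions) from observed links."""
    adjacency = defaultdict(set)
    for a, b in links:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, partitions = set(), []
    for node in nodes:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            current = stack.pop()
            if current in component:
                continue
            component.add(current)
            stack.extend(adjacency[current] - component)
        seen |= component
        partitions.append(component)
    return partitions

# Hypothetical snapshot: two devices in the tunnel, two at the un-blocked
# end, one at the blocked end, and one at the lodge.
nodes = ["t1", "t2", "u1", "u2", "b1", "l1"]
links = [("t1", "t2"), ("u1", "u2")]
parts = network_partitions(nodes, links)  # four separate partitions
```

A middleware service with this kind of view could decide which partitions lack a given service or data replica, although in practice no single node sees the global topology.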

2.1.4 Rescue Scenario Phases

Rescue scenarios may differ according to a number of factors, e.g., the type of rescue operation, the geographical area of the accident, available resources, etc. There are, however, some commonalities that can be grouped into a set of rescue scenario phases. In the Ad-Hoc InfoWare project, we have identified six rescue scenario phases, which are described in the following. This description was previously published in [Sanderson2007], [Munthe-Kaas2006], and [Sanderson2005].

Phase 1 – A priori: This phase is before an accident has happened, when the relevant organisations – in cooperation with the authorities – exchange information on data formats and shared vocabularies, and make agreements on procedures and working methods. Required certificates would be installed in this phase, and applications can be installed and run so as to allow completion of an initial self-configuration phase. A communication and knowledge environment tailored to relevant applications can be prepared by the middleware, and data replication strategies chosen. In a context-aware system, contexts reflecting different scenarios can be prepared, and group memberships based on user profiles can be set up.

Phase 2 – Briefing: This phase starts once the incident has been reported. The briefing involves gathering information about the accident, e.g., weather, location, number of people involved, and facilities in the area. Some preliminary decisions about rescue procedures and working methods are also made at this stage. Based on information gathered during this phase, applications can be configured further, security levels chosen, and, if applicable, relevant rescue contexts and profiles put in force.

Phase 3 – Bootstrapping the network: This phase takes place at the rescue site, and involves devices joining and registering as nodes in the network on arrival. In addition, the appointing of rescue leaders takes place in this phase. By preparing communication and taking care of security restrictions in force, the middleware can improve the working environment of the applications.

Phase 4 – Running of the network: This is the main phase during the rescue operation. Events that may affect the middleware services include nodes joining and leaving the network, and network partitioning and merging. Information is collected, exchanged, and distributed. There may be changes in the roles different personnel have in the rescue operation, e.g., a change of rescue site leader. New organisations and personnel may arrive at and leave the rescue site, and new ad-hoc, task-oriented groups may form, possibly involving people from different organisations. Applications communicate about available resources and capabilities of the nodes in the network, using whatever knowledge is provided by the middleware. An application can adapt to changes in available resources as the network evolves, query for more data or information as it becomes available, and adjust its configuration and behaviour accordingly. Computing resources, processing environments, and applications situated at neighbours can be utilised, using resource information provided by the middleware and obeying accepted policies for resource sharing. Replicas and proxies can be placed at strategic nodes in the network, and nodes can receive event notifications based on relevance and priority. As nodes join and leave the network, the middleware can keep track of available resources and adjust its communication and knowledge environment accordingly.

Phase 5 – Closing of the network: At the end of the rescue operation all services must be terminated. Applications can adapt to the closing of the network by acting on received information about degradation of the capabilities and resources of the network.

Phase 6 – Post processing: After the rescue operation, operation specific data, e.g., resource use, user movements, and how and what type of information was shared, may be analysed to gain knowledge for future situations. Depending on the nature of the application, it may have gathered statistical or other information for post scenario analysis or for future use.

Fixed networks cannot be relied on during the rescue operation itself. In the opening phases (phases 1-2), there are, however, no such restrictions, which gives opportunities for preparations that can, to some degree, compensate for the lack of resources during the rescue operation. An account of KM actions and tasks related to each phase is detailed in Chapter 7, in relation to a larger example rescue scenario.
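As a minimal sketch, the six phases can be modelled as a strictly ordered progression; the enum names and the `advance` helper are illustrative, not project code:

```python
from enum import Enum

class Phase(Enum):
    """The six rescue scenario phases identified in the Ad-Hoc InfoWare project."""
    A_PRIORI = 1
    BRIEFING = 2
    BOOTSTRAPPING = 3
    RUNNING = 4
    CLOSING = 5
    POST_PROCESSING = 6

# The phases follow each other strictly in order.
NEXT_PHASE = {
    Phase.A_PRIORI: Phase.BRIEFING,
    Phase.BRIEFING: Phase.BOOTSTRAPPING,
    Phase.BOOTSTRAPPING: Phase.RUNNING,
    Phase.RUNNING: Phase.CLOSING,
    Phase.CLOSING: Phase.POST_PROCESSING,
}

def advance(phase):
    """Return the next phase, or None once post-processing is reached."""
    return NEXT_PHASE.get(phase)
```

A middleware service could use such a phase indicator to switch, for example, from the preparation behaviour of phases 1-2 (fixed networks available) to the restricted behaviour of phases 3-5.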

2.2 Overall Requirements Analysis

In this section, we discuss various issues from the application scenarios that influence the development of the middleware services. The challenges and requirements analysis are presented from five different perspectives: organisational, security, network, communication, and application development constraints. A more extensive version of the overall requirements analysis presented here has previously been published in [Sanderson2007].


2.2.1 Organisational Intra- and Inter-operability

Rescue operations are characterised by cooperating personnel from a number of participating organisations, and by a hectic and dynamic environment. Each organisation handles different aspects of the operation, and needs different information, or possibly disparate views of the same information. We have seen that a rescue and emergency operation is an organised venture, with a certain structure of responsibility and reporting, and that specialised procedures and policies are followed. For the coordination and organisation of the operation, it would be beneficial if applications could access available information shared automatically among the participants and across organisations. Examples of such applications mentioned in the scenarios are dispatching of personnel and equipment, on-site identification of passengers, and registering casualties for medical treatment.

To accommodate the heterogeneity of the organisations involved, the information should be presented in a way that all organisations can understand. This implies supporting functionality similar to that of a high-level distributed database system: querying available information and keeping track of what information is available in the network. The middleware must account for the different domain ontologies and standards that might be used by the organisations. A major challenge is to support information sharing across organisations such that they understand each other's structure and data descriptions. There are cross-organisational procedures involving governmental and other authorities, as well as internal procedures that each organisation has to follow in a rescue operation. Thus, there should be some support for both intra- and inter-organisational structure in the middleware. Contextual support may be beneficial, as contexts can be used for reflecting specific rescue procedures in force, supporting user role profiling and personalisation, device profiling, providing temporal and spatial information, movement patterns, etc.

The organisations involved, each covering different areas of expertise, may use different data models, structuring, and syntax, as well as domain ontologies appropriate to their respective areas. Thus, to achieve cross-organisational information sharing and interoperability, we need support for handling structural, syntactic, and semantic heterogeneity. The domain of distributed databases has solutions for structural and syntactic heterogeneity. For semantic heterogeneity, two issues need addressing: the first relates to understanding, and concerns mapping and translation between different vocabularies (and data models), for instance through using a common base minimal model; the second concerns the serialisation of the vocabularies in agreed-upon standard languages and syntax, e.g., XML and RDF.
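A minimal sketch of the first issue, assuming two invented organisational vocabularies that are both mapped into a common base minimal model (all term names are hypothetical):

```python
# Hypothetical term mappings from two organisational vocabularies
# into a shared base vocabulary.
MEDICAL_TO_BASE = {"Casualty": "Person", "TriageTag": "StatusLabel"}
POLICE_TO_BASE = {"Victim": "Person", "CaseMark": "StatusLabel"}

def translate(term, source_map, target_map):
    """Translate a term between vocabularies via the common base model."""
    base = source_map.get(term)
    if base is None:
        return None  # no mapping known; the term cannot be shared
    # Invert the target mapping: base term -> target term.
    inverse = {v: k for k, v in target_map.items()}
    return inverse.get(base)

# A police device asking what the medical team's "Casualty" means locally
# would obtain "Victim" from translate("Casualty", MEDICAL_TO_BASE, POLICE_TO_BASE).
```

In practice such mappings would live in the shared ontologies agreed upon in the a priori phase, and the serialisation issue would be handled by expressing them in, e.g., RDF.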

Organisations working towards common terminology and standards in the health sector include CEN/TC251 [CEN/TC251] (Europe) focusing on standards in areas including information models, terminology, security, and interoperability; HL7 [HL7] (USA) focusing on standards for the exchange, management and integration of electronic healthcare information; and KITH [KITH] (Norway) working on standards in areas including information security, message exchange, and terminology.

Work within the area of knowledge management has found that sharing knowledge in the form of metadata about where knowledge resides can often be as important as the original knowledge itself [Alavi2001], for instance as organisational knowledge maps, i.e., sharing the "what and where/who" of information available in an organisation. A rescue and emergency operation is organised with designated roles, lines of reporting, procedures, and a certain structure, and can as such be seen as an ad-hoc organisation created for the purpose of the operation; sharing this kind of metadata may therefore be beneficial for information sharing in rescue operations.
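The idea of an organisational knowledge map can be sketched as a simple registry of the "what and where/who" of information in the network; all topic and node names below are invented:

```python
class KnowledgeMap:
    """Sketch of an organisational knowledge map: which nodes hold
    which kind of information, without holding the information itself."""

    def __init__(self):
        self._entries = {}  # topic -> set of node ids holding that information

    def register(self, topic, node_id):
        """Record that a node holds information on a topic."""
        self._entries.setdefault(topic, set()).add(node_id)

    def where(self, topic):
        """Return the ids of the nodes that hold information on a topic."""
        return self._entries.get(topic, set())

kmap = KnowledgeMap()
kmap.register("patient-records", "medic-07")
kmap.register("area-maps", "osc-01")
```

Such a map is itself metadata, so it is much cheaper to replicate across partitions than the underlying information resources.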


2.2.2 Security Aspects

Use of wireless networking presents the system with a series of security-related challenges. The term security comprises four main categories: authenticity, integrity, confidentiality, and non-repudiation. Starting from the medium itself, one of its main characteristics is that anyone with a tuned wireless device can hear the traffic in the network, as well as inject traffic into it. Since neither of these two issues can be efficiently prevented in a rescue operation, they introduce problems into at least three of the security categories. For authenticity, mechanisms need to be present to ensure that only authorised devices can be part of the network. For integrity, it must be ensured that no one can manipulate the network traffic, e.g., by changing its contents before (re-)transmitting it. Related to confidentiality, some data must be kept confidential, i.e., not available to unauthorised recipients. Another issue, not directly related to data security but directly related to the efficiency of the network (and as such to the efficiency of the rescue operation itself), is signal jamming. Unlike wired networks, where a device needs physical access to the medium to perform any action, wireless networks are highly prone to such attacks, and there is no real solution other than locating the perpetrator and disabling the source of jamming. All these issues are considered external attacks, i.e., attacks coming from nodes that are not (or are not supposed to be) part of the network.

A quite different, and in some cases more dangerous, type of attack is the so-called internal attack. Such attacks come from nodes that have already been authenticated and are therefore considered legitimate members of the network, but that at some point have become compromised. This can happen if devices are stolen, or lost and then found by a malicious person, or, even worse, if their legitimate owner becomes compromised. Detecting and excluding such nodes is a much more difficult process and requires specialised ideas and solutions.

Beyond attacks, the system must take care of confidentiality within the network. Inter-organisational collaboration is one of the key functionalities in a rescue operation. Nevertheless, different organisations may have different security requirements and policies, and, in addition, not all data within the network should be seen by every member. Examples of such data are medical records, police records, and other personal or confidential information. Additional challenges are imposed by the different organisational structures and levels of confidentiality within the organisations themselves, which may also change dynamically.

2.2.3 Network Characteristics

Deployment of communication networks in the affected area must be quick, and should not make any assumptions about the available infrastructure in the area. Therefore, utilising MANETs formed by the wireless devices brought into the area and carried by the rescue personnel is the most promising approach for this application domain. However, the layout of the affected area, physical obstacles in the area, and the mobility of the network nodes might lead to frequent and/or long-term network partitions. We call this kind of network a Sparse MANET; it can be seen as a combination of a MANET and a Delay Tolerant Network. These are typically highly dynamic networks in terms of available communication partners, available network resources, connectivity, etc.

Sparse MANETs are relatively small wireless networks of limited range and short distances, using protocols such as IEEE 802.11 or Bluetooth. A fixed network or base station may not necessarily be available at the rescue site, as it may be situated in a remote area. Devices are part of the network only while they are within range of it. The devices are heterogeneous, ranging from small mobile phones to laptops, and of differing capabilities and resources; e.g., smaller devices may have very limited battery power, memory, and storage capacity. CPU, storage space, bandwidth, and battery power represent important resources. These networks are self-organising, decentralised systems, like peer-to-peer systems. They are dynamic; nodes move in and out of network range, and may be switched off. As there is no fixed topology, and the neighbourhood changes dynamically, there is no preset message routing table. Similarly, for information sharing and retrieval, there is typically no global catalogue or routing table for query routing, and query results may depend on the specific location and time of placing the query. Under high mobility there is no guarantee that a node will be able to access information on neighbouring nodes.

In general, cooperation among nodes in a Sparse MANET is not guaranteed, and incentives may be needed to ensure a sufficient level of cooperation and sharing. However, in rescue and emergency operations, all participants are working towards a common goal, so there is a strong incentive for sharing and cooperation, which can be utilised when developing solutions for information sharing in this environment.

Devices and Network Topology

In a train accident, as in our example scenario, injured passengers may be leaving the tunnel through both ends, in which case rescue personnel have to be at both ends to take care of them. Additionally, some rescue personnel might be in the middle of the tunnel to search for passengers and to extinguish any fire in the train. In this situation, the network might be split into three partitions, i.e., one at each end of the tunnel and one in the tunnel. Obviously, these network partitions must function independently as well as possible and provide the rescue personnel with access to all mission-critical services and information.

The end-user devices are very heterogeneous, ranging from high-end laptops to low-end PDAs and mobile phones. CPU, storage space, bandwidth, and battery power represent important resources. Finally, many application scenarios, like coordination of rescue teams, also have quite strict non-functional requirements, such as availability, efficient resource utilisation, security, and privacy. Both the heterogeneity of devices and the broad range of functional and non-functional requirements impose the need for resource management mechanisms.

In Sparse MANETs, middleware for resource sharing based on traditional resource reservation will not work properly, and guarantees for resource availability cannot be given. Instead, best-effort resource reservation might be treated as soft state which is only valid for a specified time: either the time period the resources are needed exclusively by a process, or the time period the resources are (with high probability) accessible. Resource management can benefit from predicting future availability of resources, not only to establish meaningful time-outs for soft-state reservations, but also to increase the availability of information and services through replication and graceful degradation. There are several approaches to predicting future connectivity and, through this, future access to resources and services. One approach is to analyse the location and movement of nodes using GPS information. The most common approach is to assume that at least a fraction of the devices are location-aware, e.g., with GPS, in order to improve information accessibility in MANETs. However, a GPS-based solution is not reliable enough, especially for critical services like rescue services. GPS receivers do not work in many locations, for example inside buildings, in tunnels, close to large buildings, and even in dense forests. Therefore, it is important to be able to predict network connectivity in the absence of GPS support. Another solution for location-awareness is to combine data from sensors such as accelerometers, compasses, or acoustic sensors. This requires that each device has access to such detailed sensor data, which is highly unlikely due to the heterogeneity of the devices.

User Mobility

In a rescue situation, the movement of the nodes can be classified using different criteria; we consider physical, social, and network mobility. The physical mobility of a node distinguishes mobile nodes, which change their physical position with respect to the other nodes, from stationary nodes, which move insignificantly after their deployment. The social mobility of a node considers its relationships to other nodes: a node can be a singular node or part of a group of nodes. Additionally, a node might have stable, variable, random, or periodic group membership relations with other nodes. The network mobility of a node describes the way the node is available for communication, in other words whether it is continuously connected to a network or not. Continuously connected nodes are available for communication all the time and can participate in synchronous communication. Intermittently connected nodes participate in communication only occasionally, due to the device's physical constraints (for example sensor nodes) or due to physical mobility (for example cars). In many cases, the only way nodes can communicate is in an asynchronous manner.
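The mobility classification above could be captured in a simple data structure; the class and field names are invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class PhysicalMobility(Enum):
    MOBILE = "changes physical position with respect to other nodes"
    STATIONARY = "moves insignificantly after deployment"

class NetworkMobility(Enum):
    CONTINUOUS = "always connected; can take part in synchronous communication"
    INTERMITTENT = "only occasionally connected; communicates asynchronously"

@dataclass
class NodeMobilityProfile:
    """Mobility profile of a node along the three criteria in the text."""
    node_id: str
    physical: PhysicalMobility
    network: NetworkMobility
    group: Optional[str] = None  # social mobility: group membership, if any

# A sensor placed in the tunnel: stationary and only intermittently connected
sensor = NodeMobilityProfile("tunnel-sensor-3", PhysicalMobility.STATIONARY,
                             NetworkMobility.INTERMITTENT)
```

A middleware could use such profiles, for instance, to prefer continuously connected nodes when placing replicas.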

We can assume that the nodes have a very strong incentive to collaborate. The nodes have various resources and services they are willing to share, and they provide services to each other. We also assume that there is only a negligible percentage of selfish nodes, i.e., nodes not willing to collaborate and share resources and services. This means that in the worst case, the nodes collaborate only by sharing bandwidth and providing routing. In the ideal case, nodes share resources in such a manner that storage of information and computation becomes pervasive. Unfortunately, the way the groups are formed might not be fully known to the middleware, because there might be “off-line” agreements between the rescue personnel. This might lead to teams and actions unforeseeable by the middleware, i.e., not according to the knowledge available on the nodes.

In conclusion, the middleware needs to reduce its communication overhead in the network. It also needs to reduce its requirements for specific functionalities on the devices. However, it can take advantage, as far as possible, of the information extracted from the routing protocol. This information is up-to-date, it is available on all nodes, and it comes at no additional communication cost to the middleware.
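The soft-state reservation scheme described earlier can be sketched as a small bookkeeping structure: a reservation is only valid until its time-out expires and must be refreshed by its holder, so stale reservations disappear without explicit release messages. This is a hypothetical illustration, not the project's implementation:

```python
class SoftStateReservations:
    """Hypothetical soft-state reservation table with per-resource time-outs."""

    def __init__(self):
        self._state = {}  # resource -> (holder, expiry time)

    def reserve(self, resource, holder, now, ttl):
        current_holder, expiry = self._state.get(resource, (None, 0.0))
        if expiry > now and current_holder != holder:
            return False  # still held by another node; no guarantee, best effort
        self._state[resource] = (holder, now + ttl)
        return True

    def refresh(self, resource, holder, now, ttl):
        # Refreshing is the same operation: it extends the soft state.
        return self.reserve(resource, holder, now, ttl)

    def is_held(self, resource, now):
        _, expiry = self._state.get(resource, (None, 0.0))
        return expiry > now

r = SoftStateReservations()
assert r.reserve("camera", "node-A", now=0.0, ttl=10.0)
assert not r.reserve("camera", "node-B", now=5.0, ttl=10.0)   # A still holds it
assert r.reserve("camera", "node-B", now=12.0, ttl=10.0)      # A's state expired
```

Predicting future connectivity would feed directly into the choice of `ttl`: the better the prediction, the more meaningful the time-out.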

2.2.4 Communication Issues
The user devices can connect and exchange information using, e.g., wireless network technology. If the MANET is fully connected, any two nodes can communicate synchronously, either using some other nodes as routers or directly, if they are one-hop neighbours. Information can also be flooded into the network. The task of finding paths between two nodes is performed by a routing protocol. There are two main groups of routing protocols: reactive and proactive. Reactive protocols find a path between a source and a receiver when the source wants to send a message; proactive protocols keep routing tables up-to-date at all times.
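To make the proactive case concrete, the following hypothetical sketch computes the kind of next-hop table a proactive protocol keeps up to date on every node, here with a simple breadth-first search over a one-hop adjacency map (all names are illustrative):

```python
from collections import deque

def next_hop_table(links, source):
    """Compute a next-hop routing table for `source` over an adjacency map."""
    table, visited, queue = {}, {source}, deque()
    for n in links.get(source, ()):   # one-hop neighbours are reached directly
        table[n] = n
        visited.add(n)
        queue.append(n)
    while queue:
        u = queue.popleft()
        for v in links.get(u, ()):
            if v not in visited:
                visited.add(v)
                table[v] = table[u]   # inherit the first hop on the path
                queue.append(v)
    return table

# A chain topology A - B - C - D: everything beyond B is reached via B.
links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(next_hop_table(links, "A"))  # {'B': 'B', 'C': 'B', 'D': 'B'}
```

A reactive protocol would instead run such a path computation (via route discovery messages) only when a node actually has data to send.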

We cannot assume that the network is fully connected at all times. Tunnels, mountains, or buildings may block the signals, or the area may be too large. In addition, devices may be switched off for periods of time to save battery. Thus, we need a communication service that is able to handle Sparse MANETs, where two nodes may not always be able to communicate directly. In publish/subscribe systems, a subscriber subscribes to events of interest and a publisher publishes notifications about such events. The subscribers and publishers usually connect to a node or nodes running the service, often called a broker or mediator, and they are thus decoupled in time and space. If a subscriber is disconnected, it may receive notifications when it reconnects, provided the service stores undelivered notifications. However, network partitions may not always merge; the communication service should therefore also support the possibility of routing in time. This has been described as the store-carry-forward paradigm, or epidemic routing. Epidemic routing may result in too much information being replicated. In Message Ferrying [Zhao2004], the randomness is reduced because dedicated nodes follow pre-defined paths to perform the store-carry-forward operations; this lowers the amount of data carried and replicated. Thus, the publish/subscribe paradigm provides a means for asynchronous communication. For delivery of notifications, the service can use unicast or multicast as provided by an underlying routing protocol (or flooding), or it can use store-carry-forward over all nodes, or over a subset of the nodes, as in message ferrying.
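A minimal sketch of the broker behaviour described above, storing undelivered notifications for disconnected subscribers and flushing them on reconnect, might look as follows (a hypothetical toy model, not the project's service):

```python
class Broker:
    """Toy publish/subscribe broker that stores undelivered notifications."""

    def __init__(self):
        self.subs = {}       # subscriber -> predicate over events
        self.online = set()
        self.pending = {}    # subscriber -> stored, undelivered notifications
        self.delivered = []  # (subscriber, event) pairs, for illustration

    def subscribe(self, who, predicate):
        self.subs[who] = predicate
        self.pending.setdefault(who, [])

    def set_online(self, who, up):
        if up:
            self.online.add(who)
            # Deliver notifications stored while the subscriber was away.
            for ev in self.pending.pop(who, []):
                self.delivered.append((who, ev))
            self.pending[who] = []
        else:
            self.online.discard(who)

    def publish(self, event):
        for who, pred in self.subs.items():
            if pred(event):
                if who in self.online:
                    self.delivered.append((who, event))
                else:
                    self.pending[who].append(event)

b = Broker()
b.subscribe("medic", lambda e: e["topic"] == "patient")
b.set_online("medic", True)
b.publish({"topic": "patient", "id": 12})
b.set_online("medic", False)
b.publish({"topic": "patient", "id": 13})   # stored while medic is away
b.set_online("medic", True)                 # stored notification delivered now
```

The store-carry-forward variant would go one step further: the stored notification would physically travel on intermediate nodes until a path to the subscriber appears.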

The subscriber expresses its interest according to the supported subscription language. In a rescue operation, all kinds of events can be of interest, such as sensor data, new resources, new nodes, availability of a map, updated information on patients, and health sensor data. Different applications have different needs for subscription languages; thus a variety of subscription languages and query approaches of differing levels of complexity may have to be supported. The more complex the subscription language, the more specific and targeted the subscriptions an application is able to make. The disadvantage is that the filtering algorithm becomes more complex as well, and routing optimisations become more difficult.

Personnel from different organisations cooperate by sharing information. Given the current state of the art, it is reasonable to assume that each organisation uses its own standardised vocabulary and data model. Therefore, it is also desirable to support subscriptions that target different vocabularies. This support must include the translation or mapping between different vocabularies, since information in rescue operations is also shared across organisations. For example, the coordinators of the fire fighters and the police might both be interested in the temperature in a certain room, but use different terms for temperature, even if the sensor supports only the fire fighters' vocabulary.
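The temperature example can be sketched as a simple term-mapping table between two organisation vocabularies. The vocabulary names and terms below are invented for illustration:

```python
# Hypothetical pairwise mappings between organisation vocabularies.
MAPPINGS = {
    ("fire", "police"): {"temp_celsius": "room_temperature"},
    ("police", "fire"): {"room_temperature": "temp_celsius"},
}

def translate(term, source_vocab, target_vocab):
    """Map a term into the target vocabulary; unknown terms pass unchanged."""
    if source_vocab == target_vocab:
        return term
    return MAPPINGS.get((source_vocab, target_vocab), {}).get(term, term)

# A police coordinator subscribes using the police term; the sensor only
# speaks the fire fighters' vocabulary, so the subscription is translated.
assert translate("room_temperature", "police", "fire") == "temp_celsius"
```

In practice such mappings would be derived from the shared ontologies agreed upon before the operation, rather than hand-written tables.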

2.2.5 Development Constraints
Development of applications and middleware for rescue and emergency operations faces a set of reality-oriented constraints, such as the real movement of personnel, i.e., the nodes' movement on the scene, and real network traffic and load, i.e., communication needs and patterns. Additionally, establishing a realistic software development and test environment is a problem in itself. To see how well a solution copes with these problems, different kinds of testing can be performed during the development phase.

Testing whether the solution can handle the dynamicity of the system can be achieved using mobility traces obtained from synthetic mobility models, computer-simulated rescue operations, and real rescue scenarios. Synthetic mobility generators can be used, providing random movement for different single-node mobility models (e.g., Random Walk Mobility, Steady-State Random Waypoint Mobility, and Random Direction Mobility) and group mobility models like the Reference Point Group Mobility Model. In a real rescue situation, nodes have different movement patterns depending on groups of nodes and individuals. In the absence of publicly available movement traces of personnel in the disaster area during a real rescue operation or exercise, mobility traces instrumented from an agent-based simulation environment can be used, e.g., the RoboCupRescue Simulation Project. The best option for testing would, however, be mobility traces from real rescue exercises or operations, which would allow the middleware to be tuned to adapt very well to a given scenario.
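The random-waypoint family of synthetic models mentioned above can be sketched for a single node: the node repeatedly picks a random destination in the area and moves toward it at fixed speed. This is a hypothetical minimal generator, not one of the cited tools:

```python
import random

def random_waypoint_trace(steps, speed, area=100.0, seed=42):
    """Generate a (x, y) position trace for one node in a square area."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area), rng.uniform(0, area)
    tx, ty = rng.uniform(0, area), rng.uniform(0, area)  # current waypoint
    trace = [(x, y)]
    for _ in range(steps):
        dx, dy = tx - x, ty - y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= speed:
            # Waypoint reached: jump to it and pick a new one.
            x, y = tx, ty
            tx, ty = rng.uniform(0, area), rng.uniform(0, area)
        else:
            x, y = x + speed * dx / dist, y + speed * dy / dist
        trace.append((x, y))
    return trace

trace = random_waypoint_trace(steps=50, speed=2.0)
```

Group models such as Reference Point Group Mobility extend this idea by moving a group reference point and letting members deviate around it.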

Assessing how well the solution handles data traffic implies finding realistic parameters for the type and volume of traffic that nodes will generate during a rescue operation. The traffic in a disaster area has different properties than traffic in cellular networks. Communication can be modelled as voice traffic, or as periodic communication with multiple transmission sources. The organisation of communication flows is modelled using either flat or structured communication models. The latter takes into consideration inter-organisational hierarchies and intra-organisational interactions to shape the communication flows, which is particularly interesting given the presence of several cooperating organisations in rescue and emergency operations. In this model, most communication takes place between the nodes in a group. One node plays the role of communication gateway and group leader. Additionally, group leaders of different groups communicate periodically with each other. This is similar to real-life communication in rescue scenarios, i.e., team members report to the team/group leader, and group leaders coordinate their actions.
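The structured communication model can be sketched as a flow generator: members report periodically to their group leader, and leaders coordinate with each other less frequently. Group names, periods, and the 4x coordination interval below are illustrative assumptions only:

```python
def communication_flows(groups, period, duration):
    """Generate (src, dst, time) events for a structured communication model.

    groups: {leader: [members]} -- one gateway/leader node per group.
    """
    flows = []
    for leader, members in groups.items():
        for m in members:                       # members report to their leader
            for t in range(0, duration, period):
                flows.append((m, leader, t))
    leaders = list(groups)
    for i, a in enumerate(leaders):             # leaders coordinate pairwise,
        for b in leaders[i + 1:]:               # assumed 4x less frequently
            for t in range(0, duration, period * 4):
                flows.append((a, b, t))
    return flows

flows = communication_flows({"L1": ["m1", "m2"], "L2": ["m3"]},
                            period=5, duration=20)
```

A flat communication model would instead draw source-destination pairs uniformly, ignoring the group structure.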

2.3 Ad-Hoc InfoWare Middleware
The challenges presented in the overall requirements analysis are reflected in a number of requirements on the middleware. In the following, which is partly adopted from [Munthe-Kaas2006], we summarise the resulting requirements and present a middleware architecture for Sparse MANETs in rescue and emergency scenarios. The middleware framework and solution proposed in Ad-Hoc InfoWare have previously been published in [Sanderson2007], [Munthe-Kaas2006], and [Plagemann2004].

2.3.1 General Middleware Requirements
In the middleware requirements analysis in the previous section, we looked at different aspects of the application scenario. From the overall analysis, we have derived a set of needs and requirements for the middleware, summarised in the following. We need support for intra- and inter-organisational information flow and knowledge exchange, as well as means to announce and discover information sources. Contextual support enables applications to adapt better to particular scenarios and allows them to fine-tune according to spatial and temporal data. Profiling and personalisation can assist in filtering and presenting information in accordance with the needs and capabilities of users and devices. The middleware should provide support for organisational structure and for creating groups on-the-fly. Security must be dynamic, enabling privileged users to grant group memberships at the rescue site, as well as to effect changes to the security regime when circumstances demand it. Communication must be available, reliable and efficient even in the presence of frequent network partitioning. There should be extensive support for resource sharing between devices, including ways to register and discover available resources of different types. To allow graceful degradation, the middleware must monitor resources and be prepared for their loss. This leaves us with ten articulated requirements and goals for the middleware, listed in order of appearance:

• Intra- and inter-organisational information flow
• Service availability
• Context management
• Profiling and personalisation
• Group- and organisational support
• Dynamic security
• Communication
• Resource sharing
• Graceful degradation
• Data sharing and storage

The last requirement, data sharing and storage, is not addressed in the Ad-Hoc InfoWare project. The remaining nine requirements are addressed by six areas of consideration for the middleware. Figure 2.2 gives an overview of these areas. The areas constitute a foundation for a middleware framework covering the required services for Sparse MANETs in rescue scenarios:

• Knowledge Management – to handle ontologies, support metadata integration and interpretation;

• Context Management – to manage context models, context sharing, profiling and personalisation;

• Data Management – to cater for capabilities similar to those of distributed databases;

• Communication Infrastructure – for supporting distributed event notification, publish and subscribe services, and message mediation;

• Resource Management – to register and discover information sources and web services as well as resources available, to handle neighbour awareness, computation and application sharing, mobile agents, proxy and replica placement, and movement prediction;

• Security Management – for access control, message signing and encryption, supporting group- and organisational structure, group key assignment, and dynamic security services.

Figure 2.2 Middleware areas of consideration.

Data Management is handled by a Data Management component that is not addressed by the Ad-Hoc InfoWare project, and is thus considered to be outside the middleware architecture as included here. This is because the nodes are autonomous and different nodes may use various systems for data management and storage; the Data Management component is therefore viewed as part of the local node. The remaining areas in the framework correspond to the components of the Ad-Hoc InfoWare architecture in the following manner: Communication Infrastructure corresponds to the Distributed Event Notification Service component; Resource Management to the Resource Manager; Security Management to the Security and Privacy Manager; and Knowledge Management and Context Management are both handled by the Knowledge Manager. The middleware components all rely heavily on each other's services.

2.3.2 Architecture Components
Here we present each of the components in the Ad-Hoc InfoWare middleware architecture. The architecture is shown in Figure 2.3.

Figure 2.3 Ad-Hoc InfoWare Middleware Architecture.

Knowledge Manager
This component corresponds to the concerns Knowledge Management and Context Management. Its purpose is to provide flexible services that allow relating metadata descriptions of information items to a semantic context, and to support the management of knowledge sharing and integration in a rescue operation scenario. As the Knowledge Manager (KM) is the focus of this thesis, a more detailed requirements analysis and component description is presented in Chapter 4. The KM offers support for the dissemination, sharing and interpretation of ontologies, and for querying ontologies and ontology contents. Therefore, we need distributed knowledge base functionality and a global view of what knowledge is available in the network.

The overall issues that need to be addressed to support intra- and inter-organisational interoperability in information sharing in this environment include: achieving understanding across domains and organisations through knowledge management techniques; avoiding information overload through content filtering and personalisation; managing the availability of information, metadata and ontologies; offering information query and retrieval services; and supporting information exchange.

Overall, we can differentiate the required functionality into services handling the structure, content and meaning of information, and services that have a supportive role. These groups of services are handled by different KM sub-components. The first group of services all have to do with handling metadata at some level, ranging from data structure definitions/descriptions to conceptual definitions of some domain. We have termed the components in this group metadata handling components. They include a Semantic Metadata and Ontology Framework to deal with sharing and interpretation of ontologies, Data Dictionary Management for management of metadata in local and global data dictionaries, and Profile and Context Management supporting filtering and personalisation. The components handling the supportive services/functions are termed tool components. These are Query Management, for querying ontologies and metadata as well as retrieving relevant information, and XML Parsing, for information exchange in a standardised format.

We use the term global (distributed) data dictionary in a more extended sense than the traditional one, as our approach does not have a global conceptual schema. This is possible through (1) assuming that the organisations have agreed upon a basic common ontology or vocabulary, and have exchanged information about the standards and data models in use in each organisation, prior to the rescue operation; and (2) using a standardised format for information exchange. Such agreements and exchanges regarding vocabularies, data models and standards would take place in the a priori phase of the general rescue scenario phases described in Section 2.1.4.

The KM adopts the hierarchical view of knowledge, i.e., data, information, and knowledge seen as levels of increasing semantics. Our main concern is management and sharing of explicit knowledge, i.e., factual knowledge found for instance in documents, databases, files, and models. Tacit knowledge, e.g., in mental models, procedures and skills, is not handled, although sharing this kind of knowledge may be valuable in rescue operations. The knowledge management processes in focus are knowledge storage/retrieval and knowledge transfer/sharing. We do not address aspects of knowledge learning (creation) and application, as these are not directly relevant in the application scenario. An overview of different kinds of knowledge and knowledge management phases is found in [Alavi2001].

In our approach to metadata management, metadata descriptions of information items for sharing are enriched with concepts from ontologies and vocabularies. Enhancing metadata with concepts from domain ontologies is an approach also used by Kashyap and Sheth [Kashyap1998] and in solutions for the Semantic Web [Stuckenschmidt2005]. As device resources, e.g., memory, processor, energy, and bandwidth, may be limited, simply propagating all information to all participants would be too expensive resource-wise. By sharing extracts from the metadata descriptions, rather than full metadata descriptions, knowledge of available information resources can be spread through the network while saving resources like bandwidth and energy, thus contributing to a more effective use of available resources.
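The idea of sharing extracts rather than full descriptions can be sketched as follows. The metadata fields and the item shown are invented for illustration; only the principle (disseminate a compact, concept-annotated extract, keep the full description local) reflects the approach described above:

```python
# Full, locally stored metadata description of a shareable item.
FULL_METADATA = {
    "map-sector-4": {
        "title": "Sector 4 building map",
        "concepts": ["map", "building", "sector-4"],  # terms from a domain ontology
        "format": "image/png",
        "size_bytes": 2_400_000,
        "description": "Floor plans for the collapsed building in sector 4 ...",
        "provenance": "fire brigade GIS unit",
    },
}

def extract(item_id, metadata):
    """Keep only what is needed to discover the item and locate its holder."""
    m = metadata[item_id]
    return {"id": item_id, "concepts": m["concepts"], "format": m["format"]}

# Only this compact extract is disseminated through the sparse network.
compact = extract("map-sector-4", FULL_METADATA)
```

Nodes receiving the extract can match it against queries via the ontology concepts, and fetch the full description (or the item itself) only on demand.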

The creation and maintenance of ontologies will happen outside of the rescue operation itself. Thus, we use already existing ontologies, in the following ways: domain ontologies and/or vocabularies from relevant domains, e.g., the medical domain; as support in sharing vocabularies, e.g., as a bridge or an upper-level ontology; and for enhancing metadata descriptions with terms/concepts from the ontologies. The reasoning tasks needed during the operation can be limited to instance reasoning. Given the resource limitations of our scenario, and depending on the context of an ongoing rescue operation, e.g., the resources and capabilities of the currently available devices, it is possible that only a limited set of instance reasoning services can be offered. Powerful devices such as laptops will be the only nodes able to offer such services, and smaller devices will have to request these services from them, as resource-weak devices lack the capabilities to perform such inference tasks.


Some awareness of context is necessary to handle the changing conditions and dynamicity, as well as the heterogeneity inherent in the diversity of organisations, domains, and devices. Support for context management must be flexible enough to allow several possible interpretations. It should offer a uniform context model (through a context ontology) that facilitates context sharing and semantic interoperability, and that is flexible enough to accept user-defined models. In addition, it should provide the means by which to collect context data and transform it to fit the context ontology, thus providing a sharable, semantics-based format. For supporting context aggregation, discovery, query, reasoning, and dissemination, the context management can build on solutions from the other frameworks.

The KM depends on services from all the other components in the architecture, particularly the Distributed Event Notification Service for subscribing to event notifications. The Resource Manager provides information about changes in the network, events that the KM subscribes to via the Distributed Event Notification Service. The KM relies on the security and privacy services provided by Security Management. In turn, the KM supports the other components with metadata management, query/search, and vocabulary matching. For example, the Distributed Event Notification Service uses services from the KM to find relevant nodes when handling certain types of subscriptions.

Distributed Event Notification Service
The Distributed Event Notification Service (DENS) [Skjelsvik2006] provides a publish/subscribe service and delay-tolerant delivery of notifications in case of, e.g., network partitioning. In the publish/subscribe service, subscribers subscribe for information and publishers publish information, independently of each other. If DENS cannot deliver a notification to a subscriber, the service will store the notification and try to deliver it using the store-carry-forward paradigm. The main design goals for DENS are to support a flexible subscription model, to ensure a high delivery ratio with semantics as close to at-least-once as possible, and to use a priori information and information collected at run-time to best configure the system. Providing an asynchronous communication service makes sense because of the nature of Sparse MANETs; only providing synchronous communication would be too limiting because of communication disruptions, for instance when devices are suddenly out of reach or turned off. In addition, such a service makes it easier for an application to receive information: it simply sends subscriptions to the service and is notified when the event takes place, rather than having to locate the information and communicate directly with the node holding it.

To save resources, DENS filters events at the source instead of sending all potentially interesting updates to the service. A monitoring agent filters events based on the subscriptions relevant to the local node. DENS supports different subscription languages at run-time, to meet different application needs regarding the level of language sophistication. With respect to the need for handling resources efficiently, simple languages are better than complex ones. Each organisation sharing information in the rescue operation may have its own vocabulary, and DENS therefore supports subscriptions that target different vocabularies. This support must also include the translation or mapping between different vocabularies, since information in rescue operations is also shared across organisations.
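Source-side filtering by a monitoring agent can be sketched as follows: only events matching some subscription known to be relevant at the local node ever leave the node. This is a hypothetical illustration; DENS's actual agent and subscription representation are not specified here:

```python
class MonitoringAgent:
    """Toy source-side filter: forward only events matching local subscriptions."""

    def __init__(self):
        self.local_subscriptions = []  # predicates relevant to this node

    def add_subscription(self, predicate):
        self.local_subscriptions.append(predicate)

    def filter(self, events):
        # Events matching no subscription are dropped at the source,
        # saving bandwidth in the sparse network.
        return [e for e in events
                if any(p(e) for p in self.local_subscriptions)]

agent = MonitoringAgent()
agent.add_subscription(lambda e: e["type"] == "temperature" and e["value"] > 60)
events = [{"type": "temperature", "value": 21},
          {"type": "temperature", "value": 75},
          {"type": "battery", "value": 40}]
sent = agent.filter(events)  # only the hot-temperature event is forwarded
```

The trade-off discussed above appears here directly: a richer subscription language means richer predicates, and thus a more expensive `filter` step.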

In summary, DENS supports the following functionality: content-based filtering at source nodes, flexible definition of source nodes in subscriptions, multiple vocabularies, and multiple subscription languages at run-time. All nodes run part of the service, but some of the nodes are chosen to run the full-fledged service and act as mediators, keeping information as consistent and up-to-date as possible. Information about subscriptions and notifications is exchanged in a gossiping fashion between these nodes, which implements delay-tolerant delivery of notifications in case ordinary unicast provided by the routing protocol is not possible. On the publisher nodes, monitoring agents filter what is of interest. The size of the overlay and the degree of replication used depend on the mobility scenario and the application scenario.

Resource Manager Component
The component for Resource Management [Drugan2005] aims at enabling the best possible resource sharing among the devices in the network. This requires gathering and disseminating information on the availability of resources, and facilitating resource access and sharing. During a rescue operation, the involved personnel have a very strong incentive to collaborate and cooperate across organisations; this requires that they share knowledge and resources in order to fulfil their tasks. Remote access to resources has the potential to improve the availability of data and services and to provide graceful degradation or migration for services. This is especially useful for applications running in a network of many devices with limited resources, where it is necessary to make use of all available resources. To keep track of available resources and services and to enable resource sharing, a distributed solution is necessary.

A resource manager's main duties in resource-constrained environments are registration and discovery of services and data sources, and making this information available through the network. For this, each node can maintain a sharing profile with information about locally available resources and running services. The physical resources need to be monitored frequently, which can be achieved using mechanisms provided by the operating system.
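A node's sharing profile could be sketched like this. The field names and the advertisement format are hypothetical; the point is only that each node keeps a locally refreshed record and disseminates a summary of it:

```python
class SharingProfile:
    """Toy sharing profile: locally available resources and running services."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.resources = {}   # name -> latest measured value (e.g. from the OS)
        self.services = set()

    def update_resource(self, name, value):
        # Called by periodic monitoring using OS-provided mechanisms.
        self.resources[name] = value

    def register_service(self, name):
        self.services.add(name)

    def advertisement(self):
        # What the resource manager would disseminate through the network.
        return {"node": self.node_id,
                "services": sorted(self.services),
                "resources": dict(self.resources)}

p = SharingProfile("laptop-1")
p.update_resource("battery_pct", 80)
p.update_resource("free_storage_mb", 512)
p.register_service("ontology-reasoning")
ad = p.advertisement()
```

Other nodes can then use such advertisements to decide, for instance, which laptop to ask for the instance-reasoning services mentioned earlier.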

In MANETs, the most important resources are the network bandwidth and the energy of the mobile devices; it is the duty of the resource manager to preserve as much as possible of both. In general, most of the energy of a mobile device is consumed during communication; therefore the resource manager should minimise the communication overhead of the middleware. This can be achieved by predicting the future availability of resources, not only to establish meaningful time-outs for soft-state reservations but also to increase the availability of information and services through replication and graceful degradation. The problem of predicting resource availability can be approximated by predicting the adjacency of nodes in the network topology.

Security and Privacy Manager
Security Management [Munthe-Kaas2006] has a direct impact on the functionality of all the other components and therefore has to be considered from an early stage of development. It has to make sure that all the security requirements are fulfilled during the other components' operation. Security Management depends on some of the other components for services such as key distribution, storage of keys and certificates, getting information on the neighbourhood, etc. This poses additional security issues that need to be taken care of. Security can be implemented in either the traditional layered approach or a more adaptive cross-layered one, both having their advantages and disadvantages. The layered approach offers a high level of security by putting clear borders on data flow. In our case, a more flexible approach might be a better choice, provided that it does not significantly weaken the overall security level.


2.4 Summary
In this chapter, the background context for the work in this thesis has been described through a description of the application scenario of rescue and emergency operations, their structure and organisation, as well as an example scenario illustrating a railway accident. Challenges related to information sharing in rescue and emergency situations are examined in the overall requirements analysis, where we look at issues from different perspectives: organisational interoperability, security, network, communication, and application development. The analysis results in a set of overall requirements for the middleware framework, which is presented together with the Ad-Hoc InfoWare architecture components. The overall requirements that are relevant to the KM are:

• support for intra- and inter-organisational information flow
• support for profiling and personalisation
• support for context management
• group- and organisational support


Chapter 3 Knowledge Management: Background and Related Work
In this chapter, we present background literature and related work in fields directly related to this work. Recalling from the introduction chapter, we are dealing with the intersection of three areas: Knowledge Management and Representation, Information Sharing, and Sparse Mobile Ad-hoc Networks. Characteristics and limitations of Sparse MANETs relevant to this thesis are described in Chapter 2. In this chapter, we focus on knowledge management and representation, and on information sharing.

In our application scenario, we want to share information about what is available for sharing in the network. To be able to search for and retrieve information and knowledge, it has to be represented in a way that can be queried, and perhaps reasoned about. Metadata describe various aspects of information and data, and are useful for this purpose. Ontologies can provide a common vocabulary and more information about how different metadata are to be interpreted, for example in relation to a specific domain. By contributing rules, constraints, and definitions, ontologies define a context for the terms in metadata descriptions. Storage is also part of knowledge representation; in addition to the actual data or information, metadata and ontologies also have to be stored. Knowledge bases (KBs) are essentially databases that store knowledge, i.e., information and its context: the metadata and ontologies describing the context(s) that allow information to be interpreted with a certain meaning, and the definitions, assumptions, rules and constraints that may apply for a certain interpretation to be true.

Although there is a lot of ongoing research in the area of information sharing in mobile environments, we found relatively few systems or approaches that focus on knowledge management and knowledge sharing for MANETs and that are suitable as related work for this thesis. In addition, most of the approaches presented as related work cover more than one of the areas of background literature. Therefore, we have chosen to present related work in a separate section in this chapter, after the relevant background literature. Aspects from these approaches are, however, included as part of the background literature where appropriate.


3.1 Information and Knowledge
The terms knowledge and information are often used interchangeably. In relation to knowledge management, however, there is a clear distinction between them. In this section, we take a look at what the term knowledge means, and at the differences between information and knowledge. We also clarify our use of the terms in this thesis.

The most common view of knowledge in the field of informatics/IT is the hierarchical one, i.e., data, information, and knowledge as increasing levels of semantics. In this view, data is seen as symbols not yet interpreted; when assigned meaning, it is transformed into information. Knowledge is that which enables people to assign meaning to data and thereby generate information [Spek1999]. Schwotzer et al. [Schwotzer2006A] [Schwotzer2003] [Schwotzer2002B] describe knowledge as information related to a context, i.e., information together with its semantic context can be called knowledge. Alavi and Leidner [Alavi2001] point out that there are alternative ways of viewing knowledge, and that these have an impact on knowledge management in that they influence which strategies are focused upon, and what role information technology and knowledge management systems will have. Alavi and Leidner give a summary of alternative views of knowledge, their implications for knowledge management, and the role of knowledge management systems (and information technology). In this thesis, we adopt the hierarchical view of knowledge and see knowledge as information with its context and semantics, similar to the definition used in Shark [Schwotzer2002B]. In [Spek1999] a complementary definition of knowledge is given:

- knowing which information is needed (‘know what’)
- knowing how information must be processed (‘know how’)
- knowing why which information is needed (‘know why’)
- knowing where information can be found to achieve a specific result (‘know where’)
- knowing when which information is needed (‘know when’)

According to Nonaka [Nonaka1994], there is a clear distinction between information and knowledge. Information can be viewed from both syntactic and semantic perspectives, and is a necessary medium for formalizing knowledge. Information is a flow of messages, and knowledge is created and organised by this information flow. Spek and Spijkervet [Spek1999] state that information has only limited validity, and is always linked to a specific situation. It is concrete and “stand-alone”, and although information may be knowledge, it does not necessarily lead to action. Knowledge qualifies for action, and includes “know-how”, best practice, experience, etc. It is applicable across a whole range of situations and over a longer period of time than information. An essential aspect of knowledge is that it enables people to act and deal with all available information sources in an intelligent way, i.e., it is a basis for actions and for achieving results. Another feature of knowledge is that it is adaptable, and typically specific to a particular context or domain, e.g., to a specific company or organisation. As knowledge constitutes the whole set of insights, experiences and procedures that are considered correct and true, it can guide people’s thoughts, behaviour and communication.

There are many types of knowledge. Many classifications rely on the tacit-explicit and individual-collective distinctions. Nonaka [Nonaka1994] refers to Polanyi’s classification of knowledge into tacit and explicit knowledge. Alavi and Leidner [Alavi2001] list ten types of knowledge that are used in different taxonomies. In addition to explicit and tacit knowledge, their taxonomy includes the following kinds of knowledge:


- individual (inherent in a person)
- social (inherent in the collective actions of a group)
- declarative (know-about)
- procedural (know-how)
- conditional (know-when)
- causal (know-why)
- relational (know-with)
- pragmatic (knowledge useful to an organisation)

For our purpose, the division of knowledge into explicit and tacit is sufficient. Explicit knowledge can be found in documents, records, models, pictures, etc., and is the kind of knowledge we deal with in this thesis. Easily captured in records and documents, explicit knowledge is generalised, codified knowledge, and exists in a transmittable, formal, systematic language. Tacit knowledge is defined as knowledge rooted in actions, experience, and involvement in specific contexts. Examples are mental models and know-how about specific work (skills). This kind of knowledge can be very difficult to capture and represent in a storable and shareable format. It is typically implicit, and can have a personal quality making it hard to formalize and communicate. Although sharing tacit knowledge may be very valuable in a rescue scenario, we do not deal with this kind of knowledge in this thesis. Tacit and explicit knowledge are mutually dependent and reinforcing qualities of knowledge. To some extent, tacit knowledge is a prerequisite for understanding explicit knowledge, particularly in situations of knowledge sharing.

3.2 Knowledge Management

There are several definitions of knowledge management from different domains. Definitions from organisation and business management may have a somewhat different focus than definitions from the domain of information technology. In [Davies2003A], knowledge management is defined as “…the tools, techniques and processes for the most effective management of an organization’s intellectual assets.” Another definition, from [Alavi2001], takes more of an organisation and business management perspective: “Knowledge management is a dynamic, continuous organisational phenomenon of interdependent processes with varying scopes and changing characteristics.” The first definition has a more technological focus, seeing knowledge management as a set of tools, techniques and processes, while the second focuses on the organisation, seeing knowledge management as a set of processes within organisations.

The focus of knowledge management is on a number of issues [Spek1999]: to formulate a strategic policy for developing and applying knowledge; to implement a knowledge policy (supported by all parties in the organisation); to improve the use and adaptation of knowledge in the organisation; and to monitor and evaluate knowledge-related assets and their management. Knowledge management sees knowledge as an important production factor, and aims to improve the performance of processes, organisations and systems, as well as to integrate strategy formation and implementation.

In knowledge management, knowledge organisation has two dimensions [Spek1999]: knowledge management processes and knowledge organisation structure. Knowledge management processes are the basic operations required in knowledge management: developing new knowledge; securing new and existing knowledge; distributing knowledge; and combining available knowledge. Knowledge organisation structure involves the carriers of knowledge, e.g., people or computers, their characteristics and relationships. It is characterised by form – the medium/carrier of knowledge, people or documents, tacit or explicit; location – the position of the knowledge carriers (in the organisation); time – availability and use of knowledge related to time (always vs. temporarily); and content – how procedures and experiences can be applied (concrete expressions).


There are many different approaches to knowledge management [Spek1999]; the two most prominent are the system-oriented approach and the approach based on people’s behavioural criteria and cultural context. The system-oriented approach considers knowledge as a production factor that can be analysed in isolation from its carriers, and aims to improve insight into the supply and demand of knowledge and the quality of the organisation as a “knowledge system”. This is achieved through the analysis and documentation of the processes, actors, and knowledge carriers involved. In the approach based on people’s behavioural criteria and cultural context, knowledge is considered inseparable from human beings. The focus here is on the improvement of professional organisations. The actions for improvement are based on abstract concepts, and the emphasis is on facilitating professionals in applying their knowledge to the benefit of the organisation.

Knowledge management is a learning process [Spek1999], and is applicable in various situations. It can be used at all levels in an organisation (strategic, tactical and operational), and is also applicable across a range of organisations. It offers the necessary methods and techniques, and provides support when structuring activities.

A problem-oriented conceptual model of knowledge management, showing knowledge management activities (the activity circle), is provided in [Spek1999]. The knowledge management activities according to this model are: to conceptualise what knowledge is and what role it plays in the organisation, through investigation, clarification and modification; to reflect on which improvements are to be implemented, through assessment, evaluation and planning; to act, i.e., implement the decided actions to improve knowledge, through formalisation, standardisation and knowledge management processes such as developing, distributing, combining and facilitating knowledge; and to review, i.e., determine the effects of the implemented actions. These activities produce the following: a set of objectives, a risk assessment, required conditions for improvement, instruments for achieving the objectives, and criteria for measurement.

There are three groups of techniques and instruments in knowledge management [Spek1999]: (1) Management, culture and personnel, e.g., education or training, strategy development, and recruitment. (2) Organisational adjustment, e.g., redesigning business processes. (3) Information technology, including documentation technology, information systems, groupware, telematics, workflow management systems, personnel information system in which knowledge profiles are stored, knowledge-based systems, data mining, and intranets. Our focus is on the third group, information technology. Knowledge management requires a multi-disciplinary approach, e.g., business economics, human resource management, organisational psychology, communication science, computer science, and operations research.

3.2.1 Knowledge Processes

There are four main knowledge management processes: knowledge creation; knowledge storage and retrieval; knowledge transfer and sharing; and knowledge application. These are described below. The processes most relevant to the work in this thesis are knowledge storage/retrieval and knowledge transfer/sharing.

Knowledge Creation. A significant part of knowledge management is the element of learning, and of developing and creating new knowledge. According to Nonaka [Nonaka1994], there are two dimensions to knowledge creation: the Knowledge dimension, differentiating between tacit and explicit knowledge; and the Ontological dimension, which has to do with the level of social interaction. Nonaka’s model for knowledge creation builds on the assumption that knowledge is created through conversion between tacit and explicit knowledge. Based on this, four modes of knowledge
creation (or knowledge conversion) have been identified: socialization, externalization, combination, and internalization. Socialization involves conversion from tacit knowledge to tacit knowledge, and takes place through (social) interaction between individuals, e.g., shared experience, and does not necessarily need a language, for instance in the case of observation or practice. Externalization converts tacit knowledge to explicit knowledge, e.g., through articulation of lessons learned. Combination means creating new explicit knowledge from existing explicit knowledge through a re-configuration of existing information, e.g., re-categorization, synthesising, sorting, or adding, and may also happen in knowledge sharing through, e.g., meetings or emails. Internalization has to do with creating new tacit knowledge from explicit knowledge, e.g., learning through reading or discussion. When all four of these modes are managed by an organisation, there is organisational knowledge creation.

Figure 3.1 Nonaka's SECI model for knowledge creation.
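The four conversion modes above can be summarised as a simple lookup from (source, target) knowledge types to the mode name. This is only an illustrative sketch; the dictionary and function names are our own.

```python
# Nonaka's SECI modes as a mapping from (source, target) knowledge
# types to the conversion mode. "tacit" and "explicit" are the two
# values of the knowledge dimension described above.
SECI_MODES = {
    ("tacit", "tacit"): "socialization",       # shared experience, observation
    ("tacit", "explicit"): "externalization",  # articulating lessons learned
    ("explicit", "explicit"): "combination",   # re-configuring existing information
    ("explicit", "tacit"): "internalization",  # learning by reading or discussion
}

def conversion_mode(source: str, target: str) -> str:
    """Return the SECI mode for converting `source` knowledge to `target`."""
    return SECI_MODES[(source, target)]
```

For example, articulating lessons learned after an operation is `conversion_mode("tacit", "explicit")`, i.e., externalization.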

Knowledge Storage and Retrieval. The storage, organisation, and retrieval of the knowledge in an organisation are also termed organisational memory. Alavi and Leidner [Alavi2001] refer to a definition of organisational memory by Stein and Zwass: “the means by which knowledge from the past, experience, and events influence present organizational activities.” The knowledge can exist in various forms, e.g., documentation and structured information in databases, as well as organisational procedures and processes. Organisational memory is classified into semantic and episodic memory. Semantic memory refers to general, explicit and articulated knowledge, while episodic memory is context-specific and situated knowledge, e.g., the context, time, place and outcome of decisions made in the organisation. Databases, query languages and groupware are examples of effective tools for this knowledge management process.

Knowledge Transfer and Sharing. Knowledge transfer and sharing may occur between and among individuals, within and among teams, as well as within and among organisations. Improving and maximising these flows is a major challenge of knowledge management. A major distinction between knowledge sharing and knowledge transfer is that knowledge sharing can happen unintentionally, without a specific objective, and in multiple directions, while knowledge transfer has a clear focus and objective, and is unidirectional [King2006]. The transfer of knowledge to locations where it is needed and can be used is important. Knowledge transfer is driven by communication processes and information flows. Alavi and Leidner [Alavi2001] describe four forms of knowledge transfer along two dimensions: informal/formal and personal/impersonal. Which form is the most effective depends on the kind of knowledge being transferred. Informal mechanisms, e.g., a conversation at lunch, can be effective in promoting socialization, but may preclude wide dissemination.
Formal transfer, e.g., a lecture, may put a restraint on creativity, but ensures greater distribution of knowledge. Personal channels, e.g.,
apprenticeships, may be effective for distributing highly context-specific knowledge, while impersonal channels, like knowledge repositories, may be more effective for knowledge that easily generalises to other contexts.

Knowledge Application. This concerns integrating new knowledge into the existing knowledge in the organisation, and using the new knowledge. Information technology can support knowledge application by embedding knowledge into organisational routines. There are three primary mechanisms for integrating knowledge to create organisational capability [Alavi2001]: directives (e.g., sets of rules, standards, procedures, instructions), organisational routines (e.g., task performance and coordination patterns, interaction protocols), and the creation of self-contained task teams (specialist teams for problem solving).

3.2.2 Distributed Approaches

In their approach to Distributed Knowledge Management (DKM), Bonifacio et al. [Bonifacio2002] view an organisation as a constellation of organisational units. Such a unit is called a knowledge node, which is an autonomous and locally managed knowledge source. The approach is based on two principles: the principle of autonomy, mainly semantic autonomy through creating one’s own contexts, and the principle of coordination, through exchange and context mapping. Here, knowledge management becomes a tool that has to support two qualitatively different processes: the autonomous management of knowledge produced locally on a single knowledge node, and the coordination of different knowledge nodes without any centrally defined semantics. Each knowledge node represents a knowledge owner within an organisation. A knowledge owner is an entity, individual or collective, with the capability of managing its own knowledge from both a conceptual and a technological perspective. Knowledge is not independent of the interpretative schema, thus different schemas give different interpretations of the “same” situation. The authors differentiate between absolute knowledge, an ideal, objective image of the world, and local knowledge, which can differ from absolute knowledge, is dependent on perspective, and may consist of approximate and partial interpretations of the world. Local knowledge is generated by individuals and within groups of individuals, e.g., organisational units, through a process called meaning negotiation, which extracts a schema that makes sense for that unit. Thus, knowledge appears as a heterogeneous and dynamic system of multiple “local knowledge systems”. Context in DKM is viewed as an explicit representation of a community perspective. Context is extracted from local public directories (functioning as a kind of local classification), and is used in document/information search and sharing.
A linguistics-based algorithm is used for meaning negotiation (semantic matching) in this process. The high-level architecture of a knowledge node consists of local applications, contexts, and agents. Contexts range from a category system (simple situations) to an ontology (complex situations), a collection of guidelines, or a business process. A context manager allows users to browse and edit local contexts through a simple interface. The agents have two main functions: supporting the knowledge node users in composing outgoing queries, and answering incoming queries from other knowledge nodes.
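The coordination of autonomous knowledge nodes can be illustrated with a minimal sketch. The term-overlap scoring below is only a stand-in for DKM’s linguistics-based semantic matching, and all class names, node names and categories are invented for illustration.

```python
class KnowledgeNode:
    """A locally managed knowledge source with its own context (sketch).

    The context is modelled as a mapping from local category names to
    sets of descriptive terms; real DKM contexts range from category
    systems to ontologies.
    """

    def __init__(self, name, context):
        self.name = name
        self.context = context  # {category: {terms}}

    def answer_query(self, query_terms):
        """Answer an incoming query against the local classification.

        A simple stand-in for semantic matching: score each local
        category by how many query terms its description shares, and
        return the matching categories, best first.
        """
        query = set(query_terms)
        scored = [(len(terms & query), cat) for cat, terms in self.context.items()]
        return [cat for score, cat in sorted(scored, reverse=True) if score > 0]

# Two autonomous nodes with different local classifications of the
# "same" domain -- no centrally defined semantics.
medics = KnowledgeNode("medics", {
    "triage": {"casualty", "priority", "injury"},
    "logistics": {"supplies", "transport"},
})
fire = KnowledgeNode("fire", {
    "hazards": {"fire", "gas", "injury"},
})
```

A query about injuries maps to different local categories on each node (`triage` versus `hazards`), which is exactly the heterogeneity that context mapping has to bridge.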

A peer-to-peer model for knowledge management is described in [Schwotzer2006B]. In this model, the peers are the individuals in an organisation. Knowledge exchange happens between the layers in an organisation (the level of individuals, the group level, and the organisational level), and to others (external). Incoming knowledge is filtered by knowledge filters, while a knowledge strategy is applied to outgoing knowledge. An individual can have different roles – acting
and communicating on behalf of oneself, the group, or the organization. This requires different knowledge filters and strategies according to the current role. This is further complicated in that an individual may have – and act under – several roles simultaneously, and there may be role changes. This model fits well with our application scenario, described in Chapter 2, where rescue personnel take on specific roles in the rescue operation, and also have particular roles in their organisation. Shark, particularly in its use of knowledge ports, is based on this peer-to-peer model for knowledge management. Shark is presented in Section 3.5.1.
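A minimal sketch of role-dependent filtering in such a peer-to-peer model might look as follows. The roles, topics, and function names are illustrative assumptions, not part of Shark’s actual API.

```python
# Role-dependent knowledge filters (sketch). Each role accepts a set of
# topics for incoming knowledge; a peer acting under several roles at
# once accepts the union of what its roles accept.
ROLE_FILTERS = {
    "medic": {"triage", "casualties"},
    "team_leader": {"triage", "casualties", "logistics", "status"},
}

def accepted_topics(active_roles):
    """Union of topics accepted by all currently active roles."""
    topics = set()
    for role in active_roles:
        topics |= ROLE_FILTERS.get(role, set())
    return topics

def filter_incoming(items, active_roles):
    """Keep only incoming knowledge items whose topic passes the filter."""
    topics = accepted_topics(active_roles)
    return [item for item in items if item["topic"] in topics]

incoming = [
    {"topic": "logistics", "body": "supply route open"},
    {"topic": "casualties", "body": "two injured at site B"},
]
```

Note how role changes, or acting under several roles simultaneously, simply change the set of active roles and thereby the effective filter.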

3.2.3 Knowledge Representation

The goal of knowledge representation is to model information so that it can be efficiently stored and reasoned with. A knowledge representation scheme describes how a program can model knowledge about a domain. There are several knowledge representation paradigms [Heflin2001]: semantic networks, frame systems, first-order logic, ontologies, and context logic. Semantic networks consist of concepts and connections, represented as nodes and arcs (that may be labelled) in a graph. The meaning of a concept is implied through the way it is connected to other concepts in this graph. A frame system consists of data objects (frames) that have slots representing their attributes/properties. The slots contain one or more values, and some of these values may be pointers to other frames. Frame systems are similar in form to semantic networks, but are considered more structured. First-order logic (predicate logic) views the world as a set of objects and relations that hold between them. It is a well-known formalism for reasoning, and its expressiveness makes it a powerful knowledge representation language. Context logic places assertions (made about something in the world) in a context that includes the assumptions necessary for the assertion to be true. These contexts are first-class objects that can be used in propositions. The knowledge representation paradigm we use in this thesis is ontologies, thus we look more closely at ontologies in a dedicated section.
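As a small illustration of the first paradigm, a semantic network can be modelled as a labelled directed graph, where the meaning of a concept is implied by its connections. The concepts and relation labels below are invented for the example.

```python
# A semantic network as a labelled directed graph: each edge is a
# (source concept, relation label, target concept) triple.
EDGES = [
    ("ambulance", "is-a", "vehicle"),
    ("vehicle", "has-part", "engine"),
    ("ambulance", "used-in", "rescue operation"),
]

def related(concept, label):
    """Concepts reachable from `concept` over edges with the given label."""
    return [t for s, l, t in EDGES if s == concept and l == label]

def is_a_chain(concept):
    """Follow 'is-a' links upward, collecting the concept's generalisations."""
    chain = []
    while True:
        parents = related(concept, "is-a")
        if not parents:
            return chain
        concept = parents[0]
        chain.append(concept)
```

Simple inferences, such as “an ambulance is a vehicle, and vehicles have engines”, fall out of graph traversal.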

3.3 Metadata Management

A large part of this thesis is devoted to our approach to metadata management, which forms the basis of our approach to network wide information sharing. Everything that is to be shared with other nodes in the network, i.e., information items, vocabularies and ontologies, resources, etc., is described using metadata and registered in a dedicated metadata directory for sharing among nodes. Our three-layered approach to metadata management forms the basis of the KM. Corresponding to the layers of Semantic Web usage [Schwotzer2003], the layers in our approach can be said to cover different abstraction levels of metadata. Therefore, background literature regarding metadata and metadata management has been given a relatively prominent position here. The organisation of this sub-chapter roughly corresponds to the three layers in our approach, as do the three types of metadata we have found necessary in our application scenario.

In its most general form, metadata can be explained as data describing data or information, or in short: data about data. Metadata describes aspects of data that are needed by administrators and system management [Schwotzer2003], as well as enabling users, seekers and owners of information resources to find and manage them [Haynes2004]. An example is the database schema in a structured database. But a more expanded notion of metadata can also be used, where the schema is only a small part of the metadata; it can also describe or be a summary of the information content of the data, and
may also be used to represent properties of individual objects of heterogeneous types and media as well as relationships between these [Kashyap1998].

There are many definitions of metadata; a good overview can be found in [Haynes2004], from which the definition below is taken. This definition corresponds to some extent with the view of metadata taken in this thesis.

“Metadata is data that describes the content, format or attributes of a data record or information resource. It can be used to describe highly structured resources or unstructured information such as text documents. Metadata can be applied to description of: electronic resources; digital data (including digital images); and to printed documents such as books, journals and reports. Metadata can be embedded within the information resource (as is often the case with web resources) or it can be held separately in a database.” [Haynes2004]

Among the advantages of using metadata are that it enhances retrieval performance, provides a way to manage electronic digital objects, and can help determine the authenticity of data. Metadata may also provide valuable support in solving interoperability problems, through exchanging metadata about the nature of the data to be transferred and how it should be handled. There are two contexts for metadata and interoperability: metadata as a tool to facilitate exchange of information between interoperating systems, and interoperability of metadata schemas themselves [Haynes2004]. Kashyap and Sheth [Kashyap1998] state that domain specific metadata can be used to handle information overload, as well as being the key to solving semantic heterogeneity problems.

For metadata to be useful, efficient metadata management is important, and different aspects of metadata management are best handled at different stages. Haynes [Haynes2004] gives an overview of what needs to be handled at different stages of metadata management: First, metadata requirements analysis has to be conducted to clarify what the purpose of the metadata is, e.g., retrieval or resource description. Then metadata schemas have to be selected and developed before handling encoding and maintenance of controlled vocabularies. Further, content rules have to be handled, as well as issues of interoperability and quality management. Finally, search aids and user education have to be administered.

3.3.1 Different Classifications of Metadata

There are many aspects that need to be described in our application scenario to support information sharing. First, we need to describe the information items and resources that are to be shared in the network. Second, as there are several organisations participating, and to improve search and integration of information, we need to be able to describe the semantic aspects of information items, so that the information can be used in its correct meaning (its correct semantic context). Third, to support filtering to avoid information overload, and as the scenario involves users (from different organisations) having different roles in the rescue operation and carrying different types of devices, we need to describe the characteristics and current context of both users and devices.

We have chosen to call all these kinds of descriptions ‘metadata’; thus, in our application scenario, we have found that there are three kinds of metadata that need to be handled. The first category is information structure and content description metadata; this kind addresses information item contents, structure and location. ‘Information item’ here means anything (that carries information) that is registered to be shared in the network. For example, this implies that vocabularies and ontologies can also be registered with a metadata description for sharing in the network, as can profile and context descriptions.
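The idea of registering everything shareable with a metadata description can be sketched as follows. The field names and the `MetadataDirectory` class are our own illustrative assumptions, not the thesis's actual data model.

```python
from dataclasses import dataclass

@dataclass
class ItemDescription:
    """Structure and content description metadata for one information item."""
    item_id: str
    kind: str      # e.g. "document", "ontology", "profile"
    location: str  # node where the item itself is stored
    keywords: tuple

class MetadataDirectory:
    """Dedicated directory of metadata descriptions shared among nodes."""

    def __init__(self):
        self._entries = {}

    def register(self, desc: ItemDescription):
        self._entries[desc.item_id] = desc

    def lookup(self, keyword):
        """Find out *what* is available and *where*, from metadata alone."""
        return [d for d in self._entries.values() if keyword in d.keywords]

directory = MetadataDirectory()
directory.register(ItemDescription("map-3", "document", "node-7", ("map", "site")))
directory.register(ItemDescription("rescue-onto", "ontology", "node-2", ("rescue",)))
```

Note that an ontology is registered the same way as a document: anything carrying information becomes an information item with a description in the directory.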

Page 55: folk.uio.nofolk.uio.no/noruns/thesisFinal.pdf · Abstract Characterised by being highly dynamic, heterogeneous and hectic, rescue and emergency operations are cooperative efforts

39

The second kind, semantic metadata, covers concepts and relations in a domain, providing a semantic context for the information item (in the wider sense described above). We include domain ontologies here, as these can define/describe a semantic context for some information item. This is relevant in our scenario as we have several different domains represented through the participation of different organisations.

The third class of metadata is profile and context metadata. This class includes profiles for users and devices, and context-related information, typically covering location, time, and situation (e.g., the current rescue operation or task). A user can have different roles in different contexts, and use different devices. A device is of a particular type, and has a set of resources, capabilities, an owner (e.g., an organisation), etc. In addition, there may be profiles describing the information interests of an entity, e.g., of a particular role in a rescue operation. This kind of metadata is used for filtering and ranking to avoid information overload. In the following we describe three different classifications of metadata, and present how our metadata categories correspond to these metadata classes.
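A sketch of how profile and context metadata could drive filtering and ranking is given below. The profile fields, the size limit, and the scoring are illustrative assumptions only.

```python
# Filtering and ranking driven by profile and context metadata (sketch).
# A user profile lists information interests; a device profile limits
# what the device can usefully receive; context ties items to the
# current operation.
user_profile = {"role": "medic", "interests": {"triage", "casualties"}}
device_profile = {"type": "pda", "max_item_kb": 256}
context = {"operation": "op-42"}

def rank_items(items):
    """Drop items that exceed the device limit or belong to another
    operation, then rank the rest by overlap with the user's interests."""
    usable = [
        i for i in items
        if i["size_kb"] <= device_profile["max_item_kb"]
        and i["operation"] == context["operation"]
    ]
    return sorted(usable,
                  key=lambda i: len(set(i["topics"]) & user_profile["interests"]),
                  reverse=True)

items = [
    {"id": "a", "topics": ["triage"], "size_kb": 40, "operation": "op-42"},
    {"id": "b", "topics": ["video"], "size_kb": 900, "operation": "op-42"},
    {"id": "c", "topics": ["logistics"], "size_kb": 10, "operation": "op-42"},
]
```

Here the oversized item is filtered out by the device profile, and the remaining items are ordered by the medic's interest profile.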

Haynes [Haynes2004] gives an overview of different classes of metadata. The metadata types here are: administrative metadata, for managing and administering information resources, e.g., location; descriptive metadata, for describing or identifying information resources, e.g., specialised indices or user annotations; preservation metadata, related to the preservation of information resources, e.g., documentation regarding the physical condition and preservation management of resources; technical metadata, related to system functionality and metadata behaviour, e.g., compression ratios, system response times, authentication and security data; and use metadata, related to the level and type of information resource usage, e.g., multi-versioning information or use and user tracking.

Kashyap and Sheth [Kashyap1998] categorise metadata into content independent metadata and content dependent metadata. Content independent metadata captures information that is not dependent on the content of the associated document, and is useful for retrieving information from physical location and for checking if the information is up-to-date, e.g., time-last-update, location. Content dependent metadata does depend on the content of associated documents, e.g., size and max-colour for describing an image. This category is further categorised into direct content-based metadata, which is directly based on the contents of the document, e.g., full text indices using document vectors of terms, and content-descriptive metadata, which describes the content without using the document contents directly, for instance in annotations describing the contents of an image. Content-descriptive metadata is further differentiated according to whether the metadata representation is dependent on the application or subject domain of the information. Domain independent metadata has no direct link to the application or subject domain, e.g., XML document type definitions. Domain specific metadata can represent information that is more meaningful in relation to a specific application or domain, e.g., rescue operation type. Here, the use of domain specific terms makes issues of vocabulary use important; domain specific ontologies can be used as vocabulary for the metadata. Kashyap and Sheth propose domain specific metadata as the most appropriate for dealing with issues related to semantic heterogeneity, which we describe in Section 3.4, and also for handling information overload.

The different classes of metadata can represent different types of information. Content independent metadata captures content independent information, which helps encapsulate information into units of interest that can be represented as objects in a data model. Representational information is captured by content dependent metadata, which
together with domain independent metadata (the structural organisation of the data) enables interoperability through some approaches to navigation and browsing. Information content is captured by content dependent metadata, particularly domain specific metadata, which captures the meaning of information in relation to a specific domain. Ontologies are here viewed as a special case of domain specific metadata that can be used as a vocabulary for constructing further domain specific metadata for information content characterisation.

Kashyap and Sheth handle information overload by capturing information content using domain specific metadata in the form of terms taken from domain-specific ontologies to construct metadata. This metadata is used to construct both data context and query context.
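This idea can be sketched by characterising both stored data and queries with terms from a toy domain ontology, and matching them on shared terms. This is a strong simplification of Kashyap and Sheth’s approach; the ontology and all names are invented.

```python
# A toy domain-specific ontology: controlled terms grouped per concept.
ONTOLOGY = {
    "rescue_operation": {"avalanche", "earthquake", "flood"},
    "resource": {"helicopter", "ambulance"},
}
ALL_TERMS = set().union(*ONTOLOGY.values())

def context_of(text):
    """Data/query context: the ontology terms occurring in the text."""
    return {w for w in text.lower().split() if w in ALL_TERMS}

def matches(data_text, query_text):
    """A data item matches a query if their contexts share a term."""
    return bool(context_of(data_text) & context_of(query_text))
```

Because both sides are reduced to controlled ontology terms, irrelevant vocabulary differences are ignored, which is one way such metadata limits information overload.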

In their approach to metadata handling in a global computing environment, Pfoser et al. [Pfoser2002] give a categorisation of data in which they differentiate between content data and profile data, both of which are captured by what is called essential metadata. Content data is descriptive data and information, the actual data registered by the user on a device, and can be spatially and/or temporally referenced, i.e., where and when the data was seen or recorded. Content data can be handled using XML [XML] for structure, RDF [RDF] for expressing meaning, and RDF Schema [RDFS] for simple ontologies. Profile data includes user and device profiles, and movement data. The user profile describes the user and their preferences, and represents the choices and needs of each individual user. A device profile contains device-related information, e.g., the type of device, information availability, group memberships, etc. Movement data are coordinates of current and historical locations. Essential metadata consists of excerpts of profile data and abstractions of content data. It is based on the semantics of the content data and on the device and user profiles. As it is at a higher abstraction level than content and profile data, it is less detailed. Pfoser et al. make the assumption that the user has provided semantic mark-up of the content data. The essential metadata has to be abstracted automatically; alternatively, the user can assist by providing keywords or selecting predefined categories. But as data should be “registered” by its environment, it is an objective to keep user involvement to a minimum.
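The abstraction step could be sketched as follows: essential metadata as an automatically produced excerpt of the profiles plus an abstraction of the (user-marked-up) content. All field names here are illustrative assumptions, not Pfoser et al.'s actual schema.

```python
def abstract_essential_metadata(content, user_profile, device_profile):
    """Produce essential metadata: excerpts of profile data plus an
    abstraction of the content data (sketch).

    Assumes, as Pfoser et al. do, that the content carries semantic
    mark-up -- here simply a list of user-provided keywords.
    """
    return {
        # Excerpt of profile data: only what other devices need to know.
        "device_type": device_profile["type"],
        "groups": device_profile["groups"],
        "user_interests": user_profile["interests"],
        # Abstraction of content data: keywords, where and when recorded.
        "keywords": content["keywords"],
        "where": content["position"],
        "when": content["timestamp"],
    }

meta = abstract_essential_metadata(
    content={"keywords": ["landslide", "road"], "position": (59.9, 10.7),
             "timestamp": "2008-01-15T10:30", "body": "..."},
    user_profile={"interests": ["geology"], "name": "n.n."},
    device_profile={"type": "pda", "groups": ["geologists"], "serial": "x"},
)
```

The body of the content and the private profile details are deliberately left out: the essential metadata is less detailed than the data it abstracts.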

Our three categories of metadata correspond to the above classifications in the following way:

• Information structure and content description metadata covers both content dependent (content descriptive and direct content based) and content independent metadata in the classification from Kashyap and Sheth above, and corresponds roughly to the metadata part of content data in the approach from Pfoser et al., and may also partly cover their essential metadata.

• Semantic metadata corresponds to domain-specific metadata from Kashyap and Sheth as it includes ontologies and vocabularies.

• Profile and context metadata corresponds to what Pfoser et al. term profile data above. As Kashyap and Sheth do not discuss metadata describing the context and profiles of users and devices, there is no clear correspondence between any class in their metadata classification and this category (although it might arguably be considered content independent metadata).

3.3.2 Standards for Metadata Management

The development of standards is fundamental for the successful operation of metadata systems [Haynes2004]. Standards addressing important aspects of metadata management have been developed [Stuckenschmidt2005]: Syntactic standards define syntax for encoding metadata so that it can be processed/operated on, e.g., by a web browser. Examples are HTML with meta-tags, and RDF as an XML application (RDF/XML).

Page 57: folk.uio.nofolk.uio.no/noruns/thesisFinal.pdf · Abstract Characterised by being highly dynamic, heterogeneous and hectic, rescue and emergency operations are cooperative efforts

41

Structural standards support metadata development, and have been defined on top of existing syntactic standards. Examples are RDF Schema and Topic Maps. The model structures defined by RDF Schema [RDFS] are similar to frame-based knowledge representation systems. Topic Maps [Pepper2002] define representation elements for describing information content. Content standards give guidelines regarding what data to store for efficient organisation and use of metadata. An example of such a standard is Dublin Core [DublinCore] for describing document object content, which defines a set of metadata elements that can be encoded using various syntactic standards. Another example of a content standard is MPEG-7 [MPEG-7], for describing complex multimedia document content and structure. Three problems are not addressed by structure and content standards: completeness – all information has to be annotated with corresponding metadata to avoid erroneous indexing and missing search results; consistency – the content has to be correctly described, or the metadata is not useful and may even be misleading; and accessibility – the metadata has to be accessible to both users and providers for it to be useful.
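The completeness and consistency problems above can be made concrete with a small sketch: checking metadata records against a subset of the Dublin Core element set. The element subset, the choice of mandatory elements, and the example record are assumptions for illustration, not anything prescribed by the Dublin Core standard.

```python
# A subset of the Dublin Core metadata element names (assumed selection).
DC_ELEMENTS = {"title", "creator", "subject", "description", "date",
               "format", "identifier", "language"}

# Elements assumed mandatory by a hypothetical local annotation policy.
MANDATORY = {"title", "identifier", "subject"}

def check_record(record):
    """Return (unknown_elements, missing_mandatory) for one metadata
    record, flagging completeness problems before indexing."""
    unknown = set(record) - DC_ELEMENTS      # not part of the vocabulary
    missing = MANDATORY - set(record)        # incomplete annotation
    return unknown, missing

rec = {"title": "Site map, sector B", "identifier": "doc-042",
       "creator": "Fire brigade", "severity": "high"}
unknown, missing = check_record(rec)
```

A record failing such a check would be the kind of incompletely or inconsistently described resource that causes missing results in search.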

3.3.3 Ontologies

Ontologies are widely used in several domains, e.g., knowledge management, knowledge representation, artificial intelligence, intelligent information integration, information retrieval, library sciences, and database design and integration, to name a few, and they are a key element in the Semantic Web. In essence, an ontology defines what exists (reality) within some limited area or domain of concern (Universe of Discourse – UoD); in other words, it is a model of some part of the world. Ontologies define a common vocabulary for sharing information in a domain by specifying terms for classes/concepts and the relations between them. The specification can be in informal text or in a formal language, e.g., predicate logic. There is agreement that an ontology must include a vocabulary and corresponding definitions, but there is no consensus on a more detailed characterisation of what it should contain. Ontologies can be said to capture the consensual knowledge of a domain, and they can be reused and shared across applications and by groups of people in different locations.

Several definitions of ‘ontology’ exist, tailored to the domain and its use, and the term is used to denote anything from a simple list of terms (controlled vocabulary) to highly expressive and complex formal ontologies. A widely cited, and the most common, definition used in computer science is from Gruber [Gruber1993]: “An ontology is an explicit specification of a conceptualization”. Fensel et al. [Fensel2002] describe ontologies as consensual, shared, formal descriptions of important concepts in a domain that define classes, properties of classes, and relations, and organise these hierarchically. Another definition, from Kiryakov et al. [Kiryakov2003], says that an ontology is ”… the basic knowledge formally defining the model (schema, conceptualization) relevant for a certain knowledge domain.” For our use in this thesis, all three definitions are applicable.

Types of ontologies can be classified according to several characteristics [Gomez-Perez2004]. In the following, we look at categorisation along two dimensions: (1) based on the richness of the internal structure, and (2) based on the subject of the conceptualisation. McGuinness [McGuinness2003] differentiates between simple and structured ontologies according to their internal structure and expressiveness, and gives a good overview of different definitions and types of ontologies, requirements for what properties must hold for something to be considered an ontology, and different uses of ontologies. There are several classes of ontologies according to internal structure and expressiveness, ranging from a controlled vocabulary (catalogue), a list of terms with their meaning (glossary), or a thesaurus (terms with simple semantics), to highly expressive variations with strict subclass hierarchies, formal instance relationships, value restrictions, and logical constraints at the most expressive end. Examples of such highly expressive ontologies are Cyc [OpenCyc] and Ontolingua [Ontolingua]. McGuinness describes three universal properties that must hold for an ontology to be considered an ontology proper and used as a basis for inference: a finite, controlled vocabulary; unambiguous interpretation of classes and term relationships; and strict hierarchical subclass relationships between classes. By this criterion, catalogues, glossaries, and thesauri are not proper ontologies, as they do not have all three properties (all lack a strict subclass hierarchy). Another distinction is between lightweight and heavyweight ontologies [Gomez-Perez2004]. Lightweight ontologies can contain concepts, concept taxonomies, relationships between concepts, and properties describing concepts. Heavyweight ontologies add axioms and constraints to lightweight ontologies, clarifying the intended meaning of the terms, and can be used as a basis for inference. Thus, heavyweight ontologies correspond to ontologies proper, i.e., they meet at least the three universal properties for ontologies.
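The lightweight/heavyweight distinction can be sketched in a few lines: a lightweight ontology as a bare concept taxonomy, and a heavyweight addition as an axiom that constrains instances. The rescue-domain concept names and the certification axiom are illustrative assumptions, not taken from any actual ontology.

```python
# Lightweight part: a strict subclass taxonomy (child -> parent).
SUBCLASS = {
    "Paramedic": "MedicalPersonnel",
    "MedicalPersonnel": "RescuePersonnel",
    "FireFighter": "RescuePersonnel",
}

def is_subclass_of(child, parent):
    """Walk up the strict subclass hierarchy."""
    while child in SUBCLASS:
        child = SUBCLASS[child]
        if child == parent:
            return True
    return False

# Heavyweight addition: an axiom clarifying intended meaning, usable as
# a basis for inference/checking (assumed constraint for illustration:
# every Paramedic instance must carry a certification property).
def satisfies_axioms(instance):
    if instance["class"] == "Paramedic":
        return "certification" in instance
    return True

ok = satisfies_axioms({"class": "Paramedic", "certification": "EMT"})
bad = satisfies_axioms({"class": "Paramedic"})
```

The taxonomy alone is the lightweight ontology; only with the axiom can instances be checked for conformance.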

Based on the subject of the conceptualisation, i.e., what the ontology models, there are several types of ontologies. Gómez-Pérez et al. [Gomez-Perez2004] give an overview of such ontologies, and we mention a few that are relevant for our work here. Top-level ontologies, or upper-level ontologies, contain general concepts that existing ontologies can link their root terms to. It is desirable that such ontologies are articulate and universal. A problem is that there are several top-level ontologies, and these use different criteria for classifying the most general concepts. Examples of top-level ontologies are Cyc, containing common-sense knowledge, and SUMO. SUMO [SUMO] was created to address the problem of different criteria being used in top-level ontologies. It is a large general-purpose, formal ontology with a structure and a set of general concepts that can be used as a basis for constructing domain ontologies. Domain ontologies provide vocabularies of concepts and relations tailored for a specific domain, and are thus reusable in that domain. Examples are medical ontologies for reuse, sharing and transmission of patient data, and ontologies for chemistry, engineering, etc. The UMLS [UMLS] project provides a large database designed for integrating terms from different sources, e.g., classifications and clinical vocabularies. The concepts found in domain ontologies are usually specialisations of general concepts defined in upper-level ontologies. Application ontologies are dependent on a particular application domain, and often extend and specialise the vocabulary of that domain. SOUPA (Standard Ontology for Ubiquitous and Pervasive Applications) [SOUPA] is a set of ontologies supporting pervasive computing applications.

Ontologies can be modelled using different knowledge modelling techniques and implemented in various languages. According to Gómez-Pérez et al. [Gomez-Perez2004], heavyweight ontologies can be modelled using approaches from the domain of Artificial Intelligence (AI) that combine frames and first-order logic, and are best implemented using AI-based languages, e.g., Ontolingua and LOOM [LOOM], or ontology mark-up languages like RDFS [RDFS], DAML+OIL [DAML+OIL], or OWL [OWLGuide] – a semantic mark-up language for Web resources that is a revision of DAML+OIL. Lightweight ontologies can be built using less expressive techniques from software engineering and databases, e.g., UML, Entity-Relationship models, or SQL scripts.

In relation to knowledge management on the Semantic Web, ontologies offer a way to cope with heterogeneous representations of information resources and support knowledge representation; they also support a common understanding within a domain, and are something that can be communicated between people and application systems. Information technology solutions used in knowledge management are normally built around an organisational structure, integrating knowledge of different levels of formality to facilitate its access, sharing and reuse [Davies2003A].

In our application scenario, our focus in relation to ontologies will be on the use of existing ontologies, as ontology development and maintenance will take place outside of the rescue operation. The following is based on [Kiryakov2003]. Ontology development needs what is termed terminological reasoning, i.e., checking class definitions for consistency, generality/specificity, and hierarchy constraints. For ontology use, which involves already developed ontologies with already defined classes, relations and instance data, instance reasoning is sufficient. The inference services needed for instance reasoning are:

• realisation – finding the most specific classes in an ontology that describe a given instance resource;

• instance checking – checking whether a given instance is of a given class;

• retrieval – finding the set of individuals (instance names) and their components that are described by a given class;

• model checking – checking whether a set of instances is a correct model of a given ontology, which is useful for compatibility checks between different versions of data and ontologies; and

• minimal sub-ontology extraction – finding the minimal sub-ontology that is a correct model for a set of instance statements, which is useful for determining the scope of an ontology exchange.

In addition to finding classes and instances, these services cover support for complex querying for answers to questions involving instance data, e.g., retrieving instance pairs that have a given relation in common, and consistency checking of instance data with reference to the ontology [Kiryakov2003].
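Two of these instance-reasoning services, instance checking and retrieval, can be sketched over a toy class hierarchy. The class names and instance data are illustrative assumptions; a real reasoner would of course work over a full DL knowledge base rather than dictionaries.

```python
# Toy subclass hierarchy (child -> parent) and instance assertions.
SUBCLASS = {"Paramedic": "MedicalPersonnel",
            "MedicalPersonnel": "RescuePersonnel"}
INSTANCES = {"anna": "Paramedic", "ben": "RescuePersonnel"}

def superclasses(cls):
    """All (indirect) superclasses of a class, walking up the hierarchy."""
    seen = []
    while cls in SUBCLASS:
        cls = SUBCLASS[cls]
        seen.append(cls)
    return seen

def instance_check(name, cls):
    """Instance checking: is the named instance of the given class,
    directly or via the subclass hierarchy?"""
    direct = INSTANCES[name]
    return cls == direct or cls in superclasses(direct)

def retrieval(cls):
    """Retrieval: find all instance names described by the given class."""
    return sorted(n for n in INSTANCES if instance_check(n, cls))
```

Realisation would work in the opposite direction from retrieval: given an instance, find the most specific classes describing it.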

Due to the presence of possibly resource-weak devices in our application scenario, some resource-demanding reasoning services must be offered at more powerful devices. In [Kiryakov2003], two approaches are described for implementing an architecture where information is sent to an external reasoner for processing when more expressive reasoning is needed than a device can provide (i.e., beyond storage and querying of ontologies and instances). The first is to send only the relevant parts of the ontology and instance data to the external reasoner. This minimises the exchange overhead, but raises the question of how to determine which parts are relevant; finding such a fragmentation into non-interacting chunks may itself require a lot of reasoning. The second approach is for the external reasoner to keep its own copy of the ontology and/or instance data, but this may lead to a lot of duplicates.

3.3.4 Profiles and Context

An important issue in relation to KM and sharing information in our application scenario, particularly with regard to saving resources in a resource-weak environment, is the handling of information overload. Information filtering, i.e., filtering out irrelevant information, and personalisation, i.e., specifying preferences and capabilities of the user (or device or application), are ways to handle information overload, as is knowledge about the current situation and environment of the user of a device. Information items can exist in different kinds of media and formats, and this can be matched to device capabilities and user preferences. The use of contexts and profiles is useful for achieving this. As will be seen in this section, the term context is used in many different ways in the literature. Profiles viewed as contexts give information on the “what” and “who” of an entity or person, and are fairly static. Spatio-temporal and situation context information is dynamic and concerns the “where”, “when” and “why” of an entity. Such contexts may have a short lifespan and may not be subject to any persistency actions. Another frequent use of the term is to denote the domain knowledge and background situation in which an information object gets its semantic meaning. Information may be interpreted differently in different contexts, i.e., it may have different meanings, thus the semantic context of information is also important. In our approach, semantic context is handled by semantic metadata and ontologies, covered elsewhere in this chapter. In the following, we examine different views of what context and profiles are and what they may contain, and explain our terminology.

Brézillon [Brezillion1999] states that there are different types of context with respect to what is being considered (knowledge, reasoning, interaction) and which domain we are in, and that all these contexts are interdependent. Context has a static and a dynamic side: static context is known a priori, modelled before some event or time x; dynamic context is modelled during x, and is thus known a posteriori, i.e., from observation or experience (as/when it happens). Static aspects of context, termed static contextual knowledge, may be coded at design time, and are attached to domain knowledge. This knowledge remains constant during an interaction. Dynamic aspects of context are knowledge that changes throughout an interaction, and thus have to be considered during use, e.g., in problem solving.

This differentiation between static and dynamic aspects of context corresponds to our differentiation between static and dynamic context. We have chosen to separate the two by using different terms: for static context we use the term profile, and for dynamic context we use the term context. We use profiles for describing characteristics of entities, e.g., a user, a device, or the rescue operation. We use context to describe dynamic aspects of the environment, e.g., the location and movement of a user or device, the current role, etc.
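The profile/context split above can be sketched directly in code: the profile as an immutable record fixed a priori, the context as a mutable record updated as the situation evolves. The field names and example values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)            # profiles are fairly static (a priori)
class UserProfile:
    name: str
    organisation: str
    preferred_format: str

@dataclass                          # context changes during the operation
class UserContext:
    location: tuple
    current_role: str

profile = UserProfile("N.N.", "Red Cross", "text")
context = UserContext((59.94, 10.72), "team leader")
context.current_role = "triage"     # dynamic: updated as roles change

# The frozen dataclass enforces the static nature of the profile:
try:
    profile.name = "X"
    profile_is_static = False
except Exception:                   # dataclasses.FrozenInstanceError
    profile_is_static = True
```

Keeping the two apart also suggests different persistence policies: profiles are worth storing, while short-lived context may never be persisted.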

Schmidt [Schmidt1998] and Dey [Dey2000] discuss what context and context-awareness are in relation to mobile computing. Here the focus is very much on the environment of entities (e.g., users or devices), and not on the context of information items or semantic context. Dey defines context in [Dey2000] as follows: “Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves”, and also gives an overview of other definitions of context.

Pfoser et al. [Pfoser2002] do not specifically use the term context, but what they term profile data, which includes user and device profiles and movement data, contains information that is typically regarded as context: user location, time, current device position, movement history, etc. Thus, their profile data can be said to contain both static and dynamic context.

Schwotzer [Schwotzer2006A] distinguishes between different types of context, and focuses on the relationship between explicit knowledge (documents, models, pictures) and its context. Two types of context are described: topical context and location context. Topical context is described as a collection of terms and their relations within a dedicated domain of discourse, e.g., a research area. This type of context is used for organising documents from a topical point of view, and can be said to state the semantics of documents. Location context is used for attaching documents to locations, e.g., longitude/latitude. The focus here is on the context of information items (documents, resources), and on the semantic context a piece of information gets through the subject topics it is associated with, rather than on the context of the knowledge carrier, e.g., the device that stores the information item, although the location context of an information item would likely be the same as the location of its knowledge carrier. Using ontologies is a powerful approach to describing contexts in the form of concepts and relations between these (conceptualisation).

In relation to the CoBrA context broker architecture, Chen et al. [Chen2003] describe context as “... an understanding of a location, its environmental attributes (e.g., noise level, light intensity, temperature and motion) and the people, devices, objects and software agents it contains.” Thus, their notion of context incorporates both static context information about entities (users, devices) and dynamic aspects like motion and location. Chen et al. take an ontology-based approach to modelling and sharing context knowledge.

Perich et al. [Perich2002] use the term profile, and state that a profile used in pervasive computing environments should reflect the changing context of the user, and thus should be described in terms of beliefs, desires, and intentions. Rule-based profiles are used to determine both current and future actions of entities. In their model, ‘beliefs’ represent factual information, e.g., the user’s schedule and preferences for, e.g., a type of music. ‘Desires’ represent something the user would like to accomplish, e.g., to listen to a preferred type of music. ‘Beliefs’ and ‘desires’ can be assigned values or functions indicating utility and reliability, to support comparisons with other profile information. ‘Intentions’ represent a set of intended tasks that can be deduced from ‘desires’ or explicitly expressed by the user – for instance, downloading a specific piece of music. Intentions are modulated by beliefs as well as by contextual parameters, e.g., location, time, battery power, and storage space. The profiles are encoded in ontologies.
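The rule-based deduction of intentions from desires, beliefs, and contextual parameters can be sketched as below. The specific rule, thresholds, and the music-download scenario are assumptions for illustration; Perich et al. encode their profiles in ontologies, not Python dictionaries.

```python
# Beliefs: factual information; desires: goals the user wants to achieve;
# context: dynamic parameters such as battery power and storage space.
beliefs = {"bandwidth_kbps": 40, "song_size_kb": 3000}
desires = [{"goal": "listen", "genre": "jazz", "utility": 0.8}]
context = {"battery_pct": 20, "storage_free_kb": 5000}

def deduce_intentions(beliefs, desires, context):
    """Deduce intended tasks from desires, modulated by beliefs and
    contextual parameters (assumed rule: download only when battery
    and free storage permit it)."""
    intentions = []
    for d in desires:
        if (context["battery_pct"] > 15
                and context["storage_free_kb"] > beliefs["song_size_kb"]):
            intentions.append({"task": "download", "genre": d["genre"]})
    return intentions

plans = deduce_intentions(beliefs, desires, context)
```

Lowering the battery level below the assumed threshold would suppress the intention, showing how context modulates the deduction.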

In their conceptual framework for context-aware systems, Coutaz et al. [Coutaz2005] see context as an information space consisting of a set of contexts, where each context is defined by a specific set of situations, roles, entities and relations between entities. The information space can be modelled as a directed state graph where the nodes represent contexts, and the edges represent the conditions for a change of context. Entities may be literal values as well as real-world and information objects. The roles of an entity can, for example, be functions that the entity can satisfy. An ontological foundation, together with an architectural foundation, is seen as a key aspect of this model.
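The directed state graph view can be sketched as follows: nodes are contexts, and each edge carries a condition under which the context changes. The context names and conditions are illustrative assumptions drawn loosely from the rescue scenario, not from Coutaz et al.

```python
# Edges of the context graph: (source, target) -> condition for change.
EDGES = {
    ("en_route", "on_site"): lambda s: s["distance_m"] < 50,
    ("on_site", "triage"):   lambda s: s["casualties"] > 0,
}

def next_context(current, situation):
    """Follow the first outgoing edge whose condition holds for the
    observed situation; otherwise stay in the current context."""
    for (src, dst), cond in EDGES.items():
        if src == current and cond(situation):
            return dst
    return current

ctx = next_context("en_route", {"distance_m": 10, "casualties": 0})
ctx2 = next_context(ctx, {"distance_m": 10, "casualties": 3})
```

A context-aware application would re-evaluate the edge conditions whenever sensed situation data changes.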

3.3.5 Modelling Languages and Technologies

As stated in Section 3.3.2, standards are important for metadata management, and as we will see in Section 3.4, the efficient use and management of metadata for information and knowledge integration and sharing in heterogeneous environments depends on standards. In this section we describe two important standards for modelling metadata: the Resource Description Framework (RDF) and Topic Maps. Ontologies can play an important role in information integration and sharing, e.g., as a common vocabulary and as a possible solution to the semantic heterogeneity problem. The Web Ontology Language (OWL) builds on RDF and RDF Schema, and we present OWL in conjunction with RDF.

RDF and OWL

RDF [RDF] is a data modelling language used for information representation and knowledge exchange on the WWW, particularly for metadata about resources that can be identified on the Web. Providing a formally based model for metadata, its intended use is in situations where metadata has to be processed and exchanged by applications and machines without loss of meaning, thus supporting interoperability between applications – a very relevant feature in relation to our application scenario (and middleware). OWL [OWLGuide] is a language for defining ontologies, and is used for publishing and sharing ontologies, supporting advanced Web search, software agents, and knowledge management.

To allow machine-processable statements, Uniform Resource Identifiers (URIs) are used for identifying the elements of an RDF statement, and XML is used for representation and exchange of statements. RDF statements are similar to other formats for recording information, e.g., rows in a simple relational database, or simple assertions in formal logic. Information in these formats can be treated as if it were RDF statements, which allows RDF to be used for integrating data from different sources. Resources are described in terms of simple properties and property values, in the form of a triple consisting of subject, predicate, and object. The subject represents the ‘thing’ the statement describes. The predicate represents the property of the subject described in the statement. The object represents the value of the property according to the statement. It is also possible to state the types of the things in a statement. The element identifiers, URIs, can refer to anything we need to represent in a statement: network-accessible resources (e.g., an information item or a service), resources not directly accessible via a network (e.g., an organisation, a person), and abstract concepts (e.g., ‘current user’, ‘rescue operation role’). A common way to represent RDF statements is as a graph where the nodes represent subjects and objects, and the arcs represent properties. The syntax most commonly used with RDF is RDF/XML, an XML-based syntax. An introduction to RDF is given in [RDFprimer].
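The triple model can be sketched without any RDF library: statements as (subject, predicate, object) tuples, with a simple pattern query where `None` acts as a wildcard. The namespace URI and the example statements are illustrative assumptions.

```python
EX = "http://example.org/rescue/"   # assumed example namespace

# A small set of RDF-style statements: (subject, predicate, object).
triples = {
    (EX + "doc42", EX + "about", EX + "sectorB"),
    (EX + "doc42", EX + "creator", "Fire brigade"),
    (EX + "doc43", EX + "about", EX + "sectorB"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None matches anything.
    This mirrors the basic triple-pattern query of RDF stores."""
    return {(s2, p2, o2) for (s2, p2, o2) in triples
            if s in (None, s2) and p in (None, p2) and o in (None, o2)}

# All documents about sector B:
about_b = match(p=EX + "about", o=EX + "sectorB")
```

Because statements from different sources share this uniform shape, merging two such sets is just a set union, which is the integration property the text describes.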

RDF can represent semantics/meaning in metadata and the types of things in statements, and is based on a formal model theory, but it makes no data modelling commitments and provides no mechanisms for declaring the property names to be used. To define the semantics, i.e., how to interpret what is expressed in the statements, e.g., in a particular domain or vocabulary, we need RDF Schema (RDFS – the RDF vocabulary description language) [RDFS], which provides a basic ontology modelling formalism for defining the types of things and the relations between properties and resources, i.e., modelling primitives like classes, sub-classes, sub-properties, and domain and range restrictions. These are used for describing the classes, properties and other resources that we can use to describe the information resources [Fensel2003]. Thus, it provides a basic type system for RDF models, allowing developers to define a specific vocabulary for RDF data. The class and property system of RDFS is similar to the type systems of object-oriented programming languages. Although RDFS provides basic ontological modelling, it is mainly useful for creating class hierarchies, and does not meet all possible requirements for ontologies. More expressive definitions become possible with OWL.
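One simple consequence of RDFS-style domain and range declarations is type entailment: from a statement using a property, the types of its subject and object can be inferred. The sketch below illustrates this with assumed vocabulary names; a real RDFS reasoner would of course apply the full entailment rules, not just this one.

```python
# Assumed vocabulary: rdfs:domain and rdfs:range of the 'treats' property.
DOMAIN = {"treats": "MedicalPersonnel"}   # subjects of 'treats'
RANGE = {"treats": "Patient"}             # objects of 'treats'

def infer_types(statements):
    """Entail rdf:type facts from domain/range declarations:
    (s, p, o) with p's domain D entails (s, type, D), and
    with p's range R entails (o, type, R)."""
    types = set()
    for s, p, o in statements:
        if p in DOMAIN:
            types.add((s, DOMAIN[p]))
        if p in RANGE:
            types.add((o, RANGE[p]))
    return types

entailed = infer_types([("anna", "treats", "patient7")])
```

No type was asserted for `anna` or `patient7`; both are derived purely from the vocabulary definition, which is exactly the kind of basic typing RDFS adds on top of RDF.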

The main concepts of OWL are class/subclass hierarchies (taxonomies), properties and inheritance of properties, and data types stating the legal values of properties. Having a basis in formal logic is important in the OWL community. Based on Description Logic (DL), OWL introduces the possibility of reasoning over semantic contents. In addition, it allows combining ontologies in an application. There are three layers of OWL, defined to reflect compromises between expressiveness and implementability. OWL Full gives maximum expressiveness and syntactic freedom, but no computational guarantees – it may be undecidable. OWL Full places no constraints on constructs, and is a true superset of RDFS. OWL DL restricts OWL Full: classes and individuals are strictly separated, so a class cannot be an individual of another class, and OWL DL does not have any characterisation of data type properties. OWL Lite restricts OWL DL by adding further restrictions, e.g., class construction is only allowed through intersection or property constraints, and only cardinality restrictions of 0 and 1 are allowed. Compared to RDFS, OWL Lite adds the possibility to express (dis)similarities and simple constraints in a class hierarchy. What level of OWL is needed depends on the level of strictness required by the application: if the main need is expressing and interchanging terms, OWL Full may be adequate (application-specific reasoning can still be used), while when a higher level of strictness and general reasoning is required, OWL DL or OWL Lite may be more appropriate [Herman2007].


Reasoning in RDF/OWL can be very resource demanding, which is a highly relevant issue for mobile devices that may have limited resources. The computational complexity of relevant reasoning problems for several OWL 1.1 sub-languages is examined in [OWLtractabl]. As indicated above, reasoning in OWL Full is intractable; reasoning in OWL DL is decidable but hard (NEXPTIME-complete), although the complexity of some reasoning problems remains open; and OWL Lite reasoning is EXPTIME-complete. There are sub-languages with effective reasoning engines, e.g., RDFS, but using RDFS means all OWL features are lost, e.g., the possibility to express constraints and differences, class construction, and reasoning over semantic contents. DL-Lite is a sub-language of OWL DL that includes several features of OWL DL, and its reasoning problems are solvable in polynomial time (PTIME). Its focus is on efficient query answering, and it allows the use of technologies from relational database management systems to achieve this [OWLtractabl]. Pocket KRHyper [Sinner2005] is a Java-based reasoning engine for mobile devices that provides a Description Logics interface. It has been evaluated on mobile devices with satisfying results. It is likely that a reasoner for mobile devices will be available in the near future.

Topic Maps

Topic Maps [Pepper2002] is an ISO standard [ISO13250] whose purpose is to describe knowledge structures and associate them with information sources. As such, it is an enabling technology for knowledge management. Based on the notion of an electronic version of a book index, the basic elements of the Topic Maps model are topics, associations, and occurrences.

The term ‘topic’ refers to an object or node in a topic map representing the subject it refers to. A subject is the real-world thing that the topic stands for, e.g., an entity, person, or concept. A topic is an instance of one or more topic types, i.e., a class–instance relationship. A topic has three kinds of characteristics: names, occurrences, and roles in associations. One topic may have several base names, as well as variants of each base name for use in a specific processing context. The occurrences of a topic are the relevant information resources it is linked to. Occurrences can be of different types, termed occurrence role/occurrence role type in the standard. Topics and occurrences together create a two-layer model. Topic associations enable descriptions of relationships between topics; associations and topics together create a semantic network. Associations in Topic Maps are reciprocal by definition, i.e., if A relates to B, then B by definition relates to A. Associations can be grouped according to their association type.
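The three basic Topic Maps elements can be sketched in a few lines: occurrences link topics to information resources, and associations between topics are reciprocal by definition. The topic names, association type, and resource URL are illustrative assumptions, and the sketch ignores topic types, base names, and roles.

```python
# Occurrences: topic -> information resources it is linked to.
occurrences = {"sectorB": ["http://example.org/maps/sectorB.png"]}

# Associations stored as (topic, association_type, topic) triples.
associations = set()

def associate(a, assoc_type, b):
    """Record an association between two topics (one direction stored)."""
    associations.add((a, assoc_type, b))

def related(a, assoc_type, b):
    """Associations are reciprocal: if A relates to B, B relates to A,
    so both directions are checked against the single stored triple."""
    return ((a, assoc_type, b) in associations
            or (b, assoc_type, a) in associations)

associate("fireTeam1", "operates-in", "sectorB")
```

Topics plus occurrences give the two-layer model; adding associations turns the map into a small semantic network.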

XML Topic Maps (XTM) [XTM] is the standard syntax used with Topic Maps. Among its design goals are that it should be easy to use over the Internet, support a variety of applications, and be compatible with the XML and Topic Maps standards. When developing Topic Map based applications and systems, a Topic Map engine can be used for tasks such as the creation and storage of topic maps, as well as searching for and adding topics, associations and occurrences. A number of Topic Map engines are available; their functionality differs in the support for persistence, queries and schemas, import and export of topic maps in various formats, and communication/exchange with other Topic Map engines. Alternative solutions for persistence are (1) support for database storage; (2) importing and exporting XTM files at start-up and shut-down; and (3) no persistence. Other differences between engines include programming language, run-time environment, performance, resource needs, and adherence to standards. The Topic Map API (TMAPI), a Java interface for Topic Maps, has become a de facto standard supported by several Topic Map engines, e.g., Ontopia Knowledge Suite [OKS], TM4J [TM4J], tinyTIM [tinyTIM], XTM4XMLDB [XTM4], and MTV (Mobile Topic Viewer). The latter is aimed at mobile phones and was developed as part of the Shark project [SHARK]. A listing of different Topic Map engines and other tools can be found at [Woodman].

A major challenge in our application scenario is the use of possibly resource-weak devices, e.g., mobile phones and PDAs. Simple tests [Vigdal2006] of loading small topic maps (as XTM files) on small- to medium-capacity mobile phones, and running simple queries on them, have shown that it is possible to use Topic Maps on resource-weak devices (mobile phones), but it is very slow and there is a clear limit on the file size that can be handled.

3.4 Information Integration and Sharing

In our application scenario, several organisations participate in the rescue operation, and thus the information to be shared originates from different domains and systems that may use different encodings and models to represent this information. It is important that the information can be shared across these various systems. In addition, organisations may be interested in different aspects and views of the same information, and it would be an advantage to be able to combine information from different sources.

Information sharing and integration is useful for combining data from different sources to arrive at new information, or to answer queries that cannot be answered from one source alone, and for sharing available information resources on a particular topic between different systems, e.g., different databases on the web, knowledge held by different organisations, etc. This is difficult to achieve because different systems may use different abstractions to represent information, different syntax and data structures, etc. Understanding therefore has to be created (e.g., through translation or mapping) between these different systems. To achieve information sharing, it is necessary to provide full accessibility to the data, and to ensure that the accessed data can be processed and interpreted by the remote system. The use of ontologies for the explication of implicit and hidden knowledge is a possible approach to overcoming the problem of semantic heterogeneity [Wache2001]. These issues are highly relevant to our application scenario and requirements, and in this section, we look at what challenges have to be met in information sharing and integration, and what solutions exist. Some of the challenges can be met using standards, so we first look at the use of standards, before we describe the interoperability problem and look at how the problem of (semantic) heterogeneity can be solved using ontologies. Finally, we include a description of the Semantic Web initiative, as it shares common problems of heterogeneity in relation to information sharing, and a lot of research into achieving semantic interoperability has been conducted in its context.

The Use of Standards

Standards can provide a common vocabulary in a domain, and are important in all aspects of information sharing and integration, as well as exchange and knowledge representation.
In relation to our application scenario, the different organisations and domains involved may use standards; this is particularly the case in the domain of medicine and health. For the domains of police and fire brigades, it has been difficult to find information regarding standards for information modelling, integration and exchange; we assume they follow recommendations for information modelling and exchange similar to those of the health sector. In health informatics, standards exist for information and knowledge modelling, and specific terminologies and classifications are used that may be defined in an ontology. The International Classification of Diseases (ICD-10) [IDC10] is an example of a classification system used by health personnel in diagnosing (using the correct code for the state of the patient). Examples of terminology and classifications from the medical domain are provided by the Unified Medical Language System (UMLS) [UMLS] project at the National Library of Medicine (USA), which develops and distributes multi-purpose, electronic "Knowledge Sources" and associated lexical tools, useful for knowledge representation and retrieval. Source vocabularies used in UMLS include SNOMED (the Systematized Nomenclature of Medicine, used for medical records), and MeSH (Medical Subject Headings), a controlled vocabulary thesaurus.

Standards are also developed for electronic message exchange, secure message exchange, electronic patient journals, etc. Some of the most relevant organisations working in this area include KITH [KITH] (Norway), which works in cooperation with the health sector and authorities; its work areas include information security, information exchange, codes and terminology, and more. KITH also provides the Norwegian national database for common terminology in the health domain, Volven [Volven]. A framework for electronic message exchange in the Norwegian health sector based on the ebXML [ebXML] standards has been developed [KITH2006]. Starting as an initiative from OASIS [OASIS] and the United Nations/ECE agency CEFACT [UN/CEFACT], ebXML (Electronic Business using eXtensible Markup Language) is a suite of specifications for electronic message exchange; the ebXML standards were approved as ISO technical specifications in 2004 [OASIS]. The objective of CEN/TC251 [CEN/TC251] (Europe) is to develop standards for health informatics; its working groups cover information models; terminology and knowledge representation; security, safety and quality; and technology for interoperability. Health Level Seven (HL7) [HL7] (USA) is one of several ANSI-accredited Standards Developing Organisations operating in the healthcare arena; HL7's domain is clinical and administrative data.

3.4.1 The Interoperability Problem

Central to information sharing and integration is the interoperability problem: to be able to share or integrate information, the different systems and domains involved have to understand each other [Wache2001]. The problem of interoperability is well known in the database community with its heterogeneity of systems and data models. There are three categories of heterogeneity problems: syntactic, structural, and semantic heterogeneity [Stuckenschmidt2005]. Syntactic heterogeneity, which has to do with different data formats, can be handled through standards; examples are ODBC, HTML, XML, and RDF. Approaches for solving structural and semantic heterogeneity have been addressed in the distributed database domain, as well as through standardisation work, e.g., [CEN/TC251] [OASIS] [KITH]. Structural heterogeneity has to do with differences in data structures and schemas, and has been extensively dealt with in the distributed database community. Many issues remain unsolved regarding semantic heterogeneity, which has to do with the meaning of the content of information sources. Some simple semantic disagreements can be solved using structural integration: synonyms (different terms, same meaning), homonyms (same term, different meanings), and different attributes in database tables [Stuckenschmidt2003]. Ontologies can provide a solution to semantic heterogeneity; they can be used to explicitly describe the semantics of information sources, as well as serve as a language for translation between different domains. Approaches for information integration, and the use of ontologies for solving semantic heterogeneity, can be found in [Wache2001] and [Stuckenschmidt2005]. In many ways, an ontology can be viewed as metadata that has been organised in a certain way to model some part of the real world.
Haynes [Haynes2004] states that metadata is the key to interoperability: the access and exchange of metadata between systems help to establish protocols for data exchange and for how the data should be handled; interoperability is thus dependent on metadata exchange.

Related to our application scenario, solutions to all three types of heterogeneity and the conflicts they may cause are necessary for supporting information sharing and integration and particularly for supporting understanding among the various organisations and domains that are represented in rescue operations.

Syntactic heterogeneity is well solved using existing standards [Stuckenschmidt2003]; in the following, we therefore look at structural and semantic heterogeneity, the different types of conflicts they can cause, and how these kinds of heterogeneity can be solved.

Structural Heterogeneity

Structural heterogeneity is caused by different information systems storing their data in different structures. The following description is mainly taken from [Stuckenschmidt2005] and [Wache2001].

Structural conflicts occur because things (entities, objects) and facts in the world can be described in different ways even if the same syntax standard, e.g., RDF, is used. There are three main types of structural conflicts: bilateral conflicts, multilateral conflicts, and meta-level conflicts.

Bilateral conflicts involve one element in each of the structures found in different information sources, and occur when comparing single elements in different information structures. There are three varieties of this conflict: integrity conflicts, where different identifiers are used for the same object, complicating a merge of information about that object from different sources; data type conflicts, where different data types are used for the same values, which makes comparisons difficult as comparison operators usually operate on values of the same type; and naming conflicts, where different names are used for the same real world objects.

Multilateral conflicts involve several elements in each of the structures, and occur when trying to combine elements from different information sources, generally in cases where a single element in one source only partially matches an element in the other. Multilateral attribute correspondences occur when information is linked to a resource using a single property in one source, while the same information is linked to a resource using several properties in another source; an example is a property 'address' used in one source, where another source uses the properties 'street address', 'city', and 'post code'. Multilateral entity correspondences are similar to attribute correspondences, but involve the use of single or multiple resources to model a specific piece of information.
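The address example above, a multilateral attribute correspondence, can be sketched as a pair of mapping rules between the two property layouts; the record fields follow the example in the text, while the split/join logic and the sample values are illustrative assumptions.

```python
# Resolving a multilateral attribute correspondence: one source uses a
# single 'address' property, the other uses 'street address', 'city'
# and 'post code'. The mapping rules below are illustrative.

def split_address(record):
    """Map the single-property form onto the multi-property form."""
    street, city, post_code = [p.strip() for p in record["address"].split(",")]
    return {"street address": street, "city": city, "post code": post_code}

def join_address(record):
    """Map the multi-property form onto the single-property form."""
    return {"address": ", ".join(
        [record["street address"], record["city"], record["post code"]])}

source_a = {"address": "Gaustadalleen 23, Oslo, 0373"}
source_b = {"street address": "Gaustadalleen 23", "city": "Oslo",
            "post code": "0373"}
```

A mediator applying such rules in both directions lets records from either source be compared and merged.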
Missing values occur when some information is missing from one of the sources. For example, when recording customer addresses, a case of missing values would occur if two sources both used the properties 'street address', 'city', and 'post code', but some registrations in one of the sources lacked a value for 'street address' and only registered 'city' and 'post code'. Meta-level conflicts have to do with using different modelling elements to represent the same kind of information; such elements are entities, attributes and data in conceptual data models, and resources, properties and literals/data types in RDF.

Semantic Heterogeneity

Semantic heterogeneity concerns the content of an information item and its intended meaning, and occurs when two contexts use different interpretations of the information. To overcome semantic heterogeneity, the meaning of the information item thus has to be understood across systems.


Wache et al. [Wache2001] list three main causes of semantic heterogeneity (as identified by Cheng Hian Goh): confounding conflicts, when information items seemingly have the same meaning but differ in reality, e.g., due to temporal conflicts; scaling conflicts, due to the use of different reference systems for measuring a value; and naming conflicts, when there are significant differences in naming schemes, e.g., synonyms and homonyms.

In [Stuckenschmidt2005], semantic conflicts are divided into data conflicts, caused by the use of different encodings, and domain conflicts, caused by different conceptualisations of domains. Data conflicts occur when different value systems are used for property values. Different scales occur because numerical values can be based on different scales, e.g., different currencies. Different value ranges can occur when abstractions of concrete values are used; different sources may introduce different abstractions for the same underlying scale (or the scale is not known), for example the use of stars to indicate hotel quality. Surjective mappings occur when the abstractions used are of the same scale, but the two value systems used as abstractions have a different number of values; this creates the possibility of a 'many-to-many' mapping between the values in the two sources. Domain conflicts have to do with finding relations between different classifications/categorisations. Subsumption is when one class contains all objects contained in another class, which may cause some information not to be found. Overlap occurs when two classes partially overlap, which makes it difficult to see which instances the classes share and which they do not, thus complicating information sharing. Inconsistency occurs when classes are disjoint by definition, and is important to pay attention to, as it may cause unwanted results. Aggregation is caused by different levels of abstraction, leading to a situation where data is present in an aggregated form (which may be similar to subsumption in some cases).
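Two of the data conflicts above can be illustrated in a small sketch: values on different scales (currencies) are normalised to a common scale before comparison, and a surjective many-to-one mapping collapses a five-star hotel scale onto a coarser three-value scale. The exchange rates and category names are invented for the example.

```python
# Data conflicts: different scales and surjective mappings.
# Rates and categories are assumptions made for illustration only.

RATES_TO_EUR = {"EUR": 1.0, "NOK": 0.125, "USD": 0.9}   # assumed rates

def to_eur(amount, currency):
    """Normalise a monetary value to a common scale before comparison."""
    return amount * RATES_TO_EUR[currency]

# Surjective mapping: a 5-value star scale collapsed onto 3 values,
# so distinct source values become indistinguishable after mapping.
STARS_TO_CLASS = {1: "budget", 2: "budget", 3: "mid", 4: "luxury", 5: "luxury"}
```

Note that the surjective mapping loses information: a 'budget' hotel in the coarse scale cannot be mapped back to one or two stars unambiguously.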

3.4.2 Solutions for Handling Heterogeneity

In the following, we look at existing solutions for handling both structural and semantic heterogeneity, also termed structural or semantic integration. The following is to a large extent based on [Stuckenschmidt2003], [Stuckenschmidt2005], and [Wache2001].

Structural Integration

Solutions for the conflicts of structural integration are mainly found in the database community, which has developed solutions for the problem of integrating different database schemas. These solutions can be used for the structural integration of information. One such solution is mediator systems that define mapping rules between different information systems.

Integration of heterogeneous database schemas is commonly achieved by the use of a global schema that is connected through views to the different schemas to be integrated. There are two general approaches: global-as-view and local-as-view. In the global-as-view approach, every relation in the global schema is defined as a view over the different schemas to be integrated, making it easy to answer queries against the global schema. A drawback of this approach is that the individual information systems lose independence due to their combined use in queries. The local-as-view approach, conversely, uses views to define more complex information structures in the integrated schema from relations in the schemas to be integrated. A drawback of the local-as-view approach is that answering queries over the integrated view becomes more difficult. Stuckenschmidt discusses the possibilities and limitations of these approaches in relation to weakly structured environments in [Stuckenschmidt2003].
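A minimal sketch of the global-as-view idea, assuming two invented sources: each relation in the global schema is defined as a view (here, simply a function) over the sources, so a query against the global schema is answered by evaluating the view.

```python
# Global-as-view sketch: the global relation 'personnel' is defined as a
# view over two source relations. Sources and fields are invented.

source1 = [{"pid": 1, "name": "Hansen"}]        # personnel register
source2 = [{"pid": 1, "role": "paramedic"}]     # role register

def global_personnel():
    """Global-as-view: join the two sources into one global relation."""
    roles = {r["pid"]: r["role"] for r in source2}
    return [{"pid": p["pid"], "name": p["name"], "role": roles.get(p["pid"])}
            for p in source1]
```

In the local-as-view approach the direction is reversed: each source relation would instead be described as a view over a richer integrated schema, which makes query answering a (harder) rewriting problem.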


Semantic Integration

To achieve semantic interoperability, there needs to be some agreement on the meaning of the information that is exchanged, as semantic conflicts can occur when two systems use different interpretations. Simple semantic conflicts like synonyms and homonyms can be solved with (one-to-one) structural mappings, and thus with mediator systems for structural integration. To solve more complex semantic conflicts, the semantics of the information has to be taken into account. In [Stuckenschmidt2005], general approaches to accessing information semantics for information sharing are discussed. Information semantics can be captured through its structure (conceptual model) using wrappers derived from the conceptual model. To achieve semantic integration, there are two approaches that both assume that domain knowledge is carried in the information structure: structure resemblance, which uses automatic reasoning on a logical model that is a one-to-one mapping of the conceptual structure, e.g., in SIMS; and structure enrichment, which bases its mapping on a logical model of the information structure, enriched with additional concept definitions, e.g., in KRAFT and OBSERVER. An alternative to capturing semantics from the information structure is to extract or derive semantics from text using natural language processing, but this approach still has many limitations.
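The structure resemblance idea can be hinted at with a small sketch: the conceptual structure of a source (here, rows with attributes) is mapped one-to-one onto a logical model of simple facts that a reasoner could operate on. The fact representation below is an illustrative assumption, not the actual model used in SIMS.

```python
# Structure resemblance sketch: a one-to-one mapping from a source's
# conceptual structure (table rows) into a logical model (facts).
# Table and attribute names are invented for the example.

def to_facts(table_name, rows):
    """Each row attribute becomes one (entity, attribute, value) fact."""
    facts = []
    for i, row in enumerate(rows):
        for attr, value in row.items():
            facts.append((f"{table_name}#{i}", attr, value))
    return facts
```

Structure enrichment would additionally attach concept definitions (e.g., class axioms) to such facts before mapping between sources.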

Kashyap and Sheth [Kashyap1998] propose domain specific metadata as the most suitable for dealing with semantic heterogeneity issues. Domain specific metadata can represent information that is meaningful in relation to a specific application or domain, and can be used for creating descriptions capturing the information content of the underlying data. Domain specific ontologies can be used as vocabulary for the metadata, i.e., linking metadata terms to ontologies, making issues of vocabulary use important. Semantic interoperability at the vocabulary level can be handled by terminological relationships. Ontologies are viewed as a special kind of domain specific metadata.

The Role and Use of Ontologies in Information Integration

Ontologies can be used as a solution to semantic heterogeneity, both to explicitly describe the semantics of information sources and as a language for translation. If several ontologies are used, we also need mappings between ontologies. Ontology-based solutions for information sharing and the Semantic Web are described in [Davies2003B] and [Fensel2002]; a survey of different ontology-based solutions for information integration is given in [Wache2001]. In information integration, the main roles of ontologies are to describe the semantics of the information sources and make their contents explicit, and to identify and associate semantically corresponding information concepts. In addition, ontologies can be used as a global query model/schema, or for verifying the correctness of the integration mapping from the global to the local schema. There are several approaches to information integration; Wache et al.
[Wache2001] describe three general approaches for ontologies used in the role of content explication and semantics description: the single ontology approach, having one global ontology with shared semantics that all users have to conform to; the multiple ontology approach, where mapping between (each pair of) ontologies is required; and the hybrid ontology approach, where multiple ontologies are built on top of, or linked to, a shared vocabulary of basic terms, which may function as a bridge or translation between ontologies. The three approaches are illustrated in Figure 3.2.


Figure 3.2 Approaches to ontologies in information integration.

The single ontology approach, having one global ontology used across all systems, is useful for integration problems where all the information sources to be integrated provide nearly the same view on a domain. But if any of the information sources has a different view on the domain, it becomes difficult to find a minimal ontology commitment. The approach is also vulnerable to changes that can affect the conceptualisation of the domain represented by the ontology. SIMS and ONTOLINGUA use the single ontology approach.

As the multiple ontology approach is not dependent on any commitment to a common and minimal ontology, it can simplify modifications of information sources or the adding/removing of information sources. But due to the lack of a common vocabulary, the comparison of different source ontologies becomes very difficult; this can be overcome using additional representational formalisms. Inter-ontology mapping is used to identify semantically corresponding terms in different source ontologies, but such a mapping also has to consider different views on a domain, e.g., different granularity and aggregation of the ontology concepts, which makes inter-ontology mappings difficult to define in practice. OBSERVER uses the multiple ontology approach.

The hybrid ontology approach was developed to overcome the drawbacks of the single and multiple approaches. It is similar to the multiple ontology approach in that each source is described by its own ontology, but to aid compatibility between the source ontologies, they are built upon one global shared vocabulary (which can also be an ontology) containing the basic terms (primitives) of a domain. As the terms are based on these primitives, they are easier to compare than in the multiple ontology approach. Complex terms in local ontologies are built by applying operators to the primitives of the shared vocabulary, and there may be differences in how the primitives are used to describe the local ontologies.
COIN, BUSTER, and MECOTA use variations of the hybrid ontology approach. Advantages of the hybrid approach are that it is relatively easy to add new sources without modifying any mappings or the shared vocabulary, and that it supports the acquisition and evolution of ontologies. A drawback of this approach is that the reuse of existing ontologies is difficult, as they have to be rebuilt from scratch to refer to the shared vocabulary.
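The shared vocabulary at the heart of the hybrid approach can be sketched as follows: two hypothetical local ontologies (for a police and a medical domain) define their terms as combinations of primitives from one shared vocabulary, which makes terms directly comparable across sources. All vocabulary and term names are invented for the example.

```python
# Hybrid ontology approach sketch: local terms are defined as sets of
# primitives from a shared vocabulary, so they can be compared directly.
# Vocabulary and term names are illustrative only.

SHARED_PRIMITIVES = {"person", "vehicle", "injured", "transport"}

# Each local ontology defines its terms over the shared primitives.
police_ontology = {"casualty": {"person", "injured"}}
medic_ontology = {"patient": {"person", "injured"},
                  "ambulance": {"vehicle", "transport"}}

def corresponding(term_a, onto_a, term_b, onto_b):
    """Two local terms correspond if built from the same primitives."""
    return onto_a[term_a] == onto_b[term_b]
```

In the multiple ontology approach there is no such common base, so the correspondence between 'casualty' and 'patient' would instead need an explicitly defined inter-ontology mapping.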

As our application scenario involves sharing information both within and across different organisations, we believe a variant of the hybrid approach will be the most advantageous in relation to the KM. But as these organisations may use existing domain ontologies, solutions are needed that allow such ontologies to be used without having to re-develop them from scratch.

3.4.3 The Semantic Web

Being an extension of the Web, which mainly focuses on the interchange of documents, the Semantic Web aims to support computer and human cooperation by adding semantics to the information available on the Web [Berners-Lee2001]. The focus of the Semantic Web is on finding common formats for the integration and combination of data from diverse sources, and languages for recording how this data relates to objects in the real world. It is described as follows by the W3C [SemWeb]: “The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. It is a collaborative effort led by W3C with participation from a large number of researchers and industrial partners. It is based on the Resource Description Framework (RDF).”

The traditional Web lacks semantics: the meaning of the content is hidden in the text, and web pages are designed for presentation through visual effects, fast browsing and navigation. A simple keyword search often returns a large number of hits (information overload), many of which may be irrelevant, while information that could be relevant may be missed. Because the meaning of the content is hidden in the text, it is not machine processable; the user has to interpret the content of the returned results (web pages), and possibly integrate information from several sources to find an answer to a question. There are three levels of usage of the Semantic Web [Schwotzer2003]:

• Ontology level: contains definitions of the contexts that are placed in the context level. This corresponds to knowledge classes in artificial intelligence (AI).

• Context level: this is the semantic level, containing the semantic network that creates hierarchies and relations used for organising resources found in the information level. This level describes the semantics of the information, and RDF or XTM can be used for description.

• Information level: Constituting the basis, this level contains various information resources that can be reached via a URI.
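The three levels above can be sketched as simple (subject, predicate, object) statements: a class definition at the ontology level, instance relations at the context level, and URI-addressable resources at the information level. The URIs and names are illustrative.

```python
# Three-level sketch: ontology-level class definition, context-level
# instance relations, information-level resources reachable via a URI.
# All identifiers are invented for the example.

ontology_level = [("ex:RescueReport", "rdf:type", "rdfs:Class")]
context_level = [("ex:report-42", "rdf:type", "ex:RescueReport"),
                 ("ex:report-42", "ex:about", "ex:site-A")]
information_level = {"ex:report-42": "http://example.org/reports/42.pdf"}

def resources_of_class(cls):
    """Follow context-level typing down to information-level URIs."""
    instances = [s for s, p, o in context_level
                 if p == "rdf:type" and o == cls]
    return [information_level[i] for i in instances if i in information_level]
```

In an actual Semantic Web setting, the same layering would be expressed in RDF/RDFS or XTM rather than in Python structures.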

The Semantic Web provides a number of technologies, standards and tools to support information integration on different levels, from syntax, structure and identification at the base, via technologies for knowledge representation, query and data interchange at an intermediate level, to consistency analysis, inference, verification and trust at the top level. RDF and OWL, described in Section 3.3.5, are key base technologies together with Uniform Resource Identifiers (URIs) and XML. Shadbolt et al. [Shadbolt2006] give an overview of the Semantic Web and related technologies in their review of the Semantic Web vision, looking at what has been achieved, what remains to be solved, and future challenges.

3.5 Related Work

In this section, we present related work on information and knowledge sharing in heterogeneous and dynamic environments, represented by a set of projects/systems that are relevant to the work in this thesis. A lot of research is going on in the area of information sharing in mobile environments, but we found relatively few systems/approaches that focus on a combination of knowledge management and knowledge sharing for MANETs. We have chosen to present the systems in a separate section because each approach covers more than one of the background areas.

3.5.1 Shark

Shark [Schwotzer2002A] [Schwotzer2006A] [Schulz2003] [Schwotzer2002B] [Schwotzer2006B], whose name is an acronym for Mobile Shared Knowledge, is a distributed system for knowledge organisation, synchronisation and exchange in mobile environments. It is based on the P2P model for knowledge management by Schwotzer [Schwotzer2006B], which was briefly described in Section 3.2.

The approach in Shark builds on the three levels of Semantic Web usage [Schwotzer2003]: an ontology layer containing concept topics, a context layer with instances of the concept topics, and a document layer consisting of the documents containing the explicit domain knowledge. The ontology and context layers together form a knowledge base, which is realised using Topic Maps. The topic instances have links to the documents containing explicit knowledge about the topic. Shark differentiates between topical and location context; both kinds are defined using the context layer. In Shark, (mobile) users form groups, and knowledge is shared both within a group and across group boundaries, i.e., both intra- and inter-group knowledge exchange is possible.

Shark has three main components: the mobile station, the central station, and the local station. The mobile station is software running on mobile devices with a Java VM (virtual machine); it contains a part of the Shark-wide knowledge base. Knowledge is synchronised in an ad-hoc network using the KQML protocol. The Shark central station runs server software and contains the complete Shark knowledge base; it offers tools for knowledge organisation and synchronisation management. Synchronisation adds new knowledge (unconfirmed knowledge) into Shark; after having been evaluated, the new knowledge is either dropped or published “Shark-wide” (confirmed knowledge). The Shark local station is a special kind of mobile station running at a fixed location. This component offers location-based knowledge, which is synchronised and exchanged as on mobile stations.

A prototype implementation of Shark, Shark 1.0, uses Topic Maps [ISO13250] for knowledge organisation, SyncML [SyncML] for knowledge synchronisation, and KQML [KQML] for knowledge exchange. The TM4J Topic Map engine for Java [TM4J] is used for the internal knowledge base. A simple HTML interface is used for organising the topic maps, defining KPs, and managing synchronisation between Shark mobile stations. The Shark mobile station runs on Palm OS and the Nokia 9210. Shark local stations are not part of Shark 1.0. The architecture relies on stationary server nodes for knowledge and synchronisation management, which may be a drawback in a mobile environment.

Knowledge Ports (KP)

Shark is based on the concept of knowledge ports (KPs) [Schwotzer2006B] for knowledge management tasks. KPs contain instances of concepts/topics (knowledge instances), and are thus a feature of the context layer; in relation to knowledge exchange, the KP is the missing construct of the context layer. A KP declares an instance to be open for knowledge exchange; the KP belongs to this instance/topic. Topic-based knowledge ports are used to handle knowledge exchange, which consists of the exchange of both semantics and information. The knowledge ports are defined as topic types and declare topics for knowledge exchange.

There are two types of KPs: the incoming knowledge port (IKP) and the outgoing knowledge port (OKP). An IKP receives documents or relations from other instances, while an OKP delivers references or documents to other instances. A topic can have one KP of each type.

In addition to knowledge exchange between two mobile individuals (in a topical context), KPs can be used as a basis for location based services. In the first case, the KP allows stating an interest in exchanging knowledge of a dedicated context, acting as a semantic filter for the input and output of knowledge; the context can be defined using an ontology and instances. Location based KPs combine topical and location context by adding coordinates to the KPs, e.g., longitude/latitude or GSM/UMTS cell number. Knowledge exchange can then take place when a mobile device is within range of a base station delivering a service corresponding to a certain topic.

Shark peers are autonomous, context sensitive peers, defined as ‘a person that shares knowledge with other persons’ (knowledge exchange), using KPs for output strategies and relevance filters.

Knowledge Exchange

Knowledge exchange is identified as a four-step process: finding a common context, testing for mutual interest in this knowledge exchange, testing for the rights to exchange this knowledge, and the actual knowledge exchange (of documents, topics or references). Topic Maps and the Semantic Web can support steps 1 and 4 of this process, while knowledge ports handle steps 2 and 3.

A simple protocol for knowledge exchange is described in [Schwotzer2002B]. When the topic of an IKP in one knowledge base (KB) is identical to the topic of an OKP in another, the KPs are said to be corresponding KPs. The mobile devices automatically select identical topics and then check for corresponding KPs; corresponding KPs can state a mutual interest in knowledge exchange. The four steps of knowledge exchange can then take place.
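The test for corresponding KPs can be sketched as a simple set intersection over topic names: a topic is exchangeable when one peer's IKP matches the other peer's OKP on that topic. The peer structures and topic names below are illustrative, not Shark's actual data model.

```python
# Sketch of finding corresponding knowledge ports between two peers:
# a topic corresponds when one peer's incoming port (IKP) matches the
# other peer's outgoing port (OKP). Peers and topics are invented.

def corresponding_kps(peer_a, peer_b):
    """Return topics where one peer's IKP matches the other's OKP."""
    a_in, a_out = set(peer_a["ikp"]), set(peer_a["okp"])
    b_in, b_out = set(peer_b["ikp"]), set(peer_b["okp"])
    return (a_in & b_out) | (b_in & a_out)

medic = {"ikp": ["triage"], "okp": ["road-status"]}
police = {"ikp": ["road-status"], "okp": ["triage", "evacuation"]}
```

Only the topics returned by such a test would proceed to the mutual-interest and access-rights steps of the exchange.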

Knowledge synchronisation keeps users in one user group in sync, while knowledge exchange happens between users in different user groups. Technically, the knowledge exchange happens between mobile stations and/or local stations. The knowledge exchange consists of four steps: (1) find a common context, (2) test for mutual interest in knowledge exchange, (3) test the (access) rights for knowledge exchange, and (4) the actual knowledge exchange. Topic Maps support steps 1 and 4 (the exchange as XTM documents); in the case of identical topics, XTM merging rules are used.

Access rights are attached to KPs and tested in step 3 of the knowledge exchange procedure. The KPs aid the mobile stations in deciding whether to deliver or retrieve knowledge in the ad-hoc network. During knowledge exchange, parts of the Shark KB are exchanged as small XTM documents created by the mobile station, a process called unmerging.

3.5.2 DBGlobe

DBGlobe [Pitoura2003] is a service oriented and data-centric approach in which devices form data sharing communities that together make up an ad-hoc database of the data on the devices existing around a specific context, e.g., a location or a user. This approach is relevant to ours in its use and sharing of metadata and profiles.

Profiles and metadata describing each mobile device, including its context and which resources it offers, are stored on fixed network servers. The servers are also used to keep track of the movement of mobile units. Relying on a fixed network is a drawback both in relation to emergency and rescue scenarios and MANETs. Taking a data-centric view, the focus in [Pfoser2002] is on metadata handling in a global computing environment where data is distributed over a large number of mobile clients. In this approach, each device communicates metadata about the data it contains to its environment, where it can be discovered by other devices posing queries. Distributed indices based on Bloom filters are used for service location and query routing, and an XML-based query language is used to support direct querying.
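To make the Bloom-filter index mentioned above concrete, the following is a minimal sketch of the data structure itself (not the distributed index of [Pfoser2002]); the size, hash count, and hashing scheme are arbitrary choices for illustration.

```python
# Minimal Bloom filter: a compact bit array that answers membership queries
# with possible false positives but no false negatives, making it suitable
# for advertising "I may hold data about X" during query routing.
import hashlib

class BloomFilter:
    def __init__(self, size=256, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive `hashes` bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        # All positions set -> "possibly present"; any unset -> definitely absent.
        return all(self.bits[p] for p in self._positions(item))

index = BloomFilter()
index.add("patient-records")
print("patient-records" in index)  # True
```

A node can thus forward a query only to neighbours whose advertised filter reports a possible match, at the cost of occasionally following a false positive.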

The focus in DBGlobe is on data management techniques for global computing. It is service oriented in that all data is accessed through services. There are two main kinds of components: infrastructure (fixed network servers) and application middleware. The fixed network servers are called Cell Administrator Servers (CAS). These servers capture and store contextual information, provide basic service publishing and semantic discovery features, and offer ubiquitous connectivity to devices. The small-scale mobile peer devices connected by DBGlobe are called Primary Mobile Objects (PMOs). The CAS server keeps track of the PMOs entering and leaving a cell, and stores metadata describing each PMO, keeping its context and what resources it offers. DBGlobe provides the infrastructure (DataStores, DataHandlers, proxies) allowing PMOs to function as an ad-hoc database. DataStores store priorities, constraints, rules, categories, and descriptions, all relating metadata and services to the mobile entities. DataHandlers (DH) execute rules (on events), manage data flows between devices, and execute queries. The proxies handle disconnections and the network interface, as well as the PMO connection to the DH.

Groups of PMOs can form an ad-hoc database to combine information related to a query, but the group is not static during the lifetime of the query. The query may discover new ‘relevant’ PMOs or drop ‘irrelevant’ ones, which can increase query execution time and shifts the notion of a “query result” towards continuous query evaluation and partial results. Thus, the main defining criterion for this database is the query. In relation to mobile entities, however, other criteria may be needed that take into account the context of the PMOs carrying the information, e.g., location and temporal aspects.

Combining data from different devices requires that metadata about schemata be communicated a priori. To be able to come up with a metadata proposal, it is necessary to know what data exists. Mobility affects existing data and produces new data. Three types of data are stored on PMOs: content data, profile data, and essential metadata. Content data consists of descriptive data and information, and may be spatially or temporally referenced. Profile data includes user profiles and device profiles as well as movement data. Essential metadata contains an abstract view of content and profile data.
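The three kinds of data stored on a PMO could be modelled as below. This is purely an illustrative data model following the description above; all type and field names are assumptions, not taken from DBGlobe.

```python
# Hypothetical data model for the three data types held by a PMO:
# content data, profile data, and essential metadata.
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class ContentData:
    payload: str
    location: Optional[Tuple[float, float]] = None  # optional spatial reference
    timestamp: Optional[float] = None               # optional temporal reference

@dataclass
class ProfileData:
    user_profile: dict = field(default_factory=dict)
    device_profile: dict = field(default_factory=dict)
    movement: list = field(default_factory=list)    # recorded movement data

@dataclass
class EssentialMetadata:
    content_summary: str = ""   # abstract view of the content data
    profile_summary: str = ""   # abstract view of the profile data

pmo = {
    "content": [ContentData("site photo", location=(59.91, 10.75))],
    "profile": ProfileData(user_profile={"role": "paramedic"}),
    "metadata": EssentialMetadata(content_summary="1 photo"),
}
```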

3.5.3 DIANE

In [König-Ries2002] an approach to information sharing using services as building blocks is provided. The research on integrated use of available resources in an ad-hoc network is carried out within the DIANE project, another service-oriented approach. The metadata standards used are Learning Object Metadata (LOM) [LOM] and DAML-Services [DAML]. Their issues and approach for service description and discovery are relevant to our work with respect to describing information items and resources; therefore, we restrict the description to their approach to service description.

The issues identified as necessary to address are: service description, representation of user context, service discovery, efficient usage of services, integration of services, and motivation to offer services. They propose solutions to two of these: service description and service discovery. Their solution to service description uses a classification of services differentiating between consuming and producing documents. There is a transformation service class for services both consuming and producing documents, e.g., when converting a document to a different format. Documents are subdivided according to document type. A document contains information described by keywords in an ontological network. Metadata for a document-producing service would be: semantic content (information part), technical data (document part), and service characteristics (service part). Services are characterised by what they do, how they do it, and how they can be accessed. Service delivery and combination may depend on user context. The user context is characterised by the user’s current location, his level of knowledge, the device used, and the current state of the device. The DIANE project uses Composite Capabilities/Preference Profiles (CC/PP) for profile description. CC/PP [CC/PP] describes device capabilities and user preferences, often referred to as the delivery context of a device, which can be used to guide the adaptation of content presented to that device.

In [Baumung2006], Baumung et al. present a peer-to-peer service-oriented middleware for semantic services management in MANETs, developed as part of the DIANE project. Their middleware consists of three sub-layers: Semantic Processing, handling semantic descriptions and matching algorithms; a Pool Manager for service and request management; and P2P-based multicast for request dissemination and group management.

3.5.4 MoGATU

The approach in MoGATU [Perich2002] is relevant to profile and context management through its use of profiles and ontologies for filtering and prioritisation of data. Ontologies are used for describing user profiles as well as for queries and answers. MoGATU is a framework for profile-based data management. In this approach, information managers (InforMa) functioning as local metadata repositories are used for metadata handling, enabling semantic-based caching. All devices cache metadata, e.g., who has what data, and possibly also data obtained from neighbours. The approach uses profiles to determine what data to obtain and its relative worth. Their approach to profile modelling was described in Section 3.3.4.

The information managers are present on all devices and maintain a local metadata repository that includes schema definitions (ontologies) for locally available information providers, as well as what is termed facts, e.g., queries and answers for local and non-local information providers. An information provider registers with a local or remote information manager. One common language (DAML+OIL) is used for metadata representation. This solution does not offer a global view of the knowledge available in the network. MoGATU uses a point-to-point pull model and semantic-based data caching.
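The local repository and fact-caching behaviour described above can be sketched roughly as follows. The class and method names are invented for illustration and do not reflect the MoGATU code; the sketch only shows the idea of registering providers and reusing cached query/answer pairs.

```python
# Rough sketch of an information manager that registers providers and
# caches "facts" (query/answer pairs) so repeated queries are served
# locally instead of being re-sent to neighbouring providers.
class InforMa:
    def __init__(self):
        self.providers = {}   # provider id -> schema/ontology reference
        self.facts = {}       # query -> cached answer

    def register(self, provider_id, schema):
        self.providers[provider_id] = schema

    def answer(self, query, resolver):
        # Semantic-based caching: only call out via `resolver` on a miss.
        if query not in self.facts:
            self.facts[query] = resolver(query)
        return self.facts[query]

mgr = InforMa()
mgr.register("medic-42", "medical-ontology-v1")

calls = []
def resolver(q):
    calls.append(q)          # record each remote lookup
    return f"answer to {q}"

print(mgr.answer("casualties?", resolver))  # answer to casualties?
print(mgr.answer("casualties?", resolver))  # cached; resolver called only once
```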

3.5.5 AmbientDB

AmbientDB [Fontijn2004] [Boncz2003] constitutes a full-fledged distributed database system for MANETs, providing a global database abstraction over a MANET by adding high-level data management functionality to a distributed middleware layer. The approach is non-centralised and ad-hoc/dynamic, using structured queries, and employs Distributed Hash Tables for indexing. No ontology-based solutions or methods from knowledge management are used. This approach is mainly focused on data management in MANETs; for the work of this thesis, it is thus mainly relevant in relation to metadata management, querying, and possibly context management. On closer inspection, however, we found AmbientDB to be less relevant to our work than expected, and we mainly considered it as a possible database solution to build on. Unfortunately, within the time span of the work on this thesis, AmbientDB never reached a level of completion where it could be employed by outsiders.


3.5.6 Discussion

In this section, we discuss how the presented work relates to the three areas of background literature relevant for our work, i.e., Knowledge Management and Representation, Information Sharing, and Sparse Mobile Ad-hoc Networks. All approaches focus on solutions for MANETs, although not specifically Sparse MANETs, as we do in our application scenario. All approaches also address aspects of information sharing.

Shark is a system for knowledge management and exchange among mobile users, using topic-based knowledge ports to handle knowledge management tasks, and is thus very relevant to our application scenario. In the referenced prototype, Topic Maps is used for knowledge organisation. As Topic Maps is designed for humans (topic browsing) rather than machines (as RDF is), and is not based on the same foundation of formal logic as RDF/OWL, it may be less useful for reasoning and exchanging information at the middleware level, as is needed in our scenario. In common with our approach for the KM, their approach is based on the three levels of usage of the Semantic Web. Their approach to intra- and inter-group knowledge sharing is relevant to our requirement for sharing information both within and across organisations. A drawback in relation to our application scenario is that the architecture relies on stationary server nodes for knowledge and synchronisation management.

DBGlobe is service oriented, and relies on fixed network servers that keep track of node movement and store profile and context metadata. The stationary servers also handle updates. Similar to our approach, the nodes communicate metadata information about what data is available on the node. Although their work covers interesting aspects of metadata handling, their stated focus is mainly on data management. This approach is interesting to our work through its use of metadata and profiles describing node resources (services) and context, and through its sharing of metadata between nodes.

DIANE is a service-oriented approach for information sharing in MANETs. The referenced work is mainly relevant to our work through its use of metadata and ontologies in (semantic) service description and discovery, which is related to the description and discovery of information items and resources.

Although the above approaches offer solutions for some aspects corresponding to our needs in relation to information and knowledge sharing within and across organisational boundaries, they all rely on stationary servers to some extent, for handling updates and synchronisation as well as for other data and metadata management. We cannot rely on a fixed network being available in our scenario, thus the above solutions are not sufficient for our needs. The following approaches do not rely on stationary network servers.

MoGATU is a framework for profile-based data management in a mobile ad-hoc environment, using ontologies and profiles to filter and prioritise data. It uses a local metadata repository to cache metadata, which is similar to our approach of local data dictionaries (although we do not use the dictionaries to cache metadata). Their solution offers neither a global view of the knowledge available in the network, nor sharing of this kind of information about what is available for sharing in the network.

AmbientDB provides a global database abstraction over a MANET, but does not use ontology based solutions or methods from knowledge management. Update issues are handled using rule based database update queries. As we need to deal with both inter- and intra-organisational information sharing in our scenario, we need ontologies and knowledge management facilities to handle aspects of (semantic) heterogeneity that are not handled in distributed databases.


In summary, Shark covers all three areas relevant to our work, and is particularly interesting in its approach to knowledge sharing. Shark is the only related work presented in Section 3.5 that focuses on knowledge management. The referenced work from DBGlobe is relevant to metadata handling, particularly the representation of profile and context information, and the sharing of such information. The referenced work from DIANE relates to our work with respect to describing information items and resources through its focus on semantic service descriptions. MoGATU relates to our work through its use of ontologies and profiles in information filtering, and its approach to handling metadata in local repositories. AmbientDB is mainly relevant through being a distributed database solution for MANETs.

3.6 Summary

In this chapter we have presented background literature related to knowledge management that is relevant to this thesis, as well as a selection of related work. The overall requirements (Chapter 2) relevant for the KM include support for intra- and inter-organisational information flow, profiling and personalisation, context management, and group and organisational support. The issues that need addressing to support organisational intra- and inter-operability in our application scenario are: enabling systems from different domains and organisations to understand each other, avoiding information overload, keeping shared information available, providing the means to search for and retrieve relevant information, and exchanging information in a way that all systems can understand.

In the material presented in this chapter, we have explored terminology as well as presented existing techniques and solutions that are relevant to the issues and overall requirements of the KM. Existing standards and technologies that can be useful in our approach are, for instance, XML for information exchange and RDF/OWL for describing information resources and domain knowledge, as well as approaches to overcoming semantic heterogeneity. We have shown that although several important aspects relevant to solutions for our application scenario do exist in related work, they do not all take into consideration all characteristics of our situation: work within knowledge management and semantic information sharing and integration does not face the resource limitations that we do, while work targeted at MANETs, e.g., Shark, at some level relies on a stationary network or base station being present for support, e.g., in knowledge synchronisation (knowledge update), which is not possible in our scenario using Sparse MANETs.

Thus, we still need to find solutions that (1) can function on small devices, (2) in an unstable environment (no supporting stationary network or base station guaranteed), that (3) allow us to utilise available resources efficiently (i.e., sparingly), but at the same time can (4) support intra- and inter-organisational information sharing and integration as adequately as possible. More specifically, we need to find solutions that cater for the following:

• Metadata management, and sharing metadata and information about what is available in the network using minimal resources.

• Handling (propagating/disseminating) updates in both metadata and (domain related) data dynamically without available stationary network, and with minimal use of resources.

• Accommodating interoperability regarding different organisations and domains.


Chapter 4 Knowledge Manager High Level Design

This chapter addresses the challenges of knowledge sharing in Sparse MANETs for rescue operations through the design of the Knowledge Manager (KM). The KM is composed of a set of sub-components, presented in this chapter; each handles an overall issue in supporting organisational intra- and inter-operability for information sharing in this environment (ref. Chapter 2). The main focus of this chapter is on our solutions supporting the following claims:

• Claim 1: information overload can be handled through the use of filtering and personalisation.

• Claim 2: vocabulary sharing or mapping has to be enabled to allow cross organisational information sharing.

• Claim 5: sharing metadata about what information is available and where it can be found is essential for efficient knowledge and information sharing.

Each claim is related to one or more of the overall issues, and the solutions addressing the claims are achieved through cooperation between KM sub-components. Claim 1 is achieved using a combination of profiles and contexts, ontologies, and linking semantically related (metadata) information. Claim 2 is achieved through efficient metadata management in combination with context awareness, semantic metadata and ontologies. The solution addressing Claim 5 is achieved through exchanging semantic links of concepts/terms and location of the related information.

The contribution in this chapter is a high level design of the KM, which is a component for the overall information sharing in the system. A central component in the KM is the Data Dictionary Manager, realising our approach to efficient metadata management, which is presented in Chapter 5. Note that our use of the term ‘efficient’ here refers to ‘resource efficient’, i.e., contributing to a better utilisation of available resources for sharing information in the network, and is not a reflection of performance.

Our approach of enhancing metadata descriptions with terms/concepts from (domain) ontologies is similar to the approach of Kashyap and Sheth [Kashyap1998] in their handling of information overload by capturing information content using domain specific metadata in the form of terms from domain-specific ontologies.

In our approach, semantic links showing where among the nodes information can be found, i.e., extracts of metadata describing the “what” and “where/who” of available information, are shared among the mobile nodes in the network. This coincides with findings in knowledge management that sharing knowledge in the form of metadata about where knowledge resides in an organisation, for instance as organisational knowledge maps, can often be as important as the original knowledge itself [Alavi2001].
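The semantic-link idea above can be sketched as follows: a node advertises only minimal (concept, location) pairs rather than full metadata records. The function and record field names, and the example records, are invented for illustration.

```python
# Illustrative reduction of full metadata records to minimal "what/where"
# links: which concept is available (what) and on which node (where/who).
def extract_links(node_id, metadata_records):
    """Return the set of (concept, node) pairs to advertise in the network."""
    return {(rec["concept"], node_id) for rec in metadata_records}

records = [
    {"concept": "TriageStatus", "title": "Patient 7 triage", "size": 2048},
    {"concept": "SitePhoto", "title": "Wreck overview", "size": 1_500_000},
]
links = extract_links("medic-3", records)
print(sorted(links))
# [('SitePhoto', 'medic-3'), ('TriageStatus', 'medic-3')]
```

Note that the links carry no titles or payload sizes: other nodes learn that the information exists and where to fetch it, while the bulk of the metadata stays on the owning node.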

The main objective of the KM is to manage knowledge sharing and integration in a rescue operation scenario as described in Chapter 2. The KM should provide services that allow relating metadata descriptions of information items to a semantic context. The services should be offered both internally to the system and to the application level. The KM should only provide the tools for information sharing, not decide usage and content; keeping it as flexible as possible has been an objective in the design.

In Section 4.1 we give an analysis of the KM requirements, and an overview of the KM design and its components. Section 4.2 goes into more detail on the metadata handling components. In Section 4.3 we present the tool components. Aspects of the presented design are discussed in Section 4.4. In Section 4.5, we describe the cooperation internally in the KM, and present the KM Interface. Cooperation and dependencies with other Ad-Hoc InfoWare components are explained in Section 4.6. We have previously published parts of this chapter in [Sanderson2004] and [Sanderson2005].

4.1 Requirements and Design Overview

In this section, we present the requirements analysis for the KM, giving a more in-depth treatment of the overall requirements presented in Chapter 2, together with an overview of the KM design.

4.1.1 Knowledge Manager Requirements

The main overall role of the KM is to support applications and the Ad-Hoc InfoWare middleware architecture components in sharing information and knowledge. The application scenario and chosen infrastructure pose certain challenges and requirements on the KM. The requirements from the overall requirements analysis for Ad-Hoc InfoWare that are relevant to the KM include the support for intra- and inter-organisational information flow, profiling and personalisation, context management, and group- and organisational support. In the following discussion, we will look in more detail at functional requirements specific to the KM.

Rescue and emergency operations are highly organised, with a clear role structure with corresponding responsibilities, lines of reporting, and rules and procedures that have to be followed. Several different organisations participate: police, paramedics, and fire brigades are the main organisations involved; others join if necessary, depending on the type and size of the operation. To support intra- and inter-organisational information flow and sharing, we need to provide functionality similar to that of distributed databases to support query and retrieval of relevant information. We also need to know what information sources are available in the network, both overall for sharing and whether they have become unavailable due to node movement and network partitioning; thus we need to keep track of changes in the availability of information sources.

To support sharing knowledge and information among the nodes in the network, it is favourable to know what is available for sharing and where this information can be found. In other words, we need distributed knowledge base functionality and a global view of what knowledge is available in the network. Aspects of this can be achieved by sharing metadata describing what is available and where it can be found. But propagating all metadata about available information items is unfeasible considering the resource-limited environment in our application scenario; thus we need a solution that shares enough meta-information to be meaningful, while at the same time keeping the amount of actual data exchanged to a minimum.

The structural heterogeneity inherent in cross-organisational information sharing may require support for different approaches to querying and retrieval, e.g., by structure, content, context, and naming, and possibly support for both structured and unstructured query languages. In addition, we need filtering and ranking of retrieved results. The dynamic environment and the lack of routing tables for query routing imply that query results may depend on the specific location and time of placing the query, even though the query itself does not address spatio-temporal data; it is all a question of which nodes are within reach and what information they contain. It is also desirable to keep communication (number of messages) and storage space consumption as low as possible.

Focusing on the three main participating organisations (police, paramedics, and fire brigades), each handles a different aspect of the situation and will have different requirements and preferences regarding the relevance and type of information; in many cases they require different presentations of the same information. Information gathered by personnel in one organisation may also be relevant for personnel in other organisations, and it would be beneficial to share this information across organisational borders, e.g., in relation to specific tasks in the rescue operation. To do this efficiently, we need to be able to state what is of interest or needed for a given user or situation (context), and to filter out what is not relevant given the users’ preferences and situation.

During the operation, new information to be shared is continually added to the system, and existing data is changed as needed. In other words, the information sources are dynamic. Examples are photographs taken at the site to document the incident, an up-to-date overview of available personnel and equipment, and the updated medical status of patients. Exceptions are the policies, standards, vocabularies and other information exchanged a priori, which will not change during an operation. To adequately offer information sharing in the network, updates have to be handled by the system as they occur at run-time; hence there is a requirement for handling dynamic updates. In addition to dynamically propagating (meta-)information about new and modified information items available for sharing, the known organisational structure of rescue operations, together with knowledge of common rescue operation roles and information needs, can be utilised in handling dynamic updates. This is made possible by using ontologies describing this domain, both in handling profiles and contexts during an operation, and by prioritising updates with respect to the type of information and the rescue operation role of the users. Such ontology-based dynamic updates can accommodate the rescue operation and organisation. Other challenges underlining the requirement for dynamic updates are the hectic and dynamic environment characteristic of the rescue operation itself, which is likely to make updates frequent, and the fact that the availability of information sources may be unstable due to the movement of nodes/personnel and the physical obstacles that may be present on the site, which may cause frequent network partitions. Examples of physical obstacles from the example scenario are the train wreck and the tunnel blocking signals.
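The role- and type-based prioritisation of updates sketched above could look roughly as follows. The priority and interest tables are invented placeholders for what would, in our design, come from domain ontologies and profiles; none of the names stem from an actual implementation.

```python
# Hypothetical update prioritisation: updates are filtered by the
# receiver's rescue-operation role and ordered by information type,
# so urgent updates (e.g., medical status) propagate first.
import heapq

TYPE_PRIORITY = {"medical-status": 0, "equipment": 1, "site-photo": 2}
ROLE_INTEREST = {"paramedic": {"medical-status", "equipment"},
                 "police":    {"site-photo"}}

def queue_updates(updates, role):
    """Return the updates relevant to `role`, most urgent type first."""
    heap = [(TYPE_PRIORITY.get(u["type"], 9), i, u)
            for i, u in enumerate(updates)
            if u["type"] in ROLE_INTEREST.get(role, set())]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

updates = [{"type": "site-photo", "id": 1},
           {"type": "medical-status", "id": 2},
           {"type": "equipment", "id": 3}]
print([u["id"] for u in queue_updates(updates, "paramedic")])  # [2, 3]
```

The enumeration index in the heap entries keeps the ordering stable among updates of equal priority, so same-type updates are delivered in arrival order.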

To enable information sharing in a heterogeneous environment, the different systems and domains have to understand each other, i.e., there is a need for mapping or translation between the different conceptual models, schemas and languages that are used. This is a well-known issue in the distributed databases domain. Related challenges in our scenario are the presence of different devices, organisations and domains, and the resulting heterogeneity caused by the use of different data models, standards and languages. In general, there are three kinds of heterogeneity [Stuckenschmidt2005]: syntactic, concerning the data formats and languages used and the standardisation of these (e.g., XML); structural, which concerns data structures and schemas; and semantic, which deals with the meaning of the information. Syntactic heterogeneity can be handled by using common/known standards for information exchange and modelling. Structural heterogeneity can be handled using (schema) mapping known from the database community. Semantic heterogeneity can be handled through the use of content-dependent metadata in combination with ontologies and vocabularies.
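A toy sketch of handling semantic heterogeneity via a shared (upper-level) vocabulary, as outlined above: each organisation's local terms map to shared concepts, and translation goes through the shared concept. All terms and mappings here are invented for illustration.

```python
# Illustrative vocabulary mapping: organisation-local terms are translated
# via a shared upper-level concept, rather than pairwise between all
# organisations.
SHARED = {"injured-person"}  # shared/upper-level vocabulary

TO_SHARED = {
    "police":     {"casualty": "injured-person"},
    "paramedics": {"patient": "injured-person"},
}

def translate(term, from_org, to_org):
    """Translate a local term via the shared vocabulary; None if unmapped."""
    shared = TO_SHARED.get(from_org, {}).get(term)
    if shared not in SHARED:
        return None
    # Invert the target organisation's mapping to find its local term.
    for local, s in TO_SHARED.get(to_org, {}).items():
        if s == shared:
            return local
    return None

print(translate("casualty", "police", "paramedics"))  # patient
```

Routing every translation through the shared vocabulary keeps the number of mappings linear in the number of organisations, instead of quadratic for pairwise mappings.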

Ontologies can be said to give a meta-perspective on data and information by considering a higher level of abstraction. What is considered data and metadata depends on which level of abstraction we are operating at; thus ‘metadata’ can be considered ‘data’ from a higher abstraction level. Besides providing domain-specific descriptions of information, ontologies may be used in solving problems of semantic heterogeneity [Wache2001]. We need support for the dissemination, sharing and interpretation of ontologies, and for browsing and querying of ontologies and ontology contents. The KM should not support the creation and maintenance of ontologies, but the use of already existing ontologies: domain ontologies and/or vocabularies from relevant domains, e.g., the medical domain; as support in sharing vocabularies, e.g., as a bridge or an upper-level ontology; and for enhancing metadata descriptions with terms/concepts from the ontologies. We also need some support for (instance) reasoning during operation. To cater for the dissemination of ontologies, the availability of information (here: metadata and ontologies) implies a need for the management of data dictionaries that can provide a global view of the knowledge available in the network at any one time. There are basically three kinds of metadata relevant for our application scenario (ref. Chapter 3): semantic metadata describing the meaning of information (to some extent including ontologies), context and profile metadata describing the current contexts and profiles of users and devices, and structure- and content-describing metadata, which concerns how information items are structured and describes their intellectual content.

Another problem due to both node movement and the heterogeneity of devices is that resource-weak devices may not be able to store all needed metadata information, and as there may be frequent partitions in a Sparse MANET, the metadata information may not be available when needed. In the case of semantic metadata, this may mean that a node stores only parts of an ontology, i.e., a necessary minimum. Although the node may request the missing parts from neighbouring nodes, in a Sparse MANET there is no guarantee that nodes containing the rest of the metadata are available. We have termed this the problem of partially available semantic metadata. We have not looked specifically at this problem in this thesis.

Non-functional Requirements

In addition to the functional requirements, there are also non-functional requirements on the KM. It is necessary to have a scalable configuration as the devices may be of different size and capability – not all nodes may offer the full functionality of the KM – and smaller devices may have to request some services from more powerful devices. The possible resource scarcity of some devices also implies an objective to limit resource consumption, such as battery and bandwidth. The services provided by the KM should be offered both internally to other middleware components and externally to the application level.


Resulting Main Issues

From the above requirements analysis we have a set of main issues related to the KM. These issues identify areas of challenges in relation to knowledge management that require solutions or support to enable information sharing in the given application scenario, with its characteristics and limitations. Due to the dynamic environment in both Sparse MANETs and in rescue and emergency scenarios, information may become unavailable in an unpredictable fashion, and in addition there may be frequent updates. Thus, it is necessary to keep track of the availability of the information the user is willing to share. It is also important that the system is able to handle dynamic updates. Another issue is that information is shared both within and across organisational borders, so it is essential to support understanding in relation to the data models, standards, languages and vocabularies used in the different organisations. This can be achieved using techniques from knowledge management and through vocabulary mapping or translation. As the example scenario described, time and resources are critical factors in rescue operations; thus we want to avoid information overload. This can be achieved through context awareness together with filtering and personalisation. Retrieval of relevant information is a major issue for information sharing, including the means to query for relevant information and to search in available topics. Finally, to implement the actual sharing, information exchange has to be conducted in agreed-upon standardised formats.

Summary of Requirements

The above analysis results in these functional requirements for the KM:

• Support use and sharing of domain ontologies/vocabularies.
• Provide resource efficient metadata management.
• Keep track of the availability of shared information items.
• Support dynamic updates.
• Support personalisation and filtering of information.
• Enable querying and retrieval of relevant information items.
• Support information exchange in standardised formats.

The main non-functional requirement to the KM is that it should support a scalable configuration.

4.1.2 Design Overview

The issues and requirements are translated into groups of functionalities/services that need support, and this is reflected in the KM design in the set of sub-components of the KM (see Figure 4.1). Overall, the required functionality can be differentiated into services handling the structure, content and meaning of information, and services that have a supportive role. The first group of services all have to do with handling metadata at some level, ranging from data structure definitions/descriptions to conceptual definitions of some domain. This group of components we have termed metadata handling components. The components handling the supportive services/functions are termed tool components. The KM Engine is a front-end component, which is described in Section 4.5. The KM assumes a data management component for access to local data storage, including the knowledge base used by the KM and its sub-components. A Data Management component is included in the figure for clarity, but it is not part of the KM or the Ad-Hoc InfoWare middleware architecture. The reason for this is that data management is viewed as being part of the local node, as different nodes may use disparate systems for data management and storage.
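As a purely illustrative sketch of this decomposition – class and method names are ours, not part of the KM design – the front-end/sub-component structure might look like:

```python
# Hypothetical sketch of the KM component layout described above.
# All class and method names are illustrative assumptions.

class DataDictionaryManager:          # a metadata handling component
    def find_resources(self, terms):
        # no registrations in this stub: every term maps to no hits
        return {t: [] for t in terms}

class QueryManager:                   # a tool component
    def dispatch(self, query, nodes):
        return []                      # no network in this stub

class KMEngine:
    """Front-end: receives requests and delegates to sub-components."""
    def __init__(self):
        self.ddm = DataDictionaryManager()
        self.qm = QueryManager()

    def find_item(self, terms):
        # forward to the DDM for a concept search
        return self.ddm.find_resources(terms)

km = KMEngine()
print(km.find_item(["temperature"]))   # {'temperature': []}
```

The point of the sketch is only the division of labour: the front-end holds no metadata logic itself, it routes requests to the responsible sub-component.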

Figure 4.1 The Knowledge Manager.

The KM requirements are supported through the functionality offered by the components, and this is shown in more detail for each component in the following subsections. Each component addresses one of the main issues, as indicated in the following. Use and access of domain ontologies is supported through Semantic Metadata and Ontology Management, addressing the issue of understanding. The Data Dictionary Manager (DDM) is responsible for storage and management of metadata, keeps track of the availability of information items, and supports the Query Manager (QM) in querying and retrieval of information items. The Profile and Context Manager (PCM) is responsible for personalisation and filtering of information, addressing the issue of information overload. The XML Parser (XML-P) component ensures exchange of information in a standardised format. The component most central to the claims in this thesis is the DDM, which is therefore the main focus of this work. Developing the whole KM is too wide a scope for one thesis, so we will only give requirements for the other components. Note that although we have a unit for Semantic Metadata and Ontology Management, its function will mainly be to provide a structure for using and accessing already existing ontologies and semantic metadata. Therefore, we will in the following use the term framework, i.e., Semantic Metadata and Ontology Framework (SMOF). As the KM does not offer data management services, we assume the existence of a data management component outside of the KM that will handle local storage and access to data.

4.2 Metadata Handling Components

These components all handle metadata at some level, and are closely related to the three kinds of metadata in our scenario: information structure and content description metadata, semantic metadata, and profile and context metadata. The first of these addresses information item contents, structure and localisation, and is handled by the DDM component. The second kind covers concepts and relations, provides a semantic context, and is handled by the SMOF. The third kind denotes different kinds of profiles as well as context-related information, and is handled by the PCM component. The three types of metadata are described in Chapter 3.

4.2.1 Data Dictionary Manager

The DDM is the main component for metadata management, designed to manage metadata storage through multi-tiered data dictionaries, and to support tracking of availability. All
nodes have a DDM. Each DDM has a Local Data Dictionary (LDD) that contains metadata descriptions of local information items to be shared, and a Semantic Linked Distributed Data Dictionary (SDDD) keeping a global view of sharable information. The requirements for the DDM include managing storage of metadata; organisation and update of local and global data dictionaries; keeping track of information availability; and supporting requests for information from the data dictionaries. Being very central in our approach, the DDM is – through metadata management – involved in solutions related to all claims made in this thesis: as a registry of metadata for information to be shared, as a means of linking related information and nodes, and especially in the exchange/sharing of semantic links. The design of the DDM, its requirements and how these are supported are detailed in Chapter 5, which is in its entirety devoted to the DDM and our approach to metadata management. Here we give an overview of the DDM parts and functionality.

Each node keeps an LDD containing data structures and content descriptions of locally stored information items that the user has registered to share in the network. The metadata descriptions are enhanced with terms/concepts from (agreed-upon and/or standardised) ontologies/vocabularies, i.e., the information items are linked through metadata attribute values to semantic contexts anchored in domain ontologies. Our approach consists of three layers – an information layer, a context layer, and a knowledge layer – and is based on the three layers of Semantic Web usage, which we have in common with work by Schwotzer et al. [Schwotzer2002A]. Our use of metadata descriptions enhanced with terms from domain ontologies we have in common with the approach presented in [Kashyap1998] for handling information overload.
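An LDD registration of the kind described – a local item whose metadata is enhanced with ontology-anchored terms – might, as an illustrative assumption, be structured like this (the field names and vocabulary IDs are invented, not the schema of Chapter 5):

```python
# Illustrative sketch of an LDD entry: a local information item whose
# metadata description is enhanced with concepts from an agreed-upon
# domain ontology. Field names are assumptions, not the thesis schema.

ldd = {}  # local data dictionary: entry_id -> metadata description

def register_item(entry_id, title, concepts, vocabulary):
    ldd[entry_id] = {
        "title": title,
        "concepts": concepts,        # terms anchored in a domain ontology
        "vocabulary": vocabulary,    # which organisation's vocabulary
    }

register_item("ldd-001", "Sensor reading, sector B",
              concepts=["temperature"], vocabulary="fire-brigade")

print(ldd["ldd-001"]["concepts"])   # ['temperature']
```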

Different solutions to storage and distribution of a global data dictionary were considered before deciding on the final solution. Here we briefly summarise the alternatives that were considered before describing our approach any further. The original intent was to have a global distributed data dictionary (GDDD), i.e., akin to a global data dictionary in a distributed database system. Our goal, however, is not to organise the nodes into a distributed (multi-)database management system, but rather to support information sharing through exchanging metadata about what is available for sharing. For this we need on each node a metadata dictionary or catalogue exclusively for metadata describing information the node chooses to share among its peer nodes. A global data dictionary demands a global conceptual schema [Özsu1999], which in our scenario would present problems for several reasons, e.g., the heterogeneity inherent in sharing information across several participating organisations, the dynamicity of the environment, the lack of autonomy of the nodes, and the (possibly frequent) network partitions.

One alternative solution is to let only emergency team leaders keep copies of the dictionary in the form of abstracts or summaries of LDD content from all peers. Problems with this solution include centralisation and the solution’s connection to user roles. In this alternative, metadata from new neighbours/peers would have to be requested and propagated to the relevant team leader. In addition to the obvious unreliability and potential bottleneck of a centralised solution, the whole dictionary would have to be transferred to another device in cases of swapping devices or of changes in rescue operation role, e.g., change of person acting as on-site commander. The highly dynamic environment in our application scenario with possibly frequent network partitions and merging makes a centralised solution very unstable and unreliable, thus this is not an option in our case.

Another possible solution that was considered is to have three levels of GDDD reflecting the capacity of the device, so a device with limited resources would keep a simple version, while more powerful devices would keep more advanced versions. In the
simplest version, the GDDD would contain only abstractions of information residing locally on the node, i.e., only for items registered in the LDD. A medium level GDDD would keep abstracts of metadata both from the LDD and from what is available at neighbour nodes. The highest level GDDD would keep abstracts of metadata regarding available information in the whole network, i.e., a global view of available knowledge, as well as ontologies. The two higher levels of GDDD on nodes with sufficient resources imply that these nodes have to request and store metadata from neighbours. This would cause an increase in communication (network traffic) for requests and updates of the metadata stored on neighbouring nodes, which is highly undesirable in a resource limited environment. It also makes more resource weak nodes very dependent on network stability, i.e., on neighbours being within range, for access to metadata. Such network stability is impossible to guarantee in Sparse MANETs due to movement and network partitioning. Thus this alternative is not useful in the highly dynamic environment of our application scenario.

A third alternative considered was to provide a “virtual GDDD”. In this solution no global data dictionary is stored anywhere; it exists through links stored in the LDDs, together with the metadata descriptions of the information items to be shared. This means the LDD will be more complex, having to store and organise information related to the links in addition to metadata descriptions of each registered information item. The issue of replication has not been considered here. In this solution, services from the SMOF could be used to link the LDD items to the global view through links consisting of attributes and values from ontologies.

The final alternative is the solution that was found to suit our requirements and application scenario environment best, and is the basis for the design presented in Chapter 5. In this solution, terms/concepts from the metadata descriptions and where the resource is stored, together with availability tracking of information items, are kept in what we have termed a Semantic Linked Distributed Dictionary (SDDD), which functions like a semantic index structure. The SDDD is not a “virtual GDDD”, but exists as linking tables in the data dictionary. The LDD keeps metadata descriptions of local information resources registered to be shared in the network. The SDDD keeps a semi-global view through linking terms/concepts (extracted from LDDs) to where the information resource was registered in an LDD, i.e., the local LDD entry or the nodeID of the node where the information resource was registered (see Figure 4.2). The connection between concept/term and where the related information resource is registered, we have termed a semantic link. All nodes keep both an LDD and an SDDD. The semantic links are propagated through the network through synchronising SDDDs upon partition changes.

Figure 4.2 Nodes and links.
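The semantic link structure just described can be sketched, purely for illustration, as a concept-to-location index; the location encoding ("local:" vs "node:") is our assumption, not the design detailed in Chapter 5:

```python
# Minimal sketch of the SDDD as a semantic index: each concept links to
# where the described resource is registered -- a local LDD entry ID or
# the nodeID of a remote node. The encoding is illustrative.

sddd = {}  # concept -> set of locations ("local:<entry>" or "node:<id>")

def add_semantic_link(concept, location):
    sddd.setdefault(concept, set()).add(location)

add_semantic_link("temperature", "local:ldd-001")   # registered here
add_semantic_link("temperature", "node:17")         # registered on node 17

print(sorted(sddd["temperature"]))
# ['local:ldd-001', 'node:17']
```

Looking up a concept then answers both "what do I share locally about this?" and "which nodes should I contact?", which is exactly the semi-global view the SDDD provides.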

The SDDD can be seen as a kind of global distributed data dictionary, although we use the term in a different sense than the traditional one, where there is a global conceptual schema as known from distributed databases [Özsu1999]. In the SDDD there is no global conceptual schema. This is possible in our application scenario through the use of

1) a common base ontology or vocabulary, 2) a priori exchanged information about standards and data models, 3) a standardised format for information exchange, and 4) vocabulary mapping.

In our six-phase rescue scenario, described in Chapter 2, agreements and exchanges regarding vocabularies, data models and standards take place in the a priori phase (Phase 1).

Requirements/issues that are not (or only partly) supported by the DDM include policies or rules for the distribution of the SDDD, and a solution to the problem of partially available semantic metadata. These have not been looked into. The SDDD is currently distributed in a lazy fashion, depending on which nodes meet and exchange SDDD content. An alternative solution for this distribution – using strategies, possibly based on estimates/predictions of node movement and direction – might be a great improvement.
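The lazy, pairwise distribution could be sketched as a simple set-union merge of two SDDD indexes when nodes meet after a partition change; this is an illustrative assumption, not the synchronisation protocol detailed in Chapter 5:

```python
def synchronise(sddd_a, sddd_b):
    """Merge two SDDDs by set union of semantic links, as might happen
    when two nodes meet after a partition change. Illustrative only."""
    merged = {}
    for sddd in (sddd_a, sddd_b):
        for concept, locations in sddd.items():
            merged.setdefault(concept, set()).update(locations)
    return merged

# Two nodes with partially overlapping views of the network:
a = {"temperature": {"node:17"}}
b = {"temperature": {"node:9"}, "smoke": {"node:9"}}

merged = synchronise(a, b)
print(sorted(merged["temperature"]))   # ['node:17', 'node:9']
```

After the exchange, both nodes would adopt the merged view, so semantic links spread epidemically as partitions merge and split.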

To illustrate the role of the DDM in the KM, we present two use cases of key DDM functionality. The first demonstrates finding relevant shared information items by searching the SDDD, and the second shows how the LDD is used for retrieving the metadata description of an item locally on the node where it is stored (and registered for sharing).

Use Case (1): Find Shared Items Related to Concept Terms

The components involved are KM Engine and DDM (Figure 4.3). This use case is used by applications to find metadata registrations for items related to given terms/concepts. It will return results for items registered both locally and globally.

1. An application (or middleware component) requests KM for information items related to a given set of terms/concepts by calling findItem() in KM Interface.

2. KM Engine forwards the request to DDM for a concept search by calling the DDM function findResources() with the terms.

3. DDM looks up in SDDD and returns a list of registered shared items for each term/concept.

4. KM Engine organises and returns the results. The application can then choose to retrieve metadata descriptions from the local LDD, or use services from QM to forward a query to relevant nodes.

Figure 4.3 Find registered shared items in SDDD.

A variant of the above use case is ‘find related nodes/sources’. This variant of the use case is used by DENS to find correct publisher nodes across vocabularies, and also by the QM in finding related nodes to send queries to. Here, the returned results will only contain nodeIDs, i.e., no local metadata registration IDs. Figure 4.3 above also applies to this variant.

1. DENS calls findSource() in KM Interface with a request to find a set of nodes that are related to a given set of terms.

2. KM Engine forwards the request to DDM for a concept search by calling the DDM function findResources(). DDM looks up in SDDD and returns a list of related nodes for each term.

3. KM Engine organises and returns the results to DENS.

Use Case (2): Retrieve Metadata Description from Registrations in Local LDD

This use case involves the KM Engine and DDM components (Figure 4.4). It is used by applications to retrieve metadata descriptions of locally registered information items after having used findItem() to get the local LDD entry ID.

1. An application (or middleware component) calls getRegItem() in KM Interface with a local entry ID.

2. KM Engine forwards the request to DDM by calling the DDM function getInfoItem() with the local entry ID.

3. DDM queries the LDD for the registration and returns the requested metadata description.

4. KM Engine forwards the results to the application.

Figure 4.4 Retrieve metadata descriptions in LDD.
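The two use cases can be illustrated together in a minimal sketch; findItem() and getRegItem() follow the names used above, while the data layout and location encoding are our assumptions:

```python
# Hedged sketch of the two DDM use cases: find_item() searches the SDDD
# for registrations related to given concepts; get_reg_item() retrieves
# the full metadata description from the local LDD. The data layout is
# an illustrative assumption.

SDDD = {"temperature": ["local:ldd-001", "node:17"]}
LDD = {"ldd-001": {"title": "Sensor reading, sector B"}}

def find_item(terms):                      # use case 1
    return {t: SDDD.get(t, []) for t in terms}

def get_reg_item(entry_id):                # use case 2
    return LDD[entry_id]

hits = find_item(["temperature"])
local = [h for h in hits["temperature"] if h.startswith("local:")]
print(get_reg_item(local[0].split(":", 1)[1]))
# {'title': 'Sensor reading, sector B'}
```

Non-local hits (nodeIDs) would instead be passed to the QM, which forwards a query to the relevant nodes, as described in step 4 of use case 1.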

4.2.2 Semantic Metadata and Ontology Framework

The purpose of SMOF is to support the use and access of ontologies and semantic metadata. The requirements include supporting information integration through the use of ontologies for solving problems of semantic heterogeneity, and offering functionality related to concept matching, traversal, consistency and validity checking, and simple reasoning. It should also handle problems of partially available semantic metadata – which is particularly relevant in a Sparse MANET given the dynamic network topology – as well as dealing with run-time changes in semantic metadata. A set of language and model standards for semantic metadata and ontologies may also have to be supported. In addition, there may be a need for some automatic or semi-automatic dynamic gathering of metadata. Internally in the system, ontologies are useful for describing categories of devices, events, resources, access levels, etc. Externally, i.e., to the application level, the focus is on use and sharing of existing domain ontologies from the participating organisations. When considering the requirements for ontology/metadata handling, it is important to differentiate between the degree of fulfilment that is possible in an environment with powerful devices and a stationary network, and what is realistic in a dynamic and resource limited environment such as our rescue and emergency scenario. The following problems/requirements have not been looked into: how to handle partially available semantic metadata; automatic or semi-automatic gathering of metadata; run-time changes in semantic metadata.

Ontology Use and Reasoning

Our focus is on the use of existing ontologies during a rescue operation, making the assumption that ontology development and maintenance will occur outside of the rescue operation, in the respective organisations, and with access to a stable and powerful computing system. It is unlikely that services other than storage and querying of ontologies and instances will be needed during a rescue operation. Ontology use implies already developed ontologies with defined classes and relations, and instance data.

The reasoning tasks required for ontology use are called instance reasoning [Kiryakov2003]: (1) finding the most specific classes describing a partially specified instance; (2) finding all instances of a given class definition that exist in the data set; (3) answering more complex queries about instance data; (4) consistency checking of instance data. The inference services needed for instance reasoning are realisation – finding the most specific classes describing a given instance; instance checking – checking that an instance is of a given class; retrieval – of instance names described by a given class; model checking – checking that the set of instances is a correct model of an ontology; and minimal sub-ontology extraction – finding the minimal sub-ontology such that a set of instances is a correct model. The latter is useful to determine the scope of ontology exchange, e.g., between two systems. Given the resource limitations of our scenario, and depending on the context in an ongoing rescue operation, e.g., resources and capabilities of the currently available devices, it is likely that only a limited set of instance reasoning services can be offered. Powerful devices/laptops will be the only nodes able to offer such services, and smaller devices will have to request these services from them, as resource weak devices do not have the capabilities to perform such inference tasks.
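Two of the inference services named above – instance checking and retrieval – can be illustrated over a toy class hierarchy; the classes and instances are invented, and a real deployment would delegate such tasks to a description logic reasoner on a resource strong node:

```python
# Toy sketch of two instance-reasoning services: instance checking and
# retrieval, over an invented single-inheritance class hierarchy.

subclass_of = {"Ambulance": "Vehicle", "FireTruck": "Vehicle"}
instances = {"car-1": "Ambulance", "car-2": "FireTruck"}

def is_instance_of(name, cls):
    """Instance checking: does `name` belong to `cls`, either directly
    or via a superclass?"""
    c = instances.get(name)
    while c is not None:
        if c == cls:
            return True
        c = subclass_of.get(c)   # climb the hierarchy
    return False

def retrieve(cls):
    """Retrieval: all instance names described by a given class."""
    return sorted(n for n in instances if is_instance_of(n, cls))

print(retrieve("Vehicle"))   # ['car-1', 'car-2']
```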

Consistency and validity checking is in our scenario relevant for instance reasoning, but as ontology creation and maintenance is not an issue during the rescue operation, this kind of checking is not required for ontology classes and relations. Related to the rescue scenario phases described in Chapter 2, the latter may be relevant in phases 1 (a priori) and 6 (post-processing). Traversal in a hierarchy or semantic net is mainly relevant as part of search (and instance reasoning). During a rescue operation, where time is a critical factor, applications that offer browsing or traversing ontologies are not likely to be in high demand by users.

Information Integration and Sharing

There are two main points that need considering regarding the requirement to support information integration and sharing through using ontologies to solve semantic heterogeneity. The first is linked to the issue of understanding, and has to do with mapping and translation between different ontologies/vocabularies, data models, etc. There are two alternatives in this case: the first is using a common minimum base model (data model, ontology or vocabulary), or upper ontology, that can be used to bridge different models; the second is to demand that all participating organisations use the same standards and vocabularies. As the organisations are from different domains, the latter alternative would be difficult or impossible, thus the first alternative – bridging – may be a better choice. This alternative may also assist in adding/including new (or parts of) ontologies/vocabularies, as long as these are related to the upper ontology. The second point that needs consideration is that the ontologies/vocabularies have to be serialised in agreed-upon standard languages and syntax, e.g., XML and RDF. XML is widespread and chosen as the language for message exchange in the Norwegian health sector [KITH].
An example of a possible common modelling language is OWL, which builds on RDF and is typically serialised using XML. A set of language and model standards may have to be supported by the framework. Requirements to ontology languages address issues like
expressiveness, completeness, correctness, and efficiency, as well as interoperability with relevant standards [Fensel2001]. Several related projects have chosen to use OWL or its forerunner DAML+OIL, e.g., [Chen2003], [Perich2002]. For knowledge exchange, knowledge exchange protocols are needed, e.g., KQML and KEP [Schulz2003].

Realising SMOF

Enabling technologies for realising SMOF are OWL and (to some degree) Topic Maps. These are described in Chapter 3. Here we discuss their usefulness in relation to SMOF.

Designed for use by applications that need to process the content of information rather than just presenting information to humans, OWL facilitates greater machine interpretability of Web content than that supported by XML, RDF, and RDF Schema by providing additional vocabulary and extended formal semantics. Its focus on greater machine interpretability is important in relation to SMOF, since SMOF operates at the middleware level. OWL has three sub-languages of increasing expressiveness [OWLGuide]: OWL Lite, OWL DL, and OWL Full. OWL Full gives no computational guarantees, and is thus not useful for SMOF. OWL DL is expressive and guarantees computational completeness and decidability, but is very resource demanding. OWL Lite is tractable/manageable, but still heavy on resources. An alternative language is DL-Lite, a fragment of OWL DL tailored for efficiently handling a large number of facts. Its main focus is to provide efficient query answering on the data and to allow the use of relational database management technologies for this purpose [OWLtractabl]. To be able to utilise reasoning on resource weak devices, a small reasoning engine will be needed. In work on reasoning engines for RDF/OWL (DAML+OIL), e.g., in [Kiryakov2003], resource limitations are not a focus. Reasoning engines for OWL are very resource demanding, which is a problem in our scenario, but it is reasonable to assume that an OWL/RDF reasoning engine for mobile devices will appear in the near future. An existing reasoning engine based on description logic for small devices is the Pocket KRHyper [Sinner2005].

The purpose of Topic Maps (TM) is to describe knowledge structures and associate these with information sources [Pepper2002] [ISO13250]. As such, it is an enabling technology for knowledge management. Created to be an ontology framework for information retrieval [Garshol], it is mainly targeted at humans, e.g., for visualising and browsing [Pepper2006]. During a rescue operation, it is highly unlikely that the users (rescue personnel) will have time to browse topic maps, which makes TM less useful for SMOF. But TM may be a valuable tool for exploring knowledge and resources outside of an ongoing operation, i.e., in the a priori and post-processing phases. In relation to our application scenario, it is important that loading topic maps is not too time and resource consuming. Simple tests of loading small topic maps (as XTM files) on small to medium capacity mobile phones, and running simple queries on these, have shown that it is possible to use Topic Maps on resource weak devices (mobile phones), but it is very time consuming and there is a clear limit on the file size that can be handled. A set of small tests using J2ME with the Mobile Topic map Viewer (MTV) topic map engine on two mobile phones, (1) Sony Ericsson T610 and (2) Sony Ericsson K600i, showed that the Sony Ericsson K600i could handle medium size topic maps (less than 221 kB), while the Sony Ericsson T610 only managed very small topic maps (less than 29 kB) [Vigdal2006].

Vocabulary Mapping

The SMOF is central in vocabulary mapping, and one very simple way to achieve vocabulary mapping is by keeping a simplified thesaurus of terms with their synonyms and vocabulary ID. In the following use case we illustrate how this can be achieved in our
application scenario. The use case is also applicable for the QM in cases of mapping between multiple query languages on different nodes. The focus is on thesaurus lookup in relation to subscription languages used by the DENS (see Chapter 2). More detail about how this is solved in the DENS can be found in [Skjelsvik2006]. It is assumed that each organisation uses its own standardised vocabulary and data model, thus there is a need to support subscriptions targeting different vocabularies. As information in our application scenario is shared across organisations, the support must include translation or mapping between these different vocabularies. For instance, sensors monitoring temperature may have been placed out in a certain area. Even though the sensors support only the vocabulary used by the fire brigade, the coordinator of a team of police officers may also be interested in the measured temperature. As the vocabulary used by the police officers uses different terms to designate temperature, it is necessary to translate between these vocabularies. The support has to include translation or mapping between the different vocabularies (here used in different subscription languages) and locating nodes that will understand the different vocabularies, so that a subscription can be sent to the correct nodes (finding all nodes associated/linked to the given vocabulary and attribute).

Use Case (3): Multiple Subscription Language Support

This use case involves SMOF and DDM. In addition, it involves a Data Management component (outside of the KM). Note that although this use case shows vocabulary lookup for DENS, the same use case applies for simple vocabulary mapping for any other middleware component or application. As a prerequisite for this use case, DENS has requested the KM for nodeIDs for nodes related to a set of subscription attributes.
DENS requests the KM to retrieve synonyms for a set of attributes (terms) by calling the function findSynonym() in KM Interface on each of the relevant nodes (Figure 4.5). Not shown in the figure is KM Engine, which receives the request and forwards it to SMOF, and organises the results before returning them to DENS.

1. SMOF is requested (by KM Engine) to perform a thesaurus lookup. It looks up the term and returns all related synonyms and which vocabulary they belong to. This is repeated for each term in the request from DENS.
   a. SMOF requests DDM to retrieve locations for vocabularies.
   b. SMOF requests Data Management to retrieve synonyms for a term from all vocabularies in the thesaurus.

2. The resulting synonym lists for all attributes are organised and returned to DENS.

Figure 4.5 Find synonyms.
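The simplified thesaurus suggested above could, as a sketch, look as follows; the terms and vocabulary IDs are invented for illustration, and find_synonym() mirrors the findSynonym() call in use case 3:

```python
# Simplified thesaurus for vocabulary mapping: each term maps to its
# synonyms tagged with a vocabulary ID. Terms and vocabulary IDs are
# invented for illustration.

thesaurus = {
    "temperature": [("temp", "fire-brigade"), ("heat-level", "police")],
}

def find_synonym(term):
    """Return all synonyms of `term` with their vocabulary IDs
    (cf. findSynonym() in use case 3)."""
    return thesaurus.get(term, [])

print(find_synonym("temperature"))
# [('temp', 'fire-brigade'), ('heat-level', 'police')]
```

With such a table, a subscription for "temperature" from a police node can be rewritten into the fire brigade's term before being sent to the sensor nodes.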

4.2.3 Profile and Context Manager

The main objective of the PCM is to support personalisation, ranking and filtering of information through profiling and context. It should offer services, both internally in the system and externally to the application level, for creation and maintenance/update of profiles and contexts, and define the default structure of different types of profiles and contexts. It should also offer functionality (services) that the QM will need in filtering
incoming (retrieved) information (filtering is also listed as a requirement to the QM). Updates to contexts are more critical than updates to profiles, as profiles are more static, while some parts of the context may be very dynamic and change often, e.g., the location of a mobile device.

A profile, in our use of the term, gives the “what” and “who” of an entity, and contains fairly static information. An entity here can be anything that can have a profile description, e.g., a device, a user, a document, etc. Examples are device type and resources, user identification, roles and preferences, e.g., regarding domains of interest or fields of expertise. Profiles can be used for filtering purposes, as well as for advertising what services and resources the entity can offer, together with any requirements and conditions/terms for use of these. A node/device profile can also include the resources offered on a node (e.g., services), and this is part of the metadata stored in the metadata dictionary, which ensures it is searchable through services from the DDM. This means that, for instance, if a node runs a full DENS, this is described in the node/device profile, and can be used (by full-DENS nodes) to find other DENS nodes.

Contexts contain information concerning the “where”, “when” and “why” of an entity, for instance the location, time, and situation. This is a more dynamic kind of information, changing along with shifts in situations and locations (e.g., node movement). The term context is commonly used to include both static context, which we have termed profiles, and dynamic context.
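As an illustration of the profile/context distinction, a device profile and its context might be structured as follows; all field names and values are assumptions, not the PCM's defined default structures:

```python
# Illustrative profile ("what"/"who", fairly static) and context
# ("where"/"when"/"why", dynamic) for a device, per the distinction
# drawn above. Field names and values are assumptions.

profile = {
    "device": "laptop",
    "user_role": "on-site commander",
    "services": ["full-DENS"],   # advertised so other nodes can find it
}

context = {
    "location": (59.91, 10.75),  # changes with node movement
    "time": "2008-01-15T10:30",
    "situation": "search phase",
}

def update_context(ctx, **changes):
    """Context updates are frequent; profile updates are rare."""
    ctx.update(changes)
    return ctx

update_context(context, situation="evacuation")
print(context["situation"])   # evacuation
```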

The requirements for what information is relevant to keep in the context and profiles are likely to change according to the application scenario, e.g., the type of rescue operation, user preferences, and possibly organisational preferences.

4.3 Tool Components

The tool components provide base functionality (to the other components and the application level) for information search and exchange.

4.3.1 Query Manager

An essential part of sharing information among users/nodes is being able to search and query the available information to find the exact piece of information or data needed. The main purpose of the QM component is to offer functionality related to search/query and retrieval of available information in the Sparse MANET.

The requirements for the QM include supporting other components in querying and retrieval; allowing different approaches to querying; supporting dispatch of queries; and supporting filtering and ranking of results. It will use services/functionality provided by other KM components, e.g., the DDM for finding semantically related nodes to send queries to. Different approaches to querying that may be relevant are query by content, structure, context and profile information, naming an object (retrieval), and using keywords/terms. Ranking of results could be according to chosen criteria, by content, structure, profile, etc. Results may be filtered according to (user) profile or chosen criteria (e.g., structure, content, and context), possibly in combination with ranking (e.g., by only returning results ranked above a certain threshold value). This has not been a focus in this thesis.
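The threshold-based combination of ranking and filtering mentioned above can be sketched as follows; the scoring function and threshold value are application-supplied assumptions.

```python
def rank_and_filter(results, score, threshold):
    """Rank query results by a scoring function and keep only those above a
    threshold -- the thresholding strategy mentioned in the text.
    `score` is any callable mapping a result to a numeric relevance value."""
    ranked = sorted(results, key=score, reverse=True)
    return [r for r in ranked if score(r) >= threshold]

# Hypothetical results: (itemID, relevance) pairs scored by their relevance.
hits = [("doc-a", 0.2), ("doc-b", 0.9), ("doc-c", 0.6)]
top = rank_and_filter(hits, score=lambda r: r[1], threshold=0.5)
print(top)  # [('doc-b', 0.9), ('doc-c', 0.6)]
```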

The QM is responsible for dispatching and forwarding queries, using the underlying communication system, as well as for gathering, filtering and ranking results before returning these to the application level. Query forwarding is relevant in cases of forwarding queries to other semantically related nodes. Given that there are many resource-weak devices among the nodes, the full QM may only exist on devices powerful enough, and will offer query management as a service to other nodes. Another option is to have the QM on resource-weak nodes function as a proxy for a more resourceful node that is able to handle query management. In this case, the QM would be required to forward queries to the resourceful node if it is available. A challenge here is the instability that may be caused by the mobility of the nodes and possible frequent network partitioning. The KM assumes that this is handled by other parts of the middleware and infrastructure.

Regarding the support of different query languages and mappings between these, it is a major point that the KM and its components should not pose restrictions on which query languages or vocabularies to use, but rather offer support for translation or mapping between them. In the QM, this can be solved through the use of wrapper/mediator functions applied to both queries and results. In cases where a query has to be sent to other nodes using different query languages or vocabularies, this can be achieved in the following way: services from the DDM are used to find nodes that have related information. The query may be split into smaller parts before it is sent to the related nodes. Translation/mapping to the local query language may be requested as a service from more powerful nodes. After processing, results are translated and sent back to the querying node, which gathers and organises them. Results are delivered to the application level in the format chosen by the application (or querying node). Being at the middleware level, the QM should not offer any means for query creation to the user.
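The translation/mapping flow described above can be sketched as a mediator-style function. All four callables are hypothetical stand-ins for services of the DDM, of more powerful nodes, and of the underlying message system.

```python
def mediate_query(query, find_related, translate, send, to_local_format):
    """Sketch of the QM's mediator-style flow: locate related nodes, map the
    query into each node's query language, dispatch, then translate results
    back into the locally chosen format. All callables are assumptions."""
    results = []
    for node, node_lang in find_related(query):       # 1. DDM: related nodes
        sub_query = translate(query, node_lang)       # 2. map query language
        results.extend(send(node, sub_query))         # 3. dispatch + collect
    return [to_local_format(r) for r in results]      # 4. translate results

# Hypothetical stand-ins for the cooperating services:
find_related = lambda q: [("node-3", "lang-B")]
translate = lambda q, lang: f"{q}@{lang}"
send = lambda node, q: [f"{node}:{q}"]
results = mediate_query("find triage", find_related, translate, send, str.upper)
print(results)  # ['NODE-3:FIND TRIAGE@LANG-B']
```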

Solutions are needed for adding new criteria and for changing/deleting existing criteria, for both filtering and ranking. Criteria stated in profiles are part of the responsibility of the PCM.

The QM will offer two services: search/query and retrieval. The functionality for the services is achieved in cooperation with other KM components. For retrieval, the QM mainly cooperates with the DDM in localizing which nodes have the requested information. Search/query includes the following:

- Dispatch/forward query, in cooperation with DDM for finding semantically related nodes to forward query to.

- Offer different approaches to query/search.
- Offer different query languages: in cooperation with DDM and SMOF; functionality offered by the DDM to find which nodes understand which query languages, and functionality from SMOF for thesaurus lookup. For this we have adopted the approach used for support of different subscription languages in [Skjelsvik2006].

- Filtering the results: cooperation with PCM for filtering criteria (stated in profile and/or according to context).

- Gathering results.
- Ranking of results: cooperation with PCM for ranking criteria.
- Formatting results to what is understood/accepted locally, using wrapper functions.

The QM depends on the underlying communication system for sending query messages and requests for retrieval on other nodes. A query may also be made more permanent in the form of a subscription, in which case services from the DENS would be used.

Use Case (4): Forward Query to Related Nodes This use case involves the KM Engine, QM, and DDM components. It illustrates how the QM forwards a query to related nodes (Figure 4.6). The QM has received a query from an application via the KM Engine. The query is marked as a global search/query, so the QM has to forward it to related nodes. How wide the search should be, i.e., whether receiving nodes should forward the query further to their own related nodes, may have to be stated in the request.

Figure 4.6 Forward query.

1. QM receives a query from an application via the KM Engine.
2. QM sends a request to DDM for a concept search to find related nodes (ref. use cases in Section 4.2.1).
3. QM creates the message (or uses an appropriate service from the underlying message system) and requests the underlying message system to send the query to these nodes (i.e., not to all nodes). The results are received through the message system.
4. Results are gathered, organised, and returned to the application via the KM Engine.
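The four steps above can be condensed into a sketch, with hypothetical stand-ins for the DDM concept search and the underlying message system:

```python
def forward_query(query, concept_search, send_to):
    """Use Case (4) in miniature: find related nodes via a DDM-style concept
    search, send the query only to those nodes, then gather and organise the
    replies. Both callables are illustrative assumptions."""
    related = concept_search(query)                        # step 2
    replies = (send_to(node, query) for node in related)   # step 3: targeted
    return sorted(r for reply in replies for r in reply)   # step 4: gather

answers = forward_query("burn treatment",
                        concept_search=lambda q: ["node-2", "node-5"],
                        send_to=lambda node, q: [f"{node} has items on '{q}'"])
print(answers[0])  # node-2 has items on 'burn treatment'
```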

4.3.2 XML Parser Information sharing necessitates information exchange in formats and standards that are understood by the parties to the exchange. Given that XML is becoming a de facto standard for information exchange, e.g., in health and medical domains [KITH] [OASIS] [CEN/TC251], a lightweight XML parser may be needed. Providing access to an XML document’s content and structure, an XML parser is a software module that reads the XML document and breaks it into a set of fundamental components that are meaningful to the application or an end user. The actual parsing is conducted based on the rules and syntax defined in the standard [Riehl2002]. In addition to offering services related to parsing of XML documents, this XML handling component could support the creation of (XML) structure templates, for instance based on different kinds of metadata. If RDF and RDF-based ontology languages are used, an RDF parser will also be needed. Due to the resource limitations on smaller devices, the parser will have to be of a lightweight kind, suitable for use on, e.g., a mobile phone or PDA.

Requirements for a suitable XML parser include sufficient speed; limited memory usage – both for program memory footprint and for storage of XML data; support for namespaces; support for the Java 2 Micro Edition (J2ME) platform [J2ME]; and possibly some level of XML Schema validation. The speed of the XML parser depends mainly on the coding of the API, while the memory footprint is related to both the coding of the API and the coding of the program that handles the XML document. The amount of memory needed for XML data storage depends on the parser paradigm. XML Schema validation is very demanding of processor resources, and may not be possible on all devices.

A common categorisation of XML parsers is into streaming parsers and in-memory parsers [Obasanjo2003]. In-memory parsers load more of the XML document into memory before processing it, which allows more features, e.g., search and modification, but puts a heavy strain on memory usage. These are classified into tree model parsers, a.k.a. document object model (DOM) parsers, and cursor model parsers. DOM parsers load the entire XML document into memory, while cursor model parsers only load the current part. Streaming parsers do read-only, forward-only streaming through the document, loading only the current part into memory; they are thus very memory efficient, but offer limited features/functionality. Streaming parsers are classified into two types, push parsers and pull parsers, according to whether control of the parsing remains with the parser (push) or with the consuming application (pull). The latter implies that the programmer asks the parser for the next event, while a push parser informs the consuming application of new events. Streaming parsers are generally considered more CPU and memory efficient than in-memory parsers, at the cost of less functionality. Pull parsers have been found to be better suited than DOM parsers in resource-limited environments: they perform better and are a good fit for middleware applications [Sosnoski2001]. Pull parsers have also been found to outperform the SAX2 parser, especially for smaller files [Sosnoski2002]. SAX2 is an extended version of the Simple API for XML (SAX), the standard specification for a push parser. Different XML parsing paradigms are presented in [Obasanjo2003] and [Riehl2002].
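Python's standard library happens to include a pull parser, which can illustrate the paradigm on a desktop machine (on J2ME one would use a library such as kXML instead): the consuming application asks for the next event, and only the current part of the document is held in memory. The document content below is an invented example.

```python
from xml.etree.ElementTree import XMLPullParser

# Pull parsing: the consuming application drives the loop, pulling one
# event at a time instead of being called back (as a push/SAX parser would).
parser = XMLPullParser(events=("start", "end"))
parser.feed('<patients><patient id="p1"/><patient id="p2"/></patients>')
ids = [elem.get("id")
       for event, elem in parser.read_events()
       if event == "start" and elem.tag == "patient"]
print(ids)  # ['p1', 'p2']
```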

As is well known, XML is very verbose, which does not suit the high latency and low bandwidth characteristic of Sparse MANETs. A solution may be text or binary compression [Hanslo2004]. This requires added processing on both sending and receiving nodes, but has been found to be useful in cases of poorly connected clients and servers [Tianl2004]. Using J2ME and the open source lightweight parser kXML [kXML], Wong performed tests of simple web services showing that XML parsing on mobile phones is possible [Wong2003].
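As a rough illustration of this trade-off (using Python's general-purpose zlib rather than the XML-specific schemes studied in [Hanslo2004]), repetitive XML markup compresses well at the cost of extra CPU work on both ends:

```python
import zlib

# Verbose, repetitive XML (an invented example) compresses well, trading
# CPU time on sender and receiver for bandwidth on the sparse MANET link.
xml = ('<observation><patient>p1</patient><status>stable</status>'
       '</observation>' * 50).encode("utf-8")
compressed = zlib.compress(xml, level=9)
print(len(compressed) < len(xml))  # True
```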

4.4 Discussion The functions of DDM and the QM together can be compared to the functionality offered in data source mediators and wrappers, which have been used to address problems related to information integration over the Internet. A wrapper essentially exports information about source schema, data, and query processing capabilities, while a mediator centralises information provided from wrappers into a unified view stored in a global data dictionary. The mediator also does the work of decomposing queries to smaller parts that can be processed by the wrappers, gathers partial results and computes the final answer [Özsu1999].

In comparison, the DDM provides information about the structure/organisation and content of the information resource/item, while the QM decomposes the query received from application level, requests the DDM for related nodes, forwards queries (or parts of queries) to these nodes, gathers and organises results. DDM and QM are together with SMOF involved in any translation/mapping between query languages.

As described in Chapter 2, the devices in our application scenario are of varying size and resource capacity, and not all devices will have enough resources to offer the full functionality of the KM, but instead offer reduced services of each sub-component. As the DDM is the most central component for sharing information about available knowledge in the network, all devices will offer full services from this component, which will enable functionality to register and share metadata about available information. KM running on more powerful devices can offer the heavier KM functionality, e.g., ontology instance reasoning and complex querying, as services to resource weak devices.

Referring to Chapter 3, the KM can be related to elements from knowledge management as follows. We have adopted the hierarchical view of knowledge, i.e., knowledge viewed as increasing levels of semantics (meaning). Our main concern is the management and sharing of explicit knowledge, i.e., factual knowledge found for instance in documents, databases, files, and models. Tacit knowledge, found in, e.g., mental models, procedures and skills, is not handled, although sharing this kind of knowledge may be valuable in emergency operations. The knowledge management processes in focus in relation to the KM are knowledge storage/retrieval and knowledge transfer/sharing. As knowledge learning (knowledge creation) and integrating new knowledge (knowledge application) are not relevant during a rescue operation, we do not address these in the KM.

Regarding ontologies, our focus is on the use of already existing ontologies. The intended uses are (1) domain ontologies and/or vocabularies from relevant domains, e.g., medical; (2) support in sharing vocabularies, e.g., as a bridge or an upper-level ontology; and (3) enhancing metadata descriptions with terms/concepts from ontologies.

4.5 Cooperation in the Knowledge Manager In this section, we show how KM sub-components cooperate to offer services, through the sub-component (inter-)dependencies in the component diagram in Figure 4.7. Note that in addition to the front component, KM Engine, an interface, KM Interface, has been added, as KM sub-components do not offer interfaces directly to the rest of the system. The DDM data dictionaries (LDD and SDDD) are not shown in the figure. The KM Engine is responsible for receiving incoming requests for services (through the KM Interface), controlling the execution of requests, and dispatching/delegating tasks and (sub-)requests to the appropriate component. In the following, we give an overview of the relevant functionality offered by the KM Interface.

Figure 4.7 The Knowledge Manager component. Internal dependencies.

The functionality offered by the KM Interface is shown in Table 4.1. The listed services focus on what is needed for registering metadata for information to be shared in the network, for search and retrieval of this kind of meta-information, and for vocabulary mapping. The presented functions form the backbone of our approach to metadata management, enabling the most basic information sharing in our application scenario. They do not cover all the requirements of the KM; additional functionality may be offered by the KM Interface, including services for XML parsing, services for more advanced querying by the QM, and services regarding profile and context management (PCM) and ontology use (SMOF).

Table 4.1 KM Interface.

Service: Registering information items for sharing (DDM).
  registerItem() – Register metadata of information items for sharing. Comment: initiates (local) updates of the SDDD.
  updateItemReg() – Update dictionary entry.

Service: Retrieve local metadata description.
  getRegItem() – Look up an item in the LDD given the local entry ID (registration ID in LDD). Comment: used for retrieving metadata about local information items after having retrieved the LDD registration ID using findItem() described below.

Service: Search for shared information items by concepts/keywords from vocabulary.
  findItem() – Retrieves nodeID and/or local LDD registration ID from the SDDD given terms/concepts. Comment: retrieves nodeID or local LDD entry registration ID; it should be possible to state if only local LDD registrations are to be returned.

Service: Vocabulary mapping/synonym lookup.
  findSource() – Finds nodes that are associated with given terms and vocabulary ID. Returns results in a specific format.
  findSynonym() – Looks up in a thesaurus sets of synonyms for a set of terms from a given vocabulary. Returns results in a specific format.
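The core registration and lookup services from Table 4.1 can be sketched in a few lines. The internal dictionary layout and the method signatures are our own assumptions, since the table fixes only the service names; for simplicity this sketch resolves findItem() to local LDD registration IDs only.

```python
class KMInterface:
    """Sketch of Table 4.1's registration/lookup services; signatures and
    internal layout are illustrative assumptions."""

    def __init__(self):
        self.ldd = {}    # local data dictionary: regID -> metadata
        self.sddd = {}   # semantic linked distributed DD: term -> {regIDs}
        self._next = 0

    def registerItem(self, metadata, terms):
        """Register an item's metadata; initiates a (local) SDDD update."""
        self._next += 1
        reg_id = f"reg-{self._next}"
        self.ldd[reg_id] = metadata
        for term in terms:
            self.sddd.setdefault(term, set()).add(reg_id)
        return reg_id

    def getRegItem(self, reg_id):
        """LDD lookup given the local entry (registration) ID."""
        return self.ldd.get(reg_id)

    def findItem(self, term):
        """SDDD lookup: registration IDs associated with a term/concept."""
        return self.sddd.get(term, set())

km = KMInterface()
rid = km.registerItem({"title": "triage notes"}, terms=["triage"])
print(km.getRegItem(rid)["title"], sorted(km.findItem("triage")))
# triage notes ['reg-1']
```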

4.6 Cooperation with Ad-Hoc InfoWare Middleware Components

The KM depends on services offered by other middleware components in the Ad-Hoc InfoWare architecture, particularly the DENS component for subscriptions to event notifications. The Resource Manager (RM) component offers services that the KM subscribes to via the DENS, e.g., discovering new nodes in the network. In addition, the KM depends on access control services provided by the Security and Privacy Manager. In this section we describe how the KM can subscribe to events via the DENS, and how the KM Engine knows when to prompt the DDM to initiate an SDDD exchange.

Subscribing to Event Notifications – DENS The KM subscribes to an event by sending a subscription for some information to the DENS. The DENS does not know anything about the subscription contents, only how to forward/place a subscription on the correct publisher node and deliver a notification back to the subscriber node when the event occurs, thus decoupling subscribers and publishers in space and time. A subscription consists of a statement in some subscription language, together with a specification of where (which publisher node) it is to be placed. The attributes in the subscription are keywords/concepts from a vocabulary or ontology registered in the LDD/SDDD.

The DENS will forward the subscription (wrapped as a Watchdog/trigger) to the publisher node, and act as a mediator node where necessary. It offers support for multiple subscription languages, as shown in Use Case (3) in Section 4.2.2 and described in [Skjelsvik2006], without itself having any knowledge of the different subscription languages.

Notifications are delivered by sending a message to the subscriber node; how this is solved is implementation-dependent. The KM will then parse the content of the notification message locally (using the XML Parser).

The KM will make subscriptions to different events. The requested type of event is stated in the subscription through using a subscription language. Different types of events relevant for the KM, e.g., update, new arriving node, and context change, may be defined in vocabularies/ontologies. There may be “sub-events”, i.e., specifications of a type of event, e.g., different kinds of updates like data_update, profile_update: role, dynamic context: node_position, etc., defined in the vocabulary. All the events defined in such an ontology together create an event space. From the perspective of the DENS, subscriptions are always connected to nodeIDs. For the KM it may be useful to connect subscriptions to for instance a rescue operation role or membership of group/team or organisation; this kind of connection is solved locally in the KM through utilising information in profiles and context.

How a subscription is made to the DENS will depend on the implementation, for instance through an interface offered by the DENS, and the KM has to offer a handler or interface so that the DENS can send notifications about events. The information needed for a subscription includes the subscription itself, which subscription language is used (language=SL-ID of some kind), and where the subscription should be forwarded to (where=any/all/nodeID). The subscription language ID has to be included because DENS does not look at the actual content of the subscription.
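A subscription envelope carrying the three pieces of information listed above might look like this; the dict keys and the SL-ID format are illustrative assumptions.

```python
def make_subscription(statement, language_id, where):
    """Build the subscription envelope described in the text. The DENS never
    inspects `statement`; the SL-ID tells the receiving node which
    subscription language to interpret it with. Keys are assumptions."""
    return {"subscription": statement,
            "language": language_id,    # language=SL-ID
            "where": where}             # where=any/all/nodeID

sub = make_subscription("profile_update:role", "SL-1", "all")
print(sub["where"])  # all
```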

The DENS offers different types of subscriptions. These are differentiated by when the subscription is made/installed (a priori/configuration time or at run-time), where the subscription is to be forwarded (which nodes), and its scope, i.e., whether it is local on the node or global. Predefined (a priori) subscriptions are installed at the time of software configuration. Then, at bootstrap-time, the DENS places the corresponding triggers/Watchdogs. Run-time subscriptions are made in the running phase (Phase 4) of the rescue scenario phases (Chapter 2). The distinction between local and global scope is only relevant to the DENS – it will not make any practical difference for the KM. In addition to the predefined subscriptions already installed, the KM will make subscriptions at run-time and needs to consider where to forward the subscription. There are three alternatives when stating where to forward the subscription; ANY source/node, ALL nodes, and to a particular/given nodeID. ‘ANY source’ denotes a delay tolerant subscription and implies that the DENS will forward the subscription to any node that has relevant information (using services from the KM to find these sources). ‘ALL nodes’ means the subscription will be forwarded to all nodes in the current partition. In combination, this gives five types of subscriptions (see Table 4.2) that the KM will use, as described in the following (KM perspective).

Table 4.2 Different types of subscriptions.

Type     When      Scope   Configuration  Where
Type 1   a priori  local   implicit       all
Type 2   run-time  global  explicit       all
Type 3   run-time  local   explicit       my_nodeID
Type 4*  run-time  global  explicit       any
Type 5   run-time  global  explicit       nodeID

* = Delay Tolerant Subscription

The KM will typically have a Type 1 subscription to the (local) RM for detecting changes in the network. The notification from the DENS is used to prompt the DDM to initiate an SDDD exchange. The KM will request the nodeIDs of the current network partition from the RM, and the DDM will then perform an exchange with each of these nodes. See Chapter 5 for a description of the procedure for SDDD exchange. Type 2 subscriptions can be used by the KM for detecting changes in profiles and dynamic contexts, e.g., change of device, device position, or when a user changes role in the rescue operation. Assuming that the rescue leader (OSC) of an operation keeps an overview of active personnel and resources, a KM running on the node of an OSC may want to make such a subscription. The officer in charge (OiC) of a domain, e.g., medical or police, may also want these kinds of changes, but only for the personnel in their own domain/organisation. This can be solved in the KM using Type 3 or Type 5 subscriptions. The KM can use Type 3 subscriptions in cases where a node wants to detect changes locally on the node to trigger some action, e.g., sending a message/update to the relevant officer in charge. From the perspective of the KM, Type 3 subscriptions can be viewed as a local variant of Type 2. Type 4 subscriptions are termed delay tolerant subscriptions and are forwarded to any node that has the requested information. Type 4 subscriptions are not used specifically by the KM, but the DENS uses services from the KM to find relevant nodes. Type 5 subscriptions are explicit subscriptions to particular nodes. These are useful where certain changes on a particular node are to be notified, for instance if a team leader wants to know the movement and position of a particular node at all times, e.g., when fire-fighters are guided by a team leader through a burning building using radio. In that case, the KM running on the team leader's node can make a subscription for changes in the device position of that particular node.
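Table 4.2 can be read as a lookup from a subscription's properties to its type; a sketch, with the table encoded directly and configuration treated as implied by `when` (an assumption of this sketch):

```python
# Table 4.2 as a lookup: type -> (when, scope, configuration, where).
SUBSCRIPTION_TYPES = {
    1: ("a priori", "local",  "implicit", "all"),
    2: ("run-time", "global", "explicit", "all"),
    3: ("run-time", "local",  "explicit", "my_nodeID"),
    4: ("run-time", "global", "explicit", "any"),   # delay tolerant
    5: ("run-time", "global", "explicit", "nodeID"),
}

def classify(when, scope, where):
    """Return the matching subscription type from Table 4.2, or None."""
    for t, (w, s, _, wh) in SUBSCRIPTION_TYPES.items():
        if (w, s, wh) == (when, scope, where):
            return t
    return None

print(classify("run-time", "global", "any"))  # 4
```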
Discovering Changes in the Network – RM The KM needs to know about changes in the network, e.g., when new nodes have arrived, so that it can initiate an exchange of SDDDs. The RM provides information about changes in the set of nodes that are part of a partition (partition changes), and the KM on all nodes has a predefined (Type 1) subscription via the DENS for stable partition changes. A partition (Px) is defined as the set of nodeIDs listed in the routing table at time t. The RM detects a partition change by finding differences in the set of nodeIDs between time t and t+x. The RM sends a message to the DENS when detecting a partition change, and the DENS sends a notification to the KM. The KM will then request the nodeIDs (new and removed) from the RM, initiate an SDDD exchange (new nodes), check whether the partition change caused any changes in information availability (removed nodes), and perform the necessary updates. The DENS will have to refresh delay tolerant subscriptions (Type 4) when there are new nodes in the partition. To detect relevant nodes, it will request the KM for any new nodeIDs having keywords/concepts for these subscriptions. The KM will reply once it has finished the SDDD exchange with the new nodes. The DENS will then perform the necessary updates.
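The partition-change rule (diffing the routing table's nodeID set between t and t+x) is essentially a set difference; a minimal sketch, with hypothetical nodeIDs:

```python
def partition_change(routing_t, routing_t_plus_x):
    """RM-style partition-change detection: diff the set of nodeIDs in the
    routing table between time t and t+x (sketch of the rule in the text)."""
    old, new = set(routing_t), set(routing_t_plus_x)
    return {"arrived": new - old, "removed": old - new}

change = partition_change(["n1", "n2"], ["n2", "n3"])
print(sorted(change["arrived"]), sorted(change["removed"]))
# ['n3'] ['n1']
```

New arrivals would trigger an SDDD exchange; removals would trigger a check of information availability.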

4.7 Summary In this chapter, the overall design of the KM has been presented. This component is part of the Ad-Hoc InfoWare middleware architecture and is aimed at dealing with issues related to knowledge management and information sharing in a Sparse MANET. The main issues are as follows: (1) understanding across different domains, (2) dealing with the problem of information overload, (3) availability tracking of information, (4) search and retrieval of information, and (5) information exchange in a standardised format. There are five sub-components in the KM, each responsible for functionality related to one of these issues (shown in parentheses below). Three of these manage metadata in some form: DDM (issue 3), SMOF (issue 1), and PCM (issue 2). The QM (issue 4) and XML Parser (issue 5) are tool components. The actual storage of data is handled by a Data Management component that is not part of the KM. The functional requirements of the KM are supported in the following way:

- Use and sharing of domain ontologies/vocabularies is supported through SMOF and Data Dictionary Management (ontologies are registered as resources to be shared).

- Management of metadata (data structures, content descriptions) and ontologies (vocabularies) handled by SMOF and Data Dictionary Management.

- Querying and retrieval of relevant information items handled by QM.
- Keeping track of the availability of information items handled by DDM.
- Dynamic updates are handled by a cooperation of PCM, SMOF, and Data Dictionary Management.
- Personalisation and filtering of information handled by PCM.
- Support of information exchange in standardised format is handled by the XML Parser.

Through the presented design we have shown relevant claims in the following way: Claim 1, avoiding information overload through filtering and personalisation, and Claim 2, cross-organisational information sharing demands vocabulary sharing or mapping, are achieved through cooperation between these components: PCM, SMOF, QM, and DDM. Claim 5, establishing and sharing information on who knows what, is achieved through the DDM and its exchange of links of related concepts and nodes.

In addition to showing how the above claims are addressed, this chapter has also shown that Claim 3 – the importance of efficient metadata management – is handled by the DDM, and that Claim 4 – utilising ontologies in dynamic update – is handled through the cooperation of several KM components: SMOF, PCM, and DDM. Claims 3 and 4 are treated in more depth in Chapters 5 and 6, respectively.

Chapter 5 Metadata Management: A Three-Layered Approach In this chapter, we focus on our approach for metadata management. We look into how we can achieve resource-efficient metadata management for Sparse MANETs in rescue scenarios through registering and exchanging information about which information resources are available in the network. We address the following questions:

• How can we achieve efficient sharing of knowledge about what information/resources are available in the network?

• How can we keep track of the availability of these resources?
• How can we handle dynamic updates in an efficient way?
• What data modelling technology is suitable in our application scenario?

Our solution is a three-layered approach to metadata management, realised through a multi-layered data dictionary. The two lower layers in our approach, the information layer and the semantic context layer, handle metadata descriptions of information items, as well as what we have termed semantic links (ref. Chapter 4). Semantic links connect terms/concepts to the location of related information items. The semantic context layer can function like a (semantic) index structure – indicating which nodes to send queries to – assuming semantically related nodes have a higher probability of having relevant information. The third layer, the ontology layer, handles semantic metadata. In combination, these layers create a semantic network of information items on nodes. To realise our approach, we need a metadata manager that can keep track of information availability and support sharing information about available resources, and we need a data modelling language that supports semantics, is suitable for exchanging metadata between nodes, and supports interoperability given the inherent heterogeneity in our application scenario. It is also important to be able to exchange metadata between nodes without loss of semantics.

We focus on showing solutions for the following claims:

• Claim 3: efficient metadata management is essential in a solution for sharing information in resource-limited environments.

• Claim 5: sharing metadata about what information is available and where it can be found is essential for efficient knowledge and information sharing.

The main contributions presented in this chapter are a three-layered approach to metadata management realised through a multi-tiered data dictionary, and the design of a data dictionary management component with dynamic update, realising the two lower layers of our approach. We describe the three-layered approach to metadata management in Section 5.1, where we will also look at dynamic updates, as the handling of these is an essential part of metadata management in such a dynamic environment as our application scenario. The Data Dictionary Manager (DDM), introduced in Chapter 4, is the main metadata handling component in our three-layered approach. It partly realises the two lower layers. The DDM is part of the KM, situated on all nodes, and has a Local Data Dictionary (LDD) and a Semantic Linked Distributed Data Dictionary (SDDD), which are essential in our approach. We describe the DDM in Section 5.2. We have previously published parts of this chapter in [Sanderson2005] and [Sanderson2006]. In Chapter 8, we present an example implementation of the DDM.

5.1 Overview of Our Approach In this section, we present the three-layered approach to metadata management, followed by the different kinds of dynamic update that are relevant in our approach. We address our approach from three perspectives: levels of information, realisation, and data modelling technology (see Figure 5.1). The levels of information – the three layers in our approach – reflect our (hierarchical) view of knowledge and (to some extent) the three kinds of metadata we need to handle (see Chapters 3 and 4). For the two lower layers, realisation is mainly represented by the DDM. In addition, we need data modelling technologies to realise the ontology layer, as well as to model metadata on all levels (layers) of our approach.

Figure 5.1 Three perspectives on our approach.

A three-layered approach is also taken by Schwotzer et al. [Schwotzer2003]. Another feature of our approach is to enhance metadata descriptions with terms from domain ontologies. The use of vocabulary terms in metadata descriptions is also found in [Kashyap1998], as part of their approach to handling information overload, and is discussed in relation to the Semantic Web in [Stuckenschmidt2005].

5.1.1 The Three Layers

Conceptually, our approach to metadata management consists of three layers of increasing abstraction that reflect three levels of information (see Figure 5.2). These are based on the Semantic Web view of levels of use [Schwotzer2003], described in Chapter 3, which is also used in Shark [Schwotzer2002B]. At the lowest level, the information layer consists of metadata descriptions of information item structure and content, as well as the


information items themselves. The intermediate level, the semantic context layer, constitutes a semantic net consisting of the terms/concepts used in metadata descriptions and their relations. The concept terms are abstracted from the metadata descriptions on the information layer; these terms, and the relations connecting them, are instances of concepts/classes and relations defined in ontologies. These ontologies constitute the ontology layer, which is the highest level.

Figure 5.2 Three-layered approach – levels of information.

A semantic link between the information and semantic context layers connects terms/concepts in metadata descriptions of information items (information layer) to related information items and related terms/concepts, i.e., what we have termed a semantic or topical context. The semantic link can point to a locally registered item or to a node that has registered information items related to the concept/term. The information layer and the semantic links to the semantic/topical context layer are realised through a multi-tiered data dictionary, managed by the DDM. The ontology layer is realised using data modelling technologies.

As stated above, the two lower layers are realised by the DDM, which is responsible for metadata management and is the main metadata handling component in our approach. The DDM data dictionaries are the LDD and the SDDD. The LDD contains metadata about information available for sharing, and the SDDD contains the elements of the semantic context layer, i.e., concepts/terms and semantic links, and keeps track of information item availability.

Metadata descriptions enhanced with concepts from a vocabulary are added to the LDD by the application layer. Excerpts of the metadata descriptions, e.g., concepts/terms and the entry ID of the registration in the LDD (i.e., the metadata resource ID), are extracted from the LDD into the SDDD. The concepts and their inter-relations make up a small semantic network on a node. Together, all SDDDs constitute a global semantic network of concepts and relations across the network nodes that can be traversed to find semantically related information (see Figure 5.3). The SDDD content on each node is distributed and replicated through the exchange and merging of SDDDs when nodes come into network range of each other (i.e., are in the same network partition). By relating the concepts and nodes in linking tables, the SDDD provides the means for linking information items in the local data dictionary to a semantic context.


Figure 5.3 Three-layered approach - realisation.
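The extraction of metadata extracts from the LDD into the SDDD, described above, can be sketched as follows. This is an illustrative sketch only, not the thesis implementation; the function name and the example entries are invented, while the field names (metaResID, conceptTerms) follow the simplified LDD schema used later in this chapter.

```python
def extract_links(ldd_entries):
    """Build SDDD linking content {concept term -> set of metadata resource IDs}
    from a list of LDD metadata descriptions."""
    links = {}
    for entry in ldd_entries:
        for term in entry["conceptTerms"]:
            links.setdefault(term, set()).add(entry["metaResID"])
    return links

# Two hypothetical LDD registrations on one node:
ldd = [
    {"metaResID": "ldd-001", "conceptTerms": ["Casualty", "Triage"]},
    {"metaResID": "ldd-002", "conceptTerms": ["Triage", "Transport"]},
]
sddd_links = extract_links(ldd)
```

A shared concept term such as "Triage" then links both registrations, forming the small per-node semantic network that is later merged across nodes.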

As stored in the SDDD, a semantic link between the information and semantic context layers is essentially a pair {concept/term, metadata resource ID}, where the metadata resource ID can point either to a metadata description registered in the LDD or to another node (nodeID). The concepts/terms in the links are extracted from the LDD metadata descriptions and originate from ontologies/vocabularies. A nodeID is a unique identifier of a device that is part of the network. In our application scenario, we assume that a (static) nodeID has been assigned to all devices a priori. We describe how the DDM manages SDDD links in Section 5.2.

An important issue in the context of SDDD links is the level of granularity for linking. We have chosen to use two levels of link granularity, depending on whether the link points to items in the local LDD or to items found on a different node. In the latter case, pointing to the node is sufficient, as the rest of the link is resolved locally on the node where the item resides. When linking to the LDD on the same node, the link points to the LDD entry/registration (registration ID). The level of availability tracking, i.e., what can be tracked, depends on the level of link granularity: for a link pointing to another node, i.e., a nodeID registered in the SDDD, the availability of the (source) node can be tracked, but not that of individual information items on that node. For a metadata resource ID pointing to a (local) LDD registration, it is possible to track the availability of the information item itself. However, assuming that any item registered in the LDD is available for sharing, any local item will likely remain available during the session.
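The two link granularities can be sketched as a simple resolution step. The "node:" prefix used to distinguish nodeIDs from local registration IDs is an invented convention for illustration, not part of the thesis design.

```python
def resolve_link(meta_res_id, local_ldd):
    """Resolve a semantic link target: a local LDD registration is resolved
    to its metadata description; anything else is treated as a remote nodeID,
    to be resolved further on the node where the item resides."""
    if meta_res_id in local_ldd:
        return ("local", local_ldd[meta_res_id])
    return ("remote", meta_res_id)

# Hypothetical local LDD with one registration:
ldd = {"ldd-7": {"resourceLoc": "/data/map.png"}}
```

Here `resolve_link("ldd-7", ldd)` yields the fine-grained local target, while a nodeID such as `"node:medic-3"` is only resolved to the node itself.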

The DDM only manages the metadata and the links between vocabulary/ontology terms (concepts) and nodes (or local dictionary registrations), i.e., between nodes and between the two lower layers. In addition, we need a data modelling technology that can handle semantics, particularly to realise the ontology layer, but also parts of the semantic/topical context layer.

In addition to a metadata manager, we need data modelling technologies that support semantics. Two such technologies are OWL [OWL] and Topic Maps [ISO13250] [Pepper2002], presented in Chapter 3 (see Figure 5.4). Both are capable of representing meaning in the form of statements about resources. The most important differences [Pepper2006] between OWL and Topic Maps (TM) are that they are targeted at machines and humans respectively, that OWL is optimised for performing inferences while TM is optimised for retrieval, and that only OWL is based on formal logic.


Figure 5.4 Three-layered approach - data modelling technology.

For the information layer, RDF(S) is sufficient. RDF(S) comprises the modelling primitives of both RDF and RDF Schema (RDFS). RDFS extends RDF with frame-based primitives, thus resolving some limitations of RDF, e.g., by providing primitives for defining relations between properties and resources [Gomez-Perez2004]. OWL builds on RDF and RDFS, adding more vocabulary for describing properties and classes. Thus, OWL can be used for the semantic/topical context layer and the ontology layer.

Based on the notion of a book index, TM enables the description of knowledge structures and their association with information sources. TM is sufficient for realising the information layer and the semantic/topical context layer. It can also, to some extent, be used for ontology definitions, but as it is not based on formal logic, it is not useful for reasoning. It may therefore not be sufficient for the ontology layer, as it may not fulfil our requirements for simple reasoning (see Chapter 4 for requirements).
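As a toy illustration of the layer split, not tied to any particular OWL or TM tool, the following represents ontology-level definitions and semantic-context-level instances as plain triples; all class, property, and instance names are invented for the example.

```python
# A tiny RDF-like triple store: (subject, predicate, object) statements.
triples = {
    # Ontology layer: class and relation definitions.
    ("Casualty", "rdf:type", "owl:Class"),
    ("treatedAt", "rdf:type", "owl:ObjectProperty"),
    # Semantic context layer: concept instances used in metadata descriptions.
    ("casualty17", "rdf:type", "Casualty"),
    ("casualty17", "treatedAt", "aidStation2"),
}

def instances_of(cls):
    """Return all subjects typed as the given class."""
    return {s for (s, p, o) in triples if p == "rdf:type" and o == cls}
```

Traversing such statements across layers is exactly what a reasoner over OWL (or a TM engine over topics and associations) would do with far richer semantics.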

5.1.2 Dynamic Update

An important issue in relation to automatic sharing and filtering of information among rescue operation participants is how to handle dynamic update, i.e., handling updates at run-time, dynamically as changes occur. The information shared in our application scenario is not static; during a rescue operation, on-site observations and gathered data will continually be added to the system, and possibly changed. For instance, medical personnel will register casualties for medical treatment, and this patient information will be updated on changes in patient status. The exceptions are what is known a priori, i.e., exchanged between organisations in advance of any rescue operation, such as shared standards and vocabularies, policies, etc. Given the dynamic environment, with a lot of activity and movement, updates are likely to be frequent, and information sources may disappear and reappear in an unpredictable fashion. This means that solving the issues of dynamic update becomes essential. These issues are:

• Unstable availability caused by the dynamic environment and limited resources.
• Frequent updates increasing communication needs and raising new issues of consistency.

The above problems also signify that it is impossible to maintain an accurate and all-inclusive view of all the available information at all times, or for all nodes to have an identical view; each node can at best manage a partial snapshot of the total amount of information available in the network. Consequently, complying with very strict


demands (rules) for consistency will be virtually impossible in this environment. A level of consistency equalling "as good as possible" or "adequate" may have to be accepted. What constitutes an adequate level of consistency may depend on the situation, i.e., context, type of information, demands of participating organisations, user preferences, etc. As we consider this to be out of scope for this thesis, we have not looked into levels of consistency.

Most related works rely to some extent on stationary networks for handling updates and synchronisation, as well as for other data and metadata management. In the following, we summarise how the most relevant systems from the related work presented in Chapter 3 handle update issues. The MoGATU [Perich2002] framework for profile based data management uses profiles and ontologies for filtering and prioritising, which is similar to our approach. AmbientDB [Fontijn2004], a full-fledged distributed database system for Sparse MANETs, handles update issues using rule based database update queries. In the topic-based approach of Shark [Schwotzer2002A], a stationary knowledge base is used for synchronising knowledge among mobile users of the same group, while ingoing and outgoing knowledge ports are used for information exchange among members of different groups. The service oriented and data centric approach of DBGlobe [Pitoura2003] uses stationary servers to handle updates, profiles and metadata storage.

There are two aspects of dynamic update. The first is to handle metadata updates dynamically as they occur, i.e., updating and sharing metadata in a dynamic environment. This is handled by the DDM, and our solution is presented in this chapter. The second is to handle updates caused by (domain related) data changes, which is handled by our approach to ontology based dynamic update, presented in Chapter 6.

A solution for dynamic update in rescue operations using Sparse MANETs has to handle node dynamicity and frequent updates. Our solution to dynamic update of metadata extracts, described in Section 5.2.3, takes into consideration issues of availability and communication needs, while consistency issues remain interesting future work. Next, we look at the different kinds of update that we have to deal with in our application scenario.

Different Kinds of Dynamic Update

In our approach, updates are differentiated according to which entities are updated (e.g., locally or between nodes), what is being updated (e.g., metadata or data), and whether it is a change or an append to existing information. In relation to metadata management, we have two kinds of update: update locally on a node and update between nodes. In the following, we use the terms local (or vertical) update and horizontal update to differentiate between the two. There are two types of horizontal update; the first is metadata exchange, which is the same as SDDD exchange and merge, i.e., updating the view of knowledge available for sharing in the network. The second is context or profile dependent update, which we have termed ontology based update (or semantic update). In addition to local and horizontal update, there may be subscription related updates. These updates are specifically requested by some entity (user/application or system) for specific changes in the LDD and for changes in data and information stored on the nodes, and should be handled by other parts of the system, e.g., subscriptions to event notifications using the DENS. In a database implementation, triggers – as known from active databases – can be used for notification of changes. As subscription based updates are handled outside of metadata management, they are not relevant for the work in this thesis; therefore we


do not describe subscription based updates any further. Figure 5.5 gives an overview of the kinds of dynamic updates that we describe in this section.

Figure 5.5 Overview of kinds of dynamic update.

Local update is update of the SDDD locally on the node, reflecting LDD changes in metadata descriptions. This kind of update happens between different layers of the dictionary, and can be both appends (new metadata extracts) and changes to existing SDDD content (SDDD tables). Local/vertical update is usually unproblematic, as it happens locally and is likely to be unaffected by the environment outside the node. In Section 5.2.2, we describe how local update is solved in our approach. Although updates/changes to the LDD also happen locally on a node, and thus could be termed local updates, we have not included them here, as LDD update only concerns the LDD level of the data dictionary.

Metadata exchange, i.e., SDDD exchange and merge, involves exchanging metadata extracts, e.g., concept IDs, metadata resource IDs, and availability information, between SDDDs on different nodes, and then merging the received content into the SDDD. A 'metadata resource ID' here refers to the registration ID of the metadata description for an item in the local LDD, or to a nodeID if it is the result of an exchange. These updates, both locally and between nodes, are appends (additions) to the SDDD linking tables, except when updating information availability (a change). It may be worth considering prioritising metadata of different types and importance, but we have not looked into this; in ontology based update, we do however consider different priorities for different types of information items, which is a related issue. Metadata exchange, as described in this section, does not guarantee that all nodes have the same view of the information available in the network. This has to be accepted, as keeping all nodes completely updated at all times would put too great a strain on communication and resources in the Sparse MANET. If a user/application or a situation requires a complete, up-to-date view, this can be achieved using subscription related updates, handled by the DENS (ref. Chapter 4).
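A minimal sketch of the merge step of metadata exchange, under assumed data shapes (a dictionary keyed by {concept, metadata resource ID} pairs), might look as follows; duplicate links only get their availability and timestamp refreshed, while unknown links are appended.

```python
def merge_sddd(local_sddd, received, now):
    """Merge received SDDD extracts into the local SDDD.
    local_sddd: {(concept, metaResID): {"available": bool, "ts": int}}"""
    for (concept, res_id), info in received.items():
        key = (concept, res_id)
        if key in local_sddd:
            # Duplicate link: only refresh availability and timestamp.
            local_sddd[key]["available"] = info["available"]
            local_sddd[key]["ts"] = now
        else:
            # New link: append to the SDDD linking tables.
            local_sddd[key] = {"available": info["available"], "ts": now}
    return local_sddd

mine = {("Triage", "node:A"): {"available": True, "ts": 10}}
theirs = {("Triage", "node:A"): {"available": False, "ts": 12},
          ("Transport", "node:B"): {"available": True, "ts": 12}}
merged = merge_sddd(mine, theirs, now=20)
```

The real exchange additionally filters what to send by the time of the last exchange with the peer node, as described for the DDM functions in Section 5.2.1.2.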
We describe our solution to metadata exchange in Sections 5.2.2 and 5.2.3.

Ontology based dynamic update is the term we use for updates where information from defined ontologies is used to support a more efficient use of available resources when sending updates to other nodes in the network. In Sparse MANETs, and given the characteristics of our application scenario, resources like bandwidth and battery may be limited, and it is desirable to use the existing resources as efficiently as possible. Sending frequent updates may be costly resource-wise, and it would be valuable to use context information and knowledge about the update itself, e.g., what kind of update it is, what kind of information or data is being updated, what the situation is, etc., to utilise the available resources as well as possible. For instance, information about which kind of data is more


important in a given context can give updates of this data a higher priority when sending update messages to relevant nodes, i.e., prioritised update messages. Here we assume that prioritised messages are supported by the underlying message system. Ontology based update involves retrieving information stored in the knowledge base, e.g., types of information or data and their priorities, the structural organisation of rescue operations, etc., defined in ontologies. It also involves the DDM in retrieving the necessary metadata about where (e.g., on which node) to find this information (e.g., priorities). The actual information is then retrieved from the knowledge base using a data management component outside of the KM (ref. Chapter 4). As ontology based update essentially concerns how we can utilise resources more efficiently when performing dynamic updates, it may involve any kind of update, data or metadata, changes or appends. We describe our approach to ontology based dynamic update in Chapter 6, where we also present an example rescue ontology targeted at this purpose.
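The priority lookup can be sketched as below. The information types and priority values are invented for illustration; in our approach they would be derived from the rescue ontology presented in Chapter 6, not hard-coded.

```python
# Hypothetical ontology-derived priorities: lower number = higher priority.
ONTOLOGY_PRIORITIES = {
    "patientStatus": 1,    # e.g. changes in triage category
    "resourceRequest": 2,
    "photo": 5,
}

def update_priority(info_type, default=9):
    """Priority tag for an update message; unknown types get a low default,
    so updates are never dropped, only deprioritised."""
    return ONTOLOGY_PRIORITIES.get(info_type, default)
```

A message system supporting prioritised delivery could then schedule an update tagged `update_priority("patientStatus")` ahead of bulkier, less urgent updates.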

5.2 Data Dictionary Manager

Being the main component in our approach to metadata management and sharing, the DDM is responsible for the storage and management of metadata for items registered to be shared, i.e., structure and content description metadata. The DDM is a very lightweight component, small enough to run on all devices. The requirements are explored in more detail in Section 5.2.1. The main requirements of the DDM, as described in Chapter 4, are:

- Manage storage, organisation and update of data dictionaries.
- Keep track of information/data availability.
- Support requests for information from data dictionaries.

In this section, we give a description of the design of the DDM and the two levels of data dictionaries. We give detailed requirements, examples and some use cases, and present the functionality offered by the DDM. As seen in Figure 5.6, the DDM has two data dictionaries, LDD and SDDD. The figure does not show the KM Engine.

Figure 5.6 Data Dictionary Manager (DDM).

5.2.1 DDM Requirements and Functionality

In this section, we describe the detailed requirements and the resulting functionality of the DDM.


5.2.1.1 Detailed Requirements

The DDM does not handle requests directly; applications and middleware components issue their requests to the KM Interface, and the KM Engine then dispatches the requests to the DDM, as described in Chapter 4. We have marked requirements resulting in functionality offered via the KM Interface as 'external', and those that will only be used by the DDM internally as 'internal'.

Manage storage, organisation and update of data dictionaries (external and internal). Metadata descriptions and extracts have to be organised and stored in the LDD and SDDD respectively, and the dictionaries have to be updated on changes. This leads to the following (sub-)requirements:

• Register/update LDD contents (external and internal). Applications and middleware components should be able to register in the LDD new information resources to be shared, as well as update existing LDD items.

• Update SDDD contents (internal). The DDM should update the SDDD with links and availability information, as well as update the information already stored in SDDD.

• Perform metadata exchange (internal). Exchange and merge of SDDD contents between nodes. This includes the exchange of link information (metadata extracts) between DDMs, merging the received contents into the SDDD, and updating availability information. To avoid duplicates, in some cases only the availability and/or timestamps of existing links are updated.

• Create or restore data dictionaries at start-up/boot (internal). At the time of creation, this includes adding initial elements to the LDD and populating the SDDD with metadata extracts from the LDD. At restart, e.g., during a rescue operation, the data dictionaries are restored.

• Save data dictionaries at shut down (Graceful shut down) (internal). This implies that the contents registered in the data dictionaries should be in persistent storage after shut down so that they can be restored at start-up.

Keep track of information availability (internal). This concerns the availability of information resources/items registered in the linking tables of the SDDD. It is the availability of the information resources (registered in an LDD for sharing) that we want to track, not the availability of metadata descriptions (the LDD registration entries). For a link pointing to another node, we can only keep track of the availability of the node, not of individual information items on that node. For a link pointing to a local item, more fine-grained availability tracking may be possible. Availability is typically updated as part of metadata exchange, initiated after a partition change has been notified to the KM.

Support requests for information from the data dictionaries (external and internal). Support requests for information from the SDDD and LDD:

• Find sources with related information. Given a set of terms (concepts), the DDM should return sources with related information items (a search in the SDDD). These sources may be registration IDs of metadata descriptions in the LDD, or nodeIDs of other nodes that contain related information.

• Request information about registered items. This implies lookup in the LDD, retrieving details from the registered metadata descriptions, including the location of the (local) information source.


Note that the DDM only offers search and retrieval of metadata registered for sharing in the data dictionaries; it is not involved in the retrieval of actual information items or data. This means that an application, after having used the DDM to find relevant metadata descriptions of information items, will use the data management system on the node where the information item was registered in the LDD to retrieve the item. In cases where the relevant information item is not registered on the local node, this implies that a request will be sent to the node where it is registered. Thus, retrieval of information items (data) is independent of the KM. The DDM will need protocols for communicating with DDMs on other nodes, e.g., for exchanging SDDD contents during metadata exchange.

5.2.1.2 Resulting Functionality

In the following, we present a set of functions representing the minimum functionality that the DDM needs to fulfil its requirements. First, we list the functions used in services offered by the KM, followed by functionality that is only used internally. The purpose is to show what is needed, as well as possible input and output. In Chapter 8, we present how this can be realised in an example implementation.

Functionality used in services offered by the KM:

addInfoItem(): Add/register information item in LDD (LDD entry). Input: the metadata description including the resource name.

updateInfoItem(): Update an existing entry in the LDD. Input: the new or modified metadata description.

getInfoItem(): Request information about items registered in the LDD. Input: a metadata registration ID. Returns the metadata description containing, e.g., location of resource.

findResources(): Find all resources related to given terms/concepts. Input: a set of concept terms. Used for search of the SDDD. Returns all related SDDD links, both involving local LDD registrations and remote (nodeIDs), together with link level and availability.

Functionality for internal use:

boot(): Start-up/boot of DDM and data dictionaries. Options can be given to restore existing LDD and SDDD, or to create from scratch.

restart(): Restart of DDM, restoring both LDD and SDDD.

shutdown(): Graceful shut down of DDM and data dictionaries. The data dictionaries are not deleted, but can be discarded at next start-up.

addInitialElementsLDD(): Add initial elements to LDD (at start-up/boot).

populateSDDD(): Populate SDDD with LDD extracts (at start-up). This function performs metadata extraction from LDD.

localUpdateSDDD(): Update SDDD after changes in LDD.

addLink(): Add a new link {conceptTerm, metaResID} to SDDD linking tables. Used in merge and in populating SDDD.

addAvailability(): Add availability for a new link.

updateAvailab(): Update availability (and timestamp) for a given metaResID to keep track of information availability.


sendUpdates(): For exchange: send all updates to a given nodeID reflecting changes after the previous exchange with the same nodeID. Calls getUpdatesForExchange().

getUpdatesForExchange(): Retrieves updates to send to a given nodeID, reflecting changes after a previous exchange with the same nodeID. Calls getTimeLastExchange().

getTimeLastExchange(): Returns the time for the last exchange with a given nodeID.

receiveAndMerge(): Receive SDDD contents from another node after exchange and merge into SDDD. Calls updateExchangeList().

getLinksSinceTime(): Get SDDD links since a given time (timestamp). Used in receiveAndMerge().

updateExchangeList(): Update the list of previous exchanges.
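A minimal in-memory sketch of a few of the functions listed above (addInfoItem(), localUpdateSDDD(), addLink(), updateAvailab(), findResources()) might look as follows. The data structures, ID format, and signatures are assumptions for illustration, not the example implementation presented in Chapter 8.

```python
class DataDictionaryManager:
    """Toy DDM: LDD as a dict of metadata descriptions, SDDD as a dict of
    {(conceptTerm, metaResID): available} links."""

    def __init__(self):
        self.ldd = {}
        self.sddd = {}
        self._next_id = 0

    def addInfoItem(self, description):
        """Register an information item in the LDD; returns the new metaResID
        and triggers a local update of the SDDD."""
        self._next_id += 1
        res_id = f"ldd-{self._next_id}"
        self.ldd[res_id] = description
        self.localUpdateSDDD(res_id)
        return res_id

    def localUpdateSDDD(self, res_id):
        # Extract concept terms from the LDD entry into the SDDD linking tables.
        for term in self.ldd[res_id].get("conceptTerms", []):
            self.addLink(term, res_id)

    def addLink(self, concept_term, meta_res_id, available=True):
        self.sddd[(concept_term, meta_res_id)] = available

    def updateAvailab(self, meta_res_id, available):
        # Mark all links involving the given resource/node as (un)available.
        for key in self.sddd:
            if key[1] == meta_res_id:
                self.sddd[key] = available

    def findResources(self, terms):
        """Return all SDDD links matching any of the given concept terms."""
        wanted = set(terms)
        return {k: v for k, v in self.sddd.items() if k[0] in wanted}

ddm = DataDictionaryManager()
rid = ddm.addInfoItem({"conceptTerms": ["Casualty", "Triage"],
                       "resourceLoc": "/data/obs1.xml"})
hits = ddm.findResources(["Triage"])
```

Registering one item thus both creates the LDD entry and appends its concept links to the SDDD, so a later findResources() call can find it by concept term.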

5.2.2 The Data Dictionaries

The main objective of the DDM is to manage metadata, stored in data dictionaries, describing information sources shared in the network. There are two levels of data dictionaries, corresponding to the information layer and the links to the semantic context layer in our three-level approach to metadata management. The lower data dictionary, the LDD, is used for registering local information resources that a node wants to make available to the rest of the network, while the higher level dictionary/catalogue, the SDDD, keeps a view of what is available for sharing both locally and on neighbouring nodes. In this section, we describe the design of the DDM and the data dictionaries.

At the time of network bootstrap, before a node has discovered any other nodes in network range, the "world" as known to the node consists only of the node itself and what is stored locally (metadata registered in the LDD for sharing). Initially, from the perspective of a single node, the global view of the knowledge in the Sparse MANET is thus only what exists locally on the node. When the node comes into network range of other nodes, it will exchange and merge the contents of its global view with the global views of its neighbouring nodes, i.e., each node exchanges the contents of its SDDD with the SDDDs of its neighbouring nodes. Keeping an SDDD on all nodes at all times solves many of the problems encountered at bootstrap time, for example where to find vocabulary terms/concepts for describing and linking resources, and where the DENS (ref. Chapters 2 and 4) can look for initial subscriptions.

At start-up, the DDM first creates the LDD, adding an initial population, or restores it from permanent storage. It then creates the SDDD and populates it with concepts/terms extracted from the LDD. Details of how the LDD and SDDD are populated at run time are described in Sections 5.2.2.1 and 5.2.2.2 respectively. At shut-down, the DDM should save the contents of both the LDD and the SDDD in permanent storage. For the LDD, pointers to local resources will be checked and updated at the next start-up. In cases of limited storage space on a node, the SDDD may be discarded at shut-down and rebuilt on start-up. The relevant DDM functions are boot(), restart(), and shutdown().

Some clean up of links may be conducted at start-up, but through the combination of lazy update and periodical clean-up (refresh), the SDDD will gradually become updated. The availability information in the SDDD is updated in a lazy fashion, i.e., availability is updated at notification of partition changes, instead of polling for changes. As described in


Chapter 4, subscriptions for this kind of information from the RM can be made via the DENS. The LDD and SDDD will need some periodical clean-up of outdated information and links respectively, but we have not looked into clean-up in this thesis. The DDM function for updating availability in the SDDD is updateAvailab(). There may also be some periodical refresh of availability if a long time has passed since the last exchange.
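Lazy availability update on a partition-change notification can be sketched as below. The "node:" prefix for distinguishing remote links from local registrations is an invented convention; the real trigger would be the DENS notification mentioned above.

```python
def on_partition_change(sddd, reachable_nodes):
    """Mark remote SDDD links (un)available according to which nodes are in
    the current partition, instead of polling; local links are untouched."""
    for (concept, res_id) in sddd:
        if res_id.startswith("node:"):
            sddd[(concept, res_id)] = res_id in reachable_nodes
    return sddd

sddd = {("Triage", "node:A"): True,
        ("Triage", "node:B"): True,
        ("Map", "ldd-3"): True}
# Notification: only node:A remains reachable in this partition.
on_partition_change(sddd, {"node:A"})
```
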

5.2.2.1 Local Data Dictionary – LDD

The LDD corresponds to the first level in our three-level approach, the information layer, and is the lower level data dictionary of the DDM. In essence, the LDD is a catalogue/register containing metadata descriptions of local resources that a node (user/application) is willing to share in the network. Examples of types of information items that can be registered for sharing include ontologies/vocabularies, metadata models, system related data, profile and context information, and domain related data, e.g., medical data, photographs, etc. To make an information resource available for sharing in the network, it has to be specifically registered in the LDD by adding a metadata description. The metadata description is composed (created) at the application level, using appropriate concepts and terms from vocabularies and standards agreed upon in the a priori phase. Services from the KM can be used for finding relevant vocabularies, provided these have been registered in the LDD. It is necessary to periodically perform consistency checking and clean-up of the LDD to remove outdated information. We have, however, not looked into consistency checking or periodic clean-up in this thesis.

Related to the Three Types of Metadata

There are three kinds of metadata that are particularly relevant in our scenario: information structure and content description, semantic metadata, and profile and context metadata (ref. Chapter 3). The first is the type of metadata handled by the DDM, i.e., registered in the LDD. From the perspective of the DDM, the two latter are seen as resources to be shared, and these can be registered in the LDD for sharing in the same way as an information resource. Examples of metadata registered as resources are a device profile describing node capabilities (and resources) and an ontology defining (the domain of) hardware capabilities.
Sharing this kind of metadata, e.g., profile information or domain ontologies, is useful in a heterogeneous environment such as our scenario, particularly as we are dealing with several kinds of heterogeneity, e.g., in devices, organisations, domains, and data models.

Structure of Metadata Descriptions

As the metadata descriptions registered in the LDD may be for different kinds of resources, and these may be described using different metadata models, it would be an advantage to have a homogeneous structure for the metadata items stored in the LDD, to simplify metadata management. To achieve a homogeneous structure, we can envelope the metadata descriptions in a container, an approach employed in, e.g., MPEG-21 Digital Items [MPEG-21]; for the LDD, however, a much simpler structure would be sufficient. To differentiate between different structurings of metadata for different types of resources, a categorisation of metadata, i.e., a kind of "metadata typing", can be used to show which kind of metadata is stored, for instance "profile", "schema", or "content description". Such categorisation of metadata would give all items stored in the LDD the same basic structure, only being of different classes/types of metadata. Ontologies can be used for defining and classifying the different types of metadata. The LDD can be realised in many ways, for instance as XML structures or as tables in a database.
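A uniform container with a metadata-type field, as described above, can be sketched as follows; the field names and the example payload are invented, and the set of allowed types simply mirrors the categories named in the text.

```python
def make_container(meta_type, payload):
    """Wrap a metadata description in a homogeneous container, typed by a
    metadata category (in a full design, defined by an ontology)."""
    allowed = {"profile", "schema", "content description"}
    if meta_type not in allowed:
        raise ValueError(f"unknown metadata type: {meta_type}")
    return {"metaType": meta_type, "payload": payload}

# A hypothetical device profile registered as a shareable resource:
c = make_container("profile", {"device": "PDA-3", "battery": "low"})
```

All containers share the same outer structure, so the DDM can manage profiles, schemas, and content descriptions uniformly while applications interpret the payload.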

Page 111: folk.uio.nofolk.uio.no/noruns/thesisFinal.pdf · Abstract Characterised by being highly dynamic, heterogeneous and hectic, rescue and emergency operations are cooperative efforts


In the examples we give in this chapter, a very simplified database schema is used, showing only the parts necessary to illustrate the function of the LDD:

LocalDataDictionary(metaResID, infoType, vocabulary, resourceLoc, conceptTerms, timestamp)

The example attributes represent the following:

• metaResID: metadata resource ID, i.e., the LDD registration ID of the metadata description.

• infoType: the type of information of the registered resource, e.g., a vocabulary or a data item.

• vocabulary: vocabulary (or vocabularies) used in the description.

• resourceLoc: the location of the information resource.

• conceptTerms: terms from vocabularies describing the registered information resource.

• timestamp: time of last update.
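As an illustration only (not part of the thesis design), the simplified LDD schema above could be realised as a table in an embedded database. The table and column names follow the example schema; the database choice, value types, and sample row are our own assumptions:

```python
import sqlite3

# In-memory database standing in for a node's local storage (assumption).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE LocalDataDictionary (
        metaResID    TEXT PRIMARY KEY,  -- LDD registration ID of the description
        infoType     TEXT,              -- type of information, e.g., 'data item'
        vocabulary   TEXT,              -- vocabulary used in the description
        resourceLoc  TEXT,              -- location of the information resource
        conceptTerms TEXT,              -- terms describing the registered resource
        timestamp    REAL               -- time of last update
    )
""")

# Registering a casualty record, roughly as in Use Case (5); values illustrative.
conn.execute(
    "INSERT INTO LocalDataDictionary VALUES (?, ?, ?, ?, ?, ?)",
    ("mID11", "data item", "rescueVocab", "local://casualty/42", "codeRed", 0.0),
)
row = conn.execute(
    "SELECT metaResID, conceptTerms FROM LocalDataDictionary"
).fetchone()
# row -> ("mID11", "codeRed")
```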

LDD Creation
There will always be some initial content to be registered in the LDD, e.g., metadata of vocabularies and standards to be used, and other information from the a priori phase. The application should be able to decide whether the LDD should keep the previous content. If the previous content belonged to the same rescue context, e.g., after a temporary shutdown, we assume that the DDM will have persisted the LDD and thus can restore its content. At restart, pointers to resources in the LDD should always be checked and confirmed to ensure that the content is as accurate as possible. The relevant function in the DDM for creating the LDD is addInitialElementsLDD().

Register Item in LDD
Items are registered in the LDD in the following way: via the KM Interface, the application requests the DDM to add an item to the LDD, providing the metadata to be stored. The function used in the KM Interface is registerItem(). The KM Engine dispatches the request to the DDM, calling the function addInfoItem(). A new LDD entry is created, and the LDD tables are updated with the new information. The DDM then initiates a local update of the SDDD. If there is more than one copy of an information item in the network, each copy will be registered in the LDD of the node on which it is stored.

LDD Updates
When there are changes in the LDD-related information resources, the LDD has to be updated to reflect these changes. How this is realised will depend on the LDD implementation as well as the underlying storage system on the node. To detect changes in the LDD, subscriptions or, if the underlying storage system is a database system, triggers can be used. Updates to the LDD are local, and include both changes and appends to existing metadata. The corresponding DDM function is updateInfoItem(), which initiates an SDDD local update to reflect the changes.
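The registration flow above (registerItem() dispatched by the KM Engine to the DDM's addInfoItem(), followed by a local update of the SDDD) could be sketched as follows. Only the function names come from the design; the class structure and data representations are our own simplified assumptions:

```python
class DDM:
    """Simplified Data Dictionary Manager holding the LDD and SDDD."""

    def __init__(self):
        self.ldd = {}    # metaResID -> metadata description
        self.sddd = []   # (conceptTerm, metaResID) links

    def add_info_item(self, meta_res_id, metadata):
        # Create a new LDD entry, then initiate local update of the SDDD.
        self.ldd[meta_res_id] = metadata
        self.local_update_sddd(meta_res_id, metadata)

    def local_update_sddd(self, meta_res_id, metadata):
        # Extract concept terms into {conceptTerm, metaResID} links.
        for term in metadata.get("conceptTerms", []):
            self.sddd.append((term, meta_res_id))


class KMInterface:
    """The KM Engine dispatches application requests to the DDM."""

    def __init__(self, ddm):
        self.ddm = ddm

    def register_item(self, meta_res_id, metadata):
        self.ddm.add_info_item(meta_res_id, metadata)


km = KMInterface(DDM())
km.register_item("mID11", {"infoType": "data item", "conceptTerms": ["codeRed"]})
# km.ddm.sddd -> [("codeRed", "mID11")]
```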
Use Case (5): Registering Metadata of Information to be Shared
In this use case, metadata descriptions of casualty registration information are added to the LDD. A doctor carrying a device, Node-1, registers casualties with urgency codes (ref. Chapter 2).


1. A doctor uses the casualty registration application to register casualty data about an injured passenger.

2. The application creates a metadata description using concepts/terms from a vocabulary.

3. The application requests the DDM to add the metadata description to the LDD for sharing. The function called by the application in the KM Interface is registerItem(); the DDM function called internally in the KM is addInfoItem().

4. DDM adds the received metadata to the LDD.

The result of this use case is that the metadata description of the casualty registration data is registered for sharing in the LDD. Figure 5.7 illustrates possible LDD contents after registration.

Figure 5.7 Example LDD contents.

5.2.2.2 Semantic Linked Distributed Data Dictionary - SDDD
The SDDD is the higher-level data dictionary, and its purpose is threefold:

1) linking the first and second layers in the three-layer metadata approach,
2) keeping track of information item availability, and
3) exchanging metadata extracts so that information about what is available for sharing is distributed through the network.

Although the SDDD is not a virtual global data dictionary (ref. Chapter 4), it can be said to be a ‘virtual’ data dictionary in the sense that it does not store any copy, abstract, or summary of LDD contents, but is realised as a set of linking tables, linking concepts to where the resources are found. In combination with a data modelling technology providing semantic support, e.g., RDF/OWL or Topic Maps, the SDDD links information items on different nodes to a semantic context through the terms/concepts used in metadata descriptions, i.e., a kind of semantic network that can be traversed and searched. (This assumes that the metadata terms are from a common/shared ontology or vocabulary.) In our approach of having an SDDD on every node, the contents of the SDDD are distributed and replicated at merge time, i.e., when the SDDDs on two nodes exchange contents and then merge the received contents with the existing SDDD content on each node. In this way, SDDD content is propagated through the network. This means full replication and distribution of SDDD content.

SDDD Contents and Population
The SDDD contains tables linking terms/concepts and the location of metadata resources related to these; the location can be the registration ID of the metadata resource in the local LDD, or the nodeID of a node where a resource related to this term is registered, i.e., a node where the term was extracted from the LDD. Thus, the SDDD points to a (local) LDD or to a node whose LDD has registered an information item related to this term.


The data extracted from LDD includes concepts (as concept IDs or concept terms), and where in the LDD the item is found (logical ID of dictionary item). Note that the location of the actual information resource is part of the metadata description in the LDD, and the SDDD only stores a pointer to the metadata description of the information source. This means that to find the location of an information item, it is necessary to perform a lookup in the LDD. The relevant functions in the DDM are populateSDDD() and addLink().

An SDDD link consists of a pair {conceptTerm, metaResID}, where metaResID points to a metadata description registered in the LDD, or to another node (nodeID). The SDDD is created by scanning the LDD for concepts (terms) that are stored together with the LDD registration ID (metaResID) where the concept/term was found. The extraction example in Figure 5.8 results in the links {codeRed, mID11} and {codeGreen, mID22}. The availability of the information item represented by the metaResID is also stored in the SDDD (see Figure 5.9). Availability is updated in conjunction with SDDD exchange and merge (see Section 5.2.3 on metadata exchange).

Figure 5.8 Extracting links to SDDD.
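The extraction sketched in Figure 5.8 could look roughly as follows, assuming LDD rows expose their concept terms as a list; the encodings (availability 1 = available, link level 0 = local) and data structures are our own assumptions:

```python
# LDD rows with the metaResIDs and concept terms from the Figure 5.8 example.
ldd_rows = [
    {"metaResID": "mID11", "conceptTerms": ["codeRed"]},
    {"metaResID": "mID22", "conceptTerms": ["codeGreen"]},
]

def populate_sddd(rows):
    links = []         # SDDD_Linking: (conceptTerm, metaResID)
    availability = {}  # SDDD_Availability: metaResID -> (availability, linkLevel)
    for row in rows:
        for term in row["conceptTerms"]:
            links.append((term, row["metaResID"]))
        # Items registered in the local LDD are available and local.
        availability[row["metaResID"]] = (1, 0)
    return links, availability

links, availability = populate_sddd(ldd_rows)
# links -> [("codeRed", "mID11"), ("codeGreen", "mID22")]
```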

As long as the resource pointed to by a link is registered in the LDD locally on the node, the link points to the registration/entry in the LDD. After a merge, links can point to items registered for sharing on a different node. In this case, the link points to the other node as such, and not to a dictionary entry in the LDD of that node. The link is then resolved locally on the node, using KM services as described in Chapter 4. This means that a finer granularity is used when links point locally on a node, while a coarser granularity is used when pointing to a different node.

In cases where several copies of the same information resource are registered in the network, a search in the SDDD may retrieve metaResIDs for different copies of the same information resource. This is solved at the application level in the following way: after retrieving the metadata descriptions from the LDD on the relevant node, the application can compare these to see which information resources to retrieve; any case where more than one copy of the same information item is available will then be discovered.

The following SDDD schemas are used in the examples in the remainder of this chapter:

SDDD_Linking(conceptTerm, metaResID, timestamp)
SDDD_Availability(metaResID, availability, linkLevel, timestamp)


The attributes represent the following:

• metaResID: registration ID in the local LDD or nodeID of the node where the information resource was registered in the LDD.

• conceptTerm: the concept term linked to the resource.

• availability: the availability of the registered information item (or nodeID, depending on link level). For an item registered in the local LDD, this is typically always 1 (available).

• linkLevel: link granularity, i.e., whether the metaResID points to a local LDD entry or to another node.

• timestamp: time of last update.

SDDD Update After LDD Changes (Local Update)
The SDDD will have to be updated on changes in the LDD, e.g., when new items have been registered, metadata descriptions have changed, or items have been removed from the LDD. This is termed ‘local update’ (vertical update) and, as the term implies, will only happen locally on a node. When there are changes in the metadata descriptions registered in the LDD, e.g., new terms/concepts added in a description, the SDDD will be updated to reflect these changes. The procedure updating the LDD can call a procedure for updating the SDDD. Function in DDM: localUpdateSDDD(), which calls addLink() and updateAvailab().

Use Case (6): SDDD Population and Update
This use case builds on the previous use case (Use Case (5) in Section 5.2.2.1), where a doctor registers in the LDD urgency codes of casualties at a rescue site. Immediately after LDD registration, the DDM launches an SDDD update with the appropriate concept terms, the LDD registration ID extracted from the LDD, and availability information. The prerequisite for this use case is that a casualty registration has been added to the LDD (see Figure 5.7). The result after this use case is that SDDD links are created, and the information item (the casualty registration) is now available for sharing in the network. The DDM updates the SDDD with the new information by calling the function addLink(). Note that this function also stores availability information for the link, but we have chosen to show this in two steps below.

1. DDM creates a semantic link and updates the SDDD.
2. DDM stores availability information.

Figure 5.9 illustrates SDDD contents as extracted from the LDD shown in Figure 5.7.

Figure 5.9 Example SDDD contents.


5.2.3 Metadata Exchange
Metadata exchange is the way the SDDD is updated after changes in the network, and is one of the variants of dynamic update described in Section 5.1.2. It is also the way the (meta-)information about what is available for sharing is propagated through the network. Recalling from Chapter 4, the KM will receive a notification regarding changes in the current partition, and then request from the Resource Manager the set of nodeIDs that comprise the current partition. An example of a partition change is two nodes coming into network range of each other. After receiving the nodeIDs of the new partition, the KM requests the DDM to perform metadata exchange with the new nodeIDs, and to update the SDDD with received content and availability changes accordingly. The DDM keeps the nodeIDs of all the nodes it has performed an exchange with, together with the time of the (last) exchange with each nodeID, on a list of previous exchanges. This list is updated as part of the procedure for exchange and merge. Thus, SDDD update after a network change involves:

1) Exchange and merge
2) Availability update

Exchange and Merge
The metadata exchanged is the contents of the SDDD, i.e., the metadata extracts from the LDD and availability information. What is exchanged comprises both results from local update – reflecting changes in the LDD – and results from previous exchanges with other (peer) nodes. After an exchange with another node, the contents of the SDDD have to be updated with the contents received from the other node, which is what we have termed a merge. When two nodes come into network range, they will exchange SDDD content, and, depending on available resources, each node will merge the received content with its own SDDD. This merge can be viewed as an expansion of each node’s worldview. The merge is dependent on available resources, as very small devices, e.g., sensors, may not have enough resources (e.g., battery and storage capacity) to merge and store the received contents in their SDDD.

In relation to exchange, it is important to handle cases of nodes repeatedly going in and out of range, to make sure an exchange is not repeated unnecessarily, wasting resources and bandwidth. By using timestamps in the SDDD tables, we know when changes to the SDDD were made. In addition, we need to keep a list of previous exchanges with the timestamp and nodeID of the node with which each exchange was performed. At the time of exchange, the DDM on each node communicates its own nodeID, and each node checks its list for the time of the last exchange with the received nodeID. This limits the exchange to include only the updates added to the SDDD since the last exchange with that particular node. Comparing the timestamps of previous node exchanges with SDDD timestamps before an actual exchange takes place will also solve situations of nodes repeatedly moving in and out of range, as unnecessary updates can be avoided. When using timestamps, issues of clock differences throughout the network have to be dealt with adequately. For simplicity, we assume that this issue is solved by some other part of the system.

The DDM Exchange table (list of previous exchanges) used in metadata exchange keeps track of nodeIDs and times of previous metadata exchanges (SDDD exchanges). It is managed by the DDM and can thus be said to be a part of the data dictionary, but it is not part of the LDD or SDDD. An example database schema is shown below.

DDM_Exchange(nodeID, timestamp)
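A minimal sketch of the timestamp-limited exchange described above, with getTimeLastExchange() and getUpdatesForExchange() behaving as in the stepwise description; the in-memory tables and time values are our own illustrative assumptions:

```python
# DDM_Exchange: nodeID -> time of last exchange with that node (assumption).
ddm_exchange = {"Nod-2": 100.0}

# SDDD_Linking rows: (conceptTerm, metaResID, timestamp).
sddd_linking = [
    ("codeRed", "mID11", 90.0),     # already sent in a previous exchange
    ("codeGreen", "mID22", 150.0),  # added since the last exchange
]

def get_time_last_exchange(node_id):
    return ddm_exchange.get(node_id)  # None -> never exchanged before

def get_updates_for_exchange(node_id, now):
    last = get_time_last_exchange(node_id)
    if last is None:
        updates = list(sddd_linking)  # first exchange: send everything
    else:
        updates = [u for u in sddd_linking if u[2] > last]
    ddm_exchange[node_id] = now       # update the list of previous exchanges
    return updates

updates = get_updates_for_exchange("Nod-2", now=200.0)
# Only the entry added after time 100.0 is sent.
```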


As an example, we consider a case where two nodes A and B have already exchanged SDDDs once; node B then moves out of range of node A, and later they re-join, which means an update exchange of SDDDs has to be performed. The DDMs first exchange nodeIDs, and each then checks whether the received nodeID is on its list of previous exchanges. Taking the time of the last exchange with the received nodeID for comparison, the SDDD tables are scanned to find any additions or updates that have occurred between the last exchange and the time of the current update. Finally, the nodes exchange the relevant SDDD content. In the case of a node repeatedly moving in and out of network range, an update exchange will only happen if any actual changes have occurred in the SDDDs. In addition, time limits (minimum time between exchanges) can be used, e.g., in case a periodic refresh of updates is needed when nodes remain in network range (neighbours) for a longer time period. Relevant functions in the DDM are shown in parentheses. A stepwise description of the exchange:

1) Exchange nodeIDs.
2) Check the list of previous exchanges for the time of the last exchange with the received nodeID. (DDM: getTimeLastExchange())
3) Retrieve updates since the last exchange. (DDM: getUpdatesForExchange())
4) Send the relevant updates to the nodeID.

The merge (DDM: receiveAndMerge()) is completed in the following steps:

1) Concepts are matched with existing concepts in the SDDD.
2) If there is a concept match, the SDDD linking table is updated with nodeIDs (node links).
   a) In the case of new concepts, the DDM adds the concepts and nodeIDs to the SDDD linking table, providing there are enough resources.
   b) Otherwise it adds no concepts, only a link to the node that expands its global view.
3) The DDM updates the availability table in the SDDD.
4) Update the list of previous exchanges: add the nodeID and time of exchange, or update the time of exchange.

Availability Update
The DDM checks the set of nodeIDs received from the RM against the nodeIDs related to terms in the SDDD, and links {term, nodeID} involving nodeIDs that are missing in the new partition are marked as obsolete. An obsolete link {term, nodeID} is not deleted, as the node may rejoin the partition at a later stage, and the link can then be activated again. Concept terms related to information items that were available only at the now missing nodeID, i.e., that now have only obsolete links, are now ‘nonexistent’ in the view of this node. The function in the DDM that performs availability update is updateAvailab(). It is possible that some garbage collection of obsolete links should be performed at certain time intervals; when an item has been unavailable for a very long time, we can assume that the nodeID will not rejoin the partition. Another case where links can become obsolete is if all items related to a term/concept are unavailable.

In summary, availability update in conjunction with metadata exchange is performed in the following steps:

1) Check the set of nodeIDs from the RM against existing links.
2) Set the availability of links {term, nodeID} with missing nodeIDs to 0 (zero).
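The merge and availability-update steps above could be sketched as follows. Only the function names receiveAndMerge() and updateAvailab() come from the design; the encodings (availability 1/0, link level 0 = local, 1 = node link) and data structures are our own assumptions:

```python
# SDDD state on this node: one local link, registered in the local LDD.
sddd_linking = {("codeRed", "mID11")}
availability = {"mID11": (1, 0)}  # key -> (availability, linkLevel)

def receive_and_merge(sender_id, received_links):
    # Received links are re-pointed at the sending node as a whole (node link).
    for term, _meta_res_id in received_links:
        sddd_linking.add((term, sender_id))
    availability[sender_id] = (1, 1)

def update_availab(partition_node_ids):
    # Mark node links whose node is missing from the new partition as obsolete.
    # Obsolete links are kept, not deleted: the node may rejoin later.
    for key, (_avail, level) in list(availability.items()):
        if level == 1 and key not in partition_node_ids:
            availability[key] = (0, 1)

receive_and_merge("Nod-2", [("codeGreen", "mID22")])
update_availab({"Nod-1"})  # Nod-2 has left the partition
# availability -> {"mID11": (1, 0), "Nod-2": (0, 1)}
```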


Use Case (7): Metadata Exchange – Merging the Global View of Sharable Information
Two nodes, Node-1 and Node-2, come into range and form a network, exchange SDDD contents, and update their SDDDs.

1. Node-1 comes into network range with Node-2; the nodes perform necessary procedures of security and recognition (not elaborated here).

2. The DDM on each node requests exchange of metadata to expand the global view (the SDDD).

3. The DDMs exchange metadata. The function called in the DDM is sendUpdates().

4. The DDM initiates a merge of the received content to its global view (the SDDD). Functions called in the DDM are receiveAndMerge(), and updateAvailab().

The result of this use case is that the nodes share a common view of what information is available for sharing globally in the network.

Figure 5.10 SDDD after merge.

Figure 5.10 illustrates what the contents of the SDDD linking and availability tables will be after a merge. We assume that a common vocabulary and metadata model is used. The tables show the contents of the SDDD tables for linking concept to resource (node) and for availability tracking on Node-1 after a merge between Node-1 and Node-2. The metadata resource ID (metaResID) for the concept term (conceptTerm) ‘codeRed’ points to a dictionary item in the local LDD on Node-1, and the metaResID for ‘codeGreen’ points to Node-2. If Node-1 wants to find the linked-to metadata for the resources on Node-2, Node-1 will query Node-2 for the item. The DDM on Node-2 will query its SDDD for the item’s metaResID, then request the metadata information from its local LDD and return this to Node-1. Node-1 can then decide whether or not to retrieve the information item.

A More Detailed Example of Metadata Exchange
This example illustrates the use cases above. As stated earlier, what is actually shared and exchanged between the DDMs (in SDDD exchange) is metadata extracts, not the full


metadata descriptions. The following example, loosely based on an example in [Skjelsvik2006], illustrates this kind of information sharing through a metadata exchange and merge. The two nodes in this example are ‘Nod-1’ and ‘Nod-2’. Nod-1 has some initial contents registered in its LDD, e.g., regarding data structures and the vocabulary to use. The SDDD has been populated with extracts of this content, i.e., concept terms and metadata resource IDs (pointing to items in its LDD). To differentiate between links/pointers to local items and those pointing to metadata resources on another node, the link level – showing link granularity – is stored together with each metadata resource ID, as well as the availability (0 or 1, where 1 = available) of the metadata resource. Link level 0 designates a local metadata resource, while link level 1 shows it is from a different node. In Figure 5.11, we show example contents of the LDD and the SDDD on Nod-1 before registering with any neighbours.

Figure 5.11 Contents of Nod-1 LDD and SDDD.

In Figure 5.12, example contents of the data dictionaries on Nod-2 before merging with Nod-1 are shown. Note that Nod-2 has already merged SDDDs with another node (Nod-4), which is not currently in range.


Figure 5.12 Nod-2 before merge.

When Nod-1 registers with another node (Nod-2), the SDDD contents are exchanged in the following steps:

1) Nod-1 communicates its nodeID: ‘Nod-1’.

2) Nod-2 will also send its nodeID, and check if ‘Nod-1’ is on its list of previous exchanges. As there have been no previous exchanges with Nod-1, Nod-2 will add Nod-1 and the time of the current exchange to its list.

3) Nod-1 sends the contents of its SDDD together with its nodeID as sets of quadruplets in this format: <nodeID, <conceptTerm, metaResID, linkLevel, availability>, <...>,...>, which in this case is the following:

<Nod-1,<vocabulary,mID1,0,1>,<med1,mID1,0,1>,
<temp,mID2,0,1>,<med1,mID2,0,1>,<time,mID2,0,1>>

4) When Nod-2 receives this information, it checks the link level of each quadruplet. In the case of link level 0 (local), it replaces the metaResID with the received nodeID, changes the link level to 1 (external), and then adds the information to its SDDD. The availability is updated in the SDDD if applicable (no changes in this example).
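Step 4, re-pointing received local links at the sending node, can be sketched like this; the quadruplet values are those sent by Nod-1 in step 3, while the code itself is our own illustration:

```python
# Quadruplets received from Nod-1: (conceptTerm, metaResID, linkLevel, availability).
received = ("Nod-1", [
    ("vocabulary", "mID1", 0, 1),
    ("med1", "mID1", 0, 1),
    ("temp", "mID2", 0, 1),
    ("med1", "mID2", 0, 1),
    ("time", "mID2", 0, 1),
])

def merge_quadruplets(sender_id, quads):
    merged = []
    for term, meta_res_id, link_level, avail in quads:
        if link_level == 0:
            # Replace the metaResID with the sender's nodeID, mark external.
            merged.append((term, sender_id, 1, avail))
        else:
            # Node links received from the sender are kept as they are.
            merged.append((term, meta_res_id, link_level, avail))
    return merged

rows = merge_quadruplets(*received)
# Every link now points to 'Nod-1' at link level 1.
```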

As nodes exchange and update data dictionary contents with other nodes, this information gradually spreads among the network nodes. Figure 5.13 illustrates the contents of Nod-2 after the merge with Nod-1; note that there are no changes in the LDD.


Figure 5.13 Nod-2 after merge.

Nod-1 has also merged the SDDD contents received from Nod-2; the resulting SDDD is shown in Figure 5.14 below.

Figure 5.14 Nod-1 after merge.


In Figure 5.15, the list of previous exchanges for Nod-2 before and after merge is shown in the form of database tables.

Figure 5.15 Nod-2 list of previous exchanges, before and after merge with Nod-1.

5.3 Summary
In this chapter, we have described our approach to metadata management – a three-layered approach of increasing semantics. The detailed design of the DDM is also presented, realising the first two layers of our approach through its two-level data dictionary. For representing metadata semantics, we need to use data modelling languages like RDF(S) or Topic Maps. The third layer is realised using a data modelling technology like OWL.

The problems addressed in this chapter are how we can support resource-efficient distribution of information about what is available for sharing in the network, how to track information availability, how dynamic updates can be handled efficiently, and what data modelling technology is suitable for our application scenario. The addressed claims are Claim 3 – efficient metadata management is essential in a solution for sharing information in resource-limited environments – and Claim 5 – sharing metadata about what information is available and where it can be found is essential for efficient knowledge and information sharing. We have shown solutions for these claims through the detailed design of the DDM, together with descriptions of how metadata extracts are shared and updated throughout the network by metadata exchange of SDDD contents.

The main contributions in this chapter are a three-layered approach to metadata management realised through a multi-tiered data dictionary, and the design of a data dictionary manager supporting dynamic update.


Chapter 6 Ontology Based Dynamic Updates

In this chapter, we consider the problem of supporting rescue operation organisation through ontology based dynamic updates. All rescue operations have a certain organisation, which has to be supported together with the coordination of the ongoing operation itself (ref. Chapter 2). There has to be a trade-off between efficiently propagating updates and, at the same time, utilising scarce resources, like bandwidth, in the best possible way. One way to do this is by filtering out irrelevant information and giving higher priority to more important information. What is deemed irrelevant or more important depends on the contents of the update (the type of information being updated) and – given the structural organisation of rescue operations – may also depend on the rescue operation roles of the users sending and receiving the update. In our approach, we use ontology based dynamic update and update priorities, and combine the use of profiles and context modelling. The contribution in this chapter is to show how ontologies can be utilised in this way in a rescue and emergency operation.

As described in Chapter 5, ontology based (semantic) dynamic update uses profile and context information, defined in ontologies, to contribute to a more efficient utilisation of available resources when sending updates, which may be valuable in a Sparse MANET.

The claims addressed in this chapter are:

• Claim 4: ontologies can be utilised in dynamic update to accommodate both the organisation of the rescue operation and the dissemination of updates.

• Claim 1: information overload can be handled through the use of filtering and personalisation.

Claim 1 is addressed indirectly through describing how information profiles and context together with device and user profiles can be utilised in filtering.

We differentiate between static and dynamic aspects of context in our terminology; the term profile is used to describe static context, i.e., the aspects of context that do not change very frequently during the rescue operation, for instance descriptions of users and devices. Other aspects of context may change often; examples are the current position of a device and the task a user is assigned to. We refer to these frequently changing aspects of context as dynamic context.

In Section 6.1 we focus on our approach for ontology based dynamic updates, including the motivation for choosing ontologies as basis for update in rescue scenarios,


factors influencing our approach, and our proposed solution. An example rescue ontology consisting of profiles and dynamic context is described in Section 6.2, where we also look at how the knowledge base is populated in relation to the different rescue scenario phases. Parts of the contents of this chapter were previously published in [Sanderson2006]. Section 6.3 discusses populating the knowledge base. In Section 6.4, we describe how the profile ontology can be handled in our architecture. Section 6.5 describes how the KM components contribute to ontology based updates. How ontology based dynamic updates and rescue ontologies are handled in our architecture is further explored in Chapter 7 in conjunction with an example.

6.1 Description of Approach
In this section we describe our approach to ontology based dynamic updates. We give the motivating and influencing factors for choosing ontology based updates, and then present our solution.

Motivation and Influencing Factors
The main motivation for choosing to use ontologies in solving dynamic update issues is twofold: (1) accommodating rescue operation organisation; (2) ensuring better dissemination of the most important information. These are relevant issues in an environment of limited resources, where it is vital that important information gets top priority in distribution. To find solutions for ontology based update, we use a combination of profile ontologies, context modelling, and priorities. We also need knowledge of the operational organisation of the rescue operation.

The first reason, accommodating operational structure and organisation, is relevant because rescue and emergency operations typically involve several organisations cooperating to achieve a common goal; thus, a number of cross-organisational procedures and policies exist to support cooperation and integration. These cross-organisational procedures can be described in an ontology and utilised to improve update procedures. The second reason, ensuring better dissemination of the most important information, will enable updates of high-priority information to get through first, which is essential in a rescue operation, where distributing urgent information to participants as fast and efficiently as possible is of vital importance. To support this, update priorities reflecting rescue operation policies and information importance can be used. This may also alleviate some of the limitations we face in a Sparse MANET, like device resources and bandwidth.

By linking relevant information to users via profiles and dynamic context, as in the example rescue ontology in this chapter, we can know something about what information needs a user will have, as well as find out about changes in information needs and in the available resources on the device currently used, e.g., if the user changes device or takes on another rescue operation role. Thus, ontologies (here represented by the example profile ontology and the use of information profiles for roles and rescue scenarios, in combination with device and user profiles) are indirectly used in information filtering (Claim 1). Filtering is supported through the rescue ontology by (1) having an information profile connected to each rescue operation role, stating the relevant information items for that role, and (2) using information from user profiles, e.g., team membership or user role in the rescue operation, for targeting update messages to specific groups and personnel.

The main influencing factors on update priority are: the context and model of the rescue operation; which role the user has in the rescue operation; and the type and importance of the information. An ongoing rescue operation is of a specific type, e.g., a land, sea, or air rescue operation, international or national, of a certain size, etc. This kind of information is part of the context of the operation, described in a rescue operation context model. It can be represented in a rescue operation profile describing the characteristics of an operation, including specifications of rescue operation roles, responsibilities, and lines of reporting.

The priority of updates from a user may depend on which rescue operation role he/she has. The user role is directly dependent on the rescue operation context model, i.e., the relevant responsibilities and lines of reporting depend on the kind of rescue operation at hand. For instance, certain updates coming from a rescue site leader (OSC) may be important to all users, while other updates may be important to only some, depending on organisation (e.g., medical, fire, police). The user role of the receiving party, as well as the current task or situation, may also influence the priority of the update. Which role a user has in a rescue operation is typically described in the user profile, and does not change very frequently during the operation.

As the importance of information may differ with its type, basing updates only on the rescue operation context model and user roles is not sufficient. For instance, critical medical information will most likely be of higher importance than other types, regardless of the role of the user providing the update or whether the rescue operation is at land or at sea. Therefore, we need to set the update priority of information accordingly. Information priority may also depend on policies given by authorities, in addition to policies existing in each organisation. It is not likely to be highly affected by the dynamic aspects of context, e.g., movement and time (spatio-temporal issues), or to change very frequently during the operation. Focusing on what may influence the update priority of information in a rescue operation scenario, we find that the main influences come from aspects that do not change very frequently during the operation, like the user role in the rescue operation (of both the sender and the receiver of an update) and the rescue operation model (e.g., responsibilities and lines of reporting). To utilise information priorities for updates, an ontology or model that allows description of different kinds of information items and their priority is needed. For a particular information item, the type and update priority can be represented in an information profile.

Proposed Solution

There are two main aspects to a solution for ontology based dynamic updates. The first is to use priorities for both information types and rescue operation roles. Updates of critical information, like a critical change in medical state or warnings of explosions, can be given a higher priority than less important information, e.g., changes in weather conditions.
By utilising the rescue operation role, updates from a user with more coordination responsibility, e.g., the OSC, who is responsible for the overall coordination of the operation, can be given higher priority than updates from a team member with low-key responsibilities, e.g., keeping the public away from the accident scene. The second aspect is to exploit the operational structure by assuming that certain lines of reporting will be followed. This means we can make the assumption that highly important information is always first reported to the rescue operation coordinator (rescue site leader) and also to the officer or team leader of higher rank within each domain (e.g., medical, fire, police).

To accommodate these two aspects, we have to base our solution on well defined ontologies of profiles and a context model for the rescue operation at hand. Therefore, we propose to use ontologies to represent the rescue operation context model, as well as profiles for user, device, and information. Update priorities are specified in the profile ontologies, and set according to information importance and the current role of the user (sender) in the rescue operation. The users are rescue personnel carrying Ad-Hoc InfoWare enabled devices. We make the assumption that update messages will be propagated through the network in accordance with the routing methods used in the Sparse MANET.

Note that information priority and role priority should be combined into the actual update priority at runtime. Different combinations of these priorities can be expressed in rules for making decisions in each update case at runtime. Another relevant point regarding prioritising updates is that issues of fairness need to be addressed. In our case, we will assume that (1) rules for (different types of) updates have been agreed upon prior to the rescue operation; and (2) for message routing we will rely on there being some degree of fairness inherent in the underlying system layers.
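How such an a priori rule could combine the two priorities can be sketched as follows. This is a minimal illustration only: the numeric encodings, the function name, and the specific combination rule are assumptions for the example, not the mechanism prescribed by the thesis.

```python
# Hypothetical sketch: combining information priority and sender-role
# priority into an effective update priority via an a priori rule.
# Lower numeric value = higher priority, mirroring pr:hasUpdatePriority = 1
# for the OSC in the example population later in this chapter.
INFO_PRIORITY = {"Top": 0, "Urgent": 1, "Normal": 2, "Low": 3}
ROLE_PRIORITY = {"OSC": 1, "OfficerInCharge": 2, "TeamLeader": 3, "TeamMember": 4}

def effective_priority(info_level: str, sender_role: str) -> int:
    """One possible rule: information importance dominates, and the
    sender's role breaks ties within the same information level."""
    return INFO_PRIORITY[info_level] * len(ROLE_PRIORITY) + (ROLE_PRIORITY[sender_role] - 1)

# Under this rule, a Top-priority update from a team member still
# outranks a Normal-priority update from the OSC.
assert effective_priority("Top", "TeamMember") < effective_priority("Normal", "OSC")
```

Other rules are possible, e.g., letting the role dominate for certain information types; the point is only that the combination is decided by rules agreed upon before the operation.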

In Chapter 2, the organisation and structure of rescue operations are described. Figure 6.1 shows a simple model based on that description, reflecting the role hierarchy and lines of reporting utilised in our solution. The model only shows the main classes; officers in charge, team leaders, and team members for each participating organisation/domain exist as subclasses of the more general classes shown here. We have included the RSC and RCC as classes in this model, but not as part of the profile ontology, as we have chosen to limit the profile ontology to rescue operation roles that are filled by personnel at the rescue site. Although RCC and RSC are roles/functions related to rescue operations, these functions are filled by a group of people, usually not present at the rescue site. To simplify the figure, pr:ReportsTo relations are placed directly between classes (in the model, such relations exist between individuals of the classes, not between the classes as such).

Figure 6.1 Simple model of rescue operation roles.

The model can be extended with suitable subclasses to fit the needs of each organisation. For instance, in a medical organisation we can have subclasses 'hlth:TeamLeader', denoting all health team leaders, 'hlth:TeamMember', denoting all health team members, 'hlth:Team', denoting all health teams, etc. Other organisations can make similar extensions. Each organisation has certain roles that are always present; for instance, there will always be an officer in charge of the personnel from each organisation.
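The extensible role hierarchy can be sketched with a simple subclass table. The dict-based representation and the helper function are illustrative assumptions; only the class names (pr: and hlth: prefixes) come from the text.

```python
# Illustrative sketch of the role hierarchy of Figure 6.1, extended with
# organisation-specific subclasses (the 'hlth:' prefix as in the text).
SUBCLASS_OF = {
    "pr:OSC": "pr:OfficerInCharge",
    "pr:OfficerInCharge": "pr:RescueOperationRole",
    "pr:TeamLeader": "pr:RescueOperationRole",
    "pr:TeamMember": "pr:RescueOperationRole",
    # organisation-specific extensions, e.g., for a medical organisation:
    "hlth:TeamLeader": "pr:TeamLeader",
    "hlth:TeamMember": "pr:TeamMember",
}

def is_a(role, ancestor):
    """Walk the subclass chain to test class membership."""
    while role is not None:
        if role == ancestor:
            return True
        role = SUBCLASS_OF.get(role)
    return False

assert is_a("hlth:TeamLeader", "pr:RescueOperationRole")
```

New organisations extend the model simply by adding entries to the subclass table, without changing the general classes.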

A relevant issue when using ontologies is access to referenced vocabularies and reuse of existing ontologies, e.g., domain ontologies. The reuse of ontologies is generally recommended, and an example of an existing ontology that we could have reused elements from is the Standard Ontology for Ubiquitous and Pervasive Applications (SOUPA) [SOUPA], which contains several parts that would be useful additions to the example rescue ontology, e.g., Person, Device, and Time. In our application scenario, i.e., a Sparse MANET with possibly frequent and long-lasting network partitions, referenced vocabularies may not be accessible at all times, which may cause difficulties. This is particularly relevant in the case of importing ontologies, which implies including the entire set of assertions provided by that ontology into the current ontology [OWLGuide]. Although this is an important issue, we consider it to be out of scope for this thesis, and make the assumption that (the necessary parts of) referenced vocabularies are available. Therefore, this issue is not handled or shown in our rescue ontology or in the examples in Chapter 7, and we have chosen not to reuse elements from the SOUPA ontology.

6.2 Example Rescue Ontology

In the following, we present an example rescue ontology consisting of a set of profiles and dynamic context definitions. The purpose of this rescue ontology is to illustrate how an ontology can be used in our solution for ontology based updates; it is not meant to be a complete ontology for rescue operations. Therefore, although sufficient for our illustrative purpose, the model may be insufficient as a rescue operation ontology describing all aspects of rescue operations. The ontology is presented in OWL XML syntax in Appendix C. The example model comprises four kinds of profiles:

• User Profile: Person, role, privileges • Device Profile: Device role, capabilities • Information Profile: Information priority • Rescue Scenario Profile: Rescue scenario specifics

In relation to ontology based dynamic updates, the User Profile describes the role a user has in the current context, i.e., the ongoing rescue operation. The Device Profile describes the type and role of a device. The Information Profile contains the subject and priority for all types of information items that the information system needs to handle, and is thus central to utilising priorities in updates. The Rescue Scenario Profile provides a description of static context specific to a rescue scenario, like rescue operation roles tailored to the size of the rescue operation, and role update priorities. Dynamic contexts, including position, time, and movement patterns, are collected in a class pr:Context, which is defined in a separate part of the rescue ontology.

Figure 6.2 Upper ontology.

Below are example database schemas for the internalisation of the upper ontology of profiles (see Figure 6.2). The domain of the attribute pr:ProfileType is {pr:UserProfile, pr:DeviceProfile, pr:InformationProfile, pr:RescueScenarioProfile}.

pr:Profile(pr:PId, pr:ProfileType) pr:UserProfile(pr:UPId,...) pr:DeviceProfile(pr:DPId,...) pr:InformationProfile(pr:IPId,...) pr:RescueScenarioProfile(pr:RSPId,...)


New (arriving) profile information is received in the form of OWL documents serialised to XML. These are internalised, and their contents are used to populate the ontologies. We envision the information system as comprising a collection of profiles and contexts organised in a structure linking each profile to the correct dynamic context. Part of joining the rescue operation includes sending a profile (XML document) to a higher-ranking officer (and/or the OSC) on arrival, in conjunction with the device registering as a node in the current network. The dynamic contexts will typically change frequently during a rescue operation, while the profiles will remain fairly stable (though some changes will probably occur). The different profile ontologies are described in Section 6.2.1 and dynamic context in Section 6.2.2.
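The internalisation step above can be sketched as follows. The XML shape below is a deliberately simplified stand-in for the OWL serialisation of Appendix C, and the flat per-class record tables are an assumption for illustration.

```python
# Hedged sketch: internalising an arriving profile document (serialised
# XML) into simple per-class record tables for the knowledge base.
# Element and attribute names are illustrative, not the OWL syntax.
import xml.etree.ElementTree as ET

ARRIVING_DOC = """
<profiles>
  <UserProfile id="up1" person="p7" role="ror3"/>
  <RescueOperationRole id="ror3" roleType="TeamLeader"
                       reportsTo="osc1" updatePriority="2"/>
</profiles>
"""

def internalise(xml_text):
    """Parse an incoming document and file each element under its class."""
    tables = {}
    for elem in ET.fromstring(xml_text):
        tables.setdefault(elem.tag, {})[elem.get("id")] = dict(elem.attrib)
    return tables

kb = internalise(ARRIVING_DOC)
assert kb["RescueOperationRole"]["ror3"]["roleType"] == "TeamLeader"
```

A real implementation would parse the OWL/RDF serialisation and validate against the ontology; the sketch only shows the direction of the data flow, from arriving document to populated knowledge base.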

6.2.1 Profile Descriptions

In this section, each profile subclass is described together with example database schemas and XML where appropriate. After presenting the profile subclasses, we describe how the profile ontology can be utilised for filtering.

User Profile

In relation to ontology based dynamic updates, the User Profile describes the role a user has in the current context, i.e., the ongoing rescue operation. The User Profile is shown in Figure 6.3. Not shown in the figure, but included in the OWL XML presentation, are restrictions expressing information like "a team member always reports to a team leader".

Figure 6.3 User profile ontology.


The class pr:RescueOperationRole has three subclasses: pr:OfficerInCharge, pr:TeamLeader, and pr:TeamMember (note that pr:OSC is a subclass of pr:OfficerInCharge). To express the different information needs and the typical information items connected to a role, each rescue operation role has an information profile. For instance, the OSC will handle information regarding personnel resources that the role TeamMember does not need, while a TeamMember of a medical team will likely handle patient records, which a member of a fire-fighting team will not need.

The user profile in this example ontology does not have a property for including an information profile specific to the user. Including such a property may be valuable, as each user could then set preferences regarding the information item types of interest, and set information item priorities to reflect this. For instance, a paramedic would likely have a high interest in information of type "change in patient status", with its priority set to "urgent", while to a police officer, changes in the medical status of patients may not have the same interest and priority, if included in the information profile at all. But as this situation is (at least partly) solved by having an information profile connected to the class pr:RescueOperationRole, we chose not to include such a property in the class pr:UserProfile. Below are example database schemas for the user profile ontology.

pr:UserProfile(pr:UPId, pr:person, pr:role)
pr:RescueOperationRole(pr:RORId, pr:RORoleType, pr:reportsTo, pr:responsibility, pr:isMemberOf, pr:hasUpdatePriority, pr:hasInfoProfile)
pr:Person(pr:PId, pr:name, pr:affiliation,...)
pr:Responsibility(pr:RId,...)
pr:Team(pr:TId,...)

The domain of pr:RORoleType is the set of values that identify the subclasses of pr:RescueOperationRole, e.g., pr:OfficerInCharge, pr:OSC, pr:TeamLeader, pr:TeamMember. The domain can be extended with new values when new subclasses are introduced in the ontology. A person, class pr:Person, has a name, and is connected to an organisation, e.g., a given hospital. The imported ontology for organisations (org) is a simple ontology for describing an organisation, included in Appendix C.

Information Profile

The Information Profile ontology (Figure 6.4) is a general ontology for all types of information items that the information system needs to handle. Together with the user profile, it is central to utilising priorities in ontology based dynamic updates. The profile includes classes for information type (pr:InformationItem) and priority (pr:InformationPriority). The property pr:subject connects the information item to information related to its type, e.g., a class name or a definition/identification indicating which kind of information item this is. The range of pr:subject is a string (xsd:string), e.g., in the form of a class name or a URI to the definition of the information item type.


Figure 6.4 Information profile ontology.

The example database schemas for the information profile are shown below.

pr:InformationProfile(pr:IPId, pr:item) pr:InformationItem(pr:IId, pr:subject, pr:priority) pr:InformationPriority(pr:IPrId,...)

As an example of possible information priorities, we have used four levels (top, urgent, normal, and low). These are for illustration only, and may differ from the terms actually used in a rescue operation. In the profile ontology, they are subclasses of pr:InformationPriority: pr:Top, pr:Urgent, pr:Normal, and pr:Low.

Figure 6.5 Example information priorities.

Presented in XML, the initial priorities would look like the example below.

<pr:InformationPriority rdf:ID="&pr;Top"/> <pr:InformationPriority rdf:ID="&pr;Urgent"/> <pr:InformationPriority rdf:ID="&pr;Normal"/> <pr:InformationPriority rdf:ID="&pr;Low"/>

Rescue Scenario Profile

The rescue operation specific context is given in this example Rescue Scenario Profile ontology (Figure 6.6). The roles covered by the scenario (for both devices and personnel) and related aspects are described elsewhere in this section (pr:UserProfile and pr:DeviceProfile). Thus, descriptions of the classes pr:DeviceRole and pr:RescueOperationRole are not included here.


Figure 6.6 Rescue scenario profile ontology.

The class pr:RescueOperation describes the rescue operation that a rescue scenario profile is for. Rescue operations can be categorised in any number of ways, e.g., by size in number of people involved, type, kind of incident, etc., or a combination of these. The type of rescue operation is in this ontology represented as classes for operations at land, at sea, or in airspace; thus pr:RescueOperation has three subclasses denoting different types of operations: pr:LandOperation, pr:SeaOperation, and pr:AirOperation. In Norway, this indicates which level in the SAR organisational hierarchy (see Chapter 2) will coordinate the rescue operation. Although one of the two main rescue centres (Rescue Coordination Centre – RCC) is responsible for all rescue operations at land, at sea, and in airspace, the coordination of land operations is usually handled by a local rescue centre (Rescue Sub-Centre – RSC). The RCC handles rescue operations in airspace, except in the vicinity of airports, where the RSC takes over. For operations at sea, the RSC handles accidents close to shore, while the RCC coordinates cases involving ships at sea.
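The coordination rules just described can be encoded directly from the operation type subclasses. The function and parameter names below are assumptions; the decision logic follows the text.

```python
# Illustrative encoding of the Norwegian SAR coordination rules described
# above: which centre (RCC or RSC) coordinates a given operation type.
def coordinating_centre(operation_type, near_airport=False, close_to_shore=False):
    if operation_type == "pr:LandOperation":
        return "RSC"   # land operations are usually handled locally
    if operation_type == "pr:AirOperation":
        # RCC handles airspace, except in the vicinity of airports
        return "RSC" if near_airport else "RCC"
    if operation_type == "pr:SeaOperation":
        # RSC handles accidents close to shore, RCC ships at sea
        return "RSC" if close_to_shore else "RCC"
    raise ValueError("unknown operation type: " + operation_type)

assert coordinating_centre("pr:SeaOperation") == "RCC"
```

Such a rule could be derived from the rescue scenario profile at the start of an operation, rather than hard-coded as here.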

Different kinds of incidents are described by the class pr:Incident. There are two subclasses of pr:Incident, pr:NaturalDisaster and pr:Accident, for specifying the kind of incident, e.g., an earthquake or a car accident. Accidents may involve one or more vehicles (e.g., train, boat, car, or aircraft). The property pr:incidentDescription is for giving a textual description of the incident. The properties pr:areaDescription, pr:areaConditions, and pr:regionName describe aspects of the area of the current incident: pr:areaDescription gives a general description of the area, e.g., city area, rural, mountain, woodland, public road, airport, etc.; pr:areaConditions describes the conditions in the rescue area, e.g., weather, snow depth, accessibility. The property pr:numberOfPeople gives the (likely estimated) number of involved persons, e.g., passengers, train driver, pilot, etc. (not rescue personnel), and can also be used in such a categorisation. International involvement in the rescue operation has not been included in this ontology. A rescue scenario is connected to a set of information items that are typically handled during the operation, as well as a set of rescue operation roles. Therefore, the rescue scenario profile includes an information profile (pr:infoProfileForScenario), as well as all the roles covered by the scenario (pr:rolesCoveredByScenario). The latter designates all roles connected to the rescue scenario, i.e., roles of both personnel and devices. Example database schema for the rescue scenario profile ontology:

pr:RescueScenarioProfile(pr:RSPId, pr:rescueOperation, pr:roleCoveredByScenario, pr:infoProfileForScenario, pr:incident)
pr:CoveredRole(pr:CRId,...)
pr:RescueOperation(pr:ROpId, pr:operationType)
pr:Incident(pr:IncId, pr:incidentType, pr:numberOfPeople, pr:incidentDescription, pr:regionName, pr:areaDescription, pr:areaConditions)
pr:Accident(pr:AccId, pr:vehicleInvolved,...)
pr:NaturalDisaster(pr:NDId,...)
pr:Vehicle(pr:VId,...)

Example initial population of pr:NaturalDisaster, pr:Vehicle, and pr:Accident is shown in XML below.

<pr:NaturalDisaster rdf:ID="Earthquake"/>
<pr:NaturalDisaster rdf:ID="FloodDisaster"/>
<pr:Vehicle rdf:ID="Car"/>
<pr:Vehicle rdf:ID="Train"/>
<pr:Vehicle rdf:ID="Boat"/>
<pr:Accident rdf:ID="CarAccident">
  <pr:vehicleInvolved rdf:resource="#Car"/>
</pr:Accident>
<pr:Accident rdf:ID="RailwayAccident">
  <pr:vehicleInvolved rdf:resource="#Train"/>
</pr:Accident>
<pr:Accident rdf:ID="BoatAccident">
  <pr:vehicleInvolved rdf:resource="#Boat"/>
</pr:Accident>


Device Profile

The Device Profile ontology (pr:DeviceProfile), shown in Figure 6.7, describes the type, capabilities, and role of a device, in addition to the current user and the owner. For the KM and the DDM, the main use of this kind of profile would be in relation to different configurations of the SDDD.

Figure 6.7 Device Profile.

Example database schemas for the device profile ontology are:

pr:DeviceProfile(pr:DPId, pr:deviceRole, pr:device, pr:currentUser, pr:Owner)
pr:DeviceRole(pr:DRId,...)
pr:Device(pr:DevId, pr:model, pr:capability, pr:nodeID,...)
pr:DeviceCapability(pr:DCId,...)

The model name of the device, e.g., "Nokia 770", indicates which type of device it is ("Internet Tablet", mobile phone, laptop, PDA, etc.). The class pr:DeviceCapability describes capabilities like CPU, memory, etc., both those inherited from the device type and those specific to this particular device (e.g., when using a memory card for extra memory). In addition to a nodeID given a priori, a device has an owner, which can be a person or an organisation, e.g., an ambulance device belonging to the hospital. Whoever is currently using the device is indicated with the property pr:currentUser, which may change during the course of the operation. A device can have a designated role, e.g., functioning as a server device. This is represented with the class pr:DeviceRole. An example of three initial device roles (communication device, server device, and ambulance device) is presented in XML below.

<pr:DeviceRole rdf:ID="&pr;CommunicationDevice"/> <pr:DeviceRole rdf:ID="&pr;ServerDevice"/> <pr:DeviceRole rdf:ID="&pr;AmbulanceDevice"/>


How the Profile Ontology Supports Filtering

Assuming that an instance of pr:InformationProfile correctly reflects the information needs/interests of an instance of pr:RescueOperationRole, a pr:UserProfile is linked to this information need/interest through the current pr:RescueOperationRole. Through the pr:UserProfile, via the pr:RescueOperationRole, the user (an instance of pr:Person) is thus indirectly linked to a pr:InformationProfile depending on the current context (the role). The class pr:DeviceProfile has a property pr:currentUser, linking the device to an instance of pr:UserProfile (the current user). Thus, through simple reasoning, the information needs/interests for a device in the current context of its user (the current rescue operation role) can be inferred. This indirect/derived link (see Figure 6.8) from a user to its information profile (via the user profile and rescue operation role), and further to the device profile (via the current user of the device), states information needs and can be utilised to filter out information items that are irrelevant to a device or a user. Thus, the profile ontology can be utilised for filtering, indirectly supporting Claim 1.

Figure 6.8 Derived relations to information profile.
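The derived chain of relations can be followed mechanically. The dict representation, the identifiers, and the helper names below are illustrative assumptions; the chain itself (device, current user, role, information profile) is the one shown in Figure 6.8.

```python
# Sketch of the derived link: device -> current user -> rescue operation
# role -> information profile, used to filter out irrelevant items.
DEVICE_PROFILES = {"dev1": {"currentUser": "up1"}}
USER_PROFILES   = {"up1": {"person": "p7", "role": "ror_medic"}}
ROLES           = {"ror_medic": {"hasInfoProfile": "ip_medic"}}
INFO_PROFILES   = {"ip_medic": {"PatientRecord", "ChangeInPatientStatus"}}

def relevant_items_for_device(device_id):
    """Follow the derived relations to the information profile of the
    device's current user, in his/her current rescue operation role."""
    user = DEVICE_PROFILES[device_id]["currentUser"]
    role = USER_PROFILES[user]["role"]
    return INFO_PROFILES[ROLES[role]["hasInfoProfile"]]

def passes_filter(device_id, item_subject):
    return item_subject in relevant_items_for_device(device_id)

assert passes_filter("dev1", "PatientRecord")
assert not passes_filter("dev1", "WeatherUpdate")
```

Because the lookup starts from the device, a change of current user or of the user's role immediately changes which items pass the filter, with no change to the filtering logic itself.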

6.2.2 Dynamic Context

In this section, we give an example of a simple ontology for dynamic context (Figure 6.9). New information is added to the knowledge base every time there is a change in context during the rescue operation, e.g., when a device updates its position, its current user changes, or there is a change in user roles. The property cxt:profile links the context to the appropriate profile, i.e., the profile for the device or user that an instance of a context relates to. The class cxt:Context has two subclasses, cxt:DeviceContext and cxt:UserContext, denoting context for device and user respectively, but could be extended with dynamic context for other entities, e.g., teams. In the current version of the example dynamic context, the information related to a device is the device position (in longitude and latitude), while the information related to a user is the current task. Other information that would be interesting to keep in a context includes an overview of the nodes that are direct neighbours of this device/node, together with some information about each neighbour node, e.g., its resources. It would also be useful to keep a history of previous positions of the device, so that a movement pattern can be derived. By connecting the context to its (user or device) profile, we can derive the position (dynamic context) of the user of a device, and further of the rescue operation role this user has (e.g., the position of the team leader of Team 3 at time x), which can be useful both during and after the operation.
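Keeping a position history and deriving "the position at time x" can be sketched as below. The log structure and function names are assumptions for illustration; the idea of appending a cxt:DeviceContext record on every position change is from the text.

```python
# Minimal sketch of dynamic context updates: every position change
# appends a record, and a position at a given time is derived from the
# history (the profile links then connect it to user and role).
context_log = []   # entries: (time, device_profile_id, (longitude, latitude))

def update_position(time, device_profile, lon, lat):
    context_log.append((time, device_profile, (lon, lat)))

def position_at(device_profile, t):
    """Latest known position of the device at or before time t."""
    fixes = [(tm, pos) for tm, pid, pos in context_log
             if pid == device_profile and tm <= t]
    return max(fixes)[1] if fixes else None

update_position(10, "dp_teamleader3", 10.71, 59.93)
update_position(25, "dp_teamleader3", 10.73, 59.94)
assert position_at("dp_teamleader3", 20) == (10.71, 59.93)
```

The same history also supports deriving movement patterns after the operation, as suggested above.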


Figure 6.9 Example context ontology.

Example database schema for this dynamic context ontology:

cxt:Context(cxt:CPId, cxt:profile, cxt:time,...)
cxt:UserContext(cxt:CPId, cxt:currentTask,...)
cxt:Task(cxt:TaskId,...)
cxt:DeviceContext(cxt:CPId, cxt:position,...)
cxt:Position(cxt:PId, cxt:longitude, cxt:latitude)

6.3 Populating the Knowledge Base

The knowledge base (ontology) is populated both before and during the rescue operation, and different items are added in the various phases of the operation. The six phases reflecting the lifecycle of a rescue and emergency operation, as seen in the context of using Sparse MANETs, were introduced in Chapter 2. The phases are briefly summarised below before we show how they relate to the example rescue ontology. A more general discussion of the scenario phases can be found in [Munthe-Kaas2006].

The a priori phase takes place before the rescue operation has started, and involves the exchange of shared vocabularies and standards. The briefing phase is initiated at the onset of the rescue operation, and involves gathering and distributing relevant information. Bootstrapping the network takes place at the rescue site. The main phase is the running of the network, and involves handling all events that may occur during the rescue operation. At the completion of the rescue operation, all services have to be terminated in the closing of the network phase. Finally, information and statistics gathered during the operation may be analysed for future operations in the post-processing phase. The phases are of variable length. Relating our example ontology to this rescue scenario, the main phases where profiles will be added and changed are the a priori phase (Phase 1), the briefing phase (Phase 2), and the running of the network (Phase 4), as shown in Figure 6.10 and described in the following.


Figure 6.10 Rescue scenario timeline - populating the knowledge base.

The initial population, e.g., different types of rescue operations with standard roles and operational guidelines, is added to the knowledge base in Phase 1, while ontology individuals concerning the current operation are added in Phase 2. Device profiles (pr:DeviceProfile) are added in Phase 1, and for a personal device, person data, i.e., the pr:Person individual that is the current user of the device, is also likely to be added in this phase. The current user is updated in Phase 4 on changes. For more general devices, this is added during Phase 2 or 4. For all devices, information about the owner (pr:Person and/or org:Organisation) is added in Phase 1. User profiles (pr:UserProfile), relating a person to a role, are added in Phase 2 and/or 4, and can be changed/updated in Phase 4. The actual roles (pr:RescueOperationRole) are partly added during Phase 1, e.g., the initial roles in the user profile, or created during Phases 2 and 4. The roles will either be created together with a user profile, or connected to an existing one, typically when a user gets a new role, e.g., when ad-hoc groups are created or roles are changed during the rescue operation. Information profiles (pr:InformationProfile) are added (and possibly updated) in all phases where information is registered for sharing in the system, but particularly in Phases 2 and 4. Rescue scenario profiles (pr:RescueScenarioProfile) will be added in Phases 1 and 2. Priorities for rescue operation roles and for the information items in information profiles are added on creation of the relevant individuals, to be utilised in updates.

Example Illustrating Initial Population of the Knowledge Base

In this example, we illustrate how the special role OSC can be represented by an individual in the initial population of the knowledge base for the user profile ontology. In a rescue operation, the special role OSC is always present, so the initial knowledge base includes such a role (an individual of pr:OfficerInCharge). The ontology might initially be populated on the arrival of an XML document; the initial population constitutes the knowledge base.

<pr:OSC rdf:ID="&pr;Osc"> <pr:isMemberOf> <pr:OSCTeam rdf:ID="&pr;OSCTeamX"/> </pr:isMemberOf> <pr:hasUpdatePriority>1</pr:hasUpdatePriority> </pr:OSC>

In a database implementation, this would result in the entry below.

pr:RescueOperationRole
pr:RORId  pr:RORoleType  pr:reportsTo  pr:responsibility  pr:isMemberOf  pr:hasUpdatePriority
pr:Osc    pr:OSC         ...           ...                pr:OSCTeamX    1


6.4 Handling Profile Ontologies in Our Architecture

Issues regarding how the profile ontologies and the knowledge base population should be handled in our application scenario and architecture include: storage, i.e., where the profile ontologies should be stored (who should keep what); which components are involved in dealing with the profile ontologies; and how the profile ontologies are viewed by this part of the system, i.e., as metadata or as resources.

The storage of profiles, ontologies, and the knowledge base is based on the user role in the rescue operation: the rescue site leader (OSC) should keep the full knowledge base for the current operation, including the complete rescue scenario profile ontology. Officers in charge of the different organisations should keep the parts related to their relevant domain (e.g., health). Each node keeps its own device profile and user profile, regardless of the user role (of the device owner/user). Being related to a rescue scenario profile or a rescue operation role, information profiles will likely be kept by the same nodes that keep their connected profiles. The information item (pr:InformationItem) individuals that are part of an information profile will likely be kept together with their information profile, but information about priorities should be distributed to all nodes, to be utilised in ontology based updates.

In the KM, profiles and contexts are handled by the PCM component, and ontologies by the SMOF, so the rescue ontology profiles will be handled by these components. For sharing and updating profile ontologies and the knowledge base, the DDM is also central, and thus an important component in ontology based dynamic updates. Search and retrieval in the knowledge base and profile ontologies, as well as any required parsing, are handled by the tool components.

Seen from the perspective of metadata management, profiles, contexts and ontologies are resources to be shared among participants; metadata descriptions of these should therefore be registered for sharing, and extracts exchanged by the DDM. This contributes to sharing vocabularies as well as providing efficient metadata management. The PCM, together with the different profile ontologies, provides filtering and connects users to relevant information through the use of, e.g., user profiles, role descriptions, and the rescue operation context model. As described in Section 6.2.1 in connection with user profiles, the information needs/preferences of users and devices in the current context (e.g., the user's rescue operation role) can be utilised in filtering and personalisation. This is done by using derived relations between a user, the information profile of the user's rescue operation role, and the device the user is currently using, to filter out information items that are not relevant to the device or the user.
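A minimal sketch of such filtering, using derived relations between user, role information profile, and device, could look as follows. All profile data and names here are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical, simplified profile data; the names are illustrative only.
user_profile = {"user": "u1", "hasRole": "pr:Osc", "usesDevice": "d1"}
role_info_profile = {"pr:Osc": {"pr:WeatherReport", "pr:ResourceStatus"}}
device_profile = {"d1": {"supportsFormats": {"text", "image"}}}

items = [
    {"id": "i1", "type": "pr:WeatherReport", "format": "text"},
    {"id": "i2", "type": "pr:PatientRecord", "format": "text"},
    {"id": "i3", "type": "pr:ResourceStatus", "format": "video"},
]

def relevant_items(user, items):
    """Keep only items matching the information profile of the user's
    current role AND a format the user's current device can handle."""
    wanted = role_info_profile[user["hasRole"]]
    formats = device_profile[user["usesDevice"]]["supportsFormats"]
    return [i for i in items
            if i["type"] in wanted and i["format"] in formats]

print([i["id"] for i in relevant_items(user_profile, items)])  # ['i1']
```

Here i2 is filtered out because the patient record is not in the OSC role's information profile, and i3 because the device cannot handle video.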

A possible solution for dispatching prioritised updates more quickly in the network could be to tag update messages with the relevant priorities, e.g., in the form of metadata. This would, however, require extra processing at each node when handling and forwarding messages.

6.5 KM Components in Ontology Based Update

Ontology based dynamic update is context and profile dependent. It takes place between nodes (horizontal update), can involve both changes and appends, and concerns both SDDDs and knowledge bases.


Updates to the knowledge base, in the form of XML documents, may for instance contain changes in context or rescue operation resources (e.g., movement/new position of a device, personnel arriving, change of rescue operation role, change of device). This includes profile and context updates. Such update messages are sent to the node associated with the role a user is to report to (e.g., the OSC). The update message is tagged with an update priority. We assume priorities for the different types of information and the different roles have been set by SAR experts and domain experts (e.g., meteorological, medical, police) in the respective organisations during development and maintenance of the ontologies (not during the rescue operation). The rules needed for deciding update priority at run time are also assumed to be created beforehand. The update priority is derived from (1) the priority of the rescue operation roles (of both sender and receiver) and (2) the information priority.

Data updates are changes related to some specific (organisation or domain) data, e.g., temperature measurements, changes in medical status, or weather conditions. These kinds of updates are not specifically handled by the KM, although services from the KM can be used in the update procedure and for retrieving necessary information from the knowledge base, e.g., priority information. The different kinds of information to be updated have different priorities, and as we assume priorities are set by domain experts, we have not elaborated on which type of information should have which priority.

We describe procedures for knowledge base updates and for sending prioritised (ontology based) update messages below, followed by possible additions to the requirements specific to ontology based update. Note that we assume all nodes keep their own user profile and device profile, and have knowledge of their rescue operation role and which role to report to. In addition, all nodes have to keep an overview of the priorities for all rescue operation roles and information item types, as well as the necessary rules for deciding the priority of update messages. We assume that this information has been distributed prior to the rescue operation (Phase 1). It is also necessary to know the nodeID of devices, which can be solved by subscriptions for (changes in) this information via the DENS.

1. Find the priority of the role(s) to send the update to.

2. Find the priority of the information item to be sent. Utilise rules to find the update priority based on the priorities of the rescue operation roles of receiver and sender and the information item type.

3. Find the node/device connected to the role (user) it is to be sent to, i.e., find the nodeID for the device used by a user having a certain role (in pr:DeviceProfile).

4. If necessary: find all members of a certain group, e.g., members of a given team (e.g., the OSC team or “Team 3”), users having a certain role, all health personnel, etc.
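Steps 1 and 2 can be sketched as below. The priority tables and the combination rule (the update inherits the highest, i.e., numerically lowest, of the involved priorities) are stand-ins for the expert-defined rules assumed to be distributed in Phase 1; the concrete values are illustrative.

```python
# Illustrative priority tables, assumed distributed a priori (Phase 1).
# Lower number = higher priority.
ROLE_PRIORITY = {"pr:Osc": 1, "pr:OiCHealth": 2, "pr:Paramedic": 3}
INFO_PRIORITY = {"pr:MedicalAlert": 1, "pr:WeatherReport": 3}

def update_priority(sender_role, receiver_role, info_type):
    """Stand-in for the expert-defined rules: the update message gets
    the highest (numerically lowest) of the three priorities."""
    return min(ROLE_PRIORITY[sender_role],
               ROLE_PRIORITY[receiver_role],
               INFO_PRIORITY[info_type])

# A medical alert from a paramedic to the OiC for health is urgent.
print(update_priority("pr:Paramedic", "pr:OiCHealth", "pr:MedicalAlert"))  # 1
```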

To accommodate ontology based update when requesting information from the knowledge base, a useful addition to the KM interface is given below.

Table 6.1 Addition to KM Interface.

Service: Get/request information in the (local) knowledge base.
Function: getKBInfo()
Description: Takes a query stating what is requested from the knowledge base, e.g., in XQuery, SQL, or some other query language. Returns the requested content as XML.
Comment: Information in ontologies, profiles, and contexts, e.g., priorities, nodeID of a device, user role in the rescue operation, concept terms for metadata descriptions.


Related to getKBInfo(), examples of use include applications generating metadata to describe information before registering items for sharing, e.g., requesting concepts/terms from appropriate vocabularies or ontologies. In ontology based update, the function can be used, for instance, to find the priorities for the information item type (of the update) or for the rescue operation roles of sender and receiver, or to find whom a user is to report to according to the current rescue operation role. Requests for information from profiles and dynamic contexts of users, devices and information items are useful in many situations, e.g., finding priorities for sending updates, or finding the current position (dynamic context) of a user or device. Note that the knowledge base resides outside of the KM; a Data Management component is used by the KM to access it.
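A minimal sketch of getKBInfo() could look as follows, here backed by an SQL knowledge base and returning the result as XML. The schema, data, and wrapping element names are assumptions for illustration.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Tiny stand-in knowledge base with an illustrative device profile table.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE DeviceProfile (deviceId TEXT, currentUser TEXT, nodeId TEXT)")
con.execute("INSERT INTO DeviceProfile VALUES ('d1', 'u1', 'node42')")

def getKBInfo(query, params=()):
    """Run a query against the local knowledge base and return the
    resulting rows wrapped in XML, as the interface prescribes."""
    cur = con.execute(query, params)
    cols = [c[0] for c in cur.description]
    root = ET.Element("result")
    for row in cur:
        item = ET.SubElement(root, "row")
        for col, val in zip(cols, row):
            ET.SubElement(item, col).text = str(val)
    return ET.tostring(root, encoding="unicode")

# E.g., find the nodeID for the device currently used by user u1.
xml_out = getKBInfo(
    "SELECT nodeId FROM DeviceProfile WHERE currentUser = ?", ("u1",))
print(xml_out)  # <result><row><nodeId>node42</nodeId></row></result>
```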

The following use cases show how the KM components can cooperate to solve ontology based updates; the first shows update of the knowledge base, and the second sending prioritised messages.

Use Case (8): Update Knowledge Base
Updates to knowledge bases during the rescue operation will be additions and modifications to the data/population. Ontologies will not be modified or added during the rescue operation, only outside of it, i.e., in Phases 1 and 6 (ref. Chapter 2). The KM can receive information for update from a local source, e.g., when the user makes changes to his/her profile or the device position is updated, or in a message from an external source (node), e.g., when the OSC receives a message containing profile information concerning arriving personnel. The KM components involved are the KM Engine, XML Parser (XML-P), SMOF, DDM, and PCM.

Figure 6.11 Knowledge base update.

Stepwise description of knowledge base update by the KM (see Figure 6.11):

0. KM receives a request for knowledge base update from a local or external source. This may be via the KM Interface (not shown), or a message received at a given port.

1. KM Engine requests XML-P to parse the information to get the kind of update (what is to be updated), and the changes/content/data.

2. If necessary, KM Engine requests the SMOF for ontology information needed for the update.

2-1: SMOF requests the DDM to get the location of this information.
2-2: SMOF requests the Data Management component for the data.


3. KM Engine requests the PCM to update the context or profile in question.
3-1: PCM uses the DDM to get the location of the item.
3-2: PCM requests the Data Management component to update the knowledge base.
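The cooperation between the components in this stepwise update can be sketched as below. The classes and method names are illustrative stand-ins for the KM sub-components, not the actual Ad-Hoc InfoWare interfaces.

```python
# Hypothetical stand-ins for the KM sub-components; the method names
# are illustrative, not the actual Ad-Hoc InfoWare API.
class XMLParser:
    def parse(self, msg):          # step 1: extract kind of update + data
        return msg["kind"], msg["data"]

class DDM:
    def locate(self, kind):        # steps 2-1/3-1: where the item lives
        return "local"

class PCM:
    def __init__(self, kb):
        self.kb = kb
    def update(self, kind, data):  # steps 3/3-2: write the change
        self.kb.setdefault(kind, {}).update(data)

class KMEngine:
    def __init__(self, kb):
        self.xmlp, self.ddm, self.pcm = XMLParser(), DDM(), PCM(kb)
    def handle_update(self, msg):  # step 0: update request received
        kind, data = self.xmlp.parse(msg)
        self.ddm.locate(kind)
        self.pcm.update(kind, data)

kb = {}
KMEngine(kb).handle_update({"kind": "DeviceProfile",
                            "data": {"d1": {"position": "59.95N 10.75E"}}})
print(kb)  # {'DeviceProfile': {'d1': {'position': '59.95N 10.75E'}}}
```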

Not shown in the figure are the KM Interface, the interface to the Data Management component, and the DDM retrieving metadata from the LDD.

Use Case (9): Send Prioritised Update Message
The KM is requested to send a prioritised update message in relation to ontology based update. Priorities are set for rescue operation roles and information items. The nodeID, user or role to receive the message can be stated in the request, or the kind of event or update will indicate what kind of (update) message to send and whom to send it to, for instance a message with profile information sent to the OSC as notification on arrival (joining the operation). Note that the KM does not handle message exchange for applications. The KM components involved are the KM Engine, XML Parser (XML-P), SMOF, DDM, and PCM.

Figure 6.12 Send prioritised update message.

Description of procedure of sending prioritised update message (see Figure 6.12):

0. The KM is triggered (requested) to send the prioritised update message.

1. KM Engine requests the XML-P to parse the (incoming) message.

2. KM Engine requests the PCM for profile (or context) information. The PCM should know what to extract and which profile to get information from, depending on the kind of event (e.g., join operation, leave operation, update device location).

2-1: PCM uses the DDM to locate this information.
2-2: PCM requests the Data Management component for the data.

3. KM Engine requests the SMOF for information from ontologies: (a) the profile/context data, (b) priorities of this kind of information (if relevant), (c) priorities of receiver and sender (assuming this kind of information is on devices from a priori phase).

3-1: SMOF requests the DDM to find the location of this information.
3-2: SMOF requests the Data Management component for the data.


4. KM Engine requests the DDM to locate and return nodeID for the receiving node(s), e.g., nodeID for the OSC’s device.

5. KM Engine creates a message and requests the XML-P to parse it.

6. KM Engine requests the underlying message system to send the message.

Not shown in the figure are KM Interface, interface to the Data Management component, and interface to underlying message system.
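The core of this procedure (deriving the priority, resolving the receiver's node, and handing the message to the message system) can be sketched as below. The priority tables, role-to-node mapping, and the message format are all assumptions for illustration.

```python
# Illustrative sketch of the prioritised-message procedure; all tables,
# names and interfaces are assumptions, not the actual middleware API.
ROLE_PRIORITY = {"pr:Osc": 1, "pr:Paramedic": 3}   # from a priori phase
NODE_OF_ROLE = {"pr:Osc": "node42"}                # from pr:DeviceProfile

def send_prioritised_update(event, message_system):
    """Build an update message tagged with the derived priority and hand
    it to the (stubbed) underlying message system."""
    sender, receiver = event["senderRole"], event["receiverRole"]
    priority = min(ROLE_PRIORITY[sender], ROLE_PRIORITY[receiver])  # step 3
    node_id = NODE_OF_ROLE[receiver]                                # step 4
    message = {"to": node_id, "priority": priority,
               "payload": event["profile"]}                         # step 5
    message_system.append(message)                                  # step 6
    return message

outbox = []  # stand-in for the underlying message system
msg = send_prioritised_update(
    {"senderRole": "pr:Paramedic", "receiverRole": "pr:Osc",
     "profile": {"user": "u7", "role": "pr:Paramedic"}}, outbox)
print(msg["to"], msg["priority"])  # node42 1
```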

6.6 Summary

In this chapter, we have described our approach to ontology based dynamic updates, and given an example rescue ontology of profiles and dynamic context to show how ontologies can be utilised. We have also outlined how the knowledge base will be populated in the different phases, how profiles are handled in our architecture, and how the KM components contribute. The main claim we address in this chapter is Claim 4: ontologies can be utilised in dynamic update to accommodate both the organisation of the rescue operation and the dissemination of updates. A solution is shown through our approach to ontology based updates, which utilises knowledge about rescue operation roles and the priorities of roles and information types, described in the example rescue ontology, to support information sharing in the rescue operation. Indirectly, we also address Claim 1 – information overload can be handled through the use of filtering and personalisation – as the profile ontology (particularly the information profile) can be utilised for information filtering.


Chapter 7 Using the Knowledge Manager in a Real Scenario

The objective of this chapter is to provide a proof of concept through a detailed example showing how our architecture can be used at run-time in a rescue operation scenario. We want to show how the different parts presented in the previous chapters (4, 5, and 6) are connected and how they work together, in a way that explains the rationale behind and logic of the solutions. Thus, we aim to link the KM design, DDM exchange and update, ontology based update, and the example rescue ontology in one large example. We base the example on the railway accident rescue scenario presented in Chapter 2. After an analysis of the scenario, we look at how our architecture can be used in such a setting, and then go on to give some detailed examples of chosen events.

7.1 Analysis of Railway Accident Scenario – The Phases

We start by analysing the scenario according to the six phases (ref. Chapter 2). For each relevant phase we look at possible events, particularly related to the KM, but also for the whole middleware where appropriate. The events are the basis for the examples described later in this chapter.

Phase 1 – A Priori
Information about standards and policies, as well as data formats and vocabularies, is shared among the organisations in this phase. Ontology creation and maintenance also happen outside of the rescue operation, and are thus included in this phase. The SMOF is used in populating the knowledge base with initial contents; we show samples of possible initial knowledge base content in Section 7.4.1. Example initial content is the necessary parts of vocabularies (ontologies) on each device so that it is “ready for action” when personnel are called out to a scene. Related to the example rescue ontology, this can be the necessary parts of device, rescue scenario, and information profiles. This information is presumably updated often so that all devices are ready to be taken out to a rescue operation on demand. What is stored on each device depends, for instance, on device capacity and the organisation (or person) owning it.


Phase 2 – Briefing
This phase starts once the emergency central has been alerted and a rescue operation is initiated. Information about the accident, area, weather, etc. is gathered and distributed to relevant personnel. Assessment of resource needs continues through the operation (Phases 2, 3, and 4), e.g., personnel, equipment and other resources (transportation, blankets, food, shelter possibilities in the area, etc.). Knowledge bases are populated with profiles for devices and personnel (users) as they are registered to join the operation. Information for briefing and sharing, as well as information about the current rescue operation (e.g., rescue operation type, incident type, information about the area), is provided. We show examples of possible knowledge base content added in this phase in Section 7.4.2.

Start/reporting of the accident. The train driver reports the incident and location to the train control centre. The person on duty at the control centre alerts the emergency central, and starts the appropriate internal emergency procedures.

Initiation of Rescue Operation. The emergency central and participating organisations start gathering information about the area and available resources. All personnel get relevant parts of this information to their devices on leaving for the accident (briefing personnel).

Related to the KM, activities to add to and update the knowledge base take place: profiles are registered, and information is added, e.g., devices, users, information for briefing and sharing, and facts about the current rescue scenario.

Phase 3 – Bootstrapping
This phase starts when personnel arrive at the scene, and involves bootstrapping the network. It partly overlaps with Phase 4.

Rescue operation communication and tasks. Rescue personnel carrying Ad-Hoc InfoWare enabled devices arrive at the scene. The devices have shared (security) keys. Nodes and routing daemons start up and the MANET is set up. The devices connect and become nodes in the network. The activity on site and the possibly poor infrastructure in the mountain area mean there may be frequent network partitions, and nodes may be disconnected from the network for periods.

The KM will start up together with the rest of the middleware components, and its sub-components are initiated. We assume that each device/node has the necessary (parts of) vocabularies/ontologies, standards, etc., added during Phase 1. The KM tasks include creating and initiating dictionaries with already known information to be shared (DDM), and starting to update dynamic context, e.g., device location/position. The DDM creates/starts the LDD, adds initial information, and then creates the SDDD, as described in Chapter 5. Information received during the briefing phase (Phase 2), as well as other information that is to be shared in the network, has been given metadata descriptions and registered (by an appropriate application) in a file or catalogue, which the DDM at start-up uses to find initial information to add to the LDD.

Phase 4 – Running
Once the network has been established, we enter Phase 4, the running of the network. In our example scenario, the leader of the team arriving first has the role of temporary rescue operation leader (on-site commander – OSC). Once the police arrive, the highest ranked police officer will take over this role. Personnel are given appropriate roles on arrival. Typical tasks could be as described below:

• Rescue site leader (OSC): the main tasks are to set up a place of command, get an overview of the situation, coordinate equipment and personnel, assign tasks, and


report to the control centre. In the railway scenario, examples of information gathered about the situation come from the train control centre and the train driver, e.g., the number of carriages, passengers, any goods, etc.

• Fire brigade: The main tasks are to control fire, as well as to monitor areas in danger of fire or explosion. Other typical tasks are to cut loose (free) trapped people, help where needed, and place sensors to aid the monitoring of dangerous areas.

• Medical personnel: The main task is medical care, including registration of patients, and evaluation of medical state. If sensors are used in aiding patient monitoring, medical personnel will place these on patients.

• Police: tasks include gathering evidence and securing the area.
• All personnel are involved in evacuating people and in cooperating with and supporting the rescue operation in any way possible.

Data and information are continually generated, gathered and shared. In our railway scenario, for instance: a police officer gathering evidence takes photographs for a forensic report; paramedics register casualties, and sensors monitoring patients generate data for medical status updates and exchange; fire fighters place sensors to monitor the temperature in the area around the diesel locomotive. Drawings of train carriages and passenger lists are provided by NSB (the Norwegian rail company), and can be used by, e.g., the fire brigade and the police. Fire fighters may use the drawings to locate carriages where passengers are seated, and where there may be danger of explosions. Arriving personnel and helicopters bring in updated weather forecasts (the weather in mountain areas may change very quickly). The OSC, keeping an overview of the rescue operation, receives updates about changes in available resources and personnel, and assigns people to teams and tasks, possibly together with the Officer in Charge (OiC) for each domain.

The knowledge bases are continually updated during the operation. Example events from the above scenarios requiring knowledge base updates include changes to rescue operation roles and change of owner/user of a device. Role changes may happen, for instance, when a higher ranked police officer arrives and takes over the role of OSC from the leader of the first arriving team, who has held the role of temporary OSC. Change of user/owner of a device can happen, for instance, when a person borrows a device from someone else because their own device “dies”, or when a particular device is connected to a certain role. There are also cases where a person may have several devices, for example a laptop (e.g., in an ambulance) and a PDA/mobile phone (or other device) when on foot or treating patients, which requires knowledge of which device is currently in active use.

There are a number of events and required tasks for the KM during this phase. Profiles that are updated and added include device, user, and information profiles. Dynamic context, e.g., current device position on movement, is also continually updated. Prioritised update messages (related to knowledge based update) are sent. The information used for deciding the update priority of these messages is gathered in the knowledge base. Registering information in the Data Dictionary and exchanging metadata extracts (SDDD contents) is going on all through Phase 4. Below, coarse groupings of some of the possible events relevant for the KM are listed. Examples for parts of the events for this phase are given in Section 7.4.3.

The KM uses services offered by the DENS for subscribing to relevant events. The DENS then issues a notification message to the KM on the requested event. For instance, the KM on the OSC node has subscriptions to be notified of, e.g., new arrivals (new nodes)


and device/node movement, and this would trigger the KM on the node used by the OSC to send a request for profile or context information from the relevant node. Although some subscriptions may have been predefined, this phase (Phase 4) is the main phase both for making new subscriptions and for receiving notifications on events. On receiving notification from the DENS, the KM will initiate appropriate action. Different subscription types, and how the KM subscribes to events in cooperation with the DENS (and RM) are described in Chapter 4.

Metadata exchange between DDMs will take place throughout Phase 4, typically when a new node joins the network. As described in Chapter 4, the KM will receive a notification on changes in the current partition, e.g., when new personnel with devices join the network, and request the DDM to perform metadata exchange. As metadata exchange between DDMs is thoroughly explained with examples in Chapter 5, we will not focus on it here.

Arrival and departure of personnel, movement on site. This includes nodes joining and leaving the network, network partitions and merges; new organisations and personnel arriving at and leaving the rescue site; the creation of new groups (ad-hoc, task-oriented, possibly involving different organisations); and updating device location on node/personnel movement. KMs on all nodes will have a predefined subscription for changes in the network (partition change). Actions related to the KM include sending (and receiving) profiles, updating the knowledge base (profiles, dynamic context, bookkeeping of profiles and contexts active in the current operation), and metadata exchange.

Updates and changes in profiles. This includes change of roles, e.g., when a higher ranked police officer arrives and takes over the role of OSC, and device changes, i.e., changing the current user of a device. There may be a number of reasons why a user may want to change device or why people swap devices, e.g., devices designed for specific tasks or related to a specific role, or a device that is broken or runs out of battery so that a person borrows someone else’s device. In addition, a person may have several devices, e.g., a laptop and a smaller device, like a PDA or mobile phone, for moving around easily on the site. Actions related to the KM are essentially the same as for personnel arrival and departure above: sending update messages and updating the knowledge base. In this case, however, the relevant profiles are updated, and no new profiles are added to the overview of the current operation.

Information is collected, exchanged and distributed. This includes registration and update of information specific to certain organisational domains as well as information related to the rescue operation itself. Examples of information specific to an organisation are registration of casualties, updates of medical/patient records, and gathering data for forensic reports. Information connected to the rescue operation itself includes, for instance, maps and descriptions of the rescue area and accident, the available infrastructure and resources, weather information, etc. On registering information for sharing, metadata descriptions are created by the application used for registering, using services from the KM to get information from the knowledge base. There may also be subscriptions made via the DENS for particular kinds of information. (Automatic metadata extraction and attachment (annotation) has not been looked into here, but it is likely that some form of automation, possibly using ontologies, will be used.)

Distribution of information: When distributing information, update and information priorities are checked to ensure distribution according to preset policies and rules (on which the setting of priority values in update messages is based). Available profiles are


checked to find possible guidelines for distribution as to who wants/needs which information, as well as for update and information priorities regarding rescue operation roles and information types. Through metadata exchange, the KM (DDM) supports distribution through the network of information registered to be shared.

Subscriptions to given types of information items may depend on the information profile of a role. Each rescue operation role has a particular information profile stating the kinds of information items (data, documents, etc.) that are of interest for that particular role. For instance, personnel from the fire brigade (or only the OiC for fire) may be interested in drawings/plans of the tunnel or the train carriages, and so will make subscriptions for this kind of information item. All personnel/roles may want to subscribe to updated weather reports.

In summary, the Knowledge Manager has a number of responsibilities and requirements to fulfil during this phase. It is responsible for updating the knowledge base with changes in profiles and dynamic contexts. It should send the required messages, with appropriate priorities, regarding changes and additions in profiles and dynamic contexts to the appropriate role (i.e., to the node currently used by the person occupying this role). It is also responsible for looking up (in ontologies) the correct priorities for both information types and the roles of sender and receiver, which are then used to decide the priority of an update message. In more detail, the KM should find the appropriate priorities for roles and information items, and use appropriate rules to determine the priority of an update (message) based on the priorities of the roles of sender and receiver and the information priority of the kind of information sent (or updated). It should also be possible to send messages to all members of a given group, e.g., based on organisation affiliation, team membership, or rescue operation role.

The requirements to the KM, and the degree to which it can fulfil them, depend on the configuration (weak vs. strong/powerful devices). In our rescue scenario the requirements may also to some extent depend on the role of its user; for instance, the KM on the device of the OSC (assuming a powerful laptop/device) will be required to do bookkeeping of active profiles and dynamic contexts for the current operation. Other requirements in Phase 4 include what has already been described in Chapters 2, 4, and 5, e.g., registering information for sharing, exchanging metadata extracts among neighbouring nodes (DDM), vocabulary mapping, information filtering according to profile and context, simple ontology reasoning, and XML parsing.

Phase 5 – Closing of Network and Phase 6 – Post Processing
After the network has been closed down and the operation is finished, each organisation may want to analyse chosen parts of the data gathered during the operation. The organisations may have gathered information relevant to their domain regarding decisions and actions taken, e.g., summaries of monitoring data that formed the basis for medical decisions. Ontology alignment, in case of expansion of existing ontologies, e.g., integration, adaptation, and consistency checking, will be carried out outside of the rescue operation, in each organisation. In Phases 5 and 6, there are no required tasks specific to the KM, except graceful shutdown in Phase 5.

7.2 Using the Ad-Hoc InfoWare Middleware

In this section, we examine how the KM may use the other Ad-Hoc InfoWare middleware components, described in Chapter 2, in relation to the example scenario. The KM depends on services offered by all the middleware components. The RM can be used to locate


resources and services, as well as to provide information about the neighbourhood, e.g., to get the nodeIDs of neighbouring nodes. It is also a publisher of information that the KM needs, e.g., changes in the network. Services from the Security and Privacy Manager will be used by the KM for access control, e.g., requesting access rights. The middleware component that the KM uses most actively during the rescue operation is the DENS: the KM subscribes to certain events using services from the DENS, as described in Chapter 4, and receives notifications on events. This is highly relevant for the events (event groups) described for Phase 4 in the example rescue scenario, for instance new nodes joining the network (possibly new personnel arriving), updates/changes in the knowledge base (e.g., in profiles at role change or node/device movement, or at data updates), in addition to subscribing to certain kinds of information. The notification of new nodes in the network (partition change) is a predefined subscription installed at all nodes at software configuration time. Once the middleware is started in Phase 3, the DENS will start notifying the KM of partition changes. Other subscriptions are made at run-time. As the DENS is the component that the KM cooperates most with, the remainder of this section is about subscribing to events via the DENS.

As described in Chapter 4, there are several types of subscriptions. In the following, we describe how some of these are directly relevant to the scenario example. The Type 1 (predefined) subscription used for detecting new nodes will, for the KM running on the OSC node, also be used to prompt for profile information from new arrivals. The Type 2 subscriptions are made at run-time and are used by the KM to detect changes in profiles and contexts. In the case of profile changes and changes in dynamic context, examples of tables in the knowledge base on which triggers will be placed are (using the database schemas resulting from the example rescue ontology in Chapter 6):

o pr:UserProfile (e.g., current role of user, team membership) for role changes, change in team membership, and similar.

o pr:DeviceProfile (e.g., current user and device role) for detecting, for instance, change of current user for a device, or in the device role (depends on ontology).

The OSC will typically subscribe to all changes of this kind (profile changes) in the profiles registered for the current operation. In our example, it would also be useful to make subscriptions connected to a particular role, domain, group membership or organisation. For instance, the OiC of a particular domain (health, police, etc.) will also want to know about changes in profiles and context for nodes used by personnel in their domain, organisation, or group. The subscription is then not to ALL nodes, but to all nodes in a particular group or domain, e.g., all health personnel, for which Type 3 or Type 5 subscriptions can be used. The DENS does not know about the existence of roles, groups, etc.; to the DENS, subscriptions are only connected to nodeIDs, so the coupling between a nodeID and, e.g., a user role has to be performed locally in the KM.
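A sketch of this local coupling is shown below; the nodeIDs, the profile table contents, and the resolution function are all assumptions for illustration:

```python
# The DENS only understands nodeIDs, so the KM resolves a group or
# domain (e.g. "all health personnel") to nodeIDs from its local
# profile tables before issuing Type 3/5 subscriptions.

# Illustrative local excerpt of profile data: nodeID -> attributes.
node_profiles = {
    "nod2": {"role": "TeamLeader", "domain": "health"},
    "nod5": {"role": "TeamMember", "domain": "health"},
    "nod9": {"role": "OiC",        "domain": "police"},
}

def nodes_for_domain(profiles, domain):
    """Return the nodeIDs belonging to one domain, sorted for stability."""
    return sorted(n for n, p in profiles.items() if p["domain"] == domain)

health_nodes = nodes_for_domain(node_profiles, "health")
# The KM would then subscribe per nodeID, e.g.:
#   for node in health_nodes:
#       dens.subscribe(("profile_change", node), on_profile_change)
```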

To subscribe to particular information or updates in data, delay tolerant (Type 4) subscriptions are very useful. In our example, health personnel, e.g., paramedics and the OiC for health, would subscribe to updates in medical status and monitored data for registered casualties. In an acute medical situation, the notification can trigger high-priority alert messages to be sent to all health personnel so that they can come to the patient's aid as fast as possible. Such a scenario requires that an application is running on the publisher node that can evaluate medical status on the basis of monitored data. For instance, in the case of sensors used for patient monitoring, the node acting as "sink", i.e., receiving data from a set of sensors, would be the publisher node with the appropriate software/application.
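A sketch of such a publisher-side application on the sink node could look as follows; the thresholds, field names, and triage colours are invented for the example:

```python
# Illustrative evaluation of medical status from monitored sensor data
# on the "sink" node. Thresholds and field names are assumptions.

def evaluate_medical_status(sample):
    """Classify one monitoring sample into a simple triage colour."""
    if sample["pulse"] == 0 or sample["spo2"] < 85:
        return "red"       # acute situation
    if sample["spo2"] < 92:
        return "yellow"
    return "green"

def maybe_alert(patient_id, sample, publish):
    """Publish a top-priority alert when the status becomes acute."""
    status = evaluate_medical_status(sample)
    if status == "red":
        publish({"type": "AlertMessage", "patient": patient_id,
                 "priority": "Top"})
    return status

published = []
status = maybe_alert("casualty17", {"pulse": 115, "spo2": 78},
                     published.append)
```

In the scenario, the `publish` callback would hand the alert to the DENS, which delivers it to all health personnel holding the corresponding subscription.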

Page 149: folk.uio.nofolk.uio.no/noruns/thesisFinal.pdf · Abstract Characterised by being highly dynamic, heterogeneous and hectic, rescue and emergency operations are cooperative efforts

133

7.3 Using the Knowledge Manager

In this section we look at how the KM services may be used by the rest of the system in relation to the example scenario. Applications will use the KM for registering information to be shared in the network, querying for available information in the network, getting information from the knowledge base, and more, using the services offered through the KM Interface as described in Chapter 4. Several of the services offered by the KM are used in the examples, as shown for event group 3. Of particular relevance for this example scenario is retrieving the following information from the knowledge base:

• Dynamic context, e.g., the current position of a device or user, or current task of a user. The position for a user is found through the position of the user’s device.

• Profiles, e.g., the current role for a user, team memberships, etc.

• Metadata models, standards and vocabularies needed when generating metadata for an information item to be registered for sharing.

• The current state of the ongoing rescue operation, e.g., for creating a status report.

The Ad-Hoc InfoWare components will also use the services offered by the KM; for instance, the DENS uses services offered by the KM for vocabulary mapping to handle multiple subscription languages, as described in Chapter 4.

7.4 Examples from Relevant Scenario Phases

In this section we give examples from Phases 1, 2, and 4. Details for the examples are found in Appendix B; thus we only show fragments and small samples in the form of OWL XML syntax (and in some cases database tables). The referenced rescue ontologies (prefixes pr, org, and cxt) are presented in full in Appendix C.

As all persons in these examples are medical personnel, we have expanded the example Rescue Ontology to include a subclass of pr:Person; the new class, pr:MedicalPerson, has an HPR number, which is a registration number given in the Norwegian register for health personnel. All authorised health personnel in Norway are registered in this directory. To accommodate this change, we have added a new attribute to the database schema for pr:Person to hold the person type of the individual (e.g., medical). The database schema for the database tables is shown below.

pr:Person(pr:PId, pr:personType, pr:name, pr:affiliation, ...)
pr:MedicalPerson(pr:PId, pr:hasHPRnumber, ...)

To support this example, we have included an information structure necessary for the bookkeeping (by, for instance, the OSC) of all active profile and context elements in the ongoing operation. These are used in the examples in the form of database tables containing the IDs of profile and context elements: CurrentOperation keeps the ID of this rescue operation, CurrentOperationProfiles contains the IDs of all active profile elements connected to the current operation, and CurrentOperationContexts contains the IDs of all registered context elements. The information structure is represented in the following database schemas:


CurrentOperation(COId, ...)
CurrentOperationProfiles(COId, pr:profile)
CurrentOperationContexts(COId, cxt:context)

The profiles are subclasses of the class pr:Profile, and the contexts belong to the class cxt:Context.
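As an illustration, the three bookkeeping schemas can be realised directly as relational tables; the sketch below uses SQLite, with ':' in attribute names replaced by '_' to keep the SQL valid, and the inserted IDs taken from the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE CurrentOperation         (COId TEXT PRIMARY KEY);
    CREATE TABLE CurrentOperationProfiles (COId TEXT, pr_profile TEXT);
    CREATE TABLE CurrentOperationContexts (COId TEXT, cxt_context TEXT);
""")

# Register one operation and one active profile element (example IDs):
con.execute("INSERT INTO CurrentOperation VALUES (?)", ("curOp234",))
con.execute("INSERT INTO CurrentOperationProfiles VALUES (?, ?)",
            ("curOp234", "hhpr:HHProfile"))

active = con.execute(
    "SELECT pr_profile FROM CurrentOperationProfiles WHERE COId = ?",
    ("curOp234",)).fetchall()
```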

7.4.1 Phase 1 – Initial Content of Knowledge Base

Possible initial contents of the knowledge base (KB) added in Phase 1 are given in this section, e.g., personnel information and standard information profiles, shared vocabularies, roles that are always present (OSC), etc. Other possible content that may be added in this phase includes extensions of the profile ontology for different organisations, rescue scenarios and known types of devices, e.g., a land rescue operation, or a standard ambulance device. In addition to contents in the main KB for the rescue operation, presumably kept by the OSC, we show examples of KB content stored on the device of one of the paramedics in our example, Hans Hansen.

Information Profile

The example includes a standard information profile (pr:StandardInfoProfile) for a rescue scenario, i.e., a standard profile for typical information items that will need handling by the system in a rescue scenario. Below are descriptions of the information items included in the standard information profile.

− ImmediateBroadcastData: These are typically urgent messages (top priority), e.g., warnings of explosions, buildings collapsing, fire breaking out, etc.

− SiteStatus: The current state of the rescue site; available personnel, resources, etc.

− PatientRecord: The medical record of a patient (a registered casualty). Medical records should always follow the patient, either electronically, e.g., patients wearing electronic armbands or badges, or in paper form.

− SiteMap: Map of the rescue operation area.

− ForensicsReport: Relevant information items could be gathered evidence for investigation into the causes of the accident or incident, technical data, etc. In the railway accident scenario, examples could be what caused the train accident, technical information about the state of the train and carriages, the name of the driver, which train, the time of the accident, etc.

In addition, some standard messages may be useful to include in a standard information profile. Examples of these are not included, but suggestions – related to the information items above – are as follows:

− ResourceUpdate: Reporting changes in available resources (arriving, leaving). Related to SiteStatus if used for reporting updates.

− WeatherUpdate: For reporting changes in weather. Related to SiteStatus.

− AlertMessage: For instance, warnings of explosions. Related to ImmediateBroadcastData.

− ArrivalMessage: Reporting new personnel arriving. May contain user and device profiles for arrived personnel. Related to SiteStatus.

− PositionUpdate: Message reporting changes in a device position, for dynamic context update.

− CasualtyStatus: Message with changes in the medical status of a casualty, e.g., colour coding (degree of urgency). Related to PatientRecord.


Except for a small example below, we have not shown examples of information profiles for rescue operation roles.

General Example of KB Content in Phase 1

The following shows fragments of the initial population of the KB, distributed to involved organisations prior to any operation. The samples are related to the railway accident scenario. The example of initial KB content below shows an organisation and the different types of roles. The example individuals are included in the appropriate ontologies in Appendix C. The full example, found in Appendix B, also includes information priorities, kinds of devices, incidents (accident, natural disaster), and vehicles. The resulting database tables are shown in Appendix B.

<org:Hospital rdf:ID="VossSjukehus">
  <org:address>Sjukehusvegen 16, 5700 Voss</org:address>
  <org:telephone>56 53 35 00</org:telephone>
</org:Hospital>
<pr:TeamMember rdf:ID="TmM"/>
<pr:TeamLeader rdf:ID="TmL"/>
<pr:OfficerInCharge rdf:ID="OiC"/>
<pr:OSC rdf:ID="OSCoo"/>

Below we show a small fragment of an example standard information profile for a rescue scenario. The entire example is described in Appendix B. The different kinds of information items are explained above.

<pr:InformationProfile rdf:ID="StandardInfoProfile">
  <pr:item>
    <pr:InformationItem rdf:ID="InfoItem_31">
      <pr:subject>
        http://www.ifi.uio.no/dmms/InfoItem#ImmediateBroadcastData
      </pr:subject>
      <pr:priority rdf:resource="#Top"/>
    </pr:InformationItem>
  </pr:item>
  ...
</pr:InformationProfile>

Rescue operation roles can also have information profiles. Below we show a small example of an information profile for the role Officer in Charge (OiC) for the medical/health domain, where a medical status update is given the priority pr:Urgent. This example individual is not included in the profile ontology, or in Appendix B.

<pr:InformationProfile rdf:ID="OiCHealthInfoProfile">
  <pr:item>
    <pr:InformationItem>
      <pr:subject>
        http://www.ifi.uio.no/dmms/InfoItem#MedicalStatusUpdate
      </pr:subject>
      <pr:priority rdf:resource="&pr;Urgent"/>
    </pr:InformationItem>
  </pr:item>
</pr:InformationProfile>

Example showing person information for two paramedics connected to Voss Sjukehus, added in this phase:

<pr:MedicalPerson rdf:ID="&ll;LL">
  <pr:name>Lars Lie</pr:name>
  <pr:affiliation rdf:resource="&org;VossSjukehus"/>
  <pr:hasHPRnumber>9149988</pr:hasHPRnumber>
</pr:MedicalPerson>
<pr:MedicalPerson rdf:ID="&hh;HH">
  <pr:name>Hans Hansen</pr:name>
  <pr:affiliation rdf:resource="&org;VossSjukehus"/>
  <pr:hasHPRnumber>9149987</pr:hasHPRnumber>
</pr:MedicalPerson>

Example KB Content for Paramedic Lars Lie

Below we show an example of content stored in the KB on Lars Lie's device. The HPR number is a (fictional) registration number from the Health Personnel Register, where all authorised health personnel in Norway are registered. The example shows personal information and a simple user profile for Lars Lie, as well as device information and a device profile for his device.

<pr:MedicalPerson rdf:ID="&ll;LL">
  <pr:name>Lars Lie</pr:name>
  <pr:affiliation rdf:resource="&org;VossSjukehus"/>
  <pr:hasHPRnumber>9149988</pr:hasHPRnumber>
</pr:MedicalPerson>

<pr:UserProfile rdf:ID="&llpr;LLProfile">
  <pr:person rdf:resource="&ll;LL"/>
</pr:UserProfile>

<pr:Device rdf:ID="&pc;PDA22">
  <pr:model>Nokia xyz</pr:model>
  <pr:nodeID>nod2</pr:nodeID>
</pr:Device>
<pr:DeviceProfile rdf:ID="&pc;PDA22Profile">
  <pr:device rdf:resource="&pc;PDA22"/>
  <pr:currentUser rdf:resource="&llpr;LLProfile"/>
</pr:DeviceProfile>

Resulting database tables on Lars Lie's device:

pr:UserProfile
  pr:UPId         pr:person  pr:role  ...
  llpr:LLProfile  ll:LL               ...

pr:Person
  pr:PId  pr:personType  pr:name   pr:affiliation    ...
  ll:LL   pr:medical     Lars Lie  org:VossSjukehus

pr:MedicalPerson
  pr:MPId  pr:hasHPRnumber  ...
  ll:LL    9149988

pr:DeviceProfile
  pr:DPId          pr:deviceRole  pr:device  pr:currentUser  pr:owner
  pc:PDA22Profile  ...            pc:PDA22   llpr:LLProfile  ...

pr:Device
  pr:DevId  pr:model   pr:capability  pr:nodeID  ...
  pc:PDA22  Nokia xyz  ...            nod2

7.4.2 Phase 2 – Briefing Phase

In this section, we show an example of adding content to the KB during the starting period of the rescue operation. Profiles that may be added to the KB include user, device, rescue scenario, and information profiles. In this phase, information about the current operation will also be added.


− The rescue operation has been initiated; it is now known to be a railway accident in Hordaland county, on Bergensbanen, close to (partly inside) a tunnel. Information is added as it is reported from the accident area, e.g., weather: -10 degrees Celsius and deep snow, number of passengers, diesel locomotive, caused by rocks (from a rockslide) lying on the tracks, etc. (It is possible that some of this information is added at later stages, but we include it in this example anyway.)

− Paramedic Hans Hansen is immediately called out to the operation. A simple user profile is added to the "operation KB", which is distributed to all in this briefing phase.

− The device profile for Paramedic Hans Hansen's laptop is registered.

− The rescue scenario profile is created; information about the area and what is known about the accident is added.

− Possible information items that may be added to the information profile (not shown in the example): maps of the area, aerial photos of the area and the accident, railway map.

An instance for the current operation is created and given an ID automatically at the onset of the operation; this is done by the application used for initiating the rescue operation (by, e.g., the rescue emergency centre), which starts gathering information relevant for this operation. Profiles are added to the KB when they are connected to the operation. This part takes place outside of the rescue operation, at an initiating and preparatory stage, gathering information that is distributed to the devices of personnel called out, and to devices in helicopters and ambulances that are assigned to the operation.

Example KB Content Added in Phase 2

Note that person information (records) and devices were added prior to the operation (Phase 1) and are thus not shown here; in particular, the personal information for Hans Hansen was added in Phase 1. User profile for Hans Hansen:

<pr:UserProfile rdf:ID="&hhpr;HHProfile">
  <pr:person rdf:resource="&hh;HH"/>
</pr:UserProfile>

The device profile for Hans Hansen’s device:

<pr:DeviceProfile rdf:ID="&pc;PC56Profile">
  <pr:device rdf:resource="&pc;PC56"/>
  <pr:currentUser rdf:resource="&hhpr;HHProfile"/>
  <pr:deviceRole rdf:resource="&pr;AmbulanceDevice"/>
</pr:DeviceProfile>

On initiation of the rescue operation, the necessary information about this rescue operation is added to the KB. An instance of a new accident/incident is created. Although the accurate number of people/passengers involved in the accident is unknown at this point in time, the assumed number of passengers is added at this stage. The train accident is given the ID “bba:Bergensbanen23”:

<pr:Accident rdf:ID="&bba;Bergensbanen23">
  <pr:vehicleInvolved rdf:resource="&pr;Train"/>
  <pr:numberOfPeople>400</pr:numberOfPeople>
  <pr:regionName>Bergensbanen, Raundalen, Hordaland</pr:regionName>
  <pr:areaDescription>remote mountain area, access difficult</pr:areaDescription>
  <pr:incidentDescription>rockslide, train off tracks due to rocks</pr:incidentDescription>
  <pr:areaConditions>deep snow, minus 10 degrees Celsius</pr:areaConditions>
</pr:Accident>

An instance of a new rescue operation:

<pr:LandOperation rdf:ID="&bba;ROperationBB23"/>

An instance for the role of OSC for this rescue operation is created:

<pr:OSC rdf:ID="&bba;OSCBergensb23"/>

The rescue scenario profile is created:

<pr:RescueScenarioProfile rdf:ID="&bba;RSProfileBB23">
  <pr:rescueOperation rdf:resource="&bba;ROperationBB23"/>
  <pr:roleCoveredByScenario rdf:resource="&bba;OSCBergensb23"/>
  <pr:incident rdf:resource="&bba;Bergensbanen23"/>
</pr:RescueScenarioProfile>

There may also be additional Information Items added to the information profile here; these are not shown in this example. Resulting database tables for Phase 2:

pr:RescueScenarioProfile
  pr:RSPId           pr:rescueOperation  pr:roleCoveredByScenario  pr:infoProfileForScenario  pr:incident
  bba:RSProfileBB23  bba:ROperationBB23  bba:OSCBergensb23         ...                        bba:Bergensbanen23

pr:RescueOperation
  pr:ROpId            pr:operationType
  bba:ROperationBB23  pr:LandOperation

pr:Incident (single row, shown field by field)
  pr:IncId:               bba:Bergensbanen23
  pr:incidentType:        pr:Accident
  pr:numberOfPeople:      400
  pr:incidentDescription: rockslide, train off the tracks due to rocks
  pr:regionName:          Bergensbanen, Raundalen, Hordaland
  pr:areaDescription:     remote mountain area, access difficult
  pr:areaConditions:      deep snow, minus 10 degrees Celsius

pr:Accident
  pr:IncId            pr:vehicleInvolved  ...
  bba:Bergensbanen23  pr:Train            ...

pr:UserProfile
  pr:UPId         pr:person  pr:role  ...
  hhpr:HHProfile  hh:HH               ...

pr:RescueOperationRole
  pr:RORId           pr:RORoleType  pr:reportsTo  pr:responsibility  pr:isMemberOf  pr:hasUpdatePriority
  bba:OSCBergensb23  pr:OSC         ...           ...                ...            ...

pr:DeviceProfile
  pr:DPId         pr:deviceRole       pr:device  pr:currentUser  pr:owner
  pc:PC56Profile  pr:AmbulanceDevice  pc:PC56    hhpr:HHProfile  ...


Below we show example tables for the bookkeeping of the profiles and context elements active in the current operation. The contents of these tables constitute the total context for the ongoing operation, i.e., both static context (profiles) and dynamic context. We only show what is added in this phase. No dynamic context has been recorded yet, so only profiles have been added.

CurrentOperation
  COId      ...
  curOp234  ...

CurrentOperationProfiles
  COId      profile
  curOp234  bba:RSProfileBB23
  curOp234  hhpr:HHProfile
  curOp234  pc:PC56Profile

7.4.3 Phase 4 – Running Phase

We can group the numerous possible Phase 4 events relevant to the KM into three main event groups: personnel arrival/departure and registering movement on site (event group 1), updating changes in profiles (event group 2), and collecting, exchanging and distributing information (event group 3). All three involve updates of the KB. Below, each main event group is specified further, and examples for each group of events are given. The KM can make subscriptions via the DENS for the different events.

7.4.3.1 Event Group 1 – Personnel Arrival/Departure and Registering Movement On Site.

This group of events is further specified into the following four:

a. Nodes joining and leaving the network, and network partitions and merges.
b. New organisations and personnel arriving and leaving.
c. New groups (ad-hoc, task-oriented), possibly involving different organisations.
d. Updates of device location in dynamic context.

Of these, we have chosen to show examples for 1b (related to both 1a and 1c), showing the arrival of new personnel, and 1d, with an example of updating the dynamic context with a new device location. The main actions related to the KM are sending and receiving profiles on arrival/departure and changes in dynamic context (e.g., a new position on movement), and updating the KBs with these changes. In addition, the overview of profiles and contexts in the current operation is updated when necessary.

In this example another paramedic, Lars Lie (LL), from Voss Sjukehus (Voss Hospital), arrives at the site and joins the operation. Person information was added a priori (Phase 1). He will send profile information to the current OSC (and possibly the OiC for health) on arrival. The KM on the OSC node has received notification about the new node, and has sent a request to the KM on LL’s node to get the necessary profile information. The KM on LL’s node then sends the profile to the KM on the OSC node.

• Paramedic LL (from the same hospital as Hans Hansen) arrives at the scene. His user and device profiles are sent to the OSC.

• He is given a role and membership (as team leader) in a team immediately on arrival by the Officer in charge (OiC) for Health.


• A few minutes after arrival, LL’s device registers change of location (position), which is reported for dynamic context update. An update message is sent to the OSC.

At this point in time, the situation is as follows: a number of personnel have already arrived, and a number of roles (e.g., OiC health) have been filled, but this is not shown in the examples. Hans Hansen is the current OSC; as he is the team leader of his paramedic team, and as this team was the first to arrive at the scene, he took on the role of OSC until the police arrive. (This is not shown specifically in this example, but the information is included in the database tables.)

We assume all nodes keep an ontology of update priorities for all roles and some key information profiles (e.g., user and device profile messages sent on arrival, location updates, etc.), given in the a priori phase.

Event 1b: Sending Profile on Arrival to OSC (partly also 1a and 1c):

The procedure for sending a profile message on arrival is described in Chapter 6, Use Case (9). In the following, we give a brief description and an example of possible message content. The KM on the sender's node receives a prompt from the OSC node to send the necessary profile information. The KM requests the relevant information from the PCM and SMOF, as well as which node to send the message to (which in this case would be the node used by the person acting as OSC). The information is parsed, and the KM requests the underlying message system to send the message to the OSC node.

Example message (simplified message content showing only the profiles):

The message sent to the OSC may contain the following. If the namespace abbreviations for well-known standards and referenced ontologies (RDF, OWL, the profile ontology, etc.) are standardised for the operation, this may reduce the message volume somewhat, but it is important to ensure that the message content is expanded correctly before use.

<pr:UserProfile rdf:ID="&llpr;LLProfile">
  <pr:person rdf:resource="&ll;LL"/>
</pr:UserProfile>
<pr:DeviceProfile rdf:ID="&pc;PDA22Profile"/>
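To illustrate the expansion step mentioned above, the sketch below replaces operation-specific entity abbreviations with full namespace URIs before parsing; the entity table and URIs are invented placeholders, not the thesis's actual mapping:

```python
# Hypothetical table of standardised entity abbreviations for the
# operation; the URIs are illustrative placeholders.
ENTITIES = {
    "&llpr;": "http://example.org/profiles/ll#",
    "&ll;":   "http://example.org/persons/ll#",
}

def expand(message):
    """Expand abbreviations; longest first so '&llpr;' wins over '&ll;'."""
    for abbrev in sorted(ENTITIES, key=len, reverse=True):
        message = message.replace(abbrev, ENTITIES[abbrev])
    return message

expanded = expand('<pr:UserProfile rdf:ID="&llpr;LLProfile"/>')
```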

OSC Receiving Message with Profile – Update KB:

The KM on the receiving node will initiate the necessary procedures to update the KB, as described in Chapter 6, Use Case (8). The following is an example showing contents after the update of the KB.

Example of update of KB content, Event 1b:

The KB will be updated by adding the appropriate profiles (user or device) to the profiles for the current operation (in CurrentOperation). New teams and roles created during the operation are added to the rescue scenario profile for this accident, and added to the KB. The user profile and device profile for Lars Lie, sent to the OSC (and OiC health) on arrival:

<pr:UserProfile rdf:ID="&llpr;LLProfile">
  <pr:person rdf:resource="&ll;LL"/>
</pr:UserProfile>
<pr:DeviceProfile rdf:ID="&pc;PDA22Profile">
  <pr:device rdf:resource="&pc;PDA22"/>
  <pr:currentUser rdf:resource="&llpr;LLProfile"/>
</pr:DeviceProfile>


An instance is created for the new health team that Lars Lie will be team leader for:

<pr:Team rdf:ID="&bba;hlthTm4"/>

New instance of team leader (for health team ‘hlthTm4’), the role that Lars Lie will take:

<pr:TeamLeader rdf:ID="&bba;hlthTL1">
  <pr:memberOf rdf:resource="&bba;hlthTm4"/>
</pr:TeamLeader>

Updated Rescue Scenario Profile – roles covered by scenario with the new role:

<pr:RescueScenarioProfile rdf:ID="&bba;RSProfileBB23">
  ...
  <pr:roleCoveredByScenario rdf:resource="&bba;hlthTL1"/>
</pr:RescueScenarioProfile>

Information added to the User Profile for Lars Lie related to his role and team membership in this operation:

<pr:UserProfile rdf:ID="&llpr;LLProfile">
  <pr:role rdf:resource="&bba;hlthTL1"/>
</pr:UserProfile>

Event 1d: Update of Dynamic Context:

This example shows registration of the changed location of Lars Lie's (handheld) device, and demonstrates how the dynamic context of a device can be updated when the device location changes. After arrival, Lars Lie (LL) is given directions on team and task, and moves towards the team he joins. On registration of the change in device location, the KMs on nodes holding subscriptions for this kind of change, e.g., the OSC and the OiC for health, receive notifications from the DENS about the change, and request the update from LL's node. A message for the update of dynamic context is sent to the current OSC (and the OiC for health). In the following example, "sender" refers to Lars Lie, and "receiver" to the OSC.

The procedure for sending an update message was described in Chapter 6, Use Case (9). The KM is requested to send an update of dynamic context, and creates the message. The message is sent using the underlying message system, and added to the KB at the OSC node (receiver). Possible message contents include the (device) profile ID (pc:PDA22Profile), the registered time, and the new position, as shown in the example update of KB content for Event 1d below. In addition, for ontology based update, the message has a priority. The update message priority is derived by applying a rule to the retrieved priorities of a) the kind of information item this update concerns, b) the receiver role, and c) the sender role. In this example, the information item is of the type 'SiteStatus'. According to pr:StandardInfoProfile (see Appendix B), information items of type 'SiteStatus' have priority pr:Urgent. At the receiving end, this will be added to an existing instance of a DeviceContext, or a new instance will be created, and through the profile property connected to the correct DeviceProfile individual, which was added earlier.
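The priority derivation can be sketched as follows; the thesis states only that a rule combines the three retrieved priorities, so the concrete combination chosen here (the most urgent of the three wins) and the Normal defaults are assumptions:

```python
# Derive an update-message priority from a) the information-item kind,
# b) the receiver role, and c) the sender role. The "most urgent wins"
# rule is an illustrative assumption.
PRIORITY_ORDER = ["Low", "Normal", "Urgent", "Top"]

def derive_priority(item_priority, receiver_priority, sender_priority):
    return max((item_priority, receiver_priority, sender_priority),
               key=PRIORITY_ORDER.index)

# 'SiteStatus' items have priority Urgent in pr:StandardInfoProfile:
msg_priority = derive_priority("Urgent", "Normal", "Normal")
```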

After receiving the message, the KM on the OSC's device will parse the message and update the dynamic context of the current operation in the KB. The KB is updated by adding an instance of a DeviceContext to the contexts for the current operation (in CurrentOperation). The procedure for KB update is described in Chapter 6 (Use Case (8)). Below, we show an example of KB content after the update for this event.


Example of update of KB content, Event 1d:

Change of location for LL's device, a few minutes after Lars Lie's arrival: dynamic context update. A new DeviceContext with the ID event1:ctxUpd25 is added to the KB.

<cxt:DeviceContext rdf:ID="&event1;ctxUpd25">
  <cxt:profile rdf:resource="&pc;PDA22Profile"/>
  <cxt:time>10:43</cxt:time>
  <cxt:position>
    <cxt:Position rdf:ID="&event1;pos23x">
      <cxt:latitude>60n39</cxt:latitude>
      <cxt:longitude>6e26</cxt:longitude>
    </cxt:Position>
  </cxt:position>
</cxt:DeviceContext>
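As an illustration of the receiver-side parsing, the sketch below reads a simplified version of this update message (a plain id attribute and an invented namespace URI stand in for rdf:ID and the entity references) and extracts the values that end up in the context tables:

```python
import xml.etree.ElementTree as ET

CXT = "http://example.org/context#"  # assumed namespace URI
message = f"""
<cxt:DeviceContext xmlns:cxt="{CXT}" id="event1:ctxUpd25">
  <cxt:profile>pc:PDA22Profile</cxt:profile>
  <cxt:time>10:43</cxt:time>
  <cxt:position>
    <cxt:Position id="event1:pos23x">
      <cxt:latitude>60n39</cxt:latitude>
      <cxt:longitude>6e26</cxt:longitude>
    </cxt:Position>
  </cxt:position>
</cxt:DeviceContext>
"""

root = ET.fromstring(message)
row = {
    "cxt:CPId":      root.get("id"),
    "cxt:profile":   root.findtext(f"{{{CXT}}}profile"),
    "cxt:time":      root.findtext(f"{{{CXT}}}time"),
    "cxt:latitude":  root.findtext(f".//{{{CXT}}}latitude"),
    "cxt:longitude": root.findtext(f".//{{{CXT}}}longitude"),
}
```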

Resulting database tables for Events 1b and 1d: Note that information added before Event 1 is shown in grey. Only tables relevant to Event 1 are shown in this example. Person information for Lars Lie and Hans Hansen was already added in Phase 1; thus that table is not shown. Hans Hansen is the current OSC for this accident, and his profile was added prior to Event 1. The profile for Lars Lie (llpr:LLProfile) is added in this event:

pr:UserProfile
  pr:UPId         pr:person  pr:role            ...
  hhpr:HHProfile  hh:HH      bba:OSCBergensb23
  llpr:LLProfile  ll:LL      bba:hlthTL1

A new instance of a team (bba:hlthTm4) and team leader (bba:hlthTL1) has been created. (An instance of OSC for this accident was added in Phase 2, at the initiation of the operation.)

pr:RescueOperationRole
  pr:RORId           pr:RORoleType  pr:reportsTo  pr:responsibility  pr:isMemberOf  pr:hasUpdatePriority
  bba:OSCBergensb23  pr:OSC         ...           ...                ...            ...
  bba:hlthTL1        pr:TeamLeader  ...           ...                bba:hlthTm4    ...

The team is added to the teams in this rescue operation:

pr:Team
  pr:TId       ...
  bba:hlthTm4  ...

The new role is added to the rescue scenario profile:

pr:RescueScenarioProfile
  pr:RSPId           pr:rescueOperation  pr:roleCoveredByScenario  pr:infoProfileForScenario
  bba:RSProfileBB23  bba:ROperationBB23  bba:OSCBergensb23         ...
  bba:RSProfileBB23  bba:ROperationBB23  bba:hlthTL1               ...

The device profile for Lars Lie's device is added:

pr:DeviceProfile
  pr:DPId          pr:deviceRole       pr:device  pr:currentUser  pr:owner
  pc:PC56Profile   pr:AmbulanceDevice  pc:PC56    hhpr:HHProfile  ...
  pc:PDA22Profile  ...                 pc:PDA22   llpr:LLProfile  ...


This event is added to the KB:

cxt:Context
  cxt:CPId         cxt:CxtType        cxt:profile      cxt:time  ...
  event1:ctxUpd25  cxt:DeviceContext  pc:PDA22Profile  10:43

The device context related to Lars Lie's profile is added to the context:

cxt:DeviceContext
  cxt:CPId         cxt:Position   ...
  event1:ctxUpd25  event1:pos23x

The new position is added:

cxt:Position
  cxt:PId        cxt:longitude  cxt:latitude
  event1:pos23x  6e26           60n39

Current Operation tables for Events 1b and 1d:

Event 1b: update after receiving the message of new personnel arrival (profile). Lars Lie has arrived at the scene; his user profile (llpr:LLProfile) and device profile (pc:PDA22Profile), sent to the OSC, are added to the CurrentOperationProfiles table.

CurrentOperationProfiles
  COId      profile
  curOp234  hhpr:HHProfile
  curOp234  bba:RSProfileBB23
  curOp234  pc:PC56Profile
  curOp234  llpr:LLProfile
  curOp234  pc:PDA22Profile

Event 1d: Adding the dynamic context update in table CurrentOperationContexts, i.e., adding the ID for a new instance of DeviceContext:

CurrentOperationContexts
  COId      context
  curOp234  event1:ctxUpd25

7.4.3.2 Event Group 2 – Update Changes in Profiles

Examples of changes in profiles are:

a. Change of roles, e.g., when a higher-ranking police officer arrives and takes over the role of OSC.

b. Change of device. There may be cases where a user will change his/her current device, e.g., using a special device for a particular task, or changing between a small handheld device and a laptop.

This group of events involves sending and receiving update messages of changes in the appropriate profiles, and updating the KB. We show an example of possible content of an update message sent after a user has changed his role, together with the corresponding update of the KB.

Example of Update in Knowledge Base: Changing the Role in the UserProfile for Hans Hansen. Hans Hansen is a paramedic from Voss Sjukehus (Voss Hospital). Voss Sjukehus is responsible for emergencies in the geographical area of the railway scenario. In the train accident, his team arrived first at the scene, and he assumed the role of (temporary) OSC until the arrival of the police (the chief of police is commonly appointed OSC). In Phase 3 (or the start of Phase 4), as the first team arrived and the network was bootstrapped, the information about him assuming the OSC role was added to the KB (this is not shown in the example), and this information was sent as an update to personnel. At this point in time, the police have arrived, and Hans Hansen has changed his role from rescue site leader to team member. His profile description is available on the PC and the mobile phone. After the change, the profile updates have been sent to the current OSC, the medical officers in charge (health OiC), and the team leader of his new team. This example shows the change to the user profile of Hans Hansen.

The role Hans Hansen now takes is as a team member of an ad-hoc team created during this operation (Lars Lie is the team leader of this team, see Event 1). This team is not the same as his paramedic team, where he is still the team leader. To simplify the example, we have chosen to keep these separate. In addition, the examples can only show small fragments of the total amount of information about the rescue operation. The user profile, person information, and device profile for Hans Hansen were registered in the KB in Phase 2; therefore we only show the changes in the example. Example of possible message content of a role change sent to the OSC:

<pr:UserProfile rdf:ID="&hhpr;HHProfile">
  <pr:role rdf:resource="&bba;hlthTmM34"/>
</pr:UserProfile>

On arriving at the scene (before Event 1 and Event 2), Hans Hansen assumed the role of (temporary) OSC as he was first on scene. The role description, shown below, was added to his user profile at that point in time.

<pr:UserProfile rdf:ID="&hhpr;HHProfile">
  ...
  <pr:role>bba:OSCBergensb23</pr:role>
</pr:UserProfile>

In the meantime, other personnel have arrived, and many roles are filled. These are not shown in this example. Now, some time later, the police have arrived and a police officer has taken over the role as OSC. Thus, Hans Hansen is given a new rescue operation role. First, a new instance of a health team member (bba:hlthTmM34) in health team 4 (bba:hlthTm4) is added to the rescue scenario profile for the ongoing operation (bba), and then Hans Hansen’s user profile is changed. The replaced (old) value is shown as an XML comment.

<pr:TeamMember rdf:ID="&bba;hlthTmM34">
  <pr:isMemberOf rdf:about="&bba;hlthTm4"/>
</pr:TeamMember>

<pr:UserProfile rdf:ID="&hhpr;HHProfile">
  ...
  <!-- old value: bba:OSCBergensb23 -->
  <pr:role>bba:hlthTmM34</pr:role>
</pr:UserProfile>

The new role is added to the rescue scenario profile for the ongoing operation:

<pr:RescueScenarioProfile rdf:ID="&bba;RSProfileBB23">
  ...
  <pr:roleCoveredByScenario rdf:resource="&bba;hlthTmM34"/>
</pr:RescueScenarioProfile>

Resulting Database Tables for Event 2: The team was added in Event 1, and person information was added in Phase 1, so we do not show these tables again. We only show tables that were changed in Event 2.


Hans Hansen (first on site) took on the role of OSC, but now (in Event 2) his role is changed to team member of health team 4 (hlthTm4). The old value is shown in parentheses.

pr:UserProfile
| pr:UPId | pr:person | pr:role | ... |
|---|---|---|---|
| hhpr:HHProfile | hh:HH | bba:hlthTmM34 (was bba:OSCBergensb23) | |

The rescue operation role of OSC for the current accident was added in Phase 2, and the role for team leader was added in Event 1:

pr:RescueOperationRole
| pr:RORId | pr:RORoleType | pr:reportsTo | pr:responsibility | pr:isMemberOf | pr:hasUpdatePriority |
|---|---|---|---|---|---|
| bba:OSCBergensb23 | pr:OSC | ... | ... | ... | ... |
| bba:hlthTL1 | pr:TeamLeader | ... | ... | bba:hlthTm4 | ... |
| bba:hlthTmM34 | pr:TeamMember | ... | ... | bba:hlthTm4 | ... |

The new role is added to the rescue scenario profile:

pr:RescueScenarioProfile
| pr:RSPId | pr:rescueOperation | pr:roleCoveredByScenario | pr:infoProfileForScenario | pr:incident |
|---|---|---|---|---|
| bba:RSProfileBB23 | bba:ROperationBB23 | bba:OSCBergensb23 | ... | bba:Bergensbanen23 |
| bba:RSProfileBB23 | bba:ROperationBB23 | bba:hlthTL1 | ... | bba:Bergensbanen23 |
| bba:RSProfileBB23 | bba:ROperationBB23 | bba:hlthTmM34 | ... | bba:Bergensbanen23 |

7.4.3.3 Event Group 3 – Information is Collected, Exchanged and Distributed

We differentiate between domain-specific information and information related to the rescue operation itself (organisation and management of the rescue operation):

a. Domain-specific information, e.g., patient records and medical status updates (medical), passenger lists and technical data for a forensic report (police), and information about explosives and maps of carriages and tunnel construction (fire brigade).

b. Information related to the rescue operation itself, e.g., maps of the area, weather reports, updates on new resources, and overviews of active personnel, teams and tasks.

In this example, we focus on using the KM for sharing information. It is suitable as an example of information for both a) and b). A police officer (Node_p1) gathering evidence has taken a photograph (tunnel_2) of the train carriages lying inside the tunnel. The photograph is shared across the network (available to users with access to this kind of information), and an appropriate application is used for this. The application adds a metadata description and registers the photograph with the KM:

1) Calls getKBInfo(), offered in the KM Interface, to get the necessary vocabulary and metadata model for this kind of information item.

2) Offers these as a list of alternatives to the user in a simple interface (e.g., drop-down lists, boxes and radio buttons) to choose from. Some metadata is added automatically, e.g., the type of information item (photograph), resolution/pixels, size, time, file name and location, etc.


3) Calls registerItem() offered by the KM to register the metadata for the information item.
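The three application-side steps above can be sketched in Java. This is a conceptual sketch only: the KM interface, the stub implementation, and all field names are assumptions for illustration, not the thesis implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RegisterPhotoSketch {

    // Hypothetical stand-in for the KM Interface (method names assumed).
    interface KM {
        String getKBInfo(String itemType);                 // step 1
        String registerItem(Map<String, String> metadata); // step 3
    }

    // Stub that echoes just enough back to demonstrate the flow.
    static class StubKM implements KM {
        public String getKBInfo(String itemType) {
            return "vocabulary+model for " + itemType;
        }
        public String registerItem(Map<String, String> metadata) {
            return "reg-" + metadata.get("fileName");
        }
    }

    public static void main(String[] args) {
        KM km = new StubKM();

        // Step 1: fetch the vocabulary and metadata model for this item type.
        System.out.println(km.getKBInfo("photograph"));

        // Step 2: user-chosen terms combined with automatically added metadata.
        Map<String, String> metadata = new LinkedHashMap<>();
        metadata.put("itemType", "photograph");                  // automatic
        metadata.put("fileName", "tunnel_2");                    // automatic
        metadata.put("keyConcepts", "tunnel,carriage,evidence"); // user-selected

        // Step 3: register the metadata description with the KM.
        System.out.println(km.registerItem(metadata)); // prints "reg-tunnel_2"
    }
}
```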

The KM on the node used by the OiC for the fire brigade (Node_f1) subscribes to this kind of information item (Type 4 subscription), and will be notified by the DENS of this event. The node can then request the data (tunnel_2) directly from Node_p1. Node_f1 sends the request to Node_p1. In the following, we assume that requesting and delivering data is handled outside of the KM, and that the KM is only used for finding information about the requested data item, i.e., metadata. On Node_p1, the system:

1) Calls KM findItem() to get the location of the metadata in the LDD.
2) Calls KM getRegItem() to get the location in local storage.
3) Receives the data from local storage.
4) Sends the data to the requesting node (Node_f1).

All other nodes not subscribing to this particular kind of information will learn about the existence of a new photograph through the usual exchange of SDDDs, and they can then query the KM locally (call findItem()) for this kind of information before issuing queries to other nodes in the network. Some of the nodes may have a Type 3 (local) subscription to the local SDDD for, e.g., recent photographs taken on site, and will be notified by the DENS when the SDDD has been updated after receiving this information through an SDDD exchange.
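The subscribe-and-notify pattern described above can be illustrated with a minimal sketch, in which a callback stands in for the DENS notification; all names are illustrative assumptions, not the thesis code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class SubscriptionSketch {
    // Item type -> callbacks of subscribed nodes (a stand-in for DENS state).
    static final Map<String, List<Consumer<String>>> subs = new HashMap<>();

    static void subscribe(String itemType, Consumer<String> callback) {
        subs.computeIfAbsent(itemType, t -> new ArrayList<>()).add(callback);
    }

    // Registering a new item triggers notification of matching subscribers.
    static void registerItem(String itemType, String itemId) {
        for (Consumer<String> cb : subs.getOrDefault(itemType, List.of())) {
            cb.accept(itemId);
        }
    }

    public static void main(String[] args) {
        // Node_f1 holds a Type 4 subscription to photographs.
        subscribe("photograph", id -> System.out.println("Node_f1 notified of " + id));
        registerItem("photograph", "tunnel_2"); // prints "Node_f1 notified of tunnel_2"
    }
}
```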

7.5 Summary

In this chapter, we have shown a realisation of how the KM would operate during a real rescue operation scenario. The presented examples build on the railway accident scenario and rescue scenario phases presented in Chapter 2, the design of the KM and DDM presented in Chapters 4 and 5, and the example rescue ontology and approach to ontology-based dynamic updates presented in Chapter 6. The railway scenario is analysed and split into six phases, and relevant examples for each phase are described to demonstrate various aspects of the KM, showing example KB contents and example message contents for updates on changes in role, position, and arrival of personnel.


Chapter 8 Proof-of-Concept Implementation

In this chapter, an example implementation of the KM is presented. We describe the implementation environment and give some details about what has been implemented. The implementation has mainly focused on the DDM, i.e., managing the data dictionaries. Our need for a DBMS is to implement a data dictionary to demonstrate the metadata management functionality of the DDM. Thus, for a simple prototype of the DDM, we decided to build on an existing DBMS, stripping away everything save what is necessary for managing data dictionaries.

A data dictionary is a database containing information about the data stored in the database, i.e., it contains metadata. The terms ‘data directory’ and ‘catalogue’ are sometimes used with the same meaning as data dictionary [Elmasri2004]. The contents include structure, operations, constraints, localisation, and content descriptions of the stored data [Özsu1999]. The data dictionary is also a major component for mapping between different schemas, and thus it should contain the necessary mapping definitions. In distributed systems, it also contains information regarding fragment placement and, if replicated, the number and placement of copies.

A directory can either be global for the entire database, or organised as a hierarchy of local directories; the latter requires the implementation of some distributed search strategy. A global directory is only relevant for a fully distributed DBMS (or an MDBS) using a global conceptual schema. If a global directory exists without such a global schema, the directory exists as an extension of the traditional notion of dictionary/directory. We use the term dictionary in this extended sense, as there is no global conceptual schema; instead, interoperability is handled through adherence to standards and the utilisation of ontologies and vocabularies (ref. Chapters 4 and 5). A local data dictionary describes data items on one site, while a global data dictionary describes the data for the distributed database as a whole. In our approach, in the DDM, the LDD corresponds to the local data dictionary, as it contains metadata descriptions of shared items stored locally on a node, while the SDDD resembles a global dictionary, as it contains (extracts of) metadata from the whole network (ref. Chapter 5).
The dictionary content is influenced by fragmentation and replication, and the placement of the directory, as well as its contents, may have an effect on query processing. Reliability techniques use data placement, e.g., through having multiple copies, to provide some reliability and availability of the dictionary, thus supporting fault tolerance. A fully replicated directory (multiple copies) gives lower access delays and is more reliable in case of failure, but it raises issues of consistency after changes, makes keeping the directory up to date more difficult, and incurs a high storage overhead.
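The LDD/SDDD split described above can be illustrated with plain in-memory structures. This is only a conceptual sketch (the actual DDM stores these as Derby tables), and the field names are assumptions:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class DictionarySketch {
    // LDD: full metadata descriptions for items stored locally on this node.
    static final Map<String, Map<String, String>> ldd = new HashMap<>();
    // SDDD: network-wide extract, indexing concept terms to node/item IDs.
    static final Map<String, Set<String>> sddd = new HashMap<>();

    // Registering an item locally also feeds the SDDD extract.
    static void register(String itemId, String nodeId, Map<String, String> meta) {
        ldd.put(itemId, meta);
        for (String concept : meta.get("keyConcepts").split(",")) {
            sddd.computeIfAbsent(concept, c -> new TreeSet<>())
                .add(nodeId + "/" + itemId);
        }
    }

    public static void main(String[] args) {
        register("tunnel_2", "Node_p1",
                 Map.of("keyConcepts", "tunnel,carriage",
                        "resourceName", "tunnel_2.jpg"));
        // The SDDD answers "who has items about 'tunnel'?" without full metadata.
        System.out.println(sddd.get("tunnel")); // prints "[Node_p1/tunnel_2]"
    }
}
```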

In the following, we describe the implementation design and which KM functionality has been implemented. To test our ideas, we chose to build on Apache Derby [Derby], an open-source, small-footprint relational DBMS written in Java. As we had to add code internally to Derby to manage the DDM tables, some code changes to Derby are also described.

Due to the scope of this thesis, as well as time limits, it has not been feasible to implement the whole KM; thus only the parts of the design necessary to demonstrate the most central functionality have been implemented. The following components have been (some only partly) implemented: KM Engine (fraction), SMOF (fraction), and DDM (full). The remaining KM components are less central to demonstrating the DDM, and existing components/modules can be used to fulfil their functionality. Thus, the QM, XML-P and PCM components have not been implemented. The environment for our implementation is described in the following.

Implementation Platform

As the focus of this example implementation is on the KM, and particularly on the DDM, the environment outside of the KM has been simulated through small test applications developed for this purpose. The tool used for implementing the KM is Eclipse SDK 3.1.0 [Eclipse], an open-source development platform, running in a Windows XP environment. The code is written in Java 2 SDK Standard Edition, version 1.4.2 [JSE]. The version of Apache Derby we have used is 10.1, release 10.1.1.0 (August 2005) [Derby10].

Choice of DBMS

The factors we considered when choosing a DBMS to build on include it having a data dictionary/directory, code availability, being lightweight and able to run on several platforms, and being standards based. Ideally, we would have liked to use a distributed DBMS for MANETs, but we have not found any suitable existing systems that are available. As we really only need a data dictionary to demonstrate the DDM functionality, and it should be a part of the DDM, an embedded environment is appropriate, which may make it easier to simulate the missing parts in the application where it is embedded. Embedded DBMSs integrate DBMS functionality directly into the application program. Several open-source embedded DBMSs exist; an overview is given by Höpfner and Levin in [Höpfner2007].

An important aspect of Derby is that it fully supports the SQL:99 standard, as well as parts of SQL:2003. The Derby DBMS can run on different platforms and in different environments, in embedded mode or in client/server mode. It has a small memory footprint, about 2 MB for the base engine and the embedded JDBC driver. In our case, it will be part of a DDM module for testing. Building on (parts of) an existing DBMS saves time and allows us to focus on our metadata management solution. Our requirements for a DBMS, and how Derby (version 10.1) fulfils these, are shown in Table 8.1.

Table 8.1 How Derby fulfils our wish list for a DBMS.

| Wish list | Derby v. 10.1 |
|---|---|
| Code available | Y (open source) |
| Standards based | Y (SQL, Java and JDBC standards) |
| Small footprint/lightweight | Y (about 2 MB) |
| Run on any platform | Y |
| Run on PDA/mobile phone | N |
| Data dictionary / catalogue | Y |
| XML (XML/SQL) | N (on to-do list) |
| User Defined Types | N (on to-do list) |
| Query | Y (not distributed) |
| Distributed | N |
| - replication | N |
| - failure tolerant | N |
| Recovery | Y (not distributed) |
| Transaction management | Y (not distributed) |
| Handle some dynamics related to recovery | N |

As can be seen, Derby does not quite fulfil all the requirements we have for a suitable DBMS for developing the DDM. Derby has designated data dictionary classes, which we can expand to add the LDD and SDDD tables. Although Derby does not specifically handle XML (XML/SQL), it does have the CLOB data type, which means that large XML structures may be stored as CLOBs and then handled as XML outside of the database, e.g., by using XQuery to query the XML structure after pulling it from the database. Services from the XML-P would then be needed.

In our case, with Derby as part of the DDM, there will only be one instance of Derby accessing the database used in the DDM. Our main use for the Derby DBMS is for the DDM dictionary tables (SDDD and LDD) and the list of previous DDM exchanges (the exchange table), to demonstrate the DDM functionality. The Derby data dictionary consists of a set of system tables, so the DDM tables have to be added to the system tables to be included in the Derby data dictionary. As the DDM will exist on all nodes, and each DDM will have its own data dictionary and be the sole user of Derby in this case, we found that using Derby in an embedded environment is the best option for a test implementation of the DDM. This way, there are no demands for administration, and Derby runs in the same JVM as the DDM. In this test implementation of the DDM, we did not need to store metadata descriptions as XML, and thus we did not use the possibility of storing XML structures as CLOBs.

Derby DBMS

The Derby DBMS [Derby] is a pure-Java relational database engine using standard SQL and JDBC as its APIs. It consists of system-wide properties, an error log, and one or more databases. Each database is kept in a separate subdirectory. The data dictionary consists of a set of system tables. If connecting to a database outside of the current system, this database automatically becomes part of the current system. Two separate instances of Derby must not access the same database. Figure 8.1, taken from the Derby Developer’s Guide [DerbyGuide], illustrates the Derby system.


Figure 8.1 Derby System from [DerbyGuide].

Derby can run in different environments: embedded and client/server. The type of environment will affect the classpath, driver name, etc. In an embedded environment, an application starts up an instance of Derby within its Java Virtual Machine (JVM), and no network access occurs. Only a single application can access the database at any one time. Derby is started by loading the embedded driver. Embedded in a single-user Java application, Derby can be practically invisible to the user, since it requires no administration and runs in the same JVM as the application. In a client/server environment, Derby runs embedded in a server framework, while client applications (outside of the embedded environment) connect to Derby via the network. In an embedded environment, each copy of the application will have its own copy of the database and Derby software. In contrast, a Derby server framework can work in multi-threaded, multi-connection mode and can connect to more than one database at a time.

The JDBC API is part of the Java 2 Platform, Standard Edition, and consists of the java.sql and javax.sql packages with a set of classes and interfaces enabling database access from a Java application. A database connection is established by the application by obtaining a Connection object, typically by calling the method DriverManager.getConnection() with a String containing a connection URL identifying a database. The JDBC connection URL also allows certain high level tasks, e.g., creating a database or shutting down the system. The URL will differ with the environment (embedded or client/server) that Derby is used in, but the name of the database to connect to as well as some attributes and values for certain tasks, are allowed in all versions of the URL.

In an embedded environment, when an application shuts down, it should first shut down Derby. If the application that started the embedded Derby quits but leaves the JVM running, Derby continues to run and is available for database connections. Database limits in the version of Derby we use include no support for indexes on columns defined with the CLOB, BLOB, and LONG VARCHAR data types, and the system shuts down if the database log cannot allocate more disk space. Derby adheres to the SQL:99 standard wherever possible, but in the version we used there are some non-standard/atypical features, e.g., in relation to dynamic SQL, cursors, and the information schema. Details can be found in the Derby Developer’s Guide [DerbyGuide].
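For reference, connecting to and shutting down an embedded Derby database via JDBC looks roughly as follows. The URL attributes follow the Derby documentation; the database name ddmDB is an assumption, and the driver load is guarded so the sketch degrades gracefully when Derby is not on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DerbyLifecycleSketch {
    // ";create=true" creates the database on first connect (database name assumed).
    static final String CONNECT_URL  = "jdbc:derby:ddmDB;create=true";
    // Shutting down the whole embedded system; Derby signals a successful
    // system shutdown with an SQLException carrying SQLState "XJ015".
    static final String SHUTDOWN_URL = "jdbc:derby:;shutdown=true";

    public static void main(String[] args) {
        try {
            // In Derby 10.1, the embedded driver had to be loaded explicitly.
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            try (Connection c = DriverManager.getConnection(CONNECT_URL)) {
                System.out.println("connected: " + !c.isClosed());
            }
            try {
                DriverManager.getConnection(SHUTDOWN_URL);
            } catch (SQLException e) {
                System.out.println("shutdown, SQLState " + e.getSQLState());
            }
        } catch (ClassNotFoundException | SQLException e) {
            // Derby is not on the classpath in this sketch environment.
            System.out.println("Derby not available: " + e.getMessage());
        }
    }
}
```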


8.1 Overview of Classes

The classes are organised in two packages in addition to the package for Derby (see Figure 8.2).

• Package km: Contains the KM classes (including SMOF). Uses package ddmgr.
• Package ddmgr: Contains the DDM classes. Uses package derby.
• Package derby: The (modified) Derby DBMS module.

Figure 8.2 Packages overview.

The data dictionaries in Derby are system tables. This means that the LDD and SDDD tables, as well as the DDM exchange table of previous exchanges, have to be added to the Derby system tables. Thus, functionality for managing the DDM tables has to be added to the Derby system procedures. As Derby is embedded in the DDM application, the DDM is responsible for booting and shutting down Derby. In Figure 8.3, an overview of the KM and DDM classes is given.

Figure 8.3 KM and DDM classes overview.

In the following, we briefly describe the classes in packages km and ddmgr, and then show how they correspond to the KM and DDM design. Helper classes not relevant to the design, e.g., for writing logs, have been omitted.


Figure 8.4 Classes in package km.

Package km (Figure 8.4):

• Interface KM: the interface offered by the KM.
• Implementation class KMImpl: implements the KM. Only partially implemented.
• Class SMOF: implements some functionality related to thesaurus lookup.

Package ddmgr (see Figure 8.5):

• Interface DDM: interface to the DDM class.
• Implementation class DDMImpl: implements interface DDM. Management of the DDM data dictionary tables. Has a JDBC connection to the database. Statements created on this connection are also used by classes SDDD and LDD.
• Class SDDD: class for accessing the SDDD tables. Basically a front class towards the database, creating queries (e.g., with system procedure calls) for the SDDD tables.
• Class LDD: class for accessing the LDD tables. An LDD equivalent of class SDDD above.
• Class ConceptLink: class holding an SDDD semantic link {concept, metaResID}, including link level, availability, and time of last update.
• Class InfoItem: class holding a metadata description (registration in the LDD), including time of last update.


Figure 8.5 Classes in package ddmgr.

The names of the implemented classes differ somewhat from the design presented in Chapters 4 and 5. The implementation classes correspond to the design of KM and DDM as shown in Table 8.2 below.

Table 8.2 Correspondence between implementation and design.

| Package | Implementation | Design |
|---|---|---|
| km | Interface KM | KM Interface |
| km | Implementation class KMImpl | KM Engine |
| km | Class SMOF | SMOF |
| ddmgr | Interface DDM | no correspondence |
| ddmgr | Implementation class DDMImpl | DDM |
| ddmgr | Class SDDD | no correspondence |
| ddmgr | Class LDD | no correspondence |
| ddmgr | Class ConceptLink | no correspondence |
| ddmgr | Class InfoItem | no correspondence |


8.2 Database Schema

The DDM data dictionary database schemas used in this implementation, i.e., added to the Derby system tables, are simple schemas sufficient for demonstrating the DDM. The schema for the LDD deviates somewhat from the schema used in the examples in Chapter 5. In the LDD and SDDD schemas shown below, ‘itemuuid’, ‘linking_ID’, ‘availability_ID’ and ‘exchange_ID’ are identifiers used in the Derby tables. In addition, though not shown here, all Derby system tables also have a column for the column count. Note that the attribute “timeLastUpdate” was called “timestamp” in the examples in Chapter 5. The following schemas were implemented in Derby for the DDM tables. Note that all schema names and attributes are in upper-case letters in the implemented code, but in the listing below we have chosen (for readability) not to use upper case.

LDD_InfoItem(itemuuid, infoItemID, keyConcepts, resourceName, resourceLocation, timeLastUpdate)

SDDD_Linking(linking_ID, conceptID, metaResID, timeLastUpdate)
SDDD_Availability(availability_ID, metaResID, timeLastUpdate)
DDM_Exchange(exchange_ID, nodeID, timeLastExchange)
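As an illustration, one row of the LDD_InfoItem table could be mirrored by a plain Java class like the following. The field names follow the schema above, while the Java types chosen (UUID, Instant, List) are assumptions for the sketch.

```java
import java.time.Instant;
import java.util.List;
import java.util.UUID;

// Plain-Java mirror of one LDD_InfoItem row (types assumed for illustration).
public class LddInfoItemRow {
    final UUID itemuuid = UUID.randomUUID(); // Derby-internal row identifier
    final String infoItemID;                 // local registration/entry ID
    final List<String> keyConcepts;          // concept terms feeding SDDD links
    final String resourceName;
    final String resourceLocation;
    Instant timeLastUpdate = Instant.now();

    LddInfoItemRow(String infoItemID, List<String> keyConcepts,
                   String resourceName, String resourceLocation) {
        this.infoItemID = infoItemID;
        this.keyConcepts = keyConcepts;
        this.resourceName = resourceName;
        this.resourceLocation = resourceLocation;
    }

    public static void main(String[] args) {
        LddInfoItemRow row = new LddInfoItemRow(
                "tunnel_2", List.of("tunnel", "carriage"),
                "tunnel_2.jpg", "/photos/tunnel_2.jpg");
        System.out.println(row.infoItemID + " @ " + row.resourceLocation);
        // prints "tunnel_2 @ /photos/tunnel_2.jpg"
    }
}
```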

8.3 Modifications to Derby Code

The following classes were added to the core Derby code. These classes are necessary for incorporating the DDM tables in the Derby system tables (i.e., the Derby data dictionary). See Appendix A for more details about modifications and additions to Derby.

• In package derby.iapi.sql.dictionary:
  o Every system table has a descriptor class; thus, for the DDM tables the following classes were added:
    Class SDDD_AVAILABILITY: AvailabilityDescriptor
    Class SDDD_LINKING: LinkingDescriptor
    Class LDD_INFOITEM: InfoItemDescriptor
    Class DDM_EXCHANGE: ExchangeDescriptor
  o Interface class internally in Derby for managing all DDM tables:
    Interface LDD_SDDD (includes the DDM exchange table)
• In package derby.impl.sql.catalog:
  o Each system table needs a ”row factory” class to create new rows in the table:
    Class DDM_EXCHANGE_RowFactory
    Class LDD_INFOITEM_RowFactory
    Class SDDD_LINKING_RowFactory
    Class SDDD_AVAILABILITY_RowFactory
  o Implementation class LDD_SDDDImpl: implementation of the LDD_SDDD interface; the functionality related to managing the DDM tables (including the exchange table).

In addition, necessary changes were made in the code of several Derby classes, e.g., in the class ‘SystemProcedures’, which contains some built-in system procedures and help routines. In class SystemProcedures, the procedures that the DDM uses for accessing the DDM tables were added. These are called in SQL queries from the DDM, as described below.


Three system schemas were added for system procedures related to the DDM: LDDPROC for LDD-related procedures, SDDDPROC for SDDD-related procedures, and DDMPROC for other DDM procedures. In retrospect, one system schema for DDM procedures would have been sufficient for this implementation. The following system procedures were added to the Derby class SystemProcedures for access through query procedure calls (the procedures are listed under the related system schema):

• In LDDPROC:
  − ADD_LDDITEM – add a new entry to the LDD in LDD_INFOITEM.
• In SDDDPROC:
  − ADD_LINK – add an SDDD entry (link) to SDDD_LINKING.
  − ADD_AVAILAB – add a new availability entry to SDDD_AVAILABILITY.
• In DDMPROC:
  − ADD_EXCHANGE – add a nodeID with the time of the last exchange to DDM_EXCHANGE.

In the DDM, the system procedure call is added to a query, and a query execution or update is called using the JDBC API (e.g., Statement.execute()), for instance:

s.execute("CALL SDDDPROC.ADD_LINK('" + conceptID + "','" + metaResID + "')")

In Derby, the query invokes a call of the corresponding function in class SystemProcedures, which calls the appropriate function implemented in class LDD_SDDDImpl. For example, the query above invokes SystemProcedures.ADD_LINK(conceptID, metaResID), which in turn calls LDD_SDDDImpl.addLink(conceptID, metaResID).
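This dispatch chain can be mocked in a few lines of plain Java; the class and method names mirror the description above, but the bodies are illustrative assumptions only.

```java
import java.util.ArrayList;
import java.util.List;

// Mock of the call chain: an SQL "CALL SDDDPROC.ADD_LINK(...)" ends up in a
// static SystemProcedures-style method, which delegates to the dictionary
// implementation class.
public class ProcedureDispatchSketch {

    static class LddSdddImpl {
        final List<String> links = new ArrayList<>();
        void addLink(String conceptID, String metaResID) {
            links.add(conceptID + "->" + metaResID);
        }
    }

    static final LddSdddImpl DICT = new LddSdddImpl();

    // Stand-in for SystemProcedures.ADD_LINK in the modified Derby code.
    static void ADD_LINK(String conceptID, String metaResID) {
        DICT.addLink(conceptID, metaResID);
    }

    public static void main(String[] args) {
        ADD_LINK("tunnel", "Node_p1");
        System.out.println(DICT.links); // prints "[tunnel->Node_p1]"
    }
}
```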

As the DDM tables are part of the Derby system tables, the schemas for the DDM tables and procedures have to be added to the Derby system schemas. The DDM tables are built into the system tables’ schema (the SYSIBM schema, which is the IBM DB2 system table schema). The major changes related to adding the DDM tables to the Derby system tables are in the code of the data dictionary class ‘DataDictionaryImpl’, which implements the management of the system tables (the data dictionary tables) in Derby, e.g., code for the creation of the DDM tables at boot time. In addition, the necessary schemas, schema descriptors, etc., have to be added to the Derby system, and the DDM tables added to the system catalogue. The system tables in Derby cannot be altered, only queried, at run-time [Derby]. To allow changes at run-time, the code has been modified so that the DDM tables can be modified at run-time.

8.4 Implemented Functionality

The functionality implemented to demonstrate the KM and DDM is mainly what is necessary for managing and querying the data dictionaries, and for metadata exchange. In addition to procedures for boot/start-up and shutdown, the following has been implemented for the DDM: register/update/delete metadata item in the LDD, update the SDDD, LDD lookup, SDDD query, and metadata exchange. For SMOF, a very simple solution to demonstrate thesaurus lookup has been implemented; it consists of looking up synonyms in a synonym file. The KM Interface and KM Engine are partly implemented, focusing on DDM functionality and vocabulary match (SMOF). Exception handling is not shown in pseudocode or diagrams. We have omitted pseudocode of functionality that is not directly related to services offered by the KM (Chapter 4) or the DDM (Chapter 5), e.g., handling connections to the database. The function getKBInfo() (Chapter 6) for querying the knowledge base has not been implemented.

When the term ‘conceptID’ is used in the code, it means ‘concept term’, i.e., a term (from a controlled vocabulary/ontology) used in metadata descriptions and extracted to form SDDD links. The terms ‘metaResID’ and ‘infoItemID’ are used interchangeably; both refer to metadata resource IDs, which are either a local LDD registration/entry ID or a ‘nodeID’. The term ‘nodeID’ denotes a unique identifier for a network node in our application scenario, assigned prior to the rescue operation.

8.4.1 Sequence Diagrams

In the following high-level sequence diagrams, we show functionality related to sharing information that is offered in the KM Interface, i.e., registering metadata in the LDD, searching the SDDD, LDD lookup, and vocabulary match/lookup. As the focus is on illustrating functionality implemented in the KM and DDM, we skip details related to the database connection (JDBC API), and only show the call to execute the query in the Derby DBMS. We do not extend the sequence diagrams to calls inside Derby. To further simplify the figures, we have omitted activation (method-invocation) boxes, calls to get and set values in objects (e.g., getMetaResID() in class ConceptLink), and all testing, e.g., to see if an item to be registered already exists in the database. In addition, we have chosen to show only the returns where values are returned. The relevant functions in each class are explained in the next section.

Register new item in LDD (Figure 8.6): An application calls registerItem(), with a metadata description, to the KM (an instance of KMImpl) through the KM Interface. The KM Engine creates a new InfoItem from the metadata and calls addInfoItem() in the DDM (an instance of DDMImpl). To store the registration in the data dictionary, the DDM calls addInfoItem() in class LDD, which creates, sends and executes the query to the database. The DDM then creates a new ConceptLink for each of the concept terms in the metadata description, and calls localUpdateSDDD() with all the new links. For each of the links, the DDM calls addEntrySDDD() in class SDDD, which creates queries and stores the link and the availability in the database.

Page 173: folk.uio.nofolk.uio.no/noruns/thesisFinal.pdf · Abstract Characterised by being highly dynamic, heterogeneous and hectic, rescue and emergency operations are cooperative efforts


Figure 8.6 Register new item in LDD.

Search SDDD (Figure 8.7): An application calls findItem() with a set of search terms (concept terms from the vocabulary). The instance of KMImpl (the KM Engine) calls findResources() in DDMImpl. DDMImpl calls searchSDDD() in class SDDD, which creates the query and executes it against the database. For each result in the returned result set, a new ConceptLink is created. The results are returned to KMImpl, which converts them into a string and returns this to the application.

Figure 8.7 Search SDDD for related items.


LDD lookup (Figure 8.8): An application calls getRegItem(), offered in the KM Interface, with the ID of a registration in the LDD (metaResID). The registration ID may have been retrieved through a search in the SDDD, as shown above. The KM Engine (KMImpl) calls getInfoItem() in the DDM (DDMImpl), which calls getInfoItem() in class LDD. The instance of LDD calls a local method getInfoItemID() that creates a query and executes it against the database. The resulting metadata description is returned as a new instance of class InfoItem to the KM Engine, which converts the result to a string and returns it to the application.

Figure 8.8 Retrieve metadata for item registered in LDD.

Vocabulary Match (Synonym Lookup): The two KM functions related to vocabulary match or synonym lookup, findSynonym() for finding synonyms for given terms and findSource() for finding nodes with information related to these synonyms, are typically used together, e.g., in cases of multiple subscription languages (ref. Chapter 4). We have therefore chosen to show both in one sequence diagram in Figure 8.9.


Figure 8.9 Find synonyms and query for related nodes.

8.4.2 Functionality Implemented in Package km

In this section we present the KM functionality (ref. Chapter 4) that has been implemented. In class KMImpl, the following implemented functionality is described:

− Initiate and stop KM
− Register metadata for resource to be shared
− Update existing metadata registration (LDD)
− Resolve local link/lookup in LDD
− Search SDDD for shared information items
− Vocabulary match/synonym lookup

In class SMOF, we describe the functionality for looking up synonyms, used in vocabulary match/synonym lookup. The class InfoItem represents a metadata description/registration in the LDD, and the class ConceptLink represents an entry in the SDDD. For simplicity, we do not differentiate between class DDMImpl and interface DDM in the following; ‘DDM’ is used to denote either. Committing changes to the database and the use of Statement are shown in the pseudocode but not commented on. To show function calls (except calls to functions in the same class), we write the class name in front of the function call, as in ‘className.functionName()’, e.g., ‘LDD.addInfoItem()’, although the actual call is made on an object instance of that class, for example ‘myLDD.addInfoItem()’.


• Initiate and stop KM

KMImpl(): Creates an instance of the KM, sets the nodeID of this node, initiates/starts the DDM (boot), and initiates SMOF. If the input parameter ddmRestart is true, the DDM restores the existing data dictionaries (LDD and SDDD tables). If ddmRestart is false, the LDD and SDDD tables are created from scratch.

public KMImpl(String theNodeID, boolean ddmRestart)

stopKM(): Shuts down the KM gracefully; activates the necessary procedures in SMOF and starts the shutdown procedure in the DDM, which initiates shutdown of the data dictionaries and then stops the DDM. The DDM starts and stops Derby. The LDD and SDDD are not cleared at this point, but can be discarded at start-up.

public void stopKM()

• Register metadata for resource to be shared

registerItem(): Register metadata of information items for sharing. The format of the input metadata:

<conceptTerms><location><resName>

The input metadata is split into concept terms, location, and resource name, and a new ID for this metadata resource is created (a combination of letters from the concept terms and a random integer). This is added to a new instance of InfoItem. The DDM is then requested to add the new metadata registration to the LDD by calling addInfoItem() with the new InfoItem.

public void registerItem(String metadata)
begin
  remove first '<' and last '>' from metadata
  split metadata at "><" and add to array temp
  create InfoItem newItem
  create a metaResID
  add metaResID and contents of temp to newItem
  call DDM.addInfoItem with newItem
end
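As an illustration only, the string handling in registerItem() could look like the following Java sketch. The class RegisterSketch and its method names are ours, not part of the thesis implementation; the ID scheme (leading letters of the concept terms plus a random integer) follows the description above.

```java
import java.util.Random;

// Illustrative sketch (not the thesis code) of the string handling
// in registerItem(): split the metadata fields and build a metaResID.
public class RegisterSketch {

    // Splits "<conceptTerms><location><resName>" into its three fields.
    public static String[] splitMetadata(String metadata) {
        // remove first '<' and last '>', then split at "><"
        String inner = metadata.substring(1, metadata.length() - 1);
        return inner.split("><");
    }

    // Builds a metaResID from letters of the concept terms plus a random integer.
    public static String createMetaResID(String conceptTerms, Random rnd) {
        String letters = conceptTerms.replaceAll("[^A-Za-z]", "");
        String prefix = letters.substring(0, Math.min(4, letters.length()));
        return prefix + rnd.nextInt(100000);
    }
}
```

For example, splitMetadata("<fire,smoke><sector3><map1>") yields the three fields "fire,smoke", "sector3" and "map1".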

• Update existing metadata registration (LDD)

updateItemReg(): Updates an already registered metadata entry. The input metaResID is used to retrieve the LDD registration. If the item does not exist, an exception is thrown and the function returns without making any changes. The format of the input update:

<conceptTerms><location><resName>

The input metadata is split into concept terms, location, and resource name. Together with the input/original metaResID, this is added to a new instance of InfoItem. The DDM (DDMImpl) is then requested to update the metadata registration in the LDD by calling updateInfoItem() with the new InfoItem.

public void updateItemReg(String metaResID, String update)
begin
  call DDM.getInfoItem with metaResID returning InfoItem item
  if item is null then
    give error message (throw exception)
    return
  endif
  remove first '<' and last '>' from update
  split update at "><" and add to array temp
  create InfoItem newItem
  add metaResID and contents of temp to newItem
  call DDM.updateInfoItem with newItem
end

• Resolve local link/lookup in LDD

getRegItem(): Lookup of an item in the LDD given the metadata registration ID. This function requests the DDM to retrieve the metadata registration with the given metaResID, formats the result into a string, and returns it. The format of the return string:

<metaResID><conceptTerms><location><resourceName><timeLastUpdate>

public String getRegItem(String metaResID)
begin
  create resultString
  call DDM.getInfoItem with metaResID returning InfoItem item
  if item is not null then
    add formatted information from item to resultString
  endif
  return resultString
end
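The return-string assembly in getRegItem() amounts to simple concatenation. The following one-method sketch (class name ours, not thesis code) shows the format, assuming all five fields are plain strings.

```java
// Illustrative sketch (not the thesis code) of getRegItem()'s return format:
// "<metaResID><conceptTerms><location><resourceName><timeLastUpdate>".
public class RegItemFormat {
    public static String format(String metaResID, String concepts,
                                String location, String resName, String time) {
        return "<" + metaResID + "><" + concepts + "><" + location
                + "><" + resName + "><" + time + ">";
    }
}
```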

• Search SDDD for shared information items

findItem(): Retrieves from the SDDD the nodeIDs (and/or local LDD registration IDs) that are related to the given concept terms, separated by ‘delim’. The search can be restricted to locally registered items (i.e., the local LDD) by setting the input ‘onlyLocal’ to ‘true’. The function requests the DDM to find the relevant resources by calling findResources(). If there are matching resources, the link information (concept term, metaResID (which may be a local LDD registration ID or a nodeID), link level, and availability) is formatted into a string. It may also be useful to return the time of the last change. The format of the return string:

<concept,metaResID,linklevel,availability><...>

public String findItem(String concepts, String delim, boolean onlyLocal)
begin
  create resultString
  call DDM.findResources with concepts, delim and onlyLocal returning linkList
  if linkList is not null then
    for all conceptlinks in linkList
      add formatted information from current conceptlink to resultString
    endfor
  endif
  return resultString
end

• Vocabulary match/synonym lookup

In the KM there are two main functions for handling synonym lookup: findSynonym(), which looks up synonyms from different vocabularies, and findSource(), which finds nodes with information related to the vocabularies or synonym terms. These are used together, e.g., when solving cases of multiple query languages or subscription languages (see the use cases in Chapter 4). In this implementation, the thesaurus consists of a synonym file that SMOF uses for synonym lookup. This means that the correct synonym file has to be available, and if the synonym file is changed, it has to be propagated to all nodes offering this service, so that results from synonym lookup do not vary between nodes. The synonym files are created and changed outside of the rescue operation, i.e., in Phase 1 (a priori) and Phase 6 (post-processing).

findSynonym(): Retrieves synonyms, in other vocabularies, for each of the terms (defined in the given vocabulary). The function uses SMOF to look up the synonyms in a thesaurus, calling getSynonyms() for each of the input terms. The result for each term is a string with vocabulary IDs and terms in the format ‘voc1.syn1,voc2.syn1,...’. These synonyms are then added to an array collecting all results by calling addSynonyms(), so that the array contains, for each term, a list of synonyms from each vocabulary. The collected results are formatted into a string by calling createReturnString() with the result array. The function returns sets of attribute synonyms from each relevant vocabulary (vocID), or null on failure.

The format of the terms given as input:
<attr1,attr2,...>
The format of the return string:
<<vocID1,<attr1,attr2,...>>,<vocID2,<attr1,attr2,...>>,...>

public String findSynonym(String vocID, String terms)
begin
  create array vocAndTerms to keep original vocID and terms
  add vocID to first element in vocAndTerms
  split terms and remove delimiters
  add all terms to vocAndTerms
  create theList to keep synonyms for each retrieved vocabulary
  add vocAndTerms to theList
  for each term in vocAndTerms //note: skipping first (vocID)
    call SMOF.getSynonyms with vocID and current term returning theSyns
    if theSyns not equal null then
      call addSynonyms with theSyns and theList
    endif
  endfor
  call createReturnString with theList returning returnString
  return returnString
end

findSource(): Retrieves the nodeIDs of nodes that are associated with the given terms from the vocabulary (vocID). After removing ‘<’ and ‘>’ surrounding the input terms, the DDM is requested to find all related resources by calling findResources() with the vocID, the terms, and ‘false’ to state that the search should include non-local metaResIDs. The linkLevel of each result is checked, and only metaResIDs from links that point to non-local resources are added to the string of resulting nodeIDs. Returns a list of nodeIDs, or null if there are no results.

The format of the terms given as input:
<attr1,attr2,...>
The format of the return string:
<nodeID1,nodeID2,...>

public String findSource(String vocID, String terms)
begin
  remove first '<' and last '>' from terms
  add vocID and terms to searchTerms
  call DDM.findResources with searchTerms and "," and false returning linkList
  add "<" to resultNodes
  for all conceptlinks in linkList
    set clink = current conceptlink in linkList
    if clink.linkLevel equal 0 then //non-local link (nodeID)
      if resultNodes equals "<" then
        //the first nodeID to resultNodes
        call clink.getMetaResID returning metaResID
        append metaResID to resultNodes
      else
        call clink.getMetaResID returning metaResID
        append "," and metaResID to resultNodes
      endif
    endif
  endfor
  append ">" to resultNodes
  if resultNodes equals "<>" then //no nodeIDs retrieved
    return null
  endif
  return resultNodes
end

In class SMOF, a very simple lookup in the synonym file is implemented:

getSynonyms(): Retrieves synonyms for a given term and vocabulary. The synonyms are stored in a synonym file; null is returned if the vocabulary is not in this file. Synonyms are stored in the format “vocID.term” to resolve cases of ambiguity, e.g., where the same term is used differently in different vocabularies. Each line in the file contains synonyms across vocabularies, i.e., terms from different vocabularies that have similar/equivalent meaning. The format of the vocabulary and synonyms stored in the file:

voc1.term11,voc2.term21,voc3.term31,...
voc1.term12,voc3.term32,voc4.term42,...

For example, the line below shows synonyms for the meaning ‘body temperature’ from three vocabularies:

fire1.bodytemp,med1.temperature,med2.temp

To find synonyms for, e.g., ‘vocID=med2’ and ‘term=temp’, the file is searched for the string ‘med2.temp’, and when there is a match, the full line of synonyms containing the match (read from the file) is returned. In this implementation we only have one file with synonyms, and the file name and location are set at start-up, so in this case we do not use the DDM to find the synonym file for the requested vocabulary.

public String getSynonyms(String vocID, String term)
begin
  open synonymFile //filename is set at start-up
  create searchStr containing vocID and "." and term
  while read new line from file into str
    if searchStr is found in str then
      close synonymFile
      return str
    endif
  endwhile
  close synonymFile
  return null //not a valid vocabulary
end
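The getSynonyms() lookup can be sketched in a few lines of Java; here the file is modelled as a list of lines, and the class and parameter names are illustrative, not the thesis code. Note that the substring match mirrors the pseudocode, so e.g. ‘med2.temp’ would also match a longer entry such as ‘med2.temperature’.

```java
import java.util.List;

// Illustrative sketch (not the thesis code) of the SMOF synonym lookup:
// each line holds cross-vocabulary synonyms as "voc.term" entries.
public class SynonymLookup {

    // Returns the full synonym line containing "vocID.term", or null if absent.
    public static String getSynonyms(String vocID, String term, List<String> lines) {
        String search = vocID + "." + term;
        for (String line : lines) {
            if (line.contains(search)) {   // substring match, as in the pseudocode
                return line;
            }
        }
        return null;
    }
}
```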

8.4.3 Functionality Implemented in Package ddmgr

The implemented DDM functionality (ref. Chapter 5) presented in this section is what we consider most relevant for demonstrating DDM functionality. Other implemented functionality includes updating the timestamp for a metaResID in the SDDD_Availability table, and functionality for clearing the LDD and SDDD if they are to be discarded at start-up. In class DDMImpl, the following implemented functionality is described:

− DDM start-up and shut down
− Register items in LDD
− Update existing entry in LDD
− Resolve link (lookup in LDD)
− Query SDDD for resources
− Add or update links to SDDD
− Metadata exchange

In addition, we describe relevant functionality implemented in the classes SDDD and LDD. At run time, there is an instance of class LDD handling access to the LDD tables in the database, and an instance of class SDDD handling access to the SDDD tables. In most cases, DDMImpl (‘DDM’) calls functions in these classes for access to the data dictionary tables. The class InfoItem represents a metadata description/registration in the LDD, and the class ConceptLink represents an entry in the SDDD. We do not comment on committing changes to the database or the use of Statement.

• DDM start-up and shut down

boot(): Start-up/boot of the DDM and data dictionaries. Options are given for restoring the existing LDD and/or SDDD, or for creating them from scratch. The initial elements to register in the LDD are taken from a file (LDD_initFile). For this test implementation, the file name is assigned to LDD_initFile directly in the code, and its contents are added in a text editor. A description of the boot procedure follows the pseudocode.

public void boot(boolean restoreLDD, boolean restoreSDDD)
begin
  set isBooting = true
  /*boot the Data Dictionary
    (create Derby & boot DataDictionary incl LDD & SDDD)*/
  set conn = create database connection and connect to database
  if conn is null then
    set isBooting = false
    return
  endif
  if new database has been created then
    //add initial items to LDD - already done if LDD is kept
    call addInitialElementsLDD with LDD_initFile
    /*populate SDDD with LDD info - already done if SDDD is kept*/
    call populateSDDD
    call SDDD.setTimeLastUpdate with system current time
  else
    //init SDDD time of last update
    set s = create new Statement on database connection
    call SDDD.selectLastUpdateTime with s returning lastTime
    call SDDD.setTimeLastUpdate with lastTime
    close s
    if restoreLDD is false then
      //clear and replace DDM dictionary contents
      call clearLDD
      call clearSDDD
      call clearExchangeList
      call addInitialElementsLDD with LDD_initFile
      call populateSDDD
    else
      if restoreSDDD is false then
        call clearSDDD
        call clearExchangeList
        call populateSDDD
      endif
    endif
  endif
  set isBooting = false
end
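The branching on the two restore flags can be summarised by the following sketch; the class and the returned action strings are ours, purely to make the four cases explicit, and do not appear in the thesis code.

```java
// Illustrative summary (not the thesis code) of the boot() branching:
// which dictionaries are rebuilt for each restore-flag combination.
public class BootPlan {
    public static String plan(boolean newDatabase, boolean restoreLDD, boolean restoreSDDD) {
        if (newDatabase) return "init LDD, populate SDDD";
        if (!restoreLDD) return "clear LDD+SDDD+exchanges, init LDD, populate SDDD";
        if (!restoreSDDD) return "clear SDDD+exchanges, populate SDDD (local links only)";
        return "keep LDD and SDDD";
    }
}
```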

A connection is made to a database (or a new one is created). If it is a new database, the initial elements (from LDD_initFile) are added to the LDD by calling addInitialElementsLDD(), and the SDDD is populated by calling populateSDDD(), before the time of last update in the instance of class SDDD is set to the current time by calling setTimeLastUpdate(). The class SDDD is a helper class handling access to the SDDD tables. If a connection was made to an existing database, the time of last update in the current instance of class SDDD is set by retrieving the time of last update from the SDDD tables in the database. If the LDD is not to be restored, the existing LDD and SDDD tables and the list of previous exchanges are cleared, and the LDD and SDDD are populated as in the case of a new database. In cases where only the LDD is restored, the SDDD tables and exchange list are cleared and the SDDD is populated anew, which results in an SDDD with only local links. A global boolean variable ‘isBooting’ is used to keep track of whether we are in a booting situation, and is used, for instance, when registering new LDD entries to know whether a local update of the SDDD should be performed.

restart(): Restarts the DDM, keeping both LDD and SDDD. This function only calls boot() with restore of both LDD and SDDD set to true.

public void restart()

shutdown(): Shuts down the database connection (to Derby).

public void shutdown()

addInitialElementsLDD(): Adds the initial elements to the LDD (at start-up/boot). The initial LDD information is stored in a file with the given file name (fileName). The format of the elements stored in the LDD init file (one element per line):

infoItemID:concepts:resourceName:location

This very simple function reads the contents of the init file and calls addInfoItem() for each line that contains the correct number of attributes (assuming the above format is used).

private void addInitialElementsLDD(String fileName)
begin
  open file fileName
  while read new line from file into str
    split str at ":" into array temp
    if temp contains 4 elements then
      create a new InfoItem with contents from temp
      call addInfoItem with the created InfoItem
    endif
  endwhile
  close file
end
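Parsing the init-file lines could be done as in the sketch below; the class name and the use of a string list in place of the file are illustrative assumptions, not the thesis code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not the thesis code) of parsing LDD init-file
// entries of the form "infoItemID:concepts:resourceName:location".
public class InitFileParser {
    public static List<String[]> parseEntries(List<String> lines) {
        List<String[]> items = new ArrayList<>();
        for (String line : lines) {
            String[] fields = line.split(":");
            if (fields.length == 4) {   // skip malformed lines, as in the pseudocode
                items.add(fields);
            }
        }
        return items;
    }
}
```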


populateSDDD(): Populates the SDDD with LDD contents at boot time, performing metadata extraction from the LDD. The function retrieves all relevant information (concept terms and metaResIDs) from the LDD registrations, and for each entry splits the string of concept terms (keyconcepts) and creates a new ConceptLink for each concept term with linkLevel 1 (local) and the metaResID (infoItemID). If such a link does not exist in the list of concept links, it is added to the SDDD. All concept links added to the SDDD are also added to a temporary list to avoid adding duplicate links (this was done to save database accesses, but as each link is checked against the database before it is added, in retrospect this may be unnecessary).

private void populateSDDD()
begin
  create ArrayList concAndID to keep links
  set s = create new Statement on database connection
  call LDD.getAllInLDD with s returning rs
  while another element in rs
    create new InfoItem newItem
    add data from rs element to newItem
    call infoItemToLinks with newItem returning concLinkList
    for all clink in concLinkList
      if clink is not contained in concAndID then
        add clink to concAndID
      endif
    endfor
  endwhile
  close rs
  while another conceptlink in concAndID
    call SDDD.addEntrySDDD with current conceptlink, true, s
  endwhile
  close s
  commit changes to database
end

infoItemToLinks(): A helper function that creates a list of ConceptLink items from an InfoItem. This function is used when populating the SDDD with items from the LDD, and also when registering and updating items in the LDD.

private ArrayList infoItemToLinks(InfoItem item)
begin
  create ArrayList cLinkList
  //keyconcepts separated by comma, need to split into terms
  set temp = get keyconcepts from item
  split temp at "," and add the terms to conceptArray
  get infoItemID from item
  set linkLevel = 1 //local metaResID
  for all terms in conceptArray
    create a new ConceptLink cLink
    add current term to cLink
    add infoItemID and linkLevel to cLink
    //add to cLinkList to avoid duplicate links
    if cLink is not contained in cLinkList then
      add cLink to cLinkList
    endif
  endfor
  return cLinkList
end
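The extraction of concept links from a registration can be sketched as follows; the nested record stands in for the thesis ConceptLink class (illustrative only), and using a set gives the duplicate check from the pseudocode for free.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative sketch (not the thesis code) of infoItemToLinks():
// each comma-separated concept term yields one (term, metaResID, linkLevel=1) link.
public class LinkExtraction {

    // Stand-in for the ConceptLink class.
    public record ConceptLink(String concept, String metaResID, int linkLevel) {}

    public static Set<ConceptLink> toLinks(String keyConcepts, String metaResID) {
        Set<ConceptLink> links = new LinkedHashSet<>();  // set membership avoids duplicates
        for (String term : keyConcepts.split(",")) {
            links.add(new ConceptLink(term.trim(), metaResID, 1));  // linkLevel 1 = local
        }
        return links;
    }
}
```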


• Register items in LDD

addInfoItem(): Adds/registers an information item in the LDD (an LDD entry). The input is an InfoItem containing the metadata description, metaResID, resource name and location. If the item does not already exist in the LDD, the instance of class LDD is requested to add the new registration to the LDD by calling addInfoItem(). The class LDD is a helper class for accessing the LDD tables in the database. As long as we are not in a boot situation, the function infoItemToLinks(), creating concept links from an InfoItem, is called, returning a list of ConceptLink items. Then localUpdateSDDD() is called to update the SDDD with the list of new links.

public void addInfoItem(InfoItem newInfoItem)
begin
  set s = create new Statement on database connection
  if newInfoItem already exists in LDD then
    close s
    return
  endif
  call LDD.addInfoItem with newInfoItem and s
  if not booting then //isBooting = false
    call infoItemToLinks with newInfoItem returning concLinkList
    call localUpdateSDDD with concLinkList and s
  endif
  close s
  commit changes to database
end

• Update existing entry in LDD

updateInfoItem(): Updates an existing entry in the LDD. The input is the new/changed metadata description as an InfoItem (updateItem). The instance of class LDD is requested to get the LDD entry (oldItem) with the infoItemID of the updateItem. If the item exists, the instance of class LDD is requested to update the entry with the updateItem by calling updateInfoItem(). Then the updateItem is converted to a set of concept links, and a local update of the SDDD is performed on the list.

public void updateInfoItem(InfoItem updateItem)
begin
  set s = create new Statement on database connection
  call LDD.getInfoItem with updateItem.infoItemID and s returning oldItem
  if oldItem is null then
    close s
    return
  endif
  call LDD.updateInfoItem with updateItem and s
  call infoItemToLinks with updateItem returning concLinkList
  call localUpdateSDDD with concLinkList and s
  close s
  commit changes to database
end

• Resolve link (lookup in LDD)

getInfoItem(): Requests information about an item registered in the LDD. Returns the metadata description in the form of an InfoItem. The instance of class LDD is requested to retrieve the metadata registration with the given infoItemID (metaResID) by calling getInfoItem().

public InfoItem getInfoItem(String infoItemID)
begin
  set s = create new Statement on database connection
  call LDD.getInfoItem with infoItemID and s returning infoIT
  close s
  return infoIT
end

• Query SDDD for resources

findResources(): Finds all resources related to the given terms/concepts (separated by the delimiter given in ‘delim’). Returns all related SDDD links (as instances of ConceptLink) with link level and availability. The function splits the given terms (concepts) by the delimiter and requests the instance of class SDDD to search the SDDD tables for links with these terms. If ‘onlyLocal’ is true, the search is restricted to return only locally registered items (local LDD).

public ArrayList findResources(String concepts, String delim, boolean onlyLocal)
begin
  set conceptArray = split concepts string into terms by delim
  set s = create new Statement on database connection
  //search SDDD for (local) links with the concepts
  call SDDD.searchSDDD with conceptArray, onlyLocal, s returning resultArray
  close s
  return resultArray
end

• Add or update links to SDDD

localUpdateSDDD(): Updates the SDDD after changes in the LDD. Applications cannot add items to the SDDD; adding entries to the SDDD is only initiated by the DDM at local update (after changes in the LDD), at boot/start-up, or at exchange/merge. The input is a list of ConceptLink instances. As this is a local update, i.e., the links have been extracted from the LDD, the availability is set to true. The SDDD instance is requested to add a new entry in the SDDD for each link in the list by calling addEntrySDDD() with the link and availability.

private void localUpdateSDDD(ArrayList conceptLinks, Statement s)
begin
  while another link in conceptLinks
    set availability = true
    call SDDD.addEntrySDDD with current link, availability, and s
  endwhile
end

addLink(): Adds a new semantic link to the SDDD_Linking table. Used in the merge step at metadata exchange, and in populating the SDDD (local update). If a link with the given conceptID (concept term) and metaResID does not already exist in the SDDD, a new ConceptLink is created, and the SDDD instance is requested to add the link by calling addLinkSDDD() with the new link.

private void addLink(String conceptID, String metaResID)
begin
  set s = create new Statement on database connection
  if a link with conceptID and metaResID already exists then
    close s
    return
  endif
  create a new ConceptLink clink with conceptID and metaResID
  call SDDD.addLinkSDDD with clink and s
  close s
  commit changes to database
end

addAvailability(): Adds the availability for a newly added link in the SDDD_Availability table. The SDDD_Availability table is checked to see if the metaResID is already registered. If it is, the SDDD is requested to update the availability for this metaResID by calling updateAvailabEntry(). If it does not exist, the metaResID and availability are added to the SDDD by calling addAvailabSDDD().

private void addAvailability(String metaResID, boolean availab, short linkLevel)
begin
  set s = create new Statement on database connection
  call SDDD.selectAvailab with metaResID and s returning result
  if result not empty then
    call SDDD.updateAvailabEntry with metaResID, availab, linkLevel, and s
    close s
    return
  endif
  call SDDD.addAvailabSDDD with metaResID, availab, linkLevel, and s
  close s
  commit changes to database
end
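The insert-or-update decision in addAvailability() is an upsert. The sketch below models the SDDD_Availability table as a map and returns which branch was taken; the class and names are illustrative assumptions, not the thesis code.

```java
import java.util.Map;

// Illustrative sketch (not the thesis code) of addAvailability()'s upsert:
// update the entry if the metaResID is already registered, otherwise insert it.
public class AvailabilityUpsert {
    public static String addAvailability(Map<String, Boolean> table,
                                         String metaResID, boolean available) {
        String action = table.containsKey(metaResID) ? "updated" : "inserted";
        table.put(metaResID, available);
        return action;
    }
}
```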

updateAvailab(): Updates the availability (and timestamp) for a given metaResID, to keep track of information availability in the SDDD_Availability table. This function forwards the availability-update request to the instance of class SDDD by calling updateAvailab() with the metaResID and the new availability.

private void updateAvailab(String metaResID, boolean newAvailab)
begin
  set s = create new Statement on database connection
  call SDDD.updateAvailab with metaResID, newAvailab, s
  close s
  commit changes to database
end

• Metadata exchange

As described in Chapter 5, metadata exchange is initiated when the KM is notified by the system about a node-in-range event, or is prompted to initiate exchange by, e.g., a timer or a local update event, for instance if there is a need to refresh the exchange when nodes have been in range for a long time. The DDM keeps all previous exchanges (nodeID and timestamp) in the DDM_Exchanges table. We repeat the stepwise overview of the metadata exchange procedure:

1) DDM is notified of an event by the system, or prompted for a refresh exchange
2) DDM (on both nodes) communicates its nodeID
3) when DDM receives the nodeID from the other node, it will gather and send updates:
   a) get updates for exchange:
      i) check its list of previous exchanges with that node
      ii) get the time of last update for the SDDD
      iii) if no previous exchanges: gather all SDDD links; else: only what was updated since the last exchange
   b) use the underlying message system to send the updates
4) when DDM receives updates from the other node:
   a) update the SDDD tables with the new information (merge into SDDD)
   b) update the time of last update of the SDDD in the instance of class SDDD
5) update the time of last exchange for the node (or add the new nodeID and time)

sendUpdates(): For exchange: sends all updates to a given nodeID, reflecting changes after the previous exchange with the same nodeID. After collecting all updates for exchange by calling getUpdatesForExchange(), the underlying message system is used to send the updates to the input nodeID. This constitutes step 3 in the metadata exchange procedure above.

public void sendUpdates(String otherNodeID)
begin
  call getUpdatesForExchange with otherNodeID returning updates
  send updates to otherNodeID using underlying message system
end

getUpdatesForExchange(): Retrieves the updates to send to a given nodeID, reflecting changes after a previous exchange with the same nodeID. This constitutes step 3 a) in the metadata exchange procedure above. The list of previous exchanges (table DDM_Exchange) is checked for exchanges with the input nodeID by calling getTimeLastExchange(). The time of the last changes in the SDDD is retrieved by calling getTimeLastUpdate(). If there have not been any previous exchanges with the input nodeID, all SDDD links are retrieved by calling getSDDDLinks(). If a previous exchange has taken place, and there have been updates in the SDDD after the time of last exchange with the node, all links updated since the last exchange are retrieved by calling getLinksSinceTime() with the time of last exchange. The links in the returned update are formatted as follows (using linkListToString(), a helper function for this purpose; its pseudocode is not included):

<link>
  <conceptID> conceptID </conceptID>
  <metaResID> metaResID </metaResID>
  <available> availability </available>
  <linklevel> linklevel </linklevel>
</link>

If there are no updates, "<-NO_UPDATES->" is returned.

public String getUpdatesForExchange(String otherNodeID)
begin
  call getTimeLastExchange with otherNodeID returning lastExchange
  call SDDD.getTimeLastUpdate returning lastUpdate
  set links = null
  if lastExchange equals null then
    /*no previous exchanges -> return all SDDD link info to node*/
    call getSDDDLinks returning links
  else
    if lastUpdate > lastExchange then
      call getLinksSinceTime with lastExchange returning links
    endif
  endif
  //in any other case, null is returned
  call linkListToString with links returning resultString
  return resultString
end


getTimeLastExchange(): Returns the time of the last exchange with a given nodeID. This is part of step 3 a) in the metadata exchange procedure above (i). The function queries the table DDM_EXCHANGE for an entry with the input nodeID and returns the time of last exchange if the nodeID was found. If the query does not retrieve a result, null is returned. DDM_EXCHANGE is included in the schema for system tables (SYS) in Derby.

private Timestamp getTimeLastExchange(String nodeID)
begin
    set query = "select * from SYS.DDM_EXCHANGE "
        + "where (NODEID = '" + nodeID + "')"
    set s = create new Statement on database connection
    call s.executeQuery with query returning rs
    if rs empty then
        return null
    endif
    call rs.getTimestamp with "TIMELASTEXCHANGE" returning result
    close rs
    close s
    return result
end
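Against a real table, getTimeLastExchange() could be written as below; SQLite stands in for Derby, and a parameterised query replaces the pseudocode's string-concatenated WHERE clause (an assumption, but it sidesteps quoting problems with arbitrary node IDs).

```python
import sqlite3

def get_time_last_exchange(conn, node_id):
    """Return the stored time of last exchange with node_id, or None."""
    row = conn.execute(
        "SELECT TIMELASTEXCHANGE FROM DDM_EXCHANGE WHERE NODEID = ?",
        (node_id,),
    ).fetchone()
    return row[0] if row else None
```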

getSDDDLinks(): Get all links from the SDDD. Used in getUpdatesForExchange() when there have not been any previous exchanges with the node the DDM is exchanging SDDDs with. This is part of step 3 a) in the metadata exchange procedure above (iii). The SDDD is requested to retrieve all links in the SDDD tables by calling getAllLinks().

public ArrayList getSDDDLinks()
begin
    set s = create new Statement on database connection
    call SDDD.getAllLinks with s returning resultList
    close s
    return resultList
end

getLinksSinceTime(): Get all changes in the SDDD since a given time. Used in getUpdatesForExchange() when there have been previous exchanges with a nodeID. Retrieves the time of last update of the SDDD by calling getTimeLastUpdate() and checks whether this time is before the input time of last exchange. If there were updates after the time of last exchange, the links changed since then are retrieved by calling getLinksChangedAfter() with the time of last exchange. Returns a list of ConceptLink items, or null if there were no changes. This is part of step 3 a) in the metadata exchange procedure above (iii).

public ArrayList getLinksSinceTime(Timestamp lastExchange)
begin
    call SDDD.getTimeLastUpdate returning timeLastUpdate
    if timeLastUpdate equals null then
        return null
    endif
    if timeLastUpdate < lastExchange then
        return null
    endif
    set s = create new Statement on database connection
    call SDDD.getLinksChangedAfter with lastExchange and s returning resultList
    close s


    return resultList
end
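The delta retrieval behind getLinksChangedAfter() can be sketched in SQLite; the table and column names follow the text, while storing timestamps as ISO-8601 strings (which compare chronologically) is an assumption made for simplicity.

```python
import sqlite3

def get_links_changed_after(conn, last_exchange):
    """Return (conceptID, metaResID, available, linklevel) rows whose
    link entry changed after last_exchange."""
    return conn.execute(
        "SELECT link.CONCEPTID, link.METARESID, "
        "       availab.AVAILABLE, availab.LINKLEVEL "
        "FROM SDDD_LINKING AS link "
        "JOIN SDDD_AVAILABILITY AS availab "
        "  ON link.METARESID = availab.METARESID "
        "WHERE link.TIMELASTUPDATE > ?",
        (last_exchange,),
    ).fetchall()
```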

receiveAndMerge(): Receive SDDD contents from another node after exchange and merge them into the SDDD. This function constitutes step 4 in the metadata exchange procedure above, and is called to simulate receiving SDDD content from another node. Step 4 b) is performed in mergeIntoSDDD() in class SDDD. If the input from the other node contains updates, a helper function (stringToLinkList(), description not included here) is called to convert the received string to a list of ConceptLink items. These links are then merged into the SDDD tables by calling mergeIntoSDDD() with the list of links and the nodeID. After the merge, the list of previous exchanges is updated by calling updateExchangeList() with the nodeID of the "sending" node.

public void receiveAndMerge(String sdddInfo, String otherNodeID)
begin
    if sdddInfo not equal "<-NO_UPDATES->" then
        set s = create new Statement on database connection
        call stringToLinkList with sdddInfo returning updateLinks
        call SDDD.mergeIntoSDDD with updateLinks, otherNodeID, and s
        call updateExchangeList with otherNodeID and s
        close s
        commit changes to database
    endif
end
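A possible counterpart to the stringToLinkList() helper, whose pseudocode the thesis does not include, is a simple parse of the tagged exchange string back into tuples; the regex-based approach and the tuple shape are assumptions for illustration.

```python
import re

NO_UPDATES = "<-NO_UPDATES->"

# One pattern per link element, capturing the four fields in order.
LINK_RE = re.compile(
    r"<link>\s*"
    r"<conceptID>\s*(\S+)\s*</conceptID>\s*"
    r"<metaResID>\s*(\S+)\s*</metaResID>\s*"
    r"<available>\s*(\S+)\s*</available>\s*"
    r"<linklevel>\s*(\S+)\s*</linklevel>\s*"
    r"</link>"
)

def string_to_link_list(sddd_info):
    """Parse the exchange string into (conceptID, metaResID,
    available, linklevel) tuples; an empty list means no updates."""
    if sddd_info == NO_UPDATES:
        return []
    return LINK_RE.findall(sddd_info)
```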

updateExchangeList(): Update the list of previous exchanges in table DDM_EXCHANGE, which is included in the schema for system tables (SYS) in Derby. This is step 5 in the metadata exchange procedure. The time of last exchange for the input nodeID is retrieved by calling getTimeLastExchange(). If there is no previous exchange with this node, its nodeID is added to the list of previous exchanges by making a new entry in the DDM_EXCHANGE table; a query with a procedure call is sent to the database, where the system procedure ADD_EXCHANGE() is part of schema DDMPROC in the Derby DBMS. If there is a previous exchange with the nodeID, a query updating the time of last exchange for this nodeID in the DDM_EXCHANGE table is executed.

private void updateExchangeList(String nodeID, Statement s)
begin
    call getTimeLastExchange with nodeID returning timeLastExchange
    if timeLastExchange equals null then
        //adding new nodeID to DDM_EXCHANGE
        set query = "CALL DDMPROC.ADD_EXCHANGE('" + nodeID + "')"
        call s.execute with query
        return
    endif
    //update time in DDM_EXCHANGE:
    set currentTime = current system time as Timestamp
    set query = "update SYS.DDM_EXCHANGE "
        + "set TIMELASTEXCHANGE = '" + currentTime + "' "
        + "where ('" + nodeID + "' = NODEID)"
    call s.executeUpdate with query
end
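The insert-or-update logic of updateExchangeList() might be sketched as follows; SQLite replaces Derby, and a plain parameterised INSERT approximates the ADD_EXCHANGE system procedure (an assumption about what that procedure does).

```python
import sqlite3
from datetime import datetime

def update_exchange_list(conn, node_id):
    """Record the current time as the time of last exchange with
    node_id, adding the node if it has not been seen before."""
    now = datetime.now().isoformat()
    cur = conn.execute(
        "UPDATE DDM_EXCHANGE SET TIMELASTEXCHANGE = ? WHERE NODEID = ?",
        (now, node_id),
    )
    if cur.rowcount == 0:
        # no previous exchange with this node: add a new entry
        conn.execute(
            "INSERT INTO DDM_EXCHANGE (NODEID, TIMELASTEXCHANGE) VALUES (?, ?)",
            (node_id, now),
        )
    conn.commit()
```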


• Relevant functionality in class SDDD: We list relevant SDDD methods that are called in the functions described above. The SDDD tables are SDDD_LINKING and SDDD_AVAILABILITY, which are included in the schema for system tables (SYS) in Derby.

mergeIntoSDDD(): Merge into the SDDD a list of links received from another node. Used in metadata exchange after having received updates from another node. Input is a list of ConceptLink instances. Each link in the list is added to the SDDD by calling addEntrySDDD() with the link and its availability.

public void mergeIntoSDDD(ArrayList newLinks, String nodeID, Statement s)
begin
    if newLinks equal null then
        return
    endif
    for all links in newLinks
        set link = current link
        call addEntrySDDD with link, link.available, and s
    endfor
end
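Reduced to in-memory structures, the merge amounts to adding absent links and refreshing availability; the set/dict representation of the two SDDD tables below is an assumption for illustration.

```python
def merge_into_sddd(linking, availability, new_links):
    """linking: set of (conceptID, metaResID) pairs;
    availability: dict metaResID -> (available, linklevel);
    new_links: iterable of (conceptID, metaResID, available, linklevel)."""
    if not new_links:
        return
    for concept_id, meta_res_id, available, link_level in new_links:
        linking.add((concept_id, meta_res_id))               # no-op if already present
        availability[meta_res_id] = (available, link_level)  # always refreshed
```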

searchSDDD(): Search the SDDD for links related to a set of concept terms (concepts). It is possible to restrict the search to locally registered items (local LDD) by setting onlyLocal to true. A query is formatted containing the concept terms in 'concepts' and, if the search is local only, the link level, to retrieve the necessary data from the SDDD tables (SDDD_LINKING and SDDD_AVAILABILITY). The results from query execution are converted to ConceptLink instances and added to a list that is returned.

public ArrayList searchSDDD(String[] concepts, boolean onlyLocal, Statement s)
begin
    //format given input for query
    set whereContent = "(CONCEPTID = '"
    add the first concept in concepts to whereContent
    add "') " to whereContent
    while another concept in concepts
        append "or (CONCEPTID = '" to whereContent
        append concept + "') " to whereContent
    endwhile
    set ifOnlyLocalStr = " and (LINKLEVEL = 1) "
    set query = "select link.CONCEPTID, link.METARESID, "
        + "availab.AVAILABLE, availab.LINKLEVEL, "
        + "availab.TIMELASTUPDATE, link.TIMELASTUPDATE "
        + "from ( select CONCEPTID, METARESID, "
        + "TIMELASTUPDATE "
        + "from SYS.SDDD_LINKING "
        + "where " + whereContent + ") link, "
        + "SYS.SDDD_AVAILABILITY as availab "
        + "where (link.METARESID = availab.METARESID)"
    if onlyLocal equal true then
        append ifOnlyLocalStr to query
    endif
    call s.executeQuery with query returning rs
    create linkList
    while another element in rs
        create new ConceptLink cLink
        add information from current rs to cLink
        add cLink to linkList
    endwhile
    return linkList
end
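The concept search can be sketched in SQLite with the OR-chain over concept terms built from placeholders; the onlyLocal flag becomes the LINKLEVEL = 1 restriction, as in the pseudocode. Table names follow the text, while the placeholder-based query construction is an assumption.

```python
import sqlite3

def search_sddd(conn, concepts, only_local=False):
    """Return rows for links whose concept term matches any of 'concepts',
    optionally restricted to locally registered items (link level 1)."""
    where = " OR ".join("link.CONCEPTID = ?" for _ in concepts)
    query = (
        "SELECT link.CONCEPTID, link.METARESID, "
        "       availab.AVAILABLE, availab.LINKLEVEL "
        "FROM SDDD_LINKING AS link "
        "JOIN SDDD_AVAILABILITY AS availab "
        "  ON link.METARESID = availab.METARESID "
        "WHERE (" + where + ")"
    )
    if only_local:
        query += " AND availab.LINKLEVEL = 1"
    return conn.execute(query, list(concepts)).fetchall()
```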


addEntrySDDD(): Add a new entry in the SDDD; both SDDD links and availability information are added. Link information, i.e., conceptID (term), metaResID, and linkLevel, is retrieved from the input ConceptLink. Before adding a new link, the function checks if the link already exists by calling selectLink(). If not, the new link is added by calling addLinkSDDD(). The existence of an availability tracking entry for the metaResID is checked by calling selectAvailab(). If none exists, addAvailabSDDD() is called to add a new entry, while if an entry does exist, the availability is updated by calling updateAvailabSDDD().

public void addEntrySDDD(ConceptLink theLink, boolean available, Statement s)
begin
    call theLink.getMetaResID returning metaResID
    call theLink.getConceptID returning conceptID
    call theLink.getLinkLevel returning linkLevel
    call selectLink with conceptID, metaResID, s returning rs
    if rs empty then
        call addLinkSDDD with theLink and s
    endif
    //check if metaResID availability is already registered
    call selectAvailab with metaResID and s returning rs
    if rs empty then
        call addAvailabSDDD with metaResID, available, linkLevel, and s
    else
        call updateAvailabSDDD with metaResID, available, and s
    endif
end

addLinkSDDD(): Add a new link to the SDDD_LINKING table. The metaResID and conceptID (concept term) are retrieved from the input ConceptLink. A query with a procedure call is sent to the database. The system procedure ADD_LINK() is part of schema SDDDPROC in the Derby DBMS.

public void addLinkSDDD(ConceptLink theLink, Statement s)
begin
    call theLink.getMetaResID returning metaResID
    call theLink.getConceptID returning conceptID
    set query = "CALL SDDDPROC.ADD_LINK('"
        + conceptID + "','" + metaResID + "')"
    call s.execute with query
    call setTimeLastUpdate with current system time
end

addAvailabSDDD(): Add new availability information to the SDDD_AVAILABILITY table. Converts the boolean input for availability to data type short. Creates a query with a procedure call to system procedure ADD_AVAILAB() to add a new entry. After executing the query, the time of last update in the SDDD is refreshed by calling setTimeLastUpdate().

public void addAvailabSDDD(String metaResID, boolean available, short linkLevel, Statement s)
begin
    if available equal true then
        set shortAvailab = 1
    else
        set shortAvailab = 0
    endif
    set query = "CALL SDDDPROC.ADD_AVAILAB('"
        + metaResID + "', "


        + shortAvailab + "," + linkLevel + ")"
    call s.execute with query
    call setTimeLastUpdate with current system time
end

updateAvailab(): Update availability for a metaResID in the SDDD_AVAILABILITY table. Converts the boolean input for availability to data type short. Creates an update query with input parameters metaResID and the new availability. The time of last update in the SDDD is refreshed by calling setTimeLastUpdate().

public int updateAvailab(String metaResID, boolean newAvailab, Statement s)
begin
    if newAvailab equal true then
        set shortAvailab = 1
    else
        set shortAvailab = 0
    endif
    set currentTime = current system time as Timestamp
    set query = "update SYS.SDDD_AVAILABILITY "
        + "set AVAILABLE = " + shortAvailab + ", "
        + "TIMELASTUPDATE = '" + currentTime + "' "
        + "where ('" + metaResID + "' = METARESID)"
    call s.executeUpdate with query returning result
    call setTimeLastUpdate with currentTime
    return result
end

selectLastUpdateTime(): Find the latest time of change in the SDDD tables, checking both SDDD_LINKING and SDDD_AVAILABILITY. This function is called at start-up (boot) to initialise the time of last update in the SDDD. The resulting times are compared and the freshest time is returned. If there are no entries in the checked tables, the current system time is returned.

public Timestamp selectLastUpdateTime(Statement st)
begin
    set latest = current system time
    set query = "select MAX (DISTINCT TIMELASTUPDATE) "
        + "from SYS.SDDD_LINKING"
    call st.executeQuery with query returning ResultSet rs
    if rs not empty then
        retrieve timestamp from rs into temp
        if temp not equal null then
            set latest = temp
        endif
    endif
    set query = "select MAX (DISTINCT TIMELASTUPDATE) "
        + "from SYS.SDDD_AVAILABILITY"
    call st.executeQuery with query returning rs
    if rs not empty then
        retrieve timestamp from rs into temp
        if temp not equal null then
            if latest < temp then
                set latest = temp
            endif
        endif
    endif
    close rs
    return latest
end
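The boot-time bookkeeping of selectLastUpdateTime() can be mirrored in SQLite; ISO-8601 timestamp strings, which compare chronologically, replace the Timestamp type here, an assumption made for simplicity.

```python
import sqlite3
from datetime import datetime

def select_last_update_time(conn):
    """Return the freshest TIMELASTUPDATE over both SDDD tables,
    or the current time when both tables are empty."""
    latest = datetime.now().isoformat()
    row = conn.execute("SELECT MAX(TIMELASTUPDATE) FROM SDDD_LINKING").fetchone()
    if row[0] is not None:
        latest = row[0]            # mirror the pseudocode: take it as-is
    row = conn.execute("SELECT MAX(TIMELASTUPDATE) FROM SDDD_AVAILABILITY").fetchone()
    if row[0] is not None and row[0] > latest:
        latest = row[0]            # keep the freshest of the two
    return latest
```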

getAllLinks(): Get all links in the SDDD tables SDDD_LINKING and SDDD_AVAILABILITY. This function is used in cases of first-time exchange with a


node. A query is created selecting the necessary link data from the SDDD tables, and a list of ConceptLink items is created from the results.

public ArrayList getAllLinks(Statement s)
begin
    set query = "select link.CONCEPTID, link.METARESID, "
        + "availab.AVAILABLE, availab.LINKLEVEL, "
        + "availab.TIMELASTUPDATE, link.TIMELASTUPDATE "
        + "from ( select CONCEPTID, METARESID, "
        + "TIMELASTUPDATE "
        + "from SYS.SDDD_LINKING) link, "
        + "SYS.SDDD_AVAILABILITY as availab "
        + "where (link.METARESID = availab.METARESID)"
    call s.executeQuery with query returning rs
    create linkList
    while another element in rs
        set cLink = new ConceptLink
        add information from current rs element to cLink
        add cLink to linkList
    endwhile
    return linkList
end

getLinksChangedAfter(): Get all links in the SDDD updated after a given time. A query is created selecting link data from the SDDD tables that were changed after the input time of last exchange. A list of ConceptLink items is created from the results.

public ArrayList getLinksChangedAfter(Timestamp lastExchange, Statement st)
begin
    set query = "select link.CONCEPTID, link.METARESID, "
        + "availab.AVAILABLE, availab.LINKLEVEL, "
        + "availab.TIMELASTUPDATE, link.TIMELASTUPDATE "
        + "from ( select CONCEPTID, METARESID, "
        + "TIMELASTUPDATE "
        + "from SYS.SDDD_LINKING "
        + "where TIMELASTUPDATE > '" + lastExchange + "') link, "
        + "SYS.SDDD_AVAILABILITY as availab "
        + "where (link.METARESID = availab.METARESID)"
        + " and (link.TIMELASTUPDATE > '" + lastExchange + "')"
    call st.executeQuery with query returning rs
    create linkList
    while another element in rs
        set cLink = new ConceptLink
        add information from current rs element to cLink
        add cLink to linkList
    endwhile
    return linkList
end

• Relevant functionality in class LDD:

We list relevant LDD functions. The only LDD table in this implementation is LDD_INFOITEM, which is included in the schema for system tables (SYS) in Derby. In the following, we use the term 'LDD table'.

getAllInLDD(): Retrieves all registered entries in the LDD table. This function is used for extracting terms from LDD metadata descriptions when populating the SDDD.


public ResultSet getAllInLDD(Statement st)
begin
    set query = "select * from SYS.LDD_INFOITEM"
    call st.executeQuery with query returning rs
    return rs
end

addInfoItem(): Add a new entry to the LDD table. A query for a procedure call is created with data from the input InfoItem. The system procedure ADD_LDDITEM() is part of schema LDDPROC in the Derby DBMS.

public void addInfoItem(InfoItem theItem, Statement s)
begin
    call theItem.getInfoItemID returning metaResID
    call theItem.getKeyConcepts returning concepts
    call theItem.getResourceName returning resName
    call theItem.getResourceLocation returning resLoc
    set query = "CALL LDDPROC.ADD_LDDITEM('" + metaResID + "', '"
        + concepts + "', '" + resName + "', '" + resLoc + "')"
    call s.execute with query
end

getInfoItem(): Retrieves an entry with the given metaResID (infoItemID) and returns a new instance of InfoItem, or null. Calls getInfoItemFromID() with the given infoItemID and creates a new InfoItem with the data from the result set.

public InfoItem getInfoItem(String infoItemID, Statement s)
begin
    call LDD.getInfoItemFromID with infoItemID and s returning rs
    if rs not empty then
        set item = new InfoItem
        populate item with information from rs
        return item
    endif
    return null
end

getInfoItemFromID(): Retrieves the LDD entry with the given metaResID (infoItemID). The function creates a query with the infoItemID and retrieves the item registered with this infoItemID from the LDD table.

public ResultSet getInfoItemFromID(String infoItemID, Statement s)
begin
    set query = "select * from SYS.LDD_INFOITEM "
        + "where ('" + infoItemID + "' = INFOITEMID) "
    call s.executeQuery with query returning rs
    return rs
end
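Registration and lookup of LDD entries might look as follows against SQLite; the ADD_LDDITEM system procedure is approximated by a parameterised INSERT, which is an assumption about its behaviour, and the column names follow those used in updateInfoItem(). The inserted values are illustrative example data.

```python
import sqlite3
from datetime import datetime

def add_info_item(conn, info_item_id, concepts, res_name, res_loc):
    """Register a new metadata entry in the LDD table."""
    conn.execute(
        "INSERT INTO LDD_INFOITEM "
        "(INFOITEMID, KEYCONCEPTS, RESOURCENAME, RESOURCELOCATION, TIMELASTUPDATE) "
        "VALUES (?, ?, ?, ?, ?)",
        (info_item_id, concepts, res_name, res_loc, datetime.now().isoformat()),
    )

def get_info_item_from_id(conn, info_item_id):
    """Retrieve the LDD entry registered under info_item_id, or None."""
    return conn.execute(
        "SELECT * FROM LDD_INFOITEM WHERE INFOITEMID = ?", (info_item_id,)
    ).fetchone()
```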

updateInfoItem(): Updates an existing LDD entry. The function retrieves the metaResID, terms (concepts), resource name (resName) and location (resLoc) from the given updateItem, sets the time of last update to the current time, and creates a query with this information. An update is then executed with this query on the LDD table.

public int updateInfoItem(InfoItem updateItem, Statement s)
begin
    call updateItem.getInfoItemID returning metaResID
    call updateItem.getKeyConcepts returning concepts
    call updateItem.getResourceName returning resName


    call updateItem.getResourceLocation returning resLoc
    set currentTime = current system time
    set query = "update SYS.LDD_INFOITEM "
        + "set KEYCONCEPTS = '" + concepts + "', "
        + "RESOURCENAME = '" + resName + "', "
        + "RESOURCELOCATION = '" + resLoc + "', "
        + "TIMELASTUPDATE = '" + currentTime + "' "
        + "where ('" + metaResID + "' = INFOITEMID)"
    call s.executeUpdate with query returning result
    return result
end

8.4.4 Testing The implemented functionality has been tested to verify that the KM and DDM perform the intended functionality correctly (functional testing), including management of the LDD and SDDD dictionaries, and particularly the dictionary contents at metadata exchange. The environment outside the KM was simulated using a simple test application. Aspects tested include registration of items in the LDD, extraction of LDD items into the SDDD, search in the SDDD returning relevant items, local lookup in the LDD, vocabulary mapping using thesaurus lookup in the SMOF, metadata exchange between nodes, start-up/bootstrap of the dictionaries, restart, and shutdown.

8.5 Summary In this chapter we have described an example realisation of the KM and DDM for proof-of-concept using the embedded version of the relational DBMS Derby. The rationale behind the choice of DBMS is stated, followed by a brief description of the Derby DBMS. Class diagrams show the packages and classes, together with an explanation of the relationship between the implementation and the design of the KM and DDM. We have also given a record of the modifications to the Derby DBMS code needed to incorporate the DDM tables in the Derby data dictionary. The metadata management functionality offered to applications in the KM Interface is illustrated in sequence diagrams, followed by pseudocode giving a detailed description of the implementation of the KM and DDM.


Chapter 9 Evaluation In this thesis, solutions for supporting network wide information sharing in rescue and emergency situations through metadata and knowledge management have been presented. The focus of the work has been on the design of proposed solutions to a set of requirements. As there is no complete running system, only part of the design has been implemented (Chapter 8). The work is therefore evaluated by assessing how our design fulfils the requirements. As a basis for this evaluation, we first summarise the contributions and central concepts presented in the thesis, and draw lines between the designs presented in Chapters 4 and 5, the ontology based update in Chapter 6, and the examples and proof-of-concept implementation in Chapters 7 and 8. Then we go through the issues and requirements for the overall middleware and specifically for knowledge management, before evaluating how these are met in this work. Finally, we discuss how the proposals in this thesis contribute to the claims.

9.1 Review of Contributions In this section, we take one step back and look at what has been done in this thesis. We review central concepts from the design presented in Chapters 4, 5, and 6, and relate these to the example in Chapter 7 and the implementation in Chapter 8. The work in this thesis has focused on three aspects of sharing information in our application scenario: knowledge management, metadata management, and ontology based update.

Knowledge Management
In Chapter 4, we presented the KM and described its design. The KM is designed to support the requirements to knowledge and information management in our application scenario. It consists of a set of sub-components, each addressing an issue for knowledge management in this context. The sub-components are the DDM, SMOF, PCM, QM, and the XML-P. The first three of these are metadata handling components, targeted at handling the different kinds of metadata found to be central in our application scenario: structure and content description metadata, semantic metadata, and profile and context metadata. The QM and XML-P are tool components providing general services to applications and middleware components. In addition, the KM has a KM Engine for execution control and dispatching tasks to appropriate sub-components, and an interface, the KM Interface, offering services to the application layer (and to other middleware components in the Ad-Hoc InfoWare architecture). We have outlined the design of each


KM component together with requirements and resulting functionality for the KM. The functionality shown in Chapter 4 as offered through the KM Interface is only part of the functionality that the KM would actually offer to fully fulfil its requirements. The reason for this is that we have focused on services related to metadata management, i.e., the domain of the DDM, and on vocabulary mapping, as the main focus of this thesis has been on metadata management and sharing in this environment.

Metadata Management
In Chapter 5, we presented our three-layered approach to metadata management, and focused on our design for one of the KM components, the DDM, together with requirements and a description of functionality. We presented our approach from three perspectives: levels of information, realisation, and data modelling technology. The three levels of information reflect the hierarchical view of knowledge and, to some extent, also the three kinds of metadata central in our application scenario. As candidates for data modelling technologies, RDF/OWL and Topic Maps were discussed. The realisation aspect is represented by the design of the DDM; two of the levels of our approach are realised by the DDM two-layered data dictionary. We presented the design and purpose of the DDM and its data dictionaries, the LDD and SDDD, and illustrated this with use cases and examples of how semantic links are exchanged between nodes to share information about shared resources. The use of semantic links connecting the two lower layers, i.e., the information and semantic context layers, plays an important part in our approach. These links, together with the availability of the resources they point to, are extracted from the LDD and stored in the SDDD. The metadata descriptions of information resources registered in the LDD to be shared in the network use terms from an ontology or vocabulary, thus creating a connection to the third layer of our approach, the ontology layer.
Three kinds of updates relevant in our application scenario were described, together with a proposal for a solution to one of these, metadata exchange, i.e., exchanging the semantic links between nodes, which is explained thoroughly through examples. Our proposed solution to ontology based dynamic updates is presented in Chapter 6.

Ontology Based Updates
Ontology based dynamic updates use ontologies to achieve better utilisation of scarce resources when sending updates. Due to the characteristics of both rescue operations and Sparse MANETs, there may be frequent updates that need to be distributed to relevant personnel at run-time, and our solution proposes to use prioritised update messages to allow updates of higher importance to have a higher priority in the system. This is solved by using information from profiles and contexts defined in an ontology. The solution is also intended to support the rescue operation organisation by preserving the structure and flow of information inherent in the organisation of the rescue operation itself, expressing these in an ontology. To illustrate how ontologies can be utilised in this approach, we give an example of a profile based rescue ontology. The example ontology consists of defined classes for user profile, device profile, rescue scenario profile, and information profile, as well as definitions of common rescue operation roles. Dynamic context is also defined in Chapter 6. This ontology supports filtering in that it has information profiles connected to rescue operation roles. In this chapter, we also described how the knowledge base would be populated in the different rescue scenario phases and which KM components are involved in ontology based update, and outlined how it is solved in our architecture.
The example rescue ontology is used extensively in demonstrating areas of usage for the KM in a real rescue scenario in Chapter 7, where we also show how information from this ontology is used in ontology based


dynamic updates through example contents of update messages and knowledge base contents.

How it All Comes Together - Example
As realising all the KM components and the surrounding environment to fully prove the applicability of our concepts during a rescue operation would be out of scope for this thesis, we presented in Chapter 7 a large example illustrating how the KM would be used in a real scenario at run-time. Although theoretical, this constitutes a realisation of our approach and design, particularly demonstrating ontology based updates and how an ontology can be utilised for them. The example rescue scenario, a railway accident, is analysed into the six rescue scenario phases described in Chapter 2. Examples of different kinds of events are given for each phase. The rescue profile ontology from Chapter 6 is used extensively in these examples, illustrating how such an ontology can be used, the contents of the knowledge base in the different phases, and how the ontology based updates would take place.

The railway accident in the example scenario is a medium-size accident set in a remote mountain area, limited in affected area and in the number of people involved. All personnel are equipped with devices of varying size, participating in information sharing on the site. The structure and organisation of the operation is that of a Norwegian SAR operation, as described in Chapter 2. The main flows of communication are between team members from the same organisation, among members of task oriented ad-hoc teams, and between different levels in the rescue operation organisational hierarchy. Hindrances to the communication flow in this example scenario may be introduced by the accident being partly inside a tunnel, which may create network partitions by blocking wireless communication, and by extreme weather conditions possibly causing devices not to operate properly.

The phases used in analysing the railway scenario example are introduced in Chapter 2. These six phases cover the lifecycle of a rescue and emergency operation seen in the context of using Sparse MANETs. The relevant phases for illustrating uses of the KM are the a priori phase (Phase 1), giving examples of profiles created and distributed prior to the rescue operation; the briefing phase (Phase 2), with examples of the rescue scenario profile created for the accident; and the running phase (Phase 4), showing examples from different events taking place at run time. The functions of the DDM (registering metadata, search, and metadata exchange) are not in focus in Chapter 7, as these are explained thoroughly with examples in Chapter 5.

The most interesting phase for showing how the KM would be used is Phase 4, the running phase. Three groups of events typical for this phase were described: the arrival and departure of personnel and movement on site, updates reflecting changes in profiles, and the collection, exchange and distribution of information. Examples from each of these event groups were given. The example items include possible contents of update messages sent to the OSC on arrival of new personnel, on a change in rescue operation role, and for updates of dynamic context, supplemented with the corresponding changes in contents of database tables (in the knowledge base).

The examples best demonstrating the use of the profile ontology (Chapter 6), how it supports the rescue operation, and the population of the knowledge base are the examples from Phases 1 and 2. These examples mainly show the contents of the knowledge base kept by the OSC and the changes to this knowledge base after events, as well as examples of message content sent in update messages. Thus, the examples presented in Chapter 7 mainly support our approach to ontology based updates, presented in Chapter 6, in showing examples of profiles added during the rescue operation, and


contents of ontology based update messages with priority information from these profiles, using the profile rescue ontology.

Proof-of-Concept Implementation
In Chapter 8, we presented a proof-of-concept implementation, i.e., the parts of the KM necessary to show the most important aspects of our design and solution: the KM Engine, KM Interface, SMOF (minimal) and DDM (main part). The DDM data dictionaries are realised as database tables within the data dictionary of a database, using an embedded version of the Derby DBMS and incorporating the DDM tables into the system tables that constitute the data dictionary in Derby. The implementation is described through UML class and sequence diagrams, and pseudocode for relevant functionality. Modifications to the Derby code are described in Appendix A.

The implemented functionality focused on our approach for metadata management realised in the DDM, i.e., registering information resources in the LDD for sharing, populating the SDDD with metadata extracts from the LDD (local update), metadata exchange, search of SDDD, and retrieving metadata descriptions from the LDD. In addition, a simple realisation of vocabulary mapping in KM (SMOF and DDM) using a synonym file has been implemented.

9.2 Fulfilment of Requirements In this section we first summarise the main requirements to middleware for information sharing in rescue scenarios using Sparse MANETs, and look at how the work in this thesis contributes solutions to these requirements. Then we look specifically at the requirements to knowledge management in this context, and how the KM fulfils these. Finally, we show how the DDM fulfils its requirements.

Overall Requirements
The overall requirements to the middleware for information sharing, presented in Chapter 2, include support for intra- and inter-organisational information flow and knowledge exchange; the means to announce and discover information sources; contextual support allowing better adaptation and fine tuning of applications; profiling and personalisation for filtering and presenting information tailored to users' and devices' capabilities and needs; support for organisational structure and creation of ad-hoc groups; dynamic security; available, reliable and efficient communication; support for resource sharing and discovery; and graceful degradation. The requirements directly relating to the work in this thesis and the KM are intra- and inter-organisational information flow, context management, profiling and personalisation, and group- and organisational support. Below we look at how the KM contributes to each of these requirements.

Intra- and inter-organisational information flow: The KM supports this through metadata management (DDM), sharing metadata about available information, and through supporting a semantic metadata and ontology framework (SMOF). Vocabulary mapping supports multiple query languages and subscription languages.

Context management: Through having a unit for context and profile management, the PCM, the KM offers some possibility for managing contexts and profiles.
Context management can be solved using ontologies, e.g., as in the example rescue ontology, where simple context management is achieved for the current operation through keeping and constantly updating the profiles and contexts that are active in the current operation (see Chapters 6 and 7). Context management has not been the focus of this thesis.

Profiling and personalisation: This is supported through using profiles and contexts for users and devices, handled by the PCM. Profiles, as used in the example rescue ontology (Chapter 6), can support filtering in that a rescue operation role has an information profile stating which kinds of information items are relevant for that role.

Group and organisational support: The KM contributes to this requirement through supporting the use and sharing of ontologies (through SMOF) in combination with context management, profiling and personalisation (PCM). Ontology based update (Chapter 6) supports the rescue operation organisation structure as well as the organisation of teams/groups by allowing update messages to be prioritised according to the rescue operation roles of sender and receiver and the type of information to be updated. Through the example profile ontology, with extensive use of profiles as well as dynamic context, the requirements for context management and for profiling and personalisation are met. This also supports the information flow within and across teams and organisations.

Knowledge Manager Requirements

The main requirements for the KM, briefly summarised from Chapter 4, include metadata management, support for use and sharing of ontologies, enabling query and retrieval of relevant information items, keeping track of the availability of shared information items, personalisation and filtering, support for information exchange in a standardised format, and scalable configuration.

The DDM, responsible for metadata management and availability tracking, keeps a two-level data dictionary for metadata of information resources registered for sharing in the network. The DDM requirements are a specification of the requirements for metadata management, and in brief include managing the storage, organisation and update of the data dictionaries; keeping track of the availability of registered information resources; and supporting information requests to the data dictionaries. As the work in this thesis has to a great extent focused on the DDM, an evaluation of how this component fulfils its requirements is found later in this section.

Another important requirement is that the KM has to handle dynamic updates. By this we mean handling updates in information sources at run-time, dynamically as they occur. These updates should be disseminated to relevant nodes as appropriate. This is solved through our approach to ontology based updates, which uses a combination of profiles and context descriptions, together with priorities based on rescue operation roles and the type of information to be updated, defined in an ontology. Ontology based updates involve sending prioritised update messages and knowledge base updates, and are handled by several KM sub-components in cooperation. In addition, changes in metadata are handled dynamically as they occur through metadata exchange, which is the responsibility of the DDM.

The requirement for using and sharing ontologies is addressed by SMOF, whose requirements were presented in Chapter 4. The main requirements for SMOF include supporting information integration and simple reasoning, handling run-time changes in semantic metadata, and vocabulary mapping. SMOF has to be realised by an existing ontology framework, and the OWL framework is a promising candidate for our middleware component as it is targeted at machines (rather than humans) and based on formal logic, which is important in relation to reasoning. The DDM cooperates with SMOF by sharing metadata for shared ontologies/vocabularies and by localising the nodes in the network that keep these. The latter is used in vocabulary mapping in cases of multiple subscription and query languages, as described in Chapter 4.

The KM handles the requirements regarding query and retrieval of information items through the QM, as described in Chapter 4. In cooperation with the DDM, SMOF and PCM, the QM supports forwarding queries to semantically related nodes, mapping between different query languages, and filtering and ranking according to profiles and context. Thus, the QM also contributes to the KM's support for inter- and intra-organisational information flow. It cooperates with the DDM in the retrieval of information items.

The requirement for personalisation and filtering is handled in the KM by the PCM in cooperation with SMOF. Through using profiles and contexts, the PCM can support personalisation, ranking and filtering of information. Filtering is shown in relation to the profile based rescue ontology used in ontology based update in Chapter 6, where profiles for users and devices are used extensively in the organisation of the operation. Update messages are prioritised and targeted according to information stated in user profiles, e.g., organisation affiliation, current team, or user role in the rescue operation. For instance, an update message can be targeted to all users having the role of officer in charge, to all members of a certain team, or to all users from a certain organisation. The information is requested from a knowledge base about the current operation kept by the current leader of the rescue operation (the OSC). Filtering is supported through the use of information profiles connected to rescue operation roles, stating relevant types of information.

The requirement for exchange in a standardised format is addressed by choosing XML, which has become a de facto standard for information exchange, as the exchange standard, and by having a dedicated sub-component in the KM, the XML-P, offering related services.

The requirement of scalable configuration is solved in the following way. All nodes have a KM with a KM Engine and a full DDM, so they can register information items and participate in exchanging and querying metadata about information shared in the network. Depending on the capabilities of the host node/device, the rest of the KM components may offer only limited functionality/services.

Data Dictionary Manager Requirements

The requirements for the DDM, presented in Chapter 5, cover the KM requirements for metadata management and for keeping track of availability. The DDM realises two of the three levels in our approach to metadata management through the two-layered data dictionary, consisting of a local data dictionary (LDD), where metadata of all shared information resources are registered, and a semantic linked distributed data dictionary (SDDD), containing minimal metadata extracts from the LDDs: concept terms, metadata resource ID, and availability. We call a pair {concept term, metadata resource ID} a semantic link or SDDD link. The extracts are exchanged between nodes to share information about what is registered for sharing. Thus, the SDDD can contain metadata extracts from several nodes. The metadata description in the LDD contains concept terms from a controlled vocabulary or an ontology, which is itself registered in the LDD as a resource for sharing. The use of concept terms together with the exchange of metadata extracts plays a major part in creating and distributing the semantic links, i.e., in how sharing information about resources available in the network is achieved in our approach to metadata management. In the following, we show how each DDM requirement is solved in the DDM.
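The extraction of semantic links from a full metadata description can be sketched as follows. This is a hedged illustration of the {concept term, metadata resource ID} pairs described above; the class and field names are our own, not those of the thesis implementation.

```java
import java.util.*;

// Hypothetical sketch of the SDDD extract: from a full LDD metadata
// description, only {concept term, metadata resource ID} pairs (semantic
// links) plus availability are kept for exchange between nodes.
public class SddLink {
    final String conceptTerm;
    final String metadataResourceId;
    boolean available;

    SddLink(String term, String id, boolean available) {
        this.conceptTerm = term;
        this.metadataResourceId = id;
        this.available = available;
    }

    // Extract one semantic link per concept term in the metadata description.
    static List<SddLink> extract(String metadataResourceId, Set<String> conceptTerms) {
        List<SddLink> links = new ArrayList<>();
        for (String term : conceptTerms) {
            // Local items default to available, per the text above.
            links.add(new SddLink(term, metadataResourceId, true));
        }
        return links;
    }

    public static void main(String[] args) {
        Set<String> terms = new HashSet<>(Arrays.asList("triage", "medical-report"));
        List<SddLink> links = SddLink.extract("nodeA:res42", terms);
        System.out.println(links.size() + " links created"); // 2 links created
    }
}
```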

The requirement concerning management of the data dictionaries and their contents is specified as a set of more specific requirements: register/update LDD contents; update SDDD contents; enable metadata extraction from the LDD; manage metadata exchange; create or restore the data dictionaries at start-up; and save the data dictionaries at shut down. In the following, we look at how each of these is solved in the DDM, specifically in the proof-of-concept implementation presented in Chapter 8.

The requirement to register/update LDD contents is fulfilled in the DDM through the functions addInfoItem() and updateInfoItem(). In the example implementation, the DDM registers a new metadata description by calling a system procedure in Derby that adds the new registration to the LDD table. For updates of existing registrations, the DDM retrieves the item, makes the changes, and then runs an update query on the item in the LDD. Registration and update of metadata in the LDD are offered through the KM Interface by the functions registerItem() and updateItemReg() respectively.
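The register and update flow can be sketched with an in-memory stand-in for the LDD table. This is not the Derby-backed thesis code: the real implementation calls a Derby system procedure and an SQL update, whereas here a map plays the role of the table.

```java
import java.util.*;

// Minimal stand-in for registerItem()/updateItemReg(): a map replaces the
// LDD table; the thesis implementation instead uses Derby system procedures
// and SQL. Names other than the two interface functions are hypothetical.
public class LddRegistry {
    // metadata resource ID -> full metadata description (field -> value)
    final Map<String, Map<String, String>> ldd = new HashMap<>();

    void registerItem(String resourceId, Map<String, String> description) {
        ldd.put(resourceId, new HashMap<>(description));
    }

    // Retrieve, change, and store back, mirroring the retrieve/modify/update flow.
    void updateItemReg(String resourceId, String field, String newValue) {
        Map<String, String> description = ldd.get(resourceId);
        if (description != null) description.put(field, newValue);
    }
}
```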

Update of SDDD contents is performed when there are changes in the LDD (local update) and as part of the merge after a metadata exchange, and involves additions to and updates of semantic links and availability. In the implementation, SDDD local update is initiated by the DDM at LDD registration and update by calling localUpdateSDDD(), which adds or updates link and availability information in the SDDD tables. Update of the SDDD as part of a merge is covered under metadata exchange below.

Metadata exchange is performed in three steps: (1) exchange SDDD contents with the DDM on another node, (2) merge the received contents into the SDDD and update the list of previous exchanges, and (3) update availability information if applicable. In the implementation, the exchange of SDDD contents is performed in the function sendUpdates(), where the DDM retrieves all changes since the last exchange with the node it is currently exchanging with, and sends these to that node. In the function receiveAndMerge(), received contents are merged into the SDDD by either adding new links or updating availability and timestamp for existing ones. This function also initiates the update of the table of previous exchanges.
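The three-step exchange can be sketched as follows. This is our own simplification under assumed data structures (a flat link map with timestamps), not the thesis code; only the function names sendUpdates() and receiveAndMerge() come from the text.

```java
import java.util.*;

// Illustrative sketch of metadata exchange: a node sends the SDDD links
// changed since its last exchange with the peer, and the peer merges them,
// adding new links or refreshing availability/timestamps for known ones.
public class SdddExchange {
    static class Entry {
        long timestamp; boolean available;
        Entry(long t, boolean a) { timestamp = t; available = a; }
    }

    // key = conceptTerm + "|" + metadataResourceId
    final Map<String, Entry> sddd = new HashMap<>();
    final Map<String, Long> lastExchange = new HashMap<>(); // peer node -> time

    void addLocal(String key, long now) { sddd.put(key, new Entry(now, true)); }

    // Step 1: collect all links changed since the last exchange with this peer.
    Map<String, Entry> sendUpdates(String peer, long now) {
        long since = lastExchange.getOrDefault(peer, 0L);
        Map<String, Entry> changed = new HashMap<>();
        for (Map.Entry<String, Entry> e : sddd.entrySet())
            if (e.getValue().timestamp > since) changed.put(e.getKey(), e.getValue());
        lastExchange.put(peer, now); // remember this exchange
        return changed;
    }

    // Steps 2 and 3: merge received links; new ones are added, known ones get
    // their availability and timestamp refreshed if the received copy is newer.
    void receiveAndMerge(String peer, Map<String, Entry> received, long now) {
        for (Map.Entry<String, Entry> e : received.entrySet()) {
            Entry mine = sddd.get(e.getKey());
            if (mine == null) {
                sddd.put(e.getKey(), new Entry(e.getValue().timestamp, e.getValue().available));
            } else if (e.getValue().timestamp > mine.timestamp) {
                mine.timestamp = e.getValue().timestamp;
                mine.available = e.getValue().available;
            }
        }
        lastExchange.put(peer, now);
    }
}
```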

At start-up/bootstrapping, the DDM is required to create or restore the data dictionaries. This involves adding initial elements to the LDD and populating the SDDD with extracts from the LDD. At creation time, the LDD is populated with initial information, i.e., registering metadata for information that should always be shared, e.g., device and user profiles, ontologies in use, and default subscriptions. In the example realisation presented in Chapter 8, this information is stored in a file and loaded when booting the DBMS with the data dictionaries. Options can be given at start-up to state whether the LDD and/or SDDD should be restored. If it is a restart, both LDD and SDDD are restored. The data dictionaries are always kept at shut down, and are by default restored at start-up. If they are not to be restored, the data dictionaries are discarded at start-up. If the SDDD is not restored, the list of previous exchanges is discarded as well. We have not looked into solutions for how the KM can decide whether or not to restore the data dictionaries at DDM start-up. The kind of shut down and the context/situation (e.g., whether the shut down was planned/controlled/normal or sudden/a failure, whether it happened during a rescue operation, and in which phase of the rescue operation, etc.) will influence such a decision. Populating the SDDD with metadata extracts at creation time involves extracting relevant metadata from the LDD; all metadata resource IDs together with all concept terms in the metadata descriptions are extracted from the LDD, and a link is created for each pair of concept term and metadata resource ID. These are then added to the SDDD tables together with the availability of the resource (by default true for local items). In the proof-of-concept implementation presented in Chapter 8, the data dictionaries are stored in a database, and the content is not deleted at shut down, which meets the requirement to save the data dictionaries at shut down.
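The start-up behaviour described above can be sketched as follows. This is a hedged sketch under assumed names: the flags, the always-shared item names, and the repopulation step are our own illustration of the stated rules (discarding the SDDD also discards the previous-exchange list; a fresh SDDD is repopulated from LDD extracts).

```java
import java.util.*;

// Minimal sketch of DDM start-up options: both dictionaries are kept at shut
// down; flags decide whether each is restored, and discarding the SDDD also
// discards the list of previous exchanges. Names are illustrative only.
public class DdmStartup {
    final List<String> ldd = new ArrayList<>();
    final List<String> sddd = new ArrayList<>();
    final List<String> previousExchanges = new ArrayList<>();

    void startUp(boolean restoreLdd, boolean restoreSddd) {
        if (!restoreLdd) {
            ldd.clear();
            // Re-register metadata for always-shared items (profiles, ontologies, defaults).
            ldd.add("device-profile");
            ldd.add("user-profile");
            ldd.add("default-ontology");
        }
        if (!restoreSddd) {
            sddd.clear();
            previousExchanges.clear(); // exchange history is only valid with the old SDDD
            sddd.addAll(ldd);          // repopulate from LDD extracts
        }
    }
}
```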


Tracking the availability of the shared information is solved by keeping availability for the resource, related to a metadata resource ID, in the SDDD. For a local item, the availability is added when a link for the metadata resource ID is created. Availability is updated on discovery of an item being unavailable, and at metadata exchange. All nodes that have a copy of an information item have the item registered in their LDDs, so an SDDD that (through metadata exchange) holds links to several metadata resource IDs on different nodes may still provide a link to an available copy of the resource even if one copy becomes unavailable.
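A hedged sketch of this availability tracking: when one copy of a resource is found unavailable, only its link is marked, so a search can still return links to copies registered on other nodes. Names and IDs are illustrative, not from the thesis code.

```java
import java.util.*;

// Illustrative availability tracking per metadata resource ID: marking one
// copy unavailable leaves links to copies on other nodes intact.
public class AvailabilityTracker {
    // metadata resource ID (including the nodeID part) -> believed available?
    final Map<String, Boolean> availability = new HashMap<>();

    void addLink(String resourceId) { availability.put(resourceId, true); }

    // Called on discovery that an item could not be retrieved.
    void markUnavailable(String resourceId) { availability.put(resourceId, false); }

    // Return only the candidate IDs still believed available.
    List<String> availableCopies(Collection<String> candidateIds) {
        List<String> out = new ArrayList<>();
        for (String id : candidateIds)
            if (availability.getOrDefault(id, false)) out.add(id);
        return out;
    }
}
```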

The requirement for supporting requests to the data dictionaries can be specified as finding sources with related information, i.e., querying the SDDD for metadata resources related (linked) to a set of search terms/concepts, and requesting information about registered items, i.e., looking up an item in the LDD. This DDM functionality is offered outside of the KM through the KM Interface. Finding sources with related information, i.e., searching the SDDD, is fulfilled in the DDM by the function findResources(), which takes a set of terms and retrieves all links in the SDDD related to these. This functionality is offered outside of the KM in the KM Interface functions findItem() and findSource(). Although the search will retrieve links to other nodes, it does not include searching remote SDDDs, which means that related information that exists in the network but is unknown to this node may not be discovered/returned. This can happen if there have been updates to the SDDD on another node that have not yet had time to propagate to this node through the metadata exchanges. As stated in Chapter 5, total consistency between nodes is not feasible, as it would be very resource demanding in our application scenario. Due to network partitions, total consistency may in fact be partly impossible, independently of the access to resources. Requesting information about registered items is fulfilled in the DDM by the function getInfoItem(), which retrieves the metadata description for a given local metadata resource ID. This functionality is offered through the KM Interface as getRegItem().
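The purely local SDDD search can be sketched as follows. Only the name findResources() comes from the text; the data structure is an assumed simplification. Note that, as discussed above, links not yet propagated to this node are simply absent and cannot be found.

```java
import java.util.*;

// Sketch of a local SDDD search: return every stored link whose concept term
// matches one of the search terms. Remote SDDDs are never consulted.
public class SdddSearch {
    // concept term -> metadata resource IDs linked to it
    final Map<String, Set<String>> links = new HashMap<>();

    void addLink(String term, String resourceId) {
        links.computeIfAbsent(term, k -> new HashSet<>()).add(resourceId);
    }

    // Union of all resource IDs linked to any of the search terms.
    Set<String> findResources(Set<String> searchTerms) {
        Set<String> result = new HashSet<>();
        for (String term : searchTerms)
            result.addAll(links.getOrDefault(term, Collections.emptySet()));
        return result;
    }
}
```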

9.3 Critical Evaluation of Claims

The main claim and five sub-claims made in the introduction of this thesis are reviewed in this section. We discuss how each of the claims has been addressed with respect to the work presented in the thesis. The main claim of this thesis is:

Information overload has to be avoided through establishing who needs what information.

The main claim leads to a set of related sub-claims:

Claim 1: Information overload can be handled through the use of filtering and personalisation.

Claim 2: Vocabulary sharing or mapping has to be enabled to allow cross organisational information sharing.

Claim 3: Efficient metadata management is essential in a solution for information sharing in resource limited environments.

Claim 4: Ontologies can be utilised in dynamic update to accommodate both the organisation of the rescue operation and the dissemination of updates.


Claim 5: Sharing metadata about what information is available and where it can be found is essential for efficient knowledge and information sharing.

Avoiding information overload is a central issue in relation to sharing information in our application scenario. The characteristics of rescue and emergency operations and of Sparse MANETs demand efficient use of available resources and allowing personnel to focus on the task at hand. Sharing available knowledge and information among personnel in a rescue operation, both across and within organisations, can be very useful in our application scenario, but if the system or its users are overloaded with information, there will be a strain on both resources and personnel. In addition, communication can be very expensive resource-wise, e.g., in battery consumption. The information shared has to be filtered, so that personnel only need to know about what is relevant to their work, and so that resources are not wasted by communicating more information than needed between nodes. To be able to perform filtering, we need to establish who needs what information.

We use profiles to describe what kind of information is relevant in relation to the situation and role in a rescue operation. This has been demonstrated through ontology based dynamic updates (Chapter 6) and the example profile ontology, and an example of its use is given in Chapter 7. The information profiles show typical types of information items and their priorities for different kinds of rescue operations and different rescue operation roles. Through the combination of user profile, rescue operation role and information profile, we can link users to relevant types of information items. The context of a rescue operation is monitored by keeping all active profiles and dynamic contexts in a knowledge base stored on the device used by the rescue leader. Together, the use of profiles in the profile ontology and the use of context enable filtering, thus supporting a solution for our claim that information overload can be handled through the use of filtering and personalisation.
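The role-to-information-profile link can be sketched as follows. Roles, information types and names here are invented examples, not taken from the example rescue ontology; the sketch only illustrates filtering items by the types an information profile declares relevant for a role.

```java
import java.util.*;

// Illustrative profile-based filtering: an information profile maps a rescue
// operation role to the information types relevant for it, so items of other
// types can be filtered out before delivery.
public class InformationProfileFilter {
    // role -> set of relevant information types (the role's information profile)
    final Map<String, Set<String>> relevantTypesByRole = new HashMap<>();

    void setProfile(String role, Set<String> relevantTypes) {
        relevantTypesByRole.put(role, relevantTypes);
    }

    // Keep only items whose type is relevant for the user's role.
    List<String> filter(String role, Map<String, String> itemTypeByName) {
        Set<String> relevant = relevantTypesByRole.getOrDefault(role, Collections.emptySet());
        List<String> kept = new ArrayList<>();
        for (Map.Entry<String, String> item : itemTypeByName.entrySet())
            if (relevant.contains(item.getValue())) kept.add(item.getKey());
        return kept;
    }
}
```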

One of the characteristics of rescue operations is that several participating organisations cooperate. The organisations may use different vocabularies and ontologies, as well as different data models and standards. In addition, different query languages may be used. To be able to share information across organisational borders, mapping between different vocabularies is a necessity, as stated in the second sub-claim. We have shown in Chapter 4 how this can be achieved with our KM in a very simple example of synonym lookup – finding synonyms across vocabularies (which may be, e.g., query languages or subscription languages).

The claim that ontologies can be utilised in dynamic update, both supporting rescue operation organisation and in distributing update messages, is addressed in Chapter 6 in our approach to ontology based updates, and indirectly in Chapter 4 through the SMOF and the profile and context management components of the KM. In our approach to ontology based update, updates are given priorities based on information from a knowledge base organised according to a profile based ontology. The use of prioritised updates supports the organisation of the rescue operation and allows more important updates to be given a higher priority in distribution. An example of such an ontology is given in the rescue ontology. We indicate how ontology based updates can be handled in our architecture.
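Prioritised update dissemination can be sketched as follows. This is a minimal illustration under assumed names: the priority table standing in for the ontology lookup, the roles, and the priority values are all invented; only the idea of prioritising messages by role and information type comes from the text.

```java
import java.util.*;

// Minimal sketch of prioritised updates: each message gets a priority derived
// from the receiver's rescue operation role and the information type, and
// higher-priority messages are dequeued (sent) first.
public class UpdateQueue {
    static class Update {
        final String payload;
        final int priority;
        Update(String payload, int priority) { this.payload = payload; this.priority = priority; }
    }

    // Lower number = higher priority; polled first.
    final PriorityQueue<Update> queue =
        new PriorityQueue<>(Comparator.comparingInt((Update u) -> u.priority));

    // (role, infoType) -> priority; in the thesis this comes from the
    // profile-based ontology, here it is a plain lookup table.
    final Map<String, Integer> priorityTable = new HashMap<>();

    void enqueue(String payload, String role, String infoType) {
        int p = priorityTable.getOrDefault(role + "|" + infoType, 9); // default: lowest
        queue.add(new Update(payload, p));
    }

    String next() {
        Update u = queue.poll();
        return u == null ? null : u.payload;
    }
}
```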

Another highly relevant aspect of filtering to avoid information overload when sharing information in this environment is reducing what is exchanged (communicated) about available information in the network. Our approach to metadata management functions as a form of filtering in that it reduces the amount of metadata that needs to be communicated to share information. This is shown in Chapters 4 and 5, and discussed in relation to the claim for efficient metadata management below.


The claim that efficient metadata management is essential in sharing information in resource weak environments is addressed through our three-layered approach to metadata management and our solution for dynamic updates, realised through the DDM, as presented in Chapters 4 and 5. We use the term 'efficient' to mean an efficient use of available resources, as described in the following. To be able to share knowledge among devices/nodes, some information (metadata) about what shareable information is available in the network will have to be propagated to other nodes, so it can be discovered and retrieved by interested parties. In a resource weak environment, it is an objective to manage this metadata as efficiently as possible, and to avoid using more resources than necessary in propagating this information. Our approach to metadata management allows sharing minimal extracts of the full metadata descriptions of available information. Nodes share the extracts, and a local query to the data dictionary retrieves the nodeIDs of nodes that have potentially relevant information. The full metadata description can then be queried or retrieved (directly) from the node where it is registered, and a decision can then be made about which information resource to retrieve. Bearing in mind that communication can be costly resource-wise, reducing the amount of metadata communicated for the purpose of sharing information results in more efficient metadata management.

To be able to efficiently share information in the network, the nodes (users) need to know what information is available and where it is found. This corresponds with findings within the field of knowledge management, where sharing metadata about "who knows what" within an organisation has been found to be as effective as sharing the knowledge itself [Alavi2001]. The claim that distributing/sharing metadata about the availability and location of what information can be shared is essential for efficient knowledge and information sharing is mainly addressed in Chapter 5, through our approach to metadata exchange, i.e., the exchange of semantic links and resource availability among the nodes in the network.

9.4 Summary

In this chapter, the work presented in this thesis has been summarised and evaluated by showing how the KM and its components meet their requirements, as well as how they contribute to the overall requirements for middleware supporting information sharing in Sparse MANETs and to the Ad-Hoc InfoWare middleware. We have also reviewed the claims made at the start of this thesis and evaluated how these have been addressed.


Chapter 10 Conclusion

In this chapter we briefly summarise our contributions in this thesis and give a critical assessment of the work before we look at possible directions for future work.

10.1 Summary of Contributions

In this thesis, we have addressed issues of metadata and knowledge management for information sharing in rescue and emergency situations using Sparse MANETs as infrastructure. An example scenario and an analysis of the challenges we face have been presented, together with a set of resulting overall requirements for the Ad-Hoc InfoWare middleware architecture. We have also presented relevant background literature and related work.

To support the requirements for knowledge and information management, we have designed the KM, which consists of two sets of sub-components addressing the main issues that have to be handled. The first set, consisting of the DDM, the SMOF and the PCM, handles aspects of the different types of metadata in our scenario: structure and content description metadata, semantic metadata, and profile and context metadata. The second set consists of the QM and the XML-P, which are tool components offering general services to applications and other middleware components. The KM offers an interface to the application layer and to other middleware components, and has an engine component for execution control.

Our approach to metadata management consists of three layers of increasing semantics: the information layer, the context layer, and the ontology layer. We have presented three perspectives on our approach: 'information levels', reflecting the view of knowledge we have adopted in this work; 'realisation', which is represented by the DDM and its design; and 'data modelling technology', exemplified by RDF/OWL and Topic Maps. The detailed design of the DDM, which forms the basis of our approach for sharing information in the network, is presented, and its function is illustrated through use cases and examples. Semantic links, managed by the DDM, connect the two lower layers in our approach, and these are propagated through the network to make information about shared resources available. The links are created from extracts of the metadata descriptions registered for shared items. An important aspect of our approach is that the terms used in metadata descriptions are taken from vocabularies/ontologies, which connects the information item and its context to the ontology layer. The propagation of metadata extracts (links) is achieved through metadata exchange between neighbouring nodes.


In our solution for ontology based dynamic updates, we propose to use prioritised update messages to achieve improved utilisation of scarce resources when sending updates. To do this, we employ information from profiles and contexts defined in an ontology. Example ontologies for this purpose have been given. By preserving the structure and flow of information inherent in the organisation of the rescue operation, such ontologies can also support the rescue operation itself. By connecting information profiles to rescue operation roles, the example ontology supports filtering. We have described how the knowledge base can be populated during various phases of a rescue scenario, and how the KM sub-components are involved in the support for ontology based update.

A full realisation of the KM components and the surrounding environment, demonstrating their usefulness in a real scenario at run-time, is out of scope for this thesis. Instead, we have illustrated how the KM will be used through a large example, where we particularly focus on ontology based updates, showing how an ontology can be utilised. For this we have used the example rescue profile ontology given in this thesis. The example rescue scenario is based on the structure and organisation of Norwegian SAR operations and is loosely modelled on reports from real accidents. We divide the example scenario into six rescue scenario phases and show examples of various events in each phase.

An example implementation of the KM has been given to show the most important aspects of our design and solution. The implementation has particularly focused on our approach for metadata management, including registration of shared information items in the data dictionaries, metadata exchange between nodes, and querying the data dictionaries for metadata information. In addition, simple vocabulary mapping has been implemented. The implementation design has been presented as UML diagrams and as pseudocode for relevant functionality. Using an embedded version of the Derby DBMS, the DDM data dictionaries are realised as database tables.

10.2 Critical Assessment

It is clear that the KM (all components) is too wide a scope for one thesis alone. In retrospect, it can be argued that the focus of the thesis could have been narrowed at an earlier point in time and directed at a single KM component rather than the whole KM. Focusing on the DDM is appropriate, as it is in many ways the backbone of sharing information items through its propagation of metadata extracts. But before we could know which component to focus on and go into detail about, we needed a complete overview of the KM, its structure and function, as this would be the context for the work in this thesis. This also influenced the coverage of related work, as we had to look into background literature from a wide area.

Work on the different parts of the middleware architecture in the Ad-Hoc InfoWare project has been carried out in parallel in a few PhD theses, so a complete middleware architecture has not been available. Another issue is that we might have chosen a different example implementation for the proof of concept. We have implemented parts of the solution for the KM, but as we did not have access to a complete and integrated system, we could not do real measurements. To test the DDM, a simpler solution than a DBMS could have been used, as (full) transaction management may not be necessary to test the DDM. But the possibility of using SQL offered by the DBMS was a plus and was used extensively in the example implementation. Alternatively, a more lightweight DBMS for mobile devices would have been advantageous, but none was available at the time the implementation started.


The example implementation was run on a stationary computer. A more realistic example implementation would have been on small mobile devices, particularly for metadata exchange. This would more readily have allowed testing of resource and performance efficiency. Only functionality testing was performed on the example implementation, i.e., checking that the implemented functionality behaved as intended, together with simple tests of metadata exchange and vocabulary mapping. We have no measurements of actual performance efficiency, nor any comparisons with other solutions to see how (or if) this solution is (more) efficient resource-wise or performance-wise. It would have been advantageous to measure and compare actual resource usage on small mobile devices, as well as metadata dissemination (metadata exchange), e.g., in different contexts. But as we have not had access to a complete system, running such tests has not been possible.

A lot of time was spent digging into the Derby code, which was necessary for adding the DDM tables to the data dictionary (which in Derby is part of the system tables), and in trying to separate and remove transaction management from the rest of the DBMS. Due to time limitations, we had to give up on removing transaction management. For the purpose of demonstrating and testing the DDM design, e.g., registering items and metadata exchange, the DDM tables could well have been stored as ordinary tables in the database. In addition, more DDM functionality could have been implemented, so that we could have tested the solution for ontology based dynamic update.

10.3 Future Work

We have chosen to separate future work into two parts: 'short term' future work, i.e., what we would have liked to do given more time and/or a wider scope for the thesis, and 'long term' future work, i.e., future directions or larger themes that may be suitable for further work in this area.

Short Term Future Work

We would have liked to implement the DDM using a more lightweight DBMS on mobile devices to test metadata exchange in a genuine mobile environment. Exploring alternative implementation solutions (not a DBMS) might also have been interesting, to see if a simpler solution could have been sufficient for a very lightweight prototype. A major point in the approach of the DDM is to reduce resource usage through limiting the number of messages needed to find and retrieve semantically related information items, by first finding nodes with related information and then sending further queries only to these nodes. Measurements of metadata dissemination (metadata exchange), for instance in different contexts, would be interesting in relation to this, as would measurements of actual resource usage to see if the improved efficiency is significant.

We would also have liked to explore different structures for LDD items. It may moreover be advantageous to have different configurations of the LDD, so that devices with more capabilities can hold more extensive metadata descriptions. Different configurations of the SDDD may also be advantageous, as discussed in the following.

The DDM has been designed to be a lightweight component of the KM that should be able to run on all devices. The SDDD, which stores semantic links and availability information either extracted from the LDD or received from neighbouring nodes, performs an important function in sharing knowledge about what is available for sharing in the network. The information shared (and contained) in the current version of the SDDD is, however, only the most basic and necessary: concept terms extracted from metadata descriptions, availability, and the nodeID or metadata registration ID where the information is registered. Searching the SDDD for relevant information is limited to searches on concept terms. For non-local items, once the nodeID of a node with relevant information has been retrieved from the SDDD, a more advanced query can be sent to that particular node (or set of nodes).

To accommodate different resource limitations on devices, as well as to improve search facilities by allowing more refined searches, an extended version of the SDDD may be advantageous. One possible way to extend the SDDD is to add more semantic context information to its content, allowing a more refined search. If ontologies, or thesauri with (typed) relations, are used as the source of metadata terms, rather than a plain list of terms (as in a fairly simple vocabulary), relations between concepts can be included in the SDDD. This makes it possible to find semantically related concepts through a search in the SDDD, and may also make it possible to restrict a search to concept terms from a particular domain.

Another possible extension to the SDDD is to use typed links indicating what type of information item the pointed-to metadata registration refers to. This is possible if information item types are specified in an ontology, and would allow restricting a search to include, for instance, only vocabulary/ontology/semantic metadata items, profile or context related information, or domain related information (e.g., medical data). If an application is looking for information of a specific kind, it can restrict the search to links that point to metadata registrations for this type of information item, e.g., profile information. These extensions to the SDDD would allow refining and narrowing the scope of a search, and in this way further improve the efficient use of available resources. Device profiles, indicating device capabilities, may be used in deciding which configuration of the SDDD is the most appropriate. User profiles may also be used for this (in rescue and emergency operations, the user's role in the operation may be relevant).
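One possible shape for such a typed SDDD entry is sketched below: each entry carries a concept term, a typed link, availability, and the node where the metadata registration lives, and searches can then be restricted by item type. The layout, type names and methods are assumptions for illustration, not the implemented SDDD schema.

```java
import java.util.*;

public class TypedSddd {
    enum ItemType { VOCABULARY, PROFILE, CONTEXT, DOMAIN_DATA }

    // One extended SDDD entry: concept term, typed link, availability,
    // and the node where the metadata registration lives.
    record Entry(String conceptTerm, ItemType type, boolean available, String nodeId) {}

    private final List<Entry> entries = new ArrayList<>();

    public void add(Entry e) { entries.add(e); }

    // Search restricted by both concept term and item type.
    public List<String> findNodes(String term, ItemType wanted) {
        List<String> nodes = new ArrayList<>();
        for (Entry e : entries) {
            if (e.available() && e.conceptTerm().equals(term) && e.type() == wanted) {
                nodes.add(e.nodeId());
            }
        }
        return nodes;
    }
}
```

A device profile could then select between this extended layout and the basic one, trading richer search against the extra storage the type field costs.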

The vocabulary mapping used in the example implementation of the KM is very simple: it looks up synonyms in a thesaurus that exists in the form of a synonym file. This means that vocabulary mapping depends on the availability of the correct synonym file, and to ensure that the mapping is consistent across nodes, all nodes offering this service must have the same synonym file. More advanced methods for vocabulary mapping could be explored and implemented, e.g., with typed relations and/or utilising domain ontologies.
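A minimal sketch of this kind of synonym-file lookup is shown below. The file format (one comma-separated line per synonym group, first term canonical) and all names are assumptions for illustration; the file itself is simulated as an in-memory string.

```java
import java.util.*;

public class SynonymMapper {
    private final Map<String, String> canonical = new HashMap<>();

    // Each non-empty line: canonicalTerm,synonym1,synonym2,...
    public SynonymMapper(String synonymFileContents) {
        for (String line : synonymFileContents.split("\n")) {
            if (line.isBlank()) continue;
            String[] terms = line.trim().split(",");
            for (String t : terms) {
                canonical.put(t.trim().toLowerCase(), terms[0].trim().toLowerCase());
            }
        }
    }

    // Map any term to its canonical form; unknown terms map to themselves.
    public String map(String term) {
        return canonical.getOrDefault(term.toLowerCase(), term.toLowerCase());
    }
}
```

The consistency requirement noted above is visible here: two nodes agree on `map(...)` only if they were built from identical file contents.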

In this thesis, ontology based update and the use of a knowledge base are partly realised through examples “on paper” (Chapter 7). The related functionality has not been implemented in the example implementation, which mainly focuses on metadata management. It would be advantageous to implement this functionality, which includes procedures for retrieving information from the knowledge base (getKBInfo()) and sending prioritised update messages, as well as functionality for storing and updating the knowledge base. Use cases for this functionality are presented in Chapter 6, Use Cases (8) and (9). Other related functionality that we would have liked to implement is the management of profiles and (dynamic) contexts at run-time, as described in the example in Chapter 7. In addition, algorithms for deciding update message priority, based on rules and policies regarding the priorities of rescue operation roles and information items, need to be implemented. Measurements should then be performed to see whether our approach to ontology based dynamic update is feasible (e.g., considering resource usage, environment characteristics, etc.).
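As one illustration of how such a priority decision might look, the sketch below derives an update priority from the receiver's rescue operation role and the kind of information item. The roles, item kinds and rule tables are invented assumptions; in the design discussed above, such policies would come from rules and ontologies rather than a hard-coded table.

```java
import java.util.*;

public class UpdatePriority {
    enum Role { OPERATION_LEADER, MEDICAL_LEADER, CREW }
    enum ItemKind { PATIENT_DATA, RESOURCE_STATUS, GENERAL_INFO }

    // Toy rule tables; a real policy would be derived from rules and
    // ontologies describing rescue operation roles and information items.
    private static final Map<Role, Integer> ROLE_WEIGHT = Map.of(
            Role.OPERATION_LEADER, 3, Role.MEDICAL_LEADER, 2, Role.CREW, 1);
    private static final Map<ItemKind, Integer> KIND_WEIGHT = Map.of(
            ItemKind.PATIENT_DATA, 3, ItemKind.RESOURCE_STATUS, 2, ItemKind.GENERAL_INFO, 1);

    // Higher value = the update message is sent earlier.
    public static int priority(Role receiverRole, ItemKind kind) {
        return ROLE_WEIGHT.get(receiverRole) * KIND_WEIGHT.get(kind);
    }
}
```

Update messages would then be queued in descending priority order, so that, e.g., patient data destined for a medical leader is disseminated before general information for crew members.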

In relation to the other KM sub-components, we would have liked to test different lightweight XML parsers for resource usage and efficiency. As XML is a very verbose language, Binary XML may be an alternative for sending messages, and we would have liked to explore possibilities for this, as well as conversion to/from Binary XML. In relation to Query Management, we would have liked to implement query forwarding to related nodes (Use Case (4), Chapter 4).

Long Term Future Work

The KM consists of several sub-components. Although each of these covers interesting areas, we found that existing solutions are available for several of them (SMOF, PCM, QM, XML-P) that we may be able to use, and we have therefore chosen in this thesis to focus on solutions for the DDM. Exploring existing solutions for the different sub-components is necessary to find what is suitable for use in our architecture and application scenario, particularly in relation to dynamic and resource weak environments.

An interesting issue in relation to metadata use, and highly relevant for the KM, is (semi-)automatic metadata generation. The DDM assumes that applications create appropriate metadata descriptions before registering information items for sharing, but it does not offer any support in creating these descriptions. Interesting questions in relation to (semi-)automatic metadata generation are how applications can find or retrieve metadata terms (concept terms) from appropriate vocabularies and ontologies, how to recognise what type of information item is being described, and how context information can be utilised in selecting the appropriate metadata terms. Another interesting field to explore is information integration using ontologies to resolve semantic heterogeneity in resource limited environments.

In relation to the SMOF, and supporting the ontology layer, a number of issues arise around the use of ontologies in resource limited environments. Handling ontologies is very often resource demanding, and we need solutions that can be used in resource weak and dynamic environments. We need solutions for reasoning in resource limited environments, particularly in relation to RDF/OWL; we need to investigate resource use and what (reasoning) is possible and feasible on resource weak devices, and to explore small lightweight reasoning engines, e.g., Pocket KRHyper [Sinner2005]. Another issue is finding solutions for mapping or bridging ontologies or vocabularies to support interoperability across domains and organisations, i.e., techniques for vocabulary mapping; we have indicated some directions for solutions in Chapter 4. Storage and distribution of ontologies on resource limited devices is another interesting issue. Subjects that may be addressed include exploring how much information from an ontology is ‘enough’ to do simple reasoning, seeing whether it is possible to have “light versions” of larger ontologies on small devices, i.e., extracts from a main ontology, and exploring how large ontologies can best be distributed across nodes. This is related to the problem of partially available semantic metadata (including ontologies): resource weak devices may not be able to store all needed metadata, only the necessary minimum, and access offered by other nodes may be unavailable due to network partitions, node movement, etc. Reasoning with partially available semantic metadata also needs addressing.
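One simple way to obtain such a light version might be to keep only the concepts within a bounded number of relation steps from a set of seed concepts relevant to the device's role. The sketch below illustrates this idea over an ontology reduced to a plain directed concept graph; it is a toy under stated assumptions, not a proposal for actual ontology modularisation, and all concept names are invented.

```java
import java.util.*;

public class OntologyExtract {
    // The ontology reduced to a plain directed concept graph.
    private final Map<String, List<String>> related = new HashMap<>();

    public void addRelation(String from, String to) {
        related.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // Breadth-first extraction: keep every concept reachable within
    // maxDepth relation steps from the seed concepts.
    public Set<String> extract(Collection<String> seeds, int maxDepth) {
        Set<String> kept = new HashSet<>(seeds);
        Deque<String> frontier = new ArrayDeque<>(seeds);
        Map<String, Integer> depth = new HashMap<>();
        for (String s : seeds) depth.put(s, 0);
        while (!frontier.isEmpty()) {
            String concept = frontier.poll();
            if (depth.get(concept) >= maxDepth) continue;
            for (String next : related.getOrDefault(concept, List.of())) {
                if (kept.add(next)) {
                    depth.put(next, depth.get(concept) + 1);
                    frontier.add(next);
                }
            }
        }
        return kept;
    }
}
```

How well such a bounded extract supports the simple reasoning mentioned above, and how the bound should be chosen per device, are exactly the open questions this future-work item raises.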

Another issue that needs exploring is consistency in the view of shared available information, as nodes cannot have more than a partial view of what is available in the network at any time. Relevant questions are what degree of consistency is ‘adequate’, what influences the level of adequacy that is acceptable (e.g., the context, the type of information, the demands of participating organisations, user preferences, etc.), and how an adequate level of consistency can be achieved.

Other areas for further work are context management and context awareness, both highly relevant to our application scenario, as is clear from the overall requirements analysis (Chapter 2) and related work (Chapter 3).

We have included some possible directions for future work in this chapter, but there are many other interesting issues, particularly related to the use of semantics and ontologies, that can be explored in relation to information sharing in wireless and resource weak environments.

References

[AHIW2003] First Ad-Hoc InfoWare Workshop, Oslo, Norway, December 2003.

[Alavi2001] Alavi, M., and Leidner, D., Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues. MIS Quarterly, Vol. 25, No. 1, pp. 107-136, March 2001.

[Baumung2006] Baumung, P., Penz, S., and Klein, M., P2P-Based Semantic Service Management in Mobile Ad-hoc Networks [online]. Seventh International Conference on Mobile Data Management (MDM 2006), Nara, Japan. May 2006. Available: http://hnsp.inf-bb.uni-jena.de/DIANE/docs/MDM06_P2PSSM.pdf [2007]

[Berners-Lee2001] Berners-Lee, T., Hendler, J., and Lassila, O., The Semantic Web. Scientific American, May 2001, pp 35 - 43.

[Boncz2003] Boncz, P. A., and Treijtel, C., AmbientDB: relational query processing in a P2P network [online]. Technical Report INS-R0306, CWI, Amsterdam, The Netherlands, June 2003. Available: http://www.cwi.nl/ftp/CWIreports/INS/INS-R0306.pdf

[Bonifacio2002] Bonifacio, M., Bouquet, R., and Cuel, R., Knowledge Nodes: the Building Blocks of a Distributed Approach to Knowledge Management. Journal of Universal Computer Science, vol. 8, no. 6 (2002), pp. 652-661.

[Brezillion1999] Brezillion, P., Context in Artificial Intelligence: II. Key elements of contexts. Computer & Artificial Intelligence, 18 (5), 1999, pp. 44-60.

[Carver2007] Carver, L. and Turoff, M., Human-Computer Interaction: The Human and Computer as a Team in Emergency Management Information Systems. Communications of the ACM, March 2007, Vol 50, No. 3, pp. 33-38. March 2007.

[CC/PP] Composite Capability/Preference Profiles (CC/PP): Structure and Vocabularies 2.0, W3C Working Draft [online]. April 2007. Available: http://www.w3.org/TR/2007/WD-CCPP-struct-vocab2-20070430/ [2007]

[CEN/TC251] CEN/TC 251, European Standardization of Health Informatics [online]. Available: http://www.centc251.org/ [2004]

[Chen2003] Chen, H., Finin, T., and Joshi, A., 2003: An Ontology for Context-Aware Pervasive Computing Environments. The Knowledge Engineering Review, vol. 18/3, pp.197 – 207, 2003, ISSN:0269-8889.

[Colouris2001] Coulouris, G., Dollimore, J., and Kindberg, T., Distributed Systems. Addison-Wesley, 3rd ed., 2001. ISBN: 0-201-61918-0.

[Coutaz2005] Coutaz, J., Crowley, J. L., Dobson S., and Garlan, D., Context is Key. Communications of the ACM, March 2005/Vol. 48, No. 3, pp.49-53.

[DAML] The DARPA Agent Markup Language (DAML) [online]. Available: http://www.daml.org/ [2007]

[DAML+OIL] DAML+OIL (March 2001) Reference Description [online]. Available: http://www.w3.org/TR/daml+oil-reference [2007]

[Davies2003A] Davies, J., Fensel, D., and van Harmelen, F., Introduction. In: Davies, J., Fensel, D., and van Harmelen, F., (Eds.), Towards the Semantic Web: ontology-driven knowledge management, pp. 1 - 9. John Wiley & Sons Ltd 2003. ISBN 0-470-84867-7.

[Davies2003B] Davies, J., Duke, A., and Stonkus, A., OntoShare: Evolving Ontologies in a Knowledge Sharing System. In: Davies, J., Fensel, D., and van Harmelen, F., (Eds.), Towards the Semantic Web: ontology-driven knowledge management, pp. 161 - 177. John Wiley & Sons Ltd 2003. ISBN 0-470-84867-7.

[Derby] Apache Derby [online]. Available: http://db.apache.org/derby/ [2007]

[Derby10] Apache Derby 10.1.1.0 Release [online]. Available: http://db.apache.org/derby/releases/release-10.1.1.0.html [2007]

[DerbyGuide] Derby Developer’s Guide, version 10.1 [online]. Available: http://db.apache.org/derby/docs/10.1/devguide/derbydev.pdf [2007]

[Dey2000] Dey, A. K., Providing Architectural Support for Building Context-aware Applications [online]. PhD thesis, College of Computing, Georgia Institute of Technology, December 2000. Available: http://www.cc.gatech.edu/fce/ctk/pubs/dey-thesis.pdf [2007]

[Drugan2005] Drugan, O. V., Plagemann, T., and Munthe-Kaas, E., Building resource aware middleware services over MANET for rescue and emergency applications. In: The 16th Annual IEEE International Symposium on Personal Indoor and Mobile Radio Communications, International Congress Center (ICC), Berlin, Germany, September 2005.

[DublinCore] Dublin Core Metadata Element Set, Version 1.1 [online]. Available: http://dublincore.org/documents/dces/ [2007]

[ebXML] ebXML, Electronic Business using eXtensible Markup Language [online]. Available: http://www.ebxml.org/ [2007]

[Eclipse] Eclipse open source development platform [online]. Available: http://www.eclipse.org/ [2007]

[Elmasri2004] Elmasri, R., and Navathe S. B., Fundamentals of Database Systems, 4th edition, Addison-Wesley, 2004. ISBN 0-321-20448-4.

[Fensel2001] Fensel, D., van Harmelen, F., Horrocks, I., McGuinness, D. L., and Patel-Schneider, P. F., OIL: An Ontology Infrastructure for the Semantic Web. IEEE Intelligent Systems, March/April 2001, pp. 38-45.

[Fensel2002] Fensel, D., Ontology-Based Knowledge Management. IEEE Computer, November 2002, pp. 56-59.

[Fensel2003] Fensel, D., van Harmelen, F., and Horrocks, I., OIL and DAML+OIL: Ontology Languages for the Semantic Web. In: Davies, J., Fensel, D., and van Harmelen, F., (Eds.), Towards the Semantic Web: ontology-driven knowledge management, pp. 11 - 33. John Wiley & Sons Ltd 2003. ISBN 0-470-84867-7.

[Fontijn2004] Fontijn, W. and Boncz, P., AmbientDB: P2P Data Management Middleware for Ambient Intelligence. Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications Workshops (PERWARE04), p. 203, 2004. ISBN 0-7695-2106-1.

[Garshol] Garshol, L. M., Living with Topic Maps and RDF [online]. Available: http://www.ontopia.net/topicmaps/materials/tmrdf.html [2007]

[Garshol2004] Garshol, L. M., Metadata? Thesauri? Taxonomies? Topic Maps! — Making sense of it all [online]. March 2004. Available: http://www.ontopia.net/topicmaps/materials/tm-vs-thesauri.html [2007]

[Gomez-Perez2004] Gomez-Perez, A., Fernandez-Lopez, M., and Corcho, O., Ontological Engineering. Springer 2004, ISBN: 1-85233-551-3.

[Gruber1993] Gruber, T. R., A Translation Approach to Portable Ontology Specifications [online]. Knowledge Acquisition, 5(2):199-220, 1993. Available: http://tomgruber.org/writing/ontolingua-kaj-1993.pdf [2007]

[Hanslo2004] Hanslo, W. and MacGregor, K., The efficiency of XML as an intermediate data representation for wireless middleware communication. In: G. Marsden, P. Kotze, and Adesina-Ojo, A., (Eds.), Proceedings of the 2004 Annual Research Conference of the South African institute of Computer Scientists and information Technologists on IT Research in Developing Countries. ACM International Conference Proceeding Series, vol. 75, pp. 279-283.

[Haynes2004] Haynes, D., Metadata for information management and retrieval. Facet 2004, ISBN 1-85604-489-0.

[Heflin2001] Heflin, J. D., Towards the semantic web: knowledge representation in a dynamic, distributed environment [online]. Ph.D. dissertation, 2001, University of Maryland. Available: http://www.cse.lehigh.edu/~heflin/pubs/heflin-thesis-orig.pdf [2007]

[Herman2007] Herman, I., Introduction to the Semantic Web [online], International Conference on Dublin Core and Metadata Applications, Singapore, August 2007. Available: http://www.w3.org/2007/Talks/0831-Singapore-IH/Slides.pdf [2007]

[HL7] Health Level Seven (HL7) [online]. Available: http://www.hl7.org/ [2007]

[Höpfner2007] Höpfner, H., and Levin, P., Open-Source-Datenbankmanagementsysteme – ein Überblick. Datenbank Spectrum, 22/2007, pp. 5-12.

[IDC10] International Classification of Diseases (ICD), 10th Revision (ICD-10) [online]. Available: http://www.who.int/classifications/icd/en/ [2007]

[IFRC2005] International Federation of Red Cross and Red Crescent Societies (IFRC), World Disasters Report 2005 [online], October 2005. Available: http://www.ifrc.org/publicat/wdr2005/index.asp [2007]

[ISO13250] ISO/IEC 13250:2003 Topic Maps [online], May 2002. Available: http://www1.y12.doe.gov/capabilities/sgml/sc34/document/0322_files/iso13250-2nd-ed-v2.pdf [2007]

[J2ME] Java Micro Edition Platform [online]. Available: http://java.sun.com/javame/index.jsp [2007]

[JBV] Jernbaneverket [online]. Available: http://www.jernbaneverket.no/ [2007]

[JSE] Java Standard Edition [online]. Available: http://java.sun.com/javase/ [2007]

[Kashyap1998] Kashyap, V., and Sheth, A., Semantic Heterogeneity in Global Information Systems: The Role of Metadata, Context and Ontologies [online]. In: Papazoglou, M., and Schlageter, G., (Eds.), Cooperative Information Systems. 1998. Available: http://citeseer.ist.psu.edu/295715.html [2007]

[King2006] King, W. R., Knowledge Sharing. In: Encyclopaedia of Knowledge Management, pp. 493-497, David G. Schwartz (Ed.), Idea Group Inc, 2006.

[Kiryakov2003] Kiryakov, A., Simov, K., and Ognyanov, D., Ontology Middleware and Reasoning. In: Davies, J., Fensel, D., and van Harmelen, F., (Eds.), Towards the Semantic Web: ontology-driven knowledge management, pp. 179 - 196. John Wiley & Sons Ltd 2003. ISBN 0-470-84867-7.

[KITH] Norwegian Centre for Informatics in Health and Social Care (KITH) [online]. Available: http://www.kith.no/ [2004]

[KITH2006] KITH rapport nr. 16/06, Rammeverk for elektronisk meldingsutveksling i helsevesenet [online]. ISBN 82-7846-294-1. (Only available in Norwegian). Available: http://www.kith.no/upload/3368/R16-06RammeverkMeldingsutveksling.pdf [2007]

[König-Ries2002] König-Ries, B., and Klein, M., Information Services to Support E-Learning in Ad-hoc Networks. Proc. of 1st International Workshop on Wireless Information Systems (WIS), April 2-3, 2002, Ciudad Real, Spain.

[KQML] KQML (Knowledge Query and Manipulation Language) [online]. Available: http://www.cs.umbc.edu/kqml/ [2007]

[kXML] kXML lightweight parser [online]. Available: http://kxml.sourceforge.net/ [2007]

[LOM] Learning Object Metadata (LOM) [online]. Available: http://www.ieeeltsc.org/working-groups/wg12LOM/ [2007]

[LOOM] LOOM Project [online]. Available: http://www.isi.edu/isd/LOOM/ [2007]

[McGuinness2003] McGuinness, D. L., 2003, Ontologies Come of Age [online]. In: D. Fensel, D., Hendler, J., Lieberman, H., and Wahlster, W., (Eds.), Spinning the Semantic Web: Bringing the World Wide Web to Its Full Potential. MIT Press. Available: http://www.ksl.stanford.edu/people/dlm/papers/ontologies-come-of-age-mit-press-(with-citation).htm [2006]

[MPEG-21] MPEG-21 [online]. Available: http://www.chiariglione.org/mpeg/standards/mpeg-21/mpeg-21.htm [2007]

[MPEG-7] MPEG-7 [online]. Available: http://www.chiariglione.org/mpeg/standards/mpeg-7/mpeg-7.htm [2007]

[Munthe-Kaas2006] Munthe-Kaas, E., Drugan, O., Goebel, V., Plagemann, T., Puzar, M., Sanderson, N., and Skjelsvik, K., Mobile Middleware for Rescue and Emergency Scenarios. In: Bellavista, P., Corradi, A., (Eds.), Mobile Middleware, CRC Press, ISBN 0-8493-3833-6, September, 2006.

[NIME1994] The Norwegian Medical Association. (The Norwegian Index for Medical Emergencies). Norsk indeks for medisinsk nødhjelp. Stavanger: Aasmund S. Laerdal A/S, 1994.

[Nonaka1994] Nonaka, I., A Dynamic Theory of Organizational Knowledge Creation. Organization Science, Vol. 5, No. 1, February 1994, pp. 14 - 37.

[NorwSAR] The Norwegian SAR Service [online]. Available: http://www.regjeringen.no/upload/kilde/jd/bro/2003/0005/ddd/pdfv/183865-infohefte_engelsk.pdf [2007]

[NOU2000:30] Rapport fra Åsta ulykken: NOU 2000: 30, Åsta-ulykken, 4. januar 2000 [online]. ISBN 82-583-0543-3, Oslo 2000. (Only in Norwegian.) Available: http://odin.dep.no/jd/norsk/dok/andre_dok/nou/012001-020007/hov003-bn.html [2007]

[NOU2001:31] NOU 2001: 31, Når ulykken er ute - Om organiseringen av operative rednings- og beredskapsressurser [online]. (Only in Norwegian.) Available: http://www.regjeringen.no/Rpub/NOU/20012001/031/PDFA/NOU200120010031000DDDPDFA.pdf [2007]

[NOU2001:9] Rapport fra Lillestrøm-ulykken: NOU 2001: 9, Lillestrøm-ulykken 5. april 2000 [online]. (Only in Norwegian.) Available: http://www.regjeringen.no/nb/dep/jd/dok/NOUer/2001/NOU-2001-09.html?id=377038 [2007]

[OASIS] OASIS, Organization for the Advancement of Structured Information Standards [online]. Available: http://www.oasis-open.org/home/index.php [2007]

[Obasanjo2003] Obasanjo, D., A Survey of APIs and Techniques for Processing XML. [Online]. Available: http://www.xml.com/pub/a/2003/07/09/xmlapis.html [2007]

[OKS] Ontopia Knowledge Suite (OKS) [online]. Available: http://www.ontopia.net/solutions/products.html [2007]

[Ontolingua] Ontolingua [online]. Available: http://www.ksl.stanford.edu/software/ontolingua/ [2007]

[OpenCyc] OpenCyc [online]. Available: http://www.opencyc.org/ [2007]

[OWL] Web Ontology Language [online]. Available: http://www.w3.org/2004/OWL/ [2007]

[OWLGuide] OWL Web Ontology Language Guide [online]. Available: http://www.w3.org/TR/owl-guide/ [2007]

[OWLtractabl] OWL 1.1 Web Ontology Language Tractable Fragments, Editor’s Draft of 6 April 2007 [online]. Available: http://www.webont.org/owl/1.1/tractable.html [2007]

[Özsu1999] Özsu, M. T., and Valduriez, P., Principles of Distributed Database Systems, 2nd edition, Prentice-Hall, 1999. ISBN 0-13-659707-6.

[Pepper2002] Pepper, S., The TAO of Topic Maps [online]. April 2002. Available: http://www.ontopia.net/topicmaps/materials/tao.html [2007]

[Pepper2006] Pepper, S., The Shape of Topic Maps to Come [online]. Topic Maps Norway /Emnekart 2006. Available: http://www.ontopia.net/topicmaps/materials/2006-03-29%20Emnekart%20Keynote.ppt [2007]

[Perich2002] Perich, F., Avancha, S., Chakraborty, D., Joshi, A., and Yesha, Y., Profile Driven Data Management for Pervasive Environments. Proceedings 13th International Conference on Database and Expert Systems Applications (DEXA 2002), September 2002.

[Pfoser2002] Pfoser, D., Pitoura, E., and Tryfona, N., Metadata Modeling in a Global Computing Environment. 19th ACM International Symposium on Advances in Geographical Information Systems (ACM-GIS), 2002.

[Pitoura2003] Pitoura, E. et al., DBGlobe: A service-oriented P2P system for global computing, ACM SIGMOD Record, 32(3), 77, 2003.

[Plagemann2003] Plagemann, T., Goebel, V., Griwodz, C., and Halvorsen, P., Towards Middleware Services for Ad-Hoc Network Applications. In: 9th IEEE Workshop on Future Trends of Distributed Computing Systems, San Juan, Puerto Rico, May 2003, pp. 249-257.

[Plagemann2004] Plagemann, P., Andersson, J., Drugan, O., Goebel, V., Griwodz, C., Halvorsen, P., Munthe-Kaas, E., Puzar, M., Sanderson, N., and Skjelsvik, K. S., Middleware Services for Information Sharing in Mobile Ad-hoc Networks. IFIP World Computer Congress (WCC2004), Workshop on Challenges of Mobility, August, 2004

[Protege] Protégé-OWL editor [online]. Available: http://protege.stanford.edu/overview/protege-owl.html [2007]

[RDF] Resource Description Framework (RDF) [online]. Available: http://www.w3.org/RDF/ [2007]

[RDFprimer] RDF Primer [online]. Available: http://www.w3.org/TR/2004/REC-rdf-primer-20040210/ [2007]

[RDFS] RDF Vocabulary Description Language 1.0: RDF Schema [online]. Available: http://www.w3.org/TR/rdf-schema/ [2007]

[Riehl2002] Riehl, M., and Sterin, I., XML and Perl: Now Let’s Start Digging. [Online]. Available: http://www.informit.com/articles/article.asp?p=30010&seqNum=2&rl=1 [2007]

[Sanderson2004] Sanderson, N., Goebel, V., and Munthe-Kaas, E., Knowledge Management in Mobile Ad-hoc Networks for Rescue Scenarios. Workshop on Semantic Web Technology for Mobile and Ubiquitous Applications, 3rd International Semantic Web Conference (ISWC2004), November, 2004

[Sanderson2005] Sanderson, N., Goebel, V., and Munthe-Kaas, E., Metadata Management for Ad-Hoc InfoWare - A Rescue and Emergency Use Case for Mobile Ad-Hoc Scenarios. International Conference on Ontologies, Databases and Applications of Semantics (ODBASE05). In: Meersman, R. and Tari, Z. (Eds.): CoopIS/DOA/ODBASE 2005, LNCS 3761, pp.1365-1380, 2005. November, 2005

[Sanderson2006] Sanderson, N., Goebel, V., and Munthe-Kaas, E., Ontology Based Dynamic Updates in Sparse Mobile Ad-hoc Networks for Rescue Scenarios. International Workshop on Managing Context Information and Semantics in Mobile Environments (MCISME 2006). In conjunction with 7th int. conf. on Mobile Data Management (MDM), Nara, Japan, May, 2006

[Sanderson2007] Sanderson, N. C., Skjelsvik, K. S., Drugan, O. V., Puzar, M., Goebel, V., Munthe-Kaas, E., and Plagemann, T., Developing Mobile Middleware - An Analysis of Rescue and Emergency Operations. Research Report No: 358, University of Oslo; ISBN: 82-7368-316-8; ISSN: 0806-3036.

[Schmidt1998] Schmidt, A., Beigl, M., Gellersen, H.-W. There is more to context than location. Proc. of the Intl. Workshop on Interactive Applications of Mobile Computing (IMC98), Rostock, Germany, November 1998.

[Schulz2003] Schulz, S., Herrmann, K., Kalcklösch, R., and Schwotzer, T., Towards Trust-based Knowledge Management in Mobile Communities. In: Agent-Mediated Knowledge Management. Papers from 2003 AAAI Spring Symposium. Technical Report SS-03-01, Stanford University, USA, March 2003, AAAI Press.

[Schwotzer2002A] Schwotzer, T., and Geihs, K., Shark - a System for Management, Synchronization and Exchange of Knowledge in Mobile User Groups. Proceedings 2nd International Conference on Knowledge Management (I-KNOW '02), Graz, Austria, July 2002, pp. 149-156.

[Schwotzer2002B] Schwotzer, T., and Preuss, T., Knowledge Exchange in Spontaneous Networks – Towards Ubiquitous Knowledge. Proceedings of E-World@Syria(ET2EB), Damaskus, 2002.

[Schwotzer2003] Schwotzer, T., and Geihs, K., Mobiles verteiltes Wissen: Modellierung, Speicherung und Austausch, Datenbank-Spektrum, 5/2003, pp.30-39.

[Schwotzer2006A] Schwotzer, T., Context Driven Spontaneous Knowledge Exchange. Proceedings of the 1st German Workshop on Experience Management 2002 (GWEM02), Berlin / Germany, March 7/8, 2002.

[Schwotzer2006B] Schwotzer, T., Ein Peer-to-Peer Knowledge Management System basierend auf Topic Maps zur Unterstützung von Wissensflüssen. Dr. Ing. dissertation, TU Berlin, Berlin 2006.

[SemWeb] W3C Semantic Web. [Online]. Available: http://www.w3.org/2001/sw/ [2006]

[Shadbolt2006] Shadbolt, N., Hall, W., and Berners-Lee, T., The Semantic Web Revisited [online]. IEEE Intelligent Systems, Jan.-Feb. 2006, Vol. 21/3, pp. 96-101, ISSN 1541-1672. Available: http://ieeexplore.ieee.org/iel5/9670/34311/01637364.pdf?tp=&isnumber=&arnumber=1637364 [2007]

[SHARK] The SHARK project [online]. Available: http://kbs.cs.tu-berlin.de/ivs/Projekte/Shark/index_en.html [2004]

[Sinner2005] Sinner, A, and Kleemann, T., KRHyper - In Your Pocket. In: R. Nieuwenhuis (Ed.): CADE 2005, LNAI 3632, pp. 452–457, Springer-Verlag Berlin Heidelberg 2005.

[Skjelsvik2006] Skjelsvik, K. S., Lekova, A., Goebel, V., Munthe-Kaas, E., Plagemann, T., and Sanderson, N., Supporting Multiple Subscription Languages by a Single Event Notification Overlay in Sparse MANETs. Proceeding of the ACM MobiDE 2006 Workshop, June, 2006.

[Sosnoski2001] Sosnoski, D., XML and Java technologies: Document models, Part 1: Performance [online]. Available: http://www-128.ibm.com/developerworks/xml/library/x-injava/ [2007]

[Sosnoski2002] Sosnoski, D., XML documents on the run [online]. Available: http://www.javaworld.com/javaworld/jw-04-2002/jw-0426-xmljava3.html [2007]

[SOUPA] SOUPA (Standard Ontology for Ubiquitous and Pervasive Applications) [online]. Available: http://pervasive.semanticweb.org/soupa-2004-06.html [2007]

[Spek1999] van der Spek, R., and Spijkervet, R., Knowledge Management: Dealing Intelligently With Knowledge. Kenniscentrum CIBIT, 1999, ISBN 90-75709-02-1.

[Stuckenschmidt2003] Stuckenschmidt, H., Ontology-Based Information Sharing in Weakly-Structure Environments [online]. Ph.D. Thesis, Faculty of Sciences, Vrije Universiteit Amsterdam, January 2003. Available: http://www.cs.vu.nl/~heiner/public/PhD.pdf [2007]

[Stuckenschmidt2005] Stuckenschmidt, H., and van Harmelen, F., Information Sharing on the Semantic Web. Springer-Verlag Berlin Heidelberg 2005, ISBN 3-540-20594-2.

[SUMO] Suggested Upper Merged Ontology (SUMO) [online]. Available: http://www.ontologyportal.org/ [2007]

[SyncML] SyncML (Synchronization Markup Language) [online]. Available: http://www.openmobilealliance.org/tech/affiliates/syncml/syncmlindex.html [2007]

[Tianl2004] Tian, M., Voigt, T., Naumowicz, T., Ritter, H., and Ritter, J., Performance considerations for mobile web services. Computer Communications, Vol. 27, Issue 11, 1 July 2004, pp. 1097-1105, Elsevier 2004.

[tinyTIM] tinyTim [online]. Available: http://tinytim.sourceforge.net/ [2007]

[TM4J] TM4J, Topic Map Engine for Java [online]. Available: http://sourceforge.net/projects/tm4j [2007]

[UMLS] Unified Medical Language System (UMLS) [online]. Available: http://www.nlm.nih.gov/research/umls/ [2007]

[UN/CEFACT] United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT) [online]. Available: http://www.unece.org/cefact/ [2007]

[Vaardal2005] Vaardal, B., Lossius, H. M., Steen, P. A., and Johnsen, R., Have the implementation of a new specialised emergency medical service influenced the pattern of general practitioners involvement in pre-hospital medical emergencies? A study of geographic variations in alerting, dispatch, and response. Emergency Medicine Journal 2005, vol. 22, pp. 216-219.


[Vigdal2006] Vigdal, S. (2006), Informasjonsdeling i redningsarbeid ved bruk av emnekart (Topic Maps) [Information sharing in rescue work using Topic Maps]. Master thesis, University of Oslo, 2006. (In Norwegian only.)

[Volven] Volven [online]. Available: http://www.volven.no/ [2007]

[Wache2001] Wache, H., Vogele, T., Visser, U., Stuckenschmidt, H., Schuster, G., Neumann, H., and Hubner, S., Ontology-based integration of information - a survey of existing approaches. In: Stuckenschmidt, H., (Ed.), IJCAI-01 Workshop: Ontologies and Information Sharing, 2001, pp. 108-117.

[Westrheim2006] Westrheim, V., Praktisk beredskap i norsk jernbanedrift [Practical emergency preparedness in Norwegian railway operations] [online]. (In Norwegian only.) Available: http://www.sikkerhetsdagene.no/Tidligere%20konferanser/2006/Westrheim.pdf [2007]

[WonderWeb] WonderWeb OWL Ontology Validation [online]. Available: http://www.mygrid.org.uk/OWL/Validator [2007]

[Wong2003] Wong, W., Exploring the boundaries of Web services on resource limited devices. Thesis for Master of Science degree in Telematics from the University of Twente, Enschede, The Netherlands. November 2003 [online]. Available: http://asna.ewi.utwente.nl/assignments/completed/ARCH-2003-15.pdf [2005]

[Woodman] Listing of Topic Map engines and other tools [online]. Available: http://www.topicmap.com/topicmap/tools.html [2007]

[XML] Extensible Markup Language (XML) [online]. Available: http://www.w3.org/XML/ [2007]

[XTM] XML Topic Maps [online]. Available: http://www.topicmaps.org/xtm/1.0/index.html [2006]

[XTM4] XTM4XMLDB [online]. Available: http://sourceforge.net/projects/xtm4xmldb [2007]

[Zhao2004] Zhao, W., Ammar, M., and Zegura, E., A Message Ferrying Approach for Data Delivery in Sparse Mobile Ad Hoc Networks. Proceedings of ACM MobiHoc 2004 (MobiHoc '04), Tokyo, Japan, 2004.


Appendices

Appendix A – Modifications to Derby

In this appendix, we give additional details of the modifications to the Derby code for managing the data dictionaries, and explain what was needed to add system tables and stored procedures. This is an addition to the description in Chapter 8. In the remainder of this appendix, the term 'DDM tables' is used to denote all tables related to DDM, i.e., the table for previous exchanges and the LDD and SDDD tables, unless otherwise specified. In Derby, the data dictionary exists as part of the system tables. Each element (e.g., table, schema, row in a table) has a unique universal identifier (UUID).

The implemented schemas for the DDM tables

The following schemas were used for the DDM tables:

LDD_INFOITEM (ITEMUUID, INFOITEMID, KEYCONCEPTS, RESOURCENAME, RESOURCELOCATION, TIMELASTUPDATE, COLCOUNT)
SDDD_LINKING (LINKING_ID, CONCEPTID, METARESID, TIMELASTUPDATE, COLCOUNT)
SDDD_AVAILABILITY (AVAILABILITY_ID, METARESID, AVAILABLE, LINKLEVEL, TIMELASTUPDATE, COLCOUNT)
DDM_EXCHANGE (EXCHANGE_ID, NODEID, TIMELASTEXCHANGE, COLCOUNT)

Modifications Related to Stored Procedures

Overview of the changes made when adding a new stored procedure:

• In class DataDictionaryImpl:
  - modify the function create_LDD_SDDD_procedures to add the DDM procedures
  - add a function that retrieves the procedure schema descriptor for the new stored procedure, e.g., getDDMProcSchemaDescriptor()
  - make any other necessary changes in DataDictionaryImpl.
• Add the stored procedure to SystemProcedures.
• Add the necessary code in LDD_SDDDImpl.

Related functions in class DataDictionaryImpl:

create_LDD_SDDD_procedures(): This function creates all the DDM stored procedures when the data dictionary is created. The code was added to the class DataDictionaryImpl. The DDM stored procedures are added to one of these schemas: LDDPROC, SDDDPROC, or DDMPROC.


private final void create_LDD_SDDD_procedures(TransactionController tc)
begin
    // the class where the routines are found
    set procClass = "org.apache.derby.catalog.SystemProcedures"

    /* Add routines for the LDDPROC schema */
    call getLDDProcSchemaDescriptor() returning lddDescr
    call lddDescr.getUUID() returning lddProcUUID

    // adding LDDPROC.ADD_LDDITEM()
    begin
        // procedure argument names
        set arg_names = {"INFOITEMID", "KEYCONCEPTS", "RESOURCENAME", "RESOURCELOCATION"}
        // procedure argument types
        create array arg_types of TypeDescriptors for the argument types
        set returnType = null as TypeDescriptor
        call createSystemProcedureOrFunction with "ADD_LDDITEM", lddProcUUID,
            arg_names, arg_types, 0, 0, RoutineAliasInfo.MODIFIES_SQL_DATA,
            returnType, tc, and procClass
    end

    /* Add routines for the SDDDPROC schema */
    call getSDDDProcSchemaDescriptor() returning sdddDescr
    call sdddDescr.getUUID() returning sdddProcUUID

    // adding SDDDPROC.ADD_LINK()
    begin
        // procedure argument names
        set arg_names = {"CONCEPT", "METARESID"}
        // procedure argument types
        create array arg_types of TypeDescriptors for the argument types
        set returnType = null as TypeDescriptor
        call createSystemProcedureOrFunction with "ADD_LINK", sdddProcUUID,
            arg_names, arg_types, 0, 0, RoutineAliasInfo.MODIFIES_SQL_DATA,
            returnType, tc, and procClass
    end

    // adding SDDDPROC.ADD_AVAILAB()
    begin
        // procedure argument names
        set arg_names = {"METARESID", "AVAILABILITY", "LINKLEVEL"}
        // procedure argument types
        create array arg_types of TypeDescriptors for the argument types
        set returnType = null as TypeDescriptor
        call createSystemProcedureOrFunction with "ADD_AVAILAB", sdddProcUUID,
            arg_names, arg_types, 0, 0, RoutineAliasInfo.MODIFIES_SQL_DATA,
            returnType, tc, and procClass
    end

    /* Add routines for the DDMPROC schema */
    call getDDMProcSchemaDescriptor() returning ddmDescr
    call ddmDescr.getUUID() returning ddmProcUUID

    // adding DDMPROC.ADD_EXCHANGE()
    begin
        // procedure argument names
        set arg_names = {"NODEID"}
        // procedure argument types
        create array arg_types of TypeDescriptors for the argument types
        set returnType = null as TypeDescriptor
        call createSystemProcedureOrFunction with "ADD_EXCHANGE", ddmProcUUID,
            arg_names, arg_types, 0, 0, RoutineAliasInfo.MODIFIES_SQL_DATA,
            returnType, tc, and procClass
    end
end

getLDDProcSchemaDescriptor(): retrieves the descriptor for the LDDPROC schema.

public SchemaDescriptor getLDDProcSchemaDescriptor()

getSDDDProcSchemaDescriptor(): retrieves the descriptor for the SDDDPROC schema.

public SchemaDescriptor getSDDDProcSchemaDescriptor()

getDDMProcSchemaDescriptor(): retrieves the descriptor for the DDMPROC schema.

public SchemaDescriptor getDDMProcSchemaDescriptor()

Additions to class SystemProcedures: These stored procedures are added to the appropriate schema at creation of the data dictionary. Each procedure calls the appropriate functions in class LDD_SDDDImpl.

ADD_LDDITEM() – add a new entry to the LDD in LDD_INFOITEM. Added to schema LDDPROC. Calls function insertInfoItem() in LDD_SDDDImpl.

public static void ADD_LDDITEM(String infoItemID, String conceptIDs, String resName, String resLoc)

ADD_LINK() – stored procedure for adding an SDDD entry (link) to SDDD_LINKING. Added to schema SDDDPROC. Calls function addLink() in LDD_SDDDImpl.

public static void ADD_LINK(String conceptID, String metaResID)

ADD_AVAILAB() – stored procedure for adding a new availability entry to SDDD_AVAILABILITY. Added to schema SDDDPROC. Calls function addAvailability() in LDD_SDDDImpl.

public static void ADD_AVAILAB(String metaResID, short availab, short linkLevel)

ADD_EXCHANGE() – stored procedure for adding a nodeID with the time of the last exchange to DDM_EXCHANGE. Added to schema DDMPROC. Calls function addExchange() in LDD_SDDDImpl.

public static void ADD_EXCHANGE(String nodeID)

Related functions in class LDD_SDDDImpl:

The class LDD_SDDDImpl implements the functionality for managing the DDM tables. The following shows pseudocode of the LDD_SDDDImpl functions for adding new items to the DDM tables. Each row in a table has a unique universal identifier (UUID). A new UUID is created by calling the appropriate function on the current instance of the Derby class UUIDFactory before inserting a new row.
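Before each insert, the functions obtain a fresh row UUID and render the current time as an SQL timestamp string. As a rough, self-contained analogue of these two steps (using the JDK's java.util.UUID and java.sql.Timestamp rather than Derby's internal UUIDFactory, which the thesis code actually uses), this might look like:

```java
import java.sql.Timestamp;
import java.util.UUID;

public class RowKeySketch {
    // Create a fresh row identifier, analogous to UUIDFactory.createUUID()
    // in the pseudocode (Derby's own factory is not used here).
    static String newRowUuid() {
        return UUID.randomUUID().toString();
    }

    // Current system time rendered as an SQL timestamp string, analogous to
    // "current system time as string of SQLTimestamp" in the pseudocode.
    static String nowAsSqlTimestamp() {
        return new Timestamp(System.currentTimeMillis()).toString();
    }

    public static void main(String[] args) {
        System.out.println(newRowUuid());
        System.out.println(nowAsSqlTimestamp());
    }
}
```

The JDK variant produces random (version 4) UUIDs, whereas Derby's factory also supports hardcoded catalog UUIDs; for illustrating the per-row key generation the difference does not matter.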


insertInfoItem(): insert a metadata description of an information item in the LDD table. Returns the number of rows inserted. This function is called by the stored procedure ADD_LDDITEM().

public int insertInfoItem(String infoItemID, String concepts, String resName,
        String resLocation)
begin
    get current UUIDFactory
    call UUIDFactory.createUUID returning newUUID
    convert newUUID to string
    set timestamp = current system time as string of SQLTimestamp
    set query = "insert into SYS.LDD_INFOITEM " +
        "values('" + newUUID + "','" + infoItemID + "','" + concepts + "','" +
        resName + "','" + resLocation + "','" + timestamp + "'," +
        LDD_INFOITEM_RowFactory.LDD_INFOITEM_COLUMN_COUNT + ")"

    create database connection and connect to database
    set st = create new Statement on database connection
    call st.executeUpdate with query returning rowcount

    close st
    commit changes to database
    close database connection

    return rowcount
end
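The pseudocode builds the INSERT statement by plain string concatenation. The sketch below reproduces just that query construction in isolation; the method name is our own, and the hardcoded column count of 7 stands in for LDD_INFOITEM_RowFactory.LDD_INFOITEM_COLUMN_COUNT (matching the seven columns of LDD_INFOITEM). In production code a PreparedStatement with parameter markers would be preferable, since concatenation breaks on values containing quote characters:

```java
public class LddQuerySketch {
    // Illustrative stand-in for LDD_INFOITEM_RowFactory.LDD_INFOITEM_COLUMN_COUNT.
    static final int LDD_INFOITEM_COLUMN_COUNT = 7;

    // Build the INSERT statement the same way the insertInfoItem()
    // pseudocode does: direct concatenation of quoted values.
    static String buildInsertInfoItem(String uuid, String infoItemID,
            String concepts, String resName, String resLocation,
            String timestamp) {
        return "insert into SYS.LDD_INFOITEM "
                + "values('" + uuid + "','" + infoItemID + "','" + concepts
                + "','" + resName + "','" + resLocation + "','" + timestamp
                + "'," + LDD_INFOITEM_COLUMN_COUNT + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildInsertInfoItem("u-1", "item42", "c1;c2",
                "siteMap", "/maps/site.xml", "2008-01-01 12:00:00.0"));
    }
}
```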

addLink(): add a link to the link table in SDDD. This function is called by the stored procedure ADD_LINK().

public void addLink(String conceptID, String metaResID)
begin
    create database connection and connect to database
    set st = create new Statement on database connection
    call insertLink with conceptID, metaResID, st returning rowcount
    close st
    commit changes to database
    close database connection
end

insertLink(): insert item to SDDD_LINKING table in SDDD. This function is called by addLink(). Returns number of inserted rows.

public int insertLink(String conceptID, String metaResID, Statement st)
begin
    get current UUIDFactory
    call UUIDFactory.createUUID returning newUUID
    convert newUUID to string
    set timestamp = current system time as string of SQLTimestamp
    set query = "insert into SYS.SDDD_LINKING " +
        "values('" + newUUID + "','" + conceptID + "','" + metaResID + "','" +
        timestamp + "'," + SDDD_LINKING_RowFactory.SDDD_LINKING_COLUMN_COUNT + ")"

    call st.executeUpdate with query returning rowcount
    return rowcount
end


addAvailability(): add item to the availability table in SDDD. This function is called by the stored procedure ADD_AVAILAB().

public void addAvailability(String metaResID, boolean available, short linkLevel)
begin
    create database connection and connect to database
    set st = create new Statement on database connection
    call insertAvailability with metaResID, available, linkLevel, st returning rowcount
    close st
    commit changes to database
    close database connection
end

insertAvailability(): insert item to SDDD_AVAILABILITY table in SDDD. This function is called by addAvailability(). Returns number of updated rows.

public int insertAvailability(String metaResID, boolean available,
        short linkLevel, Statement st)
begin
    get current UUIDFactory
    call UUIDFactory.createUUID returning newUUID
    convert newUUID to string
    set timestamp = current system time as string of SQLTimestamp
    set query = "insert into SYS.SDDD_AVAILABILITY " +
        "values('" + newUUID + "','" + metaResID + "'," + available + "," +
        linkLevel + ",'" + timestamp + "'," +
        SDDD_AVAILABILITY_RowFactory.SDDD_AVAILABILITY_COLUMN_COUNT + ")"

    call st.executeUpdate with query returning rowcount
    return rowcount
end

addExchange(): add a nodeID and time to the table containing previous exchanges. This function is called by the stored procedure ADD_EXCHANGE().

public void addExchange(String nodeID)
begin
    create database connection and connect to database
    set st = create new Statement on database connection
    call insertExchange with nodeID and st
    close st
    commit changes to database
    close database connection
end

insertExchange(): insert item to exchange table. This function is called by addExchange(). Returns number of inserted rows.

public int insertExchange(String nodeID, Statement st)
begin
    get current UUIDFactory
    call UUIDFactory.createUUID returning newUUID
    convert newUUID to string
    set timestamp = current system time as string of SQLTimestamp
    set query = "insert into SYS.DDM_EXCHANGE " +
        "values('" + newUUID + "','" + nodeID + "','" + timestamp + "'," +
        DDM_EXCHANGE_RowFactory.DDM_EXCHANGE_COLUMN_COUNT + ")"

    call st.executeUpdate with query returning rowcount
    return rowcount
end


Modifications for adding DDM tables to the Derby data dictionary

In this section, we describe the modifications and aspects of adding the DDM tables to the Derby data dictionary (system tables). Overview of the changes made when adding a DDM table to the Derby system tables:

• Create a new RowFactory class for the new table (in package derby.impl.sql.catalog), e.g., DDM_EXCHANGE_RowFactory. This requires a new (hardcoded) UUID.
• Create a descriptor class for the new table (in package derby.iapi.sql.dictionary).
• Add a catalog number in the DataDictionary class (non-core catalogs).
• Make the following changes in DataDictionaryImpl:
  - add the catalog name (non-core catalogs)
  - increase the number of non-core catalogs (NUM_NONCORE)
  - add the catalog number for the new table to lddSdddNumbers, and increase numberOfLddSdddCatalogs
  - make any other necessary changes in DataDictionaryImpl.

RowFactory classes: Added code for creating rows in the DDM tables

Each system table needs a 'row factory' class (in package derby.impl.sql.catalog) to create new rows in the table. DDM has the following classes for this purpose:

DDM_EXCHANGE_RowFactory, LDD_INFOITEM_RowFactory, SDDD_LINKING_RowFactory, and SDDD_AVAILABILITY_RowFactory

A row factory class contains everything necessary for creating a row in its table. For example, the DDM_EXCHANGE_RowFactory class contains the code for adding a new row to the DDM_EXCHANGE table. The row factory class declares the necessary variables for the tuple, e.g., the table name; defines the columns (data type, name, nullability, etc.), the column order and column numbers, the unique universal identifier (UUID) for the table, and more. Its functions create an empty row for the table and build the descriptor, index, and column list. A row factory uses a descriptor class to retrieve information about the schema for a row; e.g., DDM_EXCHANGE_RowFactory uses the ExchangeDescriptor class.

Descriptor classes:

A descriptor class contains functions for retrieving information about a tuple in a table, and is used by the row factory classes. Every system table has a descriptor class (in package derby.iapi.sql.dictionary) functioning as an interface to a database row. DDM has the following descriptor classes:

AvailabilityDescriptor (for SDDD_AVAILABILITY), LinkingDescriptor (for SDDD_LINKING), InfoItemDescriptor (for LDD_INFOITEM), and ExchangeDescriptor (for DDM_EXCHANGE).
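The division of labour between a row factory and its descriptor can be sketched with a minimal, self-contained pair of classes. These are not Derby's real interfaces (the actual classes deal with ExecRow, TupleDescriptor, and catalog metadata); the sketch only mirrors the responsibilities described above:

```java
public class RowFactorySketch {
    // Descriptor: a read-only view of one tuple, used by the row factory.
    static final class ExchangeDescriptor {
        final String nodeId;
        final String timeLastExchange;
        ExchangeDescriptor(String nodeId, String timeLastExchange) {
            this.nodeId = nodeId;
            this.timeLastExchange = timeLastExchange;
        }
    }

    // Row factory: knows the table layout and builds rows from descriptors.
    static final class DdmExchangeRowFactory {
        static final String TABLE_NAME = "DDM_EXCHANGE";
        // EXCHANGE_ID, NODEID, TIMELASTEXCHANGE, COLCOUNT
        static final int COLUMN_COUNT = 4;

        // Build a row as an array of column values, in declared column order.
        String[] makeRow(String exchangeId, ExchangeDescriptor d) {
            return new String[] { exchangeId, d.nodeId, d.timeLastExchange,
                    String.valueOf(COLUMN_COUNT) };
        }
    }

    public static void main(String[] args) {
        DdmExchangeRowFactory f = new DdmExchangeRowFactory();
        String[] row = f.makeRow("u-1",
                new ExchangeDescriptor("nod2", "2008-01-01 12:00:00.0"));
        System.out.println(String.join(",", row));
    }
}
```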


Modifications in the interface class DataDictionary (package derby.iapi.sql.dictionary): catalogue numbers were added for the DDM tables (catalogues).

...
public static final int SDDD_LINKING_CATALOG_NUM = 15;
public static final int SDDD_AVAILABILITY_CATALOG_NUM = 16;
public static final int SDDD_RELCON_CATALOG_NUM = 17;
public static final int LDD_DICTITEM_CATALOG_NUM = 18;
public static final int LDD_INFOITEM_CATALOG_NUM = 19;
public static final int DDM_EXCHANGE_CATALOG_NUM = 20;
...

Modifications in class SchemaDescriptor (package derby.iapi.sql.dictionary):

Schema names for the DDM stored procedures were added to the list of system schema names.

...
public static final String LDD_PROC_SCHEMA_NAME = "LDDPROC";
public static final String SDDD_PROC_SCHEMA_NAME = "SDDDPROC";
public static final String DDM_PROC_SCHEMA_NAME = "DDMPROC";
...

UUIDs for the DDM schemas (hardcoded):

...
public static final String SDDD_PROC_SCHEMA_UUID = "80000000-00d2-b38g-4cda-000a0a422c01";
public static final String LDD_PROC_SCHEMA_UUID = "80000100-00d2-b38h-4dda-000a0a432c01";
public static final String DDM_PROC_SCHEMA_UUID = "80000100-00d2-b38h-4cbb-000a0a432c02";
...

Other Modifications in Class DataDictionaryImpl

The change made in the function DataDictionaryImpl.boot(): when a new data dictionary is created, a call to the function create_LDD_SDDD_procedures() is made to create the stored procedures for DDM. In the following, we show the declarations added to the class DataDictionaryImpl (package derby.impl.sql.catalog).

Added declarations in class DataDictionaryImpl: DDM schema descriptors and schema names:

...
protected SchemaDescriptor lddProcSchemaDesc;
protected SchemaDescriptor sdddProcSchemaDesc;
protected SchemaDescriptor ddmProcSchemaDesc;
private String lddProcSchemaName;
private String sdddProcSchemaName;
private String ddmProcSchemaName;
...

DDM table names added to the non-core system table names:

...
"SDDD_LINKING",
"SDDD_AVAILABILITY",
"LDD_INFOITEM",
"DDM_EXCHANGE",
...

The DDM schemas were added to the list of all 'system' schemas, i.e., the list containing all schemas used by the system and created when the database is created. Users are not allowed to create or drop these schemas, or to create or drop objects in them.

...
SchemaDescriptor.LDD_PROC_SCHEMA_NAME,
SchemaDescriptor.SDDD_PROC_SCHEMA_NAME,
SchemaDescriptor.DDM_PROC_SCHEMA_NAME
...

The set of DDM catalog numbers and the number of DDM catalogues:

...
private static int numberOfLddSdddCatalogs = 4;
private static final int[] lddSdddNumbers = {
    SDDD_LINKING_CATALOG_NUM,
    SDDD_AVAILABILITY_CATALOG_NUM,
    LDD_INFOITEM_CATALOG_NUM,
    DDM_EXCHANGE_CATALOG_NUM };
...
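A recurring source of error when extending DataDictionaryImpl is letting numberOfLddSdddCatalogs drift out of step with the lddSdddNumbers array. The declarations above can be reproduced and checked in isolation; the catalog-number constants are copied from the DataDictionary listing earlier in this appendix, while the class name is ours:

```java
public class CatalogNumbersSketch {
    // Constants as added to the DataDictionary interface.
    static final int SDDD_LINKING_CATALOG_NUM = 15;
    static final int SDDD_AVAILABILITY_CATALOG_NUM = 16;
    static final int LDD_INFOITEM_CATALOG_NUM = 19;
    static final int DDM_EXCHANGE_CATALOG_NUM = 20;

    // Declarations as added to DataDictionaryImpl.
    static final int numberOfLddSdddCatalogs = 4;
    static final int[] lddSdddNumbers = {
        SDDD_LINKING_CATALOG_NUM,
        SDDD_AVAILABILITY_CATALOG_NUM,
        LDD_INFOITEM_CATALOG_NUM,
        DDM_EXCHANGE_CATALOG_NUM };

    public static void main(String[] args) {
        // The count must match the array length, or catalog iteration breaks.
        System.out.println(numberOfLddSdddCatalogs == lddSdddNumbers.length);
    }
}
```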

Appendix B – Example Details for Chapter 7

In this appendix, we show the full version of the examples in Chapter 7.

Phase 1 – Initial content of knowledge base

Examples of possible initial contents of the KB added in Phase 1 include personnel information and standard information profiles, shared vocabularies, general template profiles (e.g., rescue scenario profiles for a land operation or a train accident), roles that are always present (OSC), etc. Other content that may be added in this phase includes extensions of the profile ontology for different organisations and known types of devices, e.g., a (template) profile for a standard Ambulance Device. We also show examples of the KB content on the devices of the two paramedics used in our examples, Lars Lie and Hans Hansen.

General Example of KB Content in Phase 1

The following is an example of the initial population of the KB as distributed to the involved organisations prior to any operation. Here we only show samples related to the railway accident scenario: an organisation, the different types of profiles, roles, information priorities, kinds of devices, incidents (accident, natural disaster), and vehicles.

Individual(org:VossSjukehus type(org:Hospital)
    value(org:address "Sjukehusvegen 16, 5700 Voss"^^xsd:string)
    value(org:telephone 56 53 35 00))

Individual(pr:UProfile type(pr:UserProfile))
Individual(pr:DProfile type(pr:DeviceProfile))
Individual(pr:IProfile type(pr:InformationProfile))
Individual(pr:RSProfile type(pr:RescueScenarioProfile))
Individual(pr:TmM type(pr:TeamMember))
Individual(pr:TmL type(pr:TeamLeader))
Individual(pr:OiC type(pr:OfficerInCharge))
Individual(pr:OSCoo type(pr:OSC))

Individual(pr:Top type(pr:InformationPriority))
Individual(pr:Urgent type(pr:InformationPriority))
Individual(pr:Normal type(pr:InformationPriority))
Individual(pr:Low type(pr:InformationPriority))

Individual(pr:CommunicationDevice type(pr:DeviceRole))
Individual(pr:ServerDevice type(pr:DeviceRole))
Individual(pr:AmbulanceDevice type(pr:DeviceRole))

Individual(pr:Earthquake type(pr:NaturalDisaster))
Individual(pr:Tsunami type(pr:NaturalDisaster))

Individual(pr:CarAccident type(pr:Accident)
    value(pr:vehicleInvolved pr:Car))
Individual(pr:RailwayAccident type(pr:Accident)
    value(pr:vehicleInvolved pr:Train))
Individual(pr:BoatAccident type(pr:Accident)
    value(pr:vehicleInvolved pr:Boat))

Individual(pr:Car type(pr:Vehicle))
Individual(pr:Train type(pr:Vehicle))
Individual(pr:Boat type(pr:Vehicle))

Here we show an example of information items and a standard information profile for a rescue scenario. The different kinds of information items were explained above.

Individual(pr:InfoItem_31 type(pr:InformationItem)
    value(pr:priority pr:Top)
    value(pr:subject "http://www.ifi.uio.no/dmms/InfoItem#ImmediateBroadcastData"))
Individual(pr:InfoItem_32 type(pr:InformationItem)
    value(pr:priority pr:Urgent)
    value(pr:subject "http://www.ifi.uio.no/dmms/InfoItem#SiteStatus"))
Individual(pr:InfoItem_33 type(pr:InformationItem)
    value(pr:priority pr:Normal)
    value(pr:subject "http://www.ifi.uio.no/dmms/InfoItem#PatientRecord"))
Individual(pr:InfoItem_34 type(pr:InformationItem)
    value(pr:priority pr:Normal)
    value(pr:subject "http://www.ifi.uio.no/dmms/InfoItem#SiteMap"))
Individual(pr:InfoItem_35 type(pr:InformationItem)
    value(pr:priority pr:Low)
    value(pr:subject "http://www.ifi.uio.no/dmms/InfoItem#ForensicsReport"))

Individual(pr:StandardInfoProfile type(pr:InformationProfile)
    value(pr:item pr:InfoItem_35)
    value(pr:item pr:InfoItem_32)
    value(pr:item pr:InfoItem_33)
    value(pr:item pr:InfoItem_34)
    value(pr:item pr:InfoItem_31))

The above example individuals are included in the appropriate ontologies (see Appendix C); therefore we do not show them in OWL XML Syntax here.


An example showing the person information for two paramedics affiliated with Voss Sjukehus, added in this phase:

Individual(ll:LL type(pr:MedicalPerson)
    value(pr:name "Lars Lie"^^xsd:string)
    value(pr:affiliation org:VossSjukehus)
    value(pr:hasHPRnumber "9149988"^^xsd:string))
Individual(hh:HH type(pr:MedicalPerson)
    value(pr:name "Hans Hansen"^^xsd:string)
    value(pr:affiliation org:VossSjukehus)
    value(pr:hasHPRnumber "9149987"^^xsd:string))

The same example in OWL XML Syntax:

<pr:MedicalPerson rdf:about="&ll;LL">
    <pr:name>Lars Lie</pr:name>
    <pr:affiliation rdf:resource="&org;VossSjukehus"/>
    <pr:hasHPRnumber>9149988</pr:hasHPRnumber>
</pr:MedicalPerson>
<pr:MedicalPerson rdf:about="&hh;HH">
    <pr:name>Hans Hansen</pr:name>
    <pr:affiliation rdf:resource="&org;VossSjukehus"/>
    <pr:hasHPRnumber>9149987</pr:hasHPRnumber>
</pr:MedicalPerson>
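The OWL/RDF fragments rely on entity references (&hh;, &org;) that are resolved by the surrounding ontology document. As a self-contained illustration of reading such a fragment, the sketch below parses a simplified MedicalPerson element with a plain JDK DOM parser; the namespace URIs are invented for this example only, and the entity reference is inlined as a literal:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ProfileXmlSketch {
    // Simplified MedicalPerson fragment; the namespace URIs are assumptions
    // for this sketch, and "&hh;HH" is replaced by the literal "HH".
    static final String XML =
        "<pr:MedicalPerson xmlns:pr='http://example.org/profile#' "
        + "xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#' "
        + "rdf:about='HH'>"
        + "<pr:name>Hans Hansen</pr:name>"
        + "<pr:hasHPRnumber>9149987</pr:hasHPRnumber>"
        + "</pr:MedicalPerson>";

    // Extract the pr:name value from the fragment.
    static String personName() {
        try {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);  // required for *NS lookups below
            Document doc = dbf.newDocumentBuilder().parse(
                    new ByteArrayInputStream(XML.getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagNameNS("http://example.org/profile#", "name")
                    .item(0).getTextContent();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(personName()); // prints "Hans Hansen"
    }
}
```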

Resulting database tables:

pr:Incident
| pr:IncId | pr:incidentType | pr:numberOfPeople | pr:incidentDescription | pr:regionName | pr:areaDescription | pr:areaConditions |
| pr:CarAccident | pr:Accident | ... | ... | ... | ... | ... |
| pr:RailwayAccident | pr:Accident | ... | ... | ... | ... | ... |
| pr:BoatAccident | pr:Accident | ... | ... | ... | ... | ... |
| pr:Earthquake | pr:NaturalDisaster | ... | ... | ... | ... | ... |
| pr:Tsunami | pr:NaturalDisaster | ... | ... | ... | ... | ... |

pr:Accident
| pr:IncId | pr:vehicleInvolved | ... |
| pr:CarAccident | pr:Car | |
| pr:RailwayAccident | pr:Train | |
| pr:BoatAccident | pr:Boat | |

pr:Vehicle
| pr:VId | ... |
| pr:Car | |
| pr:Boat | |
| pr:Train | |

pr:RescueOperationRole
| pr:RORId | pr:RORoleType | pr:reportsTo | pr:responsibility | pr:isMemberOf | pr:hasUpdatePriority |
| pr:TmM | pr:TeamMember | pr:TeamLeader | ... | ... | 4 |
| pr:TmL | pr:TeamLeader | pr:OfficerInCharge | ... | ... | 3 |
| pr:OiC | pr:OfficerInCharge | pr:OSC | ... | ... | 2 |
| pr:OSCoo | pr:OSC | ... | ... | ... | 1 |


pr:Person
| pr:PId | pr:personType | pr:name | pr:affiliation | ... |
| ll:LL | pr:MedicalPerson | Lars Lie | org:VossSjukehus | |
| hh:HH | pr:MedicalPerson | Hans Hansen | org:VossSjukehus | |

pr:MedicalPerson
| pr:MPId | pr:hasHPRnumber | ... |
| ll:LL | 9149988 | |
| hh:HH | 9149987 | |

pr:InformationProfile
| pr:IPId | pr:item |
| pr:StandardInfoProfile | ii:ImmediateBroadcastData |
| pr:StandardInfoProfile | ii:SiteStatus |
| pr:StandardInfoProfile | ii:PatientRecord |
| pr:StandardInfoProfile | ii:SiteMap |
| pr:StandardInfoProfile | ii:ForensicsReport |

pr:InformationItem
| pr:IId | pr:subject | pr:priority |
| pr:InfoItem_31 | http://www.ifi.uio.no/dmms/InfoItem#ImmediateBroadcastData | pr:Top |
| pr:InfoItem_32 | http://www.ifi.uio.no/dmms/InfoItem#SiteStatus | pr:Urgent |
| pr:InfoItem_33 | http://www.ifi.uio.no/dmms/InfoItem#PatientRecord | pr:Normal |
| pr:InfoItem_34 | http://www.ifi.uio.no/dmms/InfoItem#SiteMap | pr:Normal |
| pr:InfoItem_35 | http://www.ifi.uio.no/dmms/InfoItem#ForensicsReport | pr:Low |

pr:InformationPriority
| pr:IPrId | ... |
| pr:Top | |
| pr:Urgent | |
| pr:Normal | |
| pr:Low | |

pr:DeviceRole
| pr:DRId | ... |
| pr:CommunicationDevice | |
| pr:ServerDevice | |
| pr:AmbulanceDevice | |

pr:Profile
| pr:PId | pr:ProfileType |
| pr:UProfile | pr:UserProfile |
| pr:DProfile | pr:DeviceProfile |
| pr:IProfile | pr:InformationProfile |
| pr:RSProfile | pr:RescueScenarioProfile |

org:Organisation
| org:orgId | org:orgType | org:orgNumber | org:telephone | org:address |
| org:VossSjukehus | org:Hospital | ... | 56 53 35 00 | Sjukehusvegen 16, 5700 Voss |

Example KB Content for Paramedic Hans Hansen

Individual(hh:HH type(pr:MedicalPerson)
    value(pr:name "Hans Hansen"^^xsd:string)
    value(pr:affiliation org:VossSjukehus)
    value(pr:hasHPRnumber "9149987"^^xsd:string))

Individual(hhpr:HHProfile type(pr:UserProfile)
    value(pr:person hh:HH))

Individual(pc:PC56 type(pr:Device)
    value(pr:model "Apple iBook G4"^^xsd:string)
    value(pr:nodeID "nod1"^^xsd:string))

Individual(pc:PC56Profile type(pr:DeviceProfile)
    value(pr:device pc:PC56)
    value(pr:currentUser hhpr:HHProfile)
    value(pr:deviceRole pr:AmbulanceDevice))

The same shown in OWL XML Syntax:

<pr:MedicalPerson rdf:ID="&hh;HH">
    <pr:name>Hans Hansen</pr:name>
    <pr:affiliation rdf:resource="&org;VossSjukehus"/>
    <pr:hasHPRnumber>9149987</pr:hasHPRnumber>
</pr:MedicalPerson>
<pr:UserProfile rdf:ID="&hhpr;HHProfile">
    <pr:person rdf:resource="&hh;HH"/>
</pr:UserProfile>

<pr:Device rdf:ID="&pc;PC56">
    <pr:model>Apple iBook G4</pr:model>
    <pr:nodeID>nod1</pr:nodeID>
</pr:Device>
<pr:DeviceProfile rdf:ID="&pc;PC56Profile">
    <pr:device rdf:resource="&pc;PC56"/>
    <pr:currentUser rdf:resource="&hhpr;HHProfile"/>
    <pr:deviceRole rdf:resource="&pr;AmbulanceDevice"/>
</pr:DeviceProfile>

Resulting database tables on Hans Hansen's device:

pr:UserProfile
| pr:UPId | pr:person | pr:role | ... |
| hhpr:HHProfile | hh:HH | ... | |

pr:Person
| pr:PId | pr:personType | pr:name | pr:affiliation | ... |
| hh:HH | pr:MedicalPerson | Hans Hansen | org:VossSjukehus | |

pr:MedicalPerson
| pr:MPId | pr:hasHPRnumber | ... |
| hh:HH | 9149987 | |

pr:DeviceProfile
| pr:DPId | pr:deviceRole | pr:device | pr:currentUser | pr:owner |
| pc:PC56Profile | pr:AmbulanceDevice | pc:PC56 | hhpr:HHProfile | ... |

pr:Device
| pr:DevId | pr:model | pr:capability | pr:nodeID | ... |
| pc:PC56 | Apple iBook G4 | ... | nod1 | |

Example KB Content for Paramedic Lars Lie

Individual(ll:LL type(pr:MedicalPerson)
    value(pr:name "Lars Lie"^^xsd:string)
    value(pr:affiliation org:VossSjukehus)
    value(pr:hasHPRnumber "9149988"^^xsd:string))

Individual(llpr:LLProfile type(pr:UserProfile)
    value(pr:person ll:LL))

Individual(pc:PDA22 type(pr:Device)
    value(pr:model "Nokia xyz"^^xsd:string)
    value(pr:nodeID "nod2"^^xsd:string))

Individual(pc:PDA22Profile type(pr:DeviceProfile)
    value(pr:device pc:PDA22)
    value(pr:currentUser llpr:LLProfile))

The same as above in OWL XML Syntax:

<pr:MedicalPerson rdf:ID="&ll;LL">
    <pr:name>Lars Lie</pr:name>
    <pr:affiliation rdf:resource="&org;VossSjukehus"/>
    <pr:hasHPRnumber>9149988</pr:hasHPRnumber>
</pr:MedicalPerson>

<pr:UserProfile rdf:ID="&llpr;LLProfile">
    <pr:person rdf:resource="&ll;LL"/>
</pr:UserProfile>

<pr:Device rdf:ID="&pc;PDA22">
    <pr:model>Nokia xyz</pr:model>
    <pr:nodeID>nod2</pr:nodeID>
</pr:Device>
<pr:DeviceProfile rdf:ID="&pc;PDA22Profile">
    <pr:device rdf:resource="&pc;PDA22"/>
    <pr:currentUser rdf:resource="&llpr;LLProfile"/>
</pr:DeviceProfile>

Resulting database tables on Lars Lie's device:

pr:UserProfile
| pr:UPId | pr:person | pr:role | ... |
| llpr:LLProfile | ll:LL | ... | |

pr:Person
| pr:PId | pr:personType | pr:name | pr:affiliation | ... |
| ll:LL | pr:MedicalPerson | Lars Lie | org:VossSjukehus | |

pr:MedicalPerson
| pr:MPId | pr:hasHPRnumber | ... |
| ll:LL | 9149988 | |

pr:DeviceProfile
| pr:DPId | pr:deviceRole | pr:device | pr:currentUser | pr:owner |
| pc:PDA22Profile | ... | pc:PDA22 | llpr:LLProfile | ... |

pr:Device
| pr:DevId | pr:model | pr:capability | pr:nodeID | ... |
| pc:PDA22 | Nokia xyz | ... | nod2 | |


Phase 2 – Briefing phase – initiation of rescue operation

Example KB Content Added in Phase 2

Note that the person information (records) and devices were added prior to the operation (in Phase 1) and are thus not shown here; the personal information for Hans Hansen was added in Phase 1. User profile for Hans Hansen:

Individual(hhpr:HHProfile type(pr:UserProfile) value(pr:person hh:HH))

The device profile for Hans Hansen's device:

Individual(pc:PC56Profile type(pr:DeviceProfile)
    value(pr:device pc:PC56)
    value(pr:currentUser hhpr:HHProfile)
    value(pr:deviceRole pr:AmbulanceDevice))

On initiation of the rescue operation, the necessary information about this rescue operation is added to the KB. The train accident is given ID “bba:Bergensbanen23”:

Individual(bba:Bergensbanen23 type(pr:Accident) value(pr:vehicleInvolved pr:Train) value(pr:numberOfPeople 400) value(pr:regionName “Bergensbanen,Raundalen,

Hordaland”^^xsd:String) value(pr:areaDescription “remote mountain area,

access difficult”^^xsd:String) value(pr:incidentDescription “rockslide, train off tracks due to

rocks ”^^xsd:string) value(pr:areaConditions “deep snow, minus10 degrees

celsius”^^xsd:string)) An instance of a new rescue operation for the current accident:

Individual(bba:ROperationBB23 type(pr:LandOperation)) An instance for the role of OSC for this rescue operation is created:

Individual(bba:OSCBergensb23 type(pr:OSC)) And the rescue scenario profile:

Individual(bba:RSProfileBB23 type(pr:RescueScenarioProfile) value(pr:rescueOperation bba:ROperationBB23) value(pr:roleCoveredByScenario bba:OSCBergensb23) value(pr:incident bba:Bergensbanen23))

Additional Information Items may also be added to the information profile; these are not shown in this example. The same content in OWL XML syntax:

<pr:UserProfile rdf:ID="&hhpr;HHProfile">
  <pr:person rdf:resource="&hh;HH"/>
</pr:UserProfile>

<pr:DeviceProfile rdf:ID="&pc;PC56Profile">
  <pr:device rdf:resource="&pc;PC56"/>


  <pr:currentUser rdf:resource="&hhpr;HHProfile"/>
  <pr:deviceRole rdf:resource="&pr;AmbulanceDevice"/>
</pr:DeviceProfile>

<pr:Accident rdf:ID="&bba;Bergensbanen23">
  <pr:vehicleInvolved rdf:resource="&pr;Train"/>
  <pr:numberOfPeople>400</pr:numberOfPeople>
  <pr:regionName>Bergensbanen, Raundalen, Hordaland</pr:regionName>
  <pr:areaDescription>remote mountain area, access difficult</pr:areaDescription>
  <pr:incidentDescription>rockslide, train off tracks due to rocks</pr:incidentDescription>
  <pr:areaConditions>deep snow, minus 10 degrees celsius</pr:areaConditions>
</pr:Accident>

<pr:LandOperation rdf:ID="&bba;ROperationBB23"/>

<pr:OSC rdf:ID="&bba;OSCBergensb23"/>

<pr:RescueScenarioProfile rdf:ID="&bba;RSProfileBB23">
  <pr:rescueOperation rdf:resource="&bba;ROperationBB23"/>
  <pr:roleCoveredByScenario rdf:resource="&bba;OSCBergensb23"/>
  <pr:incident rdf:resource="&bba;Bergensbanen23"/>
</pr:RescueScenarioProfile>
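The correspondence between such OWL individuals and the relational tables used in the examples can be sketched in a few lines of Python. This is an illustrative sketch only, not the thesis implementation; the `individual_to_row` helper and the `tables` dictionary are hypothetical:

```python
# Illustrative sketch (not the thesis implementation): one OWL individual
# becomes one row in the table named after its type.
def individual_to_row(ind_id, ind_type, values):
    """Map an OWL individual to a (table, row) entry; hypothetical helper."""
    row = {"id": ind_id}
    row.update(values)
    return {"table": ind_type, "row": row}

# The rescue scenario profile added in Phase 2:
entry = individual_to_row(
    "bba:RSProfileBB23",
    "pr:RescueScenarioProfile",
    {
        "pr:rescueOperation": "bba:ROperationBB23",
        "pr:roleCoveredByScenario": "bba:OSCBergensb23",
        "pr:incident": "bba:Bergensbanen23",
    },
)

tables = {}
tables.setdefault(entry["table"], []).append(entry["row"])
print(tables["pr:RescueScenarioProfile"][0]["pr:incident"])  # bba:Bergensbanen23
```

The resulting dictionary mirrors the one-row-per-individual layout of the database tables shown next.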

Resulting database tables for Phase 2:

pr:RescueScenarioProfile
  pr:RSPId | pr:rescueOperation | pr:roleCoveredByScenario | pr:infoProfileForScenario | pr:incident
  bba:RSProfileBB23 | bba:ROperationBB23 | bba:OSCBergensb23 | ... | bba:Bergensbanen23

pr:RescueOperation
  pr:ROpId | pr:operationType
  bba:ROperationBB23 | pr:LandOperation

pr:Incident
  pr:IncId | pr:incidentType | pr:numberOfPeople | pr:incidentDescription | pr:regionName | pr:areaDescription | pr:areaConditions
  bba:Bergensbanen23 | pr:Accident | 400 | rockslide, train off the tracks due to rocks | Bergensbanen, Raundalen, Hordaland | remote mountain area, access difficult | deep snow, minus 10 degrees celsius

pr:Accident
  pr:IncId | pr:vehicleInvolved | ...
  bba:Bergensbanen23 | pr:Train

pr:UserProfile
  pr:UPId | pr:person | pr:role | ...
  hhpr:HHProfile | hh:HH | ...

pr:RescueOperationRole
  pr:RORId | pr:RORoleType | pr:reportsTo | pr:responsibility | pr:isMemberOf | pr:hasUpdatePriority
  bba:OSCBergensb23 | pr:OSC | ... | ... | ... | ...


pr:DeviceProfile
  pr:DPId | pr:deviceRole | pr:device | pr:currentUser | pr:owner
  pc:PC56Profile | pr:AmbulanceDevice | pc:PC56 | hhpr:HHProfile | ...

CurrentOperation
  COId | ...
  curOp234 | ...

CurrentOperationProfiles
  COId | profile
  curOp234 | bba:RSProfileBB23
  curOp234 | hhpr:HHProfile
  curOp234 | pc:PC56Profile

Phase 4 – Running phase

Event Group 1: Personnel Arrival/Departure and Registering Movement On Site

Example of Update of KB Content, Event 1b: The user profile and device profile for Lars Lie, sent to the OSC (and the OiC health) on arrival:

Individual(llpr:LLProfile type(pr:UserProfile) value(pr:person ll:LL))

Individual(pc:PDA22Profile type(pr:DeviceProfile) value(pr:device pc:PDA22)
  value(pr:currentUser llpr:LLProfile))

An instance of a new health team is created, the team Lars Lie will lead:

Individual(bba:hlthTm4 type(pr:Team))

A new instance of the health team leader role, the role that Lars Lie will take:

Individual(bba:hlthTL1 type(pr:TeamLeader)
  value(pr:isMemberOf bba:hlthTm4))

The Rescue Scenario Profile is updated with the new role covered by the scenario:

Individual(bba:RSProfileBB23 type(pr:RescueScenarioProfile)
  ... value(pr:roleCoveredByScenario bba:hlthTL1))

Information added in User Profile for Lars Lie related to role and team membership in this operation:

Individual(llpr:LLProfile type(pr:UserProfile) value(pr:role bba:hlthTL1))

In OWL XML Syntax (updates to the KB Event 1b):

<pr:UserProfile rdf:ID="&llpr;LLProfile">
  <pr:person rdf:resource="&ll;LL"/>
</pr:UserProfile>

<pr:DeviceProfile rdf:ID="&pc;PDA22Profile">
  <pr:device rdf:resource="&pc;PDA22"/>
  <pr:currentUser rdf:resource="&llpr;LLProfile"/>
</pr:DeviceProfile>

<pr:Team rdf:ID="&bba;hlthTm4"/>


<pr:TeamLeader rdf:ID="&bba;hlthTL1">
  <pr:isMemberOf rdf:resource="&bba;hlthTm4"/>
</pr:TeamLeader>

<pr:RescueScenarioProfile rdf:ID="&bba;RSProfileBB23">
  <pr:roleCoveredByScenario rdf:resource="&bba;hlthTL1"/>
</pr:RescueScenarioProfile>

<pr:UserProfile rdf:ID="&llpr;LLProfile">
  <pr:role rdf:resource="&bba;hlthTL1"/>
</pr:UserProfile>
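The Event 1b updates above amount to a short sequence of KB operations. The following sketch replays them against a simple in-memory dictionary KB; this is an illustration under the assumption of a dictionary-based store, not the OWL/relational machinery described in the thesis:

```python
# Sketch of the Event 1b updates against a simple in-memory KB
# (dictionary-based illustration; the thesis stores this as OWL data).
kb = {
    "bba:RSProfileBB23": {
        "type": "pr:RescueScenarioProfile",
        "pr:roleCoveredByScenario": ["bba:OSCBergensb23"],
    },
}

# Profiles received on arrival:
kb["llpr:LLProfile"] = {"type": "pr:UserProfile", "pr:person": "ll:LL"}
kb["pc:PDA22Profile"] = {"type": "pr:DeviceProfile",
                         "pr:device": "pc:PDA22",
                         "pr:currentUser": "llpr:LLProfile"}

# New team and the team-leader role Lars Lie will take:
kb["bba:hlthTm4"] = {"type": "pr:Team"}
kb["bba:hlthTL1"] = {"type": "pr:TeamLeader", "pr:isMemberOf": "bba:hlthTm4"}

# The rescue scenario profile now also covers the new role:
kb["bba:RSProfileBB23"]["pr:roleCoveredByScenario"].append("bba:hlthTL1")

# Lars Lie's user profile is linked to his new role:
kb["llpr:LLProfile"]["pr:role"] = "bba:hlthTL1"
```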

Event 1d: Update of Dynamic Context

Example of Update of KB Content, Event 1d: Change of location for Lars Lie's device, a few minutes after his arrival (dynamic context update):

Individual(event1:ctxUpd25 type(cxt:DeviceContext)
  value(cxt:profile pc:PDA22Profile)
  value(cxt:time "10:43"^^xsd:time)
  value(cxt:position
    Individual(event1:pos23x type(cxt:Position)
      value(cxt:latitude "60n39"^^xsd:string)
      value(cxt:longitude "6e26"^^xsd:string))))

OWL XML Syntax:

<cxt:DeviceContext rdf:ID="&event1;ctxUpd25">
  <cxt:profile rdf:resource="&pc;PDA22Profile"/>
  <cxt:time>10:43</cxt:time>
  <cxt:position>
    <cxt:Position rdf:ID="&event1;pos23x">
      <cxt:latitude>60n39</cxt:latitude>
      <cxt:longitude>6e26</cxt:longitude>
    </cxt:Position>
  </cxt:position>
</cxt:DeviceContext>
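A received dynamic context update like the one above can be appended to the CurrentOperationContexts table. A minimal sketch follows; the `DeviceContext` dataclass and `register_context_update` helper are hypothetical, and positions are kept as the strings used in the example:

```python
# Sketch of registering the Event 1d dynamic context update
# (hypothetical helper; not the thesis implementation).
from dataclasses import dataclass

@dataclass
class DeviceContext:
    context_id: str
    profile: str
    time: str
    latitude: str
    longitude: str

# Rows of the CurrentOperationContexts table: (operation id, context).
current_operation_contexts = []

def register_context_update(op_id, ctx):
    """Append a received context update to the CurrentOperationContexts table."""
    current_operation_contexts.append((op_id, ctx))

register_context_update(
    "curOp234",
    DeviceContext("event1:ctxUpd25", "pc:PDA22Profile", "10:43", "60n39", "6e26"),
)
```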

Resulting database tables for Event 1a and 1d: Note that information added before Event 1 is shown in grey. Only tables relevant to Event 1 are shown; tables without changes (e.g., person information) are not shown. Hans Hansen is the current OSC for this accident. Lars Lie's UserProfile is added.

pr:UserProfile
  pr:UPId | pr:person | pr:role | ...
  hhpr:HHProfile | hh:HH | bba:OSCBergensb23
  llpr:LLProfile | ll:LL | bba:hlthTL1

A new instance of a team (bba:hlthTm4) and team leader (bba:hlthTL1) has been created. (An instance of OSC for this accident was added in Phase 2, at the initiation of the operation.)

pr:RescueOperationRole
  pr:RORId | pr:RORoleType | pr:reportsTo | pr:responsibility | pr:isMemberOf | pr:hasUpdatePriority
  bba:OSCBergensb23 | pr:OSC | ... | ... | ... | ...
  bba:hlthTL1 | pr:TeamLeader | ... | ... | bba:hlthTm4 | ...


pr:Team
  pr:TId | ...
  bba:hlthTm4 | ...

pr:RescueScenarioProfile
  pr:RSPId | pr:rescueOperation | pr:roleCoveredByScenario | pr:infoProfileForScenario
  bba:RSProfileBB23 | bba:ROperationBB23 | bba:OSCBergensb23 | ...
  bba:RSProfileBB23 | bba:ROperationBB23 | bba:hlthTL1 | ...

pr:DeviceProfile
  pr:DPId | pr:deviceRole | pr:device | pr:currentUser | pr:owner
  pc:PC56Profile | pr:AmbulanceDevice | pc:PC56 | hhpr:HHProfile | ...
  pc:PDA22Profile | ... | pc:PDA22 | llpr:LLProfile | ...

cxt:Context
  cxt:CPId | cxt:CxtType | cxt:profile | cxt:time | ...
  event1:ctxUpd25 | cxt:DeviceContext | pc:PDA22Profile | 10:43

cxt:DeviceContext
  cxt:CPId | cxt:Position | ...
  event1:ctxUpd25 | event1:pos23x

cxt:Position
  cxt:PId | cxt:longitude | cxt:latitude
  event1:pos23x | 6e26 | 60n39

Current Operation tables for Events 1b and 1d. Event 1a: update after receiving the message of a new personnel arrival (profile). Lars Lie has arrived at the scene; his user profile (llpr:LLProfile) and device profile (pc:PDA22Profile), sent to the OSC, are added to the Current Operation profiles.

CurrentOperationProfiles
  COId | profile
  curOp234 | hhpr:HHProfile
  curOp234 | bba:RSProfileBB23
  curOp234 | pc:PC56Profile
  curOp234 | llpr:LLProfile
  curOp234 | pc:PDA22Profile

Event 1d: update after a received message with a dynamic context update.

CurrentOperationContexts
  COId | context
  curOp234 | event1:ctxUpd25

Event Group 2: Update Changes in Profiles

Example of Update in KB: changing the role in the UserProfile for Hans Hansen, in OWL abstract syntax. Before Event 1 and Event 2: on arrival, Hans Hansen had assumed the role of (temporary) OSC, as he was first on scene. The role description was added to his user profile.

Individual(hhpr:HHProfile type(pr:UserProfile) value(pr:person hh:HH)

... value(pr:role bba:OSCBergensb23))


In the meantime, other personnel have arrived and many roles have been filled; these are not shown in this example. Some time later, the police arrive and a police officer takes over the role of OSC, so Hans Hansen is given a new rescue operation role. First, a new instance of a health team member for health team 4 (bba:hlthTm4) is created and added to the rescue scenario profile for the ongoing operation (bba), and then Hans Hansen's user profile is changed. The changes are illustrated by strikethroughs of old values.

Individual(bba:hlthTmM34 type(pr:TeamMember) value(pr:isMemberOf bba:hlthTm4))

Individual(hhpr:HHProfile type(pr:UserProfile) value(pr:person hh:HH)

... value(pr:role bba:OSCBergensb23 bba:hlthTmM34))

The new role is added to the rescue scenario profile for the ongoing operation:

Individual(bba:RSProfileBB23 type(pr:RescueScenarioProfile) ... value(pr:roleCoveredByScenario bba:hlthTmM34))

The same shown in OWL XML Syntax:

<pr:TeamMember rdf:ID="&bba;hlthTmM34">
  <pr:isMemberOf rdf:resource="&bba;hlthTm4"/>
</pr:TeamMember>

<pr:UserProfile rdf:ID="&hhpr;HHProfile">
  ...
  <pr:role>bba:OSCBergensb23 bba:hlthTmM34</pr:role>
</pr:UserProfile>

<pr:RescueScenarioProfile rdf:ID="&bba;RSProfileBB23">
  <pr:roleCoveredByScenario rdf:resource="&bba;hlthTmM34"/>
</pr:RescueScenarioProfile>
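The role change itself is a replace-value update on the user profile. A minimal sketch; the dictionary store and the `change_role` helper are hypothetical illustrations, not the thesis implementation:

```python
# Sketch of the Event 2 role change: the old role value in Hans Hansen's
# user profile is replaced by the new one (illustration only).
user_profiles = {
    "hhpr:HHProfile": {"pr:person": "hh:HH", "pr:role": "bba:OSCBergensb23"},
}

def change_role(profiles, profile_id, new_role):
    """Replace the role in a user profile, returning the old value."""
    old_role = profiles[profile_id]["pr:role"]
    profiles[profile_id]["pr:role"] = new_role
    return old_role

old = change_role(user_profiles, "hhpr:HHProfile", "bba:hlthTmM34")
```

Returning the old value corresponds to keeping the struck-through entry visible in the example tables.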

Resulting Database Tables for Event 2: We do not show tables where there have been no changes, e.g., person information and the team bba:hlthTm4. Hans Hansen (first on site) took on the role of OSC, but now (in Event 2) his role is changed to member of health team 4 (bba:hlthTm4). The old value is shown in strikethrough.

pr:UserProfile
  pr:UPId | pr:person | pr:role | ...
  hhpr:HHProfile | hh:HH | bba:OSCBergensb23 bba:hlthTmM34

The rescue operation role of OSC for the current accident was added in Phase 2, and the team leader role was added in Event 1.

pr:RescueOperationRole
  pr:RORId | pr:RORoleType | pr:reportsTo | pr:responsibility | pr:isMemberOf | pr:hasUpdatePriority
  bba:OSCBergensb23 | pr:OSC | ... | ... | ... | ...
  bba:hlthTL1 | pr:TeamLeader | ... | ... | bba:hlthTm4 | ...
  bba:hlthTmM34 | pr:TeamMember | ... | ... | bba:hlthTm4 | ...


pr:RescueScenarioProfile
  pr:RSPId | pr:rescueOperation | pr:roleCoveredByScenario | pr:infoProfileForScenario | pr:incident
  bba:RSProfileBB23 | bba:ROperationBB23 | bba:OSCBergensb23 | ... | bba:Bergensbanen23
  bba:RSProfileBB23 | bba:ROperationBB23 | bba:hlthTL1 | ... | bba:Bergensbanen23
  bba:RSProfileBB23 | bba:ROperationBB23 | bba:hlthTmM34 | ... | bba:Bergensbanen23

pr:DeviceProfile
  pr:DPId | pr:deviceRole | pr:device | pr:currentUser | pr:owner
  pc:PC56Profile | pr:AmbulanceDevice | pc:PC56 | hhpr:HHProfile | ...
  pc:PDA22Profile | ... | pc:PDA22 | llpr:LLProfile | ...

cxt:Context
  cxt:CPId | cxt:CxtType | cxt:profile | cxt:time | ...
  event1:ctxUpd25 | cxt:DeviceContext | pc:PDA22Profile | 10:43

cxt:DeviceContext
  cxt:CPId | cxt:Position | ...
  event1:ctxUpd25 | event1:pos23x

cxt:Position
  cxt:PId | cxt:longitude | cxt:latitude
  event1:pos23x | 6e26 | 60n39

There were no changes in the tables for CurrentOperation in Event 2.

Appendix C – Example Rescue Ontology in OWL

The ontologies have been validated using the WonderWeb OWL Validator [WonderWeb]. Ontology consistency has been checked using the Protege-OWL editor [Protege]. The profile ontology imports the organisation ontology, and the dynamic context ontology imports both the profile ontology and the organisation ontology. Therefore, we present the ontologies in the following order: organisation ontology, profile ontology, and dynamic context ontology. According to the WonderWeb OWL Validator and the Protege-OWL editor, the organisation ontology is OWL Lite, the profile ontology is OWL DL, and the dynamic context ontology is OWL Full.

Organisation ontology

Using the WonderWeb OWL Validator, the organisation ontology has been validated to be OWL Lite.

<?xml version="1.0"?>
<!DOCTYPE owl [
  <!ENTITY rdf "http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <!ENTITY rdfs "http://www.w3.org/2000/01/rdf-schema#">
  <!ENTITY xsd "http://www.w3.org/2001/XMLSchema#">
  <!ENTITY owl "http://www.w3.org/2002/07/owl#">
  <!ENTITY org "http://www.ifi.uio.no/dmms/organisation#">
]>
<rdf:RDF
  xmlns:rdf  = "&rdf;"
  xmlns:rdfs = "&rdfs;"
  xmlns:owl  = "&owl;"


  xmlns:xsd  = "&xsd;"
  xmlns:org  = "&org;"
  xml:base   = "http://www.ifi.uio.no/dmms/organisation#">

  <owl:Ontology rdf:about="">
    <rdfs:comment rdf:datatype="&xsd;string">Organisations</rdfs:comment>
    <rdfs:label rdf:datatype="&xsd;string">Organisation Ontology</rdfs:label>
  </owl:Ontology>

  <owl:Class rdf:ID="Hospital">
    <rdfs:subClassOf>
      <owl:Class rdf:ID="Organisation"/>
    </rdfs:subClassOf>
  </owl:Class>

  <owl:DatatypeProperty rdf:ID="telephone">
    <rdfs:domain rdf:resource="#Organisation"/>
    <rdfs:range rdf:resource="&xsd;string"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="address">
    <rdfs:range rdf:resource="&xsd;string"/>
    <rdfs:domain rdf:resource="#Organisation"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="orgNumber">
    <rdfs:range rdf:resource="&xsd;string"/>
    <rdfs:domain rdf:resource="#Organisation"/>
  </owl:DatatypeProperty>

  <org:Hospital rdf:ID="VossSjukehus">
    <org:address>Sjukehusvegen 16, 5700 Voss</org:address>
    <org:telephone>56 53 35 00</org:telephone>
  </org:Hospital>
</rdf:RDF>
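Class and property declarations in such an RDF/XML file can be listed with ordinary XML tooling. The sketch below uses Python's standard library on a small fragment modelled on the organisation ontology; entity references are written out in full, since the fragment is parsed without its DTD:

```python
# Sketch: extracting class and datatype-property names from a fragment
# modelled on the organisation ontology, using Python's stdlib XML parser.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
OWL = "http://www.w3.org/2002/07/owl#"

fragment = f"""
<rdf:RDF xmlns:rdf="{RDF}" xmlns:owl="{OWL}">
  <owl:Class rdf:ID="Hospital"/>
  <owl:Class rdf:ID="Organisation"/>
  <owl:DatatypeProperty rdf:ID="telephone"/>
  <owl:DatatypeProperty rdf:ID="address"/>
  <owl:DatatypeProperty rdf:ID="orgNumber"/>
</rdf:RDF>
"""

root = ET.fromstring(fragment)
# Namespaced tags and attributes use Clark notation: {namespace}local.
classes = [el.get(f"{{{RDF}}}ID") for el in root.iter(f"{{{OWL}}}Class")]
props = [el.get(f"{{{RDF}}}ID") for el in root.iter(f"{{{OWL}}}DatatypeProperty")]
print(classes)  # ['Hospital', 'Organisation']
```

A full OWL toolchain would of course resolve imports and entities as well; this only illustrates that the listings are plain, machine-readable XML.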

Profile ontology

Using the WonderWeb OWL Validator, the profile ontology has been validated to be OWL DL.

<?xml version="1.0"?>
<!DOCTYPE owl [
  <!ENTITY rdf "http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <!ENTITY rdfs "http://www.w3.org/2000/01/rdf-schema#">
  <!ENTITY xsd "http://www.w3.org/2001/XMLSchema#">
  <!ENTITY owl "http://www.w3.org/2002/07/owl#">
  <!ENTITY org "http://www.ifi.uio.no/dmms/organisation#">
  <!ENTITY pr "http://www.ifi.uio.no/dmms/profileontology#">
]>
<rdf:RDF
  xmlns:rdf  = "&rdf;"
  xmlns:rdfs = "&rdfs;"
  xmlns:owl  = "&owl;"
  xmlns:xsd  = "&xsd;"
  xmlns:org  = "&org;"
  xmlns:pr   = "&pr;"
  xml:base   = "http://www.ifi.uio.no/dmms/profileontology#">

  <owl:Ontology rdf:about="">
    <owl:imports rdf:resource="http://www.ifi.uio.no/dmms/organisation"/>
    <rdfs:comment rdf:datatype="&xsd;string">Example Profile Ontology for rescue scenarios</rdfs:comment>
    <rdfs:label rdf:datatype="&xsd;string">Profile Ontology</rdfs:label>
  </owl:Ontology>

  <owl:Class rdf:ID="TeamMember">
    <rdfs:subClassOf>
      <owl:Class rdf:ID="RescueOperationRole"/>


    </rdfs:subClassOf>
    <rdfs:subClassOf>
      <owl:Restriction>
        <owl:someValuesFrom>
          <owl:Class rdf:ID="TeamLeader"/>
        </owl:someValuesFrom>
        <owl:onProperty>
          <owl:ObjectProperty rdf:ID="reportsTo"/>
        </owl:onProperty>
      </owl:Restriction>
    </rdfs:subClassOf>
    <rdfs:subClassOf>
      <owl:Restriction>
        <owl:onProperty>
          <owl:DatatypeProperty rdf:ID="hasUpdatePriority"/>
        </owl:onProperty>
        <owl:hasValue rdf:datatype="&xsd;int">4</owl:hasValue>
      </owl:Restriction>
    </rdfs:subClassOf>
  </owl:Class>

  <owl:Class rdf:ID="RescueScenarioProfile">
    <rdfs:subClassOf>
      <owl:Class rdf:ID="Profile"/>
    </rdfs:subClassOf>
  </owl:Class>

  <owl:Class rdf:ID="SeaOperation">
    <rdfs:subClassOf>
      <owl:Class rdf:ID="RescueOperation"/>
    </rdfs:subClassOf>
  </owl:Class>

  <owl:Class rdf:ID="InformationPriority"/>

  <owl:Class rdf:ID="OfficerInCharge">
    <rdfs:subClassOf rdf:resource="#RescueOperationRole"/>
    <rdfs:subClassOf>
      <owl:Restriction>
        <owl:onProperty>
          <owl:ObjectProperty rdf:about="#reportsTo"/>
        </owl:onProperty>
        <owl:someValuesFrom>
          <owl:Class rdf:ID="OSC"/>
        </owl:someValuesFrom>
      </owl:Restriction>
    </rdfs:subClassOf>
    <rdfs:subClassOf>
      <owl:Restriction>
        <owl:onProperty>
          <owl:DatatypeProperty rdf:about="#hasUpdatePriority"/>
        </owl:onProperty>
        <owl:hasValue rdf:datatype="&xsd;int">2</owl:hasValue>
      </owl:Restriction>
    </rdfs:subClassOf>
  </owl:Class>

  <owl:Class rdf:about="#OSC">
    <rdfs:subClassOf rdf:resource="#OfficerInCharge"/>
    <rdfs:subClassOf>
      <owl:Restriction>
        <owl:onProperty>
          <owl:ObjectProperty rdf:about="#reportsTo"/>
        </owl:onProperty>
        <owl:someValuesFrom>
          <owl:Class rdf:ID="RSC"/>
        </owl:someValuesFrom>
      </owl:Restriction>
    </rdfs:subClassOf>
    <rdfs:subClassOf>
      <owl:Restriction>


        <owl:onProperty>
          <owl:DatatypeProperty rdf:about="#hasUpdatePriority"/>
        </owl:onProperty>
        <owl:hasValue rdf:datatype="&xsd;int">1</owl:hasValue>
      </owl:Restriction>
    </rdfs:subClassOf>
  </owl:Class>

  <owl:Class rdf:ID="UserProfile">
    <rdfs:subClassOf rdf:resource="#Profile"/>
  </owl:Class>

  <owl:Class rdf:about="&org;Organisation"/>
  <owl:Class rdf:ID="InformationItem"/>

  <owl:Class rdf:ID="LandOperation">
    <rdfs:subClassOf rdf:resource="#RescueOperation"/>
  </owl:Class>

  <owl:Class rdf:ID="Responsibility"/>
  <owl:Class rdf:ID="Device"/>
  <owl:Class rdf:ID="DeviceRole"/>

  <owl:Class rdf:ID="Team">
    <rdfs:subClassOf rdf:resource="&owl;Thing"/>
    <rdfs:subClassOf>
      <owl:Restriction>
        <owl:onProperty>
          <owl:ObjectProperty rdf:ID="hasTeamLeader"/>
        </owl:onProperty>
        <owl:cardinality rdf:datatype="&xsd;int">1</owl:cardinality>
      </owl:Restriction>
    </rdfs:subClassOf>
  </owl:Class>

  <owl:Class rdf:ID="MedicalPerson">
    <rdfs:subClassOf>
      <owl:Class rdf:ID="Person"/>
    </rdfs:subClassOf>
  </owl:Class>

  <owl:Class rdf:ID="Accident">
    <rdfs:subClassOf>
      <owl:Class rdf:ID="Incident"/>
    </rdfs:subClassOf>
  </owl:Class>

  <owl:Class rdf:ID="DeviceProfile">
    <rdfs:subClassOf rdf:resource="#Profile"/>
  </owl:Class>

  <owl:Class rdf:about="#Incident">
    <rdfs:comment rdf:datatype="&xsd;string">e.g., accident (train/boat/car), earthquake</rdfs:comment>
  </owl:Class>

  <owl:Class rdf:ID="NaturalDisaster">
    <rdfs:subClassOf rdf:resource="#Incident"/>
  </owl:Class>

  <owl:Class rdf:ID="Vehicle">
    <rdfs:comment rdf:datatype="&xsd;string">e.g., train/boat/car; several can be involved in the same accident, e.g., car and train</rdfs:comment>
  </owl:Class>

  <owl:Class rdf:about="#TeamLeader">
    <rdfs:subClassOf rdf:resource="#RescueOperationRole"/>
    <rdfs:subClassOf>
      <owl:Restriction>
        <owl:onProperty>
          <owl:ObjectProperty rdf:about="#reportsTo"/>
        </owl:onProperty>
        <owl:someValuesFrom rdf:resource="#OfficerInCharge"/>
      </owl:Restriction>
    </rdfs:subClassOf>
    <rdfs:subClassOf>


      <owl:Restriction>
        <owl:onProperty>
          <owl:DatatypeProperty rdf:about="#hasUpdatePriority"/>
        </owl:onProperty>
        <owl:hasValue rdf:datatype="&xsd;int">3</owl:hasValue>
      </owl:Restriction>
    </rdfs:subClassOf>
  </owl:Class>

  <owl:Class rdf:ID="AirOperation">
    <rdfs:subClassOf rdf:resource="#RescueOperation"/>
  </owl:Class>

  <owl:Class rdf:ID="InformationProfile">
    <rdfs:subClassOf rdf:resource="#Profile"/>
  </owl:Class>

  <owl:Class rdf:ID="DeviceCapability"/>

  <owl:ObjectProperty rdf:ID="deviceRole">
    <rdfs:domain rdf:resource="#DeviceProfile"/>
    <rdfs:range rdf:resource="#DeviceRole"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="hasMember">
    <rdfs:range rdf:resource="#Person"/>
    <rdfs:domain rdf:resource="#Team"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="priority">
    <rdfs:domain rdf:resource="#InformationItem"/>
    <rdfs:range rdf:resource="#InformationPriority"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="infoProfileForScenario">
    <rdfs:domain rdf:resource="#RescueScenarioProfile"/>
    <rdfs:range rdf:resource="#InformationProfile"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="person">
    <rdfs:range rdf:resource="#Person"/>
    <rdfs:domain rdf:resource="#UserProfile"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="device">
    <rdfs:domain rdf:resource="#DeviceProfile"/>
    <rdfs:range rdf:resource="#Device"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:about="#hasTeamLeader">
    <rdfs:domain rdf:resource="#Team"/>
    <rdfs:range rdf:resource="#Person"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="hasInfoProfile">
    <rdfs:range rdf:resource="#InformationProfile"/>
    <rdfs:domain rdf:resource="#RescueOperationRole"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="currentUser">
    <rdfs:domain rdf:resource="#DeviceProfile"/>
    <rdfs:range rdf:resource="#UserProfile"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="capability">
    <rdfs:domain rdf:resource="#Device"/>
    <rdfs:range rdf:resource="#DeviceCapability"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="rescueOperation">
    <rdfs:domain rdf:resource="#RescueScenarioProfile"/>
    <rdfs:range rdf:resource="#RescueOperation"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="affiliation">
    <rdfs:range rdf:resource="&org;Organisation"/>
    <rdfs:domain rdf:resource="#Person"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="owner">
    <rdfs:domain rdf:resource="#DeviceProfile"/>
    <rdfs:range>
      <owl:Class rdf:ID="DeviceOwner">


        <owl:unionOf rdf:parseType="Collection">
          <owl:Class rdf:about="#Person"/>
          <owl:Class rdf:about="&org;Organisation"/>
        </owl:unionOf>
      </owl:Class>
    </rdfs:range>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:about="#isMemberOf">
    <rdfs:domain rdf:resource="#RescueOperationRole"/>
    <rdfs:range rdf:resource="#Team"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="item">
    <rdfs:domain rdf:resource="#InformationProfile"/>
    <rdfs:range rdf:resource="#InformationItem"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="incident">
    <rdfs:range rdf:resource="#Incident"/>
    <rdfs:domain rdf:resource="#RescueScenarioProfile"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="vehicleInvolved">
    <rdfs:range rdf:resource="#Vehicle"/>
    <rdfs:domain rdf:resource="#Accident"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="roleCoveredByScenario">
    <rdfs:domain rdf:resource="&owl;Thing"/>
    <rdfs:domain rdf:resource="#RescueScenarioProfile"/>
    <rdfs:range>
      <owl:Class rdf:ID="CoveredRole">
        <owl:unionOf rdf:parseType="Collection">
          <owl:Class rdf:about="#DeviceRole"/>
          <owl:Class rdf:about="#RescueOperationRole"/>
        </owl:unionOf>
      </owl:Class>
    </rdfs:range>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="role">
    <rdfs:range rdf:resource="#RescueOperationRole"/>
    <rdfs:domain rdf:resource="#UserProfile"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:about="#reportsTo">
    <rdfs:domain rdf:resource="#RescueOperationRole"/>
    <rdfs:range>
      <owl:Class>
        <owl:unionOf rdf:parseType="Collection">
          <owl:Class rdf:about="#RescueOperationRole"/>
          <owl:Class rdf:about="#RSC"/>
        </owl:unionOf>
      </owl:Class>
    </rdfs:range>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="responsibility">
    <rdfs:range rdf:resource="#Responsibility"/>
    <rdfs:domain rdf:resource="#RescueOperationRole"/>
  </owl:ObjectProperty>

  <owl:DatatypeProperty rdf:ID="hasHPRNumber">
    <rdfs:range rdf:resource="&xsd;string"/>
    <rdfs:domain rdf:resource="#MedicalPerson"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="model">
    <rdfs:domain rdf:resource="#Device"/>
    <rdfs:range rdf:resource="&xsd;string"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="areaConditions">
    <rdfs:comment rdf:datatype="&xsd;string">description of conditions in the rescue area, e.g., weather, snow depth</rdfs:comment>
    <rdfs:domain rdf:resource="#Incident"/>
    <rdfs:range rdf:resource="&xsd;string"/>


  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="name">
    <rdfs:domain rdf:resource="#Person"/>
    <rdfs:range rdf:resource="&xsd;string"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="currentTask">
    <rdfs:range rdf:resource="&xsd;string"/>
    <rdfs:domain rdf:resource="#Team"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="subject">
    <rdfs:range rdf:resource="&xsd;string"/>
    <rdfs:domain rdf:resource="#InformationItem"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:about="#hasUpdatePriority">
    <rdfs:range rdf:resource="&xsd;int"/>
    <rdfs:domain rdf:resource="#RescueOperationRole"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="nodeID">
    <rdfs:range rdf:resource="&xsd;string"/>
    <rdfs:domain rdf:resource="#Device"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="incidentDescription">
    <rdfs:domain rdf:resource="#Incident"/>
    <rdfs:range rdf:resource="&xsd;string"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="regionName">
    <rdfs:range rdf:resource="&xsd;string"/>
    <rdfs:domain rdf:resource="#Incident"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="numberOfPeople">
    <rdfs:comment rdf:datatype="&xsd;string">Likely estimated number, e.g., passengers, train driver etc. (not rescue personnel)</rdfs:comment>
    <rdfs:range rdf:resource="&xsd;integer"/>
    <rdfs:domain rdf:resource="#Incident"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="areaDescription">
    <rdfs:comment rdf:datatype="&xsd;string">e.g., rural, mountain, public road, tunnel etc.</rdfs:comment>
    <rdfs:domain rdf:resource="#Incident"/>
    <rdfs:range rdf:resource="&xsd;string"/>
  </owl:DatatypeProperty>

  <pr:UserProfile rdf:ID="UProfile"/>
  <pr:DeviceProfile rdf:ID="DProfile"/>
  <pr:InformationProfile rdf:ID="IProfile"/>
  <pr:RescueScenarioProfile rdf:ID="RSProfile"/>
  <pr:TeamMember rdf:ID="TmM"/>
  <pr:TeamLeader rdf:ID="TmL"/>
  <pr:OfficerInCharge rdf:ID="OiC"/>
  <pr:OSC rdf:ID="OSCoo"/>
  <pr:InformationPriority rdf:ID="Top"/>
  <pr:InformationPriority rdf:ID="Urgent"/>
  <pr:InformationPriority rdf:ID="Normal"/>
  <pr:InformationPriority rdf:ID="Low"/>
  <pr:DeviceRole rdf:ID="CommunicationDevice"/>
  <pr:DeviceRole rdf:ID="ServerDevice"/>
  <pr:DeviceRole rdf:ID="AmbulanceDevice"/>
  <pr:NaturalDisaster rdf:ID="Earthquake"/>
  <pr:NaturalDisaster rdf:ID="Tsunami"/>
  <pr:Vehicle rdf:ID="Car"/>
  <pr:Vehicle rdf:ID="Train"/>
  <pr:Vehicle rdf:ID="Boat"/>

  <pr:Accident rdf:ID="CarAccident">
    <pr:vehicleInvolved rdf:resource="#Car"/>
  </pr:Accident>

  <pr:Accident rdf:ID="RailwayAccident">


    <pr:vehicleInvolved rdf:resource="#Train"/>
  </pr:Accident>

  <pr:Accident rdf:ID="BoatAccident">
    <pr:vehicleInvolved rdf:resource="#Boat"/>
  </pr:Accident>

  <pr:InformationProfile rdf:ID="StandardInfoProfile">
    <pr:item>
      <pr:InformationItem rdf:ID="InfoItem_31">
        <pr:subject>http://www.ifi.uio.no/dmms/InfoItem#ImmediateBroadcastData</pr:subject>
        <pr:priority rdf:resource="#Top"/>
      </pr:InformationItem>
    </pr:item>
    <pr:item>
      <pr:InformationItem rdf:ID="InfoItem_32">
        <pr:subject>http://www.ifi.uio.no/dmms/InfoItem#SiteStatus</pr:subject>
        <pr:priority rdf:resource="#Urgent"/>
      </pr:InformationItem>
    </pr:item>
    <pr:item>
      <pr:InformationItem rdf:ID="InfoItem_33">
        <pr:subject>http://www.ifi.uio.no/dmms/InfoItem#PatientRecord</pr:subject>
        <pr:priority rdf:resource="#Normal"/>
      </pr:InformationItem>
    </pr:item>
    <pr:item>
      <pr:InformationItem rdf:ID="InfoItem_34">
        <pr:subject>http://www.ifi.uio.no/dmms/InfoItem#SiteMap</pr:subject>
        <pr:priority rdf:resource="#Normal"/>
      </pr:InformationItem>
    </pr:item>
    <pr:item>
      <pr:InformationItem rdf:ID="InfoItem_35">
        <pr:subject>http://www.ifi.uio.no/dmms/InfoItem#ForensicsReport</pr:subject>
        <pr:priority rdf:resource="#Low"/>
      </pr:InformationItem>
    </pr:item>
  </pr:InformationProfile>
</rdf:RDF>
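The hasValue restrictions on hasUpdatePriority in the profile ontology fix the priorities OSC = 1, OfficerInCharge = 2, TeamLeader = 3 and TeamMember = 4, where lower values can be served first when disseminating updates. A minimal sketch of such an ordering (the `pending` list and its contents are hypothetical):

```python
# Sketch: ordering pending updates by the role priorities fixed in the
# profile ontology (hasValue restrictions: OSC=1, OiC=2, TeamLeader=3,
# TeamMember=4). Lower numbers are served first. Illustration only.
HAS_UPDATE_PRIORITY = {
    "pr:OSC": 1,
    "pr:OfficerInCharge": 2,
    "pr:TeamLeader": 3,
    "pr:TeamMember": 4,
}

# Hypothetical pending updates: (role instance, role type).
pending = [("bba:hlthTmM34", "pr:TeamMember"),
           ("bba:OSCBergensb23", "pr:OSC"),
           ("bba:hlthTL1", "pr:TeamLeader")]

ordered = sorted(pending, key=lambda r: HAS_UPDATE_PRIORITY[r[1]])
print([role_id for role_id, _ in ordered])
# ['bba:OSCBergensb23', 'bba:hlthTL1', 'bba:hlthTmM34']
```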

Dynamic Context ontology

Using the WonderWeb OWL Validator, the dynamic context ontology has been validated to be OWL Full.

<?xml version="1.0"?>
<!DOCTYPE owl [
  <!ENTITY rdf "http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <!ENTITY rdfs "http://www.w3.org/2000/01/rdf-schema#">
  <!ENTITY xsd "http://www.w3.org/2001/XMLSchema#">
  <!ENTITY owl "http://www.w3.org/2002/07/owl#">
  <!ENTITY pr "http://www.ifi.uio.no/dmms/profileontology#">
  <!ENTITY cxt "http://www.ifi.uio.no/dmms/contextontology#">
]>
<rdf:RDF
  xmlns:rdf  = "&rdf;"
  xmlns:rdfs = "&rdfs;"
  xmlns:owl  = "&owl;"


  xmlns:xsd  = "&xsd;"
  xmlns:cxt  = "&cxt;"
  xmlns:pr   = "&pr;"
  xml:base   = "http://www.ifi.uio.no/dmms/contextontology#">

  <owl:Ontology rdf:about="">
    <rdfs:comment rdf:datatype="&xsd;string">Example Dynamic Context Ontology</rdfs:comment>
    <owl:imports rdf:resource="http://www.ifi.uio.no/dmms/organisation"/>
    <owl:imports rdf:resource="http://www.ifi.uio.no/dmms/profileontology"/>
    <rdfs:label rdf:datatype="&xsd;string">Dynamic Context Ontology</rdfs:label>
  </owl:Ontology>

  <owl:Class rdf:ID="Task"/>

  <owl:Class rdf:ID="UserContext">
    <rdfs:subClassOf>
      <owl:Class rdf:ID="Context"/>
    </rdfs:subClassOf>
  </owl:Class>

  <owl:Class rdf:ID="Position"/>

  <owl:Class rdf:ID="DeviceContext">
    <rdfs:subClassOf rdf:resource="#Context"/>
  </owl:Class>

  <owl:ObjectProperty rdf:ID="currentTask">
    <rdfs:range rdf:resource="#Task"/>
    <rdfs:domain rdf:resource="#UserContext"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="position">
    <rdfs:range rdf:resource="#Position"/>
    <rdfs:domain rdf:resource="#DeviceContext"/>
  </owl:ObjectProperty>

  <owl:ObjectProperty rdf:ID="profile">
    <rdfs:range rdf:resource="&pr;Profile"/>
    <rdfs:domain rdf:resource="#Context"/>
  </owl:ObjectProperty>

  <owl:DatatypeProperty rdf:ID="time">
    <rdfs:range rdf:resource="&xsd;time"/>
    <rdfs:domain rdf:resource="#Context"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="latitude">
    <rdfs:domain rdf:resource="#Position"/>
    <rdfs:range rdf:resource="&xsd;string"/>
  </owl:DatatypeProperty>

  <owl:DatatypeProperty rdf:ID="longitude">
    <rdfs:range rdf:resource="&xsd;string"/>
    <rdfs:domain rdf:resource="#Position"/>
  </owl:DatatypeProperty>
</rdf:RDF>