
AGREEMENT NEGOTIATION FOR REACHING THE RIGHT LEVEL OF PRIVACY

Susana Alcalde Bagüés 1,2, Andreas Zeidler 1

1 Siemens AG, Corporate Research and Technologies, Munich, Germany
[email protected], [email protected]

Carlos Fernandez Valdivielso 2, Ignacio R. Matias 2

2 Public University of Navarra, Department of Electrical and Electronic Engineering, Navarra, Spain
[email protected], [email protected]

    ABSTRACT

In this article, we present new ways of establishing and enforcing binding agreements with third parties for personal privacy protection, regulating the way data is used after initial collection. For that, we present a novel model of privacy enforcement, complementing the notion of enterprise privacy with the incorporation of an extra level of privacy enforcement, called personal privacy enforcement. The core idea is twofold: first, to allow users of a pervasive service to negotiate the desired level of privacy in a rather automated and simple way before using the service and thereby granting access to personal data; and second, to track and monitor the whole life-cycle of the data after it has been released. For this we introduce the specification and use of an agreement negotiation protocol on a set of obligations, which is described in this article.

Keywords: personal privacy, obligations, pervasive computing, privacy architecture

1 INTRODUCTION

Enterprises today already collect large amounts of personal data from their customers. In the upcoming era of pervasive computing, tracking the flow and interchange of data will most probably get out of individuals' control entirely, due to the huge increase in scale, manner, and motivation of collection, as well as the addition of new data types such as context data. To ease privacy concerns, enterprises increasingly publish privacy statements that outline the enterprise privacy criteria, i.e. how data is collected, used and shared. These privacy statements can be found written in natural language or described using P3P [1]. In general, they merely constitute privacy promises and need to be backed up by technological means. In the work of Casassa Mont et al. [2] the use of an Obligation Management System (OMS) is proposed as a solution. There, obligations are policies that dictate expectations and duties for enterprises on how to handle personal data and how to deal with its life-cycle management. So far, most efforts related to privacy protection are centered on developing privacy control for enterprises, like OMS [2], E-P3P [3], EPAL [4], and XACML [5].

In our work, we address the idea of extending privacy control in a substantial manner towards holistic privacy management, with the collaboration of personal privacy and enterprise privacy enforcement systems. It is not sufficient to rely only on an enterprise's willingness to support an individual's privacy requirements, as is typically deemed sufficient today. Protecting individuals' privacy in pervasive computing requires an additional level of privacy protection. Together with the enforcement of enterprises' privacy statements, government policies and privacy laws, it is necessary to deploy mechanisms specifically for defining, communicating and enforcing people's privacy preferences. Privacy is a very subjective concept; what is acceptable to one person may be unacceptable to another. Thus, our approach is to allow individuals, as users of pervasive services, and enterprises to collaborate on the protection of personal and sensitive data.

In this paper we particularly address how this collaboration between personal privacy and enterprise privacy solutions can be established. We propose the use of agreements for creating automatic bindings between parties and to ensure that data protection requirements are adhered to. The main problem, obviously, is that users still have to trust enterprises' promises to some degree. Thus, we present the idea of employing observable bindings between personal and enterprise frameworks. This is realized by introducing an agreement negotiation protocol together with a notification mechanism.


    Consequently, we present in this article our

    privacy enforcement model designed to allow the

    integration of enterprise and personal privacy control

    solutions. We detail an agreement negotiation

    protocol, on a set of obligations, designed to reach

    the right level of privacy and show how agreements

    can be tracked by using notifications. Finally, we present the first prototype of the Obligations

    application, developed to allow non-expert users to

    manage obligations by themselves.

2 PRIVACY PROTECTION IN PERVASIVE COMPUTING

Laws and enterprise privacy alone are not sufficient to prevent an unwanted intrusion into the private sphere of an individual. Users of pervasive computing systems need mechanisms to control and manage their own privacy concerns, for instance by delimiting who has access to what data and under which circumstances, which may be achieved by the specification of privacy policies.

    However, purely technological solutions, such as

    policy systems for access control, by themselves, can

    achieve privacy goals only in limited situations, i.e.,

    they are necessary but not sufficient. Privacy control

    in pervasive computing requires the integration of

    several complementary mechanisms into a new

design space, to address privacy concerns raised by the continuous collection of, and access to, personal contextual information.

The privacy design space defined by Jiang et al. [6] and shown in Figure 1 is taken as the reference design space within the context of our work. We consider it to represent and cover quite accurately the new privacy needs of users of pervasive computing applications. It categorizes privacy protection technologies into measures for prevention, avoidance, and detection at the different stages of the data lifecycle: collection, access, and secondary use.

Pervasive computing privacy thus requires the deployment of measures for prevention, to ensure that undesirable use of personal data does not occur; within the group of preventative measures we include access control mechanisms, obfuscation and anonymity. Privacy also needs measures for avoidance, to minimize the risks and maximize the benefits associated with data exchanges. By using mechanisms for awareness, feedback and usage control, the flow of information from data consumers back to data owners can be increased. Users need to be aware of personal privacy issues to properly administer privacy control and avoid risky situations during the three phases of the data life cycle. And finally, measures for detection are needed, to detect unwanted misuses of disclosed data and act accordingly.

Protection during collection refers to the event at which personal data is initially collected. Important decisions to be made at this phase include who can collect which data, under which circumstances, in what format, and how trusted the data collector is. Access refers to the point at which data is accessed, for a particular purpose, by the ultimate recipient. Important decisions concerning this phase are how accurate and how confidential the data should be, who should be allowed to access it, for what purpose, how accesses should be logged, and how the data is supposed to be stored. Secondary use refers to the use of data after initial access has been made. Secondary use may also include passing data from one party, who might have been authorized, to another party, who might not. Important decisions made at this phase include who else should be able to access the data, secondary use purposes, whether it is allowed to share the data even further with others, and how long the data can be stored.

The focus of this paper is on how to enable mechanisms for secondary use and how those measures can be integrated with mechanisms used during collection and access. For secondary use, we consider the specification and negotiation of obligations as a preventative mechanism, and their enforcement, by the enterprise recipient, an avoidance measure. By ensuring that the service receiving the data is able to enforce a binding agreement on a set of obligations, uncontrolled misuse of data after initial collection can be avoided. With respect to detection measures, common mechanisms are data logs and legal audits. In our work we additionally introduce the use of notifications to actively monitor the ongoing use of disclosed data.

Figure 1: Privacy Design Space for Pervasive Computing

Today, there are mechanisms that can be applied independently for each of the areas covered by the privacy design space, although measures for avoidance and detection need to be further developed and, even more importantly, an integration model needs to be defined. Thus, we defined the privacy enforcement model presented in Section 3.

2.1 Related Work

The use of obligations in computer systems is not in itself a new topic. Obligations have long been used to specify actions that must be performed by some entity. In daily situations where people interact, individuals are held responsible for their own actions, for the obligations they assume, and they may be punished if they fail to do what they have promised.

In 1996, van de Riet and Burg translated the concept of obligations to cyberspace. In [7] they conclude that although software entities cannot take real responsibility, their specification should still take into account what such entities must do and what happens if they do not fulfill what has been specified. For instance, within Ponder [8] obligations are defined as event-triggered policies that carry out management tasks on a set of target objects or on the subject itself; e.g., when a print error event occurs, a policy management agent will notify all operators of the error and log the event.

Obligations for secondary use control, also called usage control, are requirements that must be agreed on by obligation subjects before access to the data is authorized. Obligations can be seen as a binding statement by the obligation subject to take some course of action in the future [9]. They are crucial to restrict the flow and use of personal data in the highly dynamic distributed environments assumed in pervasive computing. Nevertheless, in many policy systems obligations have been defined tightly coupled to the enforcement of access control policies, and in general they cannot be used for secondary use control. This is the approach followed by EPAL [4] and XACML [5]. Within these access-control languages, obligations are a set of operations associated with a policy that must be executed by the Policy Enforcement Point (PEP) together with an authorization decision before allowing access to the data. In the work of Park and Sandhu [10], obligations are requirements that have to be fulfilled by a subject at the time of the request in order to be allowed access to a resource; e.g., a user must give his name and email address to download a company's white paper.

Therefore, the mentioned approaches do not cover the role of obligations in the context of secondary use control. In that context, obligations should provide binding agreements between two parties, the requesting third party and the target (owner of the data), regarding the use of data after its initial collection. The work presented in [11] describes an approach to achieve digitally signed commitments on obligations between distributed parties. They introduced the Obligation of Trust (OoT) protocol, which executes two consecutive steps: Notification of Obligation and Signed Acceptance of Obligation. The OoT is built upon the XACML standard following its Web Services Profile (WS-XACML). The disadvantages of this approach are that it does not cater for the enforcement and monitoring of such obligations, on the one hand, and that it seems to be rather complicated for a common user to manage obligations following this protocol, on the other hand.

    3 OUR PRIVACY ENFORCEMENT MODEL

Figure 2: Main characteristics of our Privacy Enforcement Model

To address the integration of personal privacy enforcement in pervasive computing, we have developed a three-tier privacy enforcement model, shown in Figure 2, which serves as the base for the specification of the involved entities and for the formalization of their collaboration. One of the model's main features is that it caters for the integration of measures addressing prevention, avoidance and detection during the lifecycle of personal data, i.e. collection, access and secondary use. Thus, it covers the privacy design space described in Section 2. The model we propose consists of the following three pillars:

1. Data Collector: This pillar provides abstractions for representing any data collector, as the typical source of context information found in the environment. For dealing with them, we classify them into two groups: i) data collectors under the direct control of a user, such as, for example, a GPS tracking device; in general, they are sources of personal context that do not represent a privacy risk per se; and ii) data collectors that are run by enterprises, e.g. mobile network operators. Langheinrich addressed in his work the privacy implications of this kind of data collector and provided a Privacy Awareness System (PawS) to help individuals deal with them. In our model we assume that a framework similar to PawS [12] is in place for dealing with newly encountered data collectors.

2. Trusted Privacy Manager: This central pillar provides an abstraction of a personal and trusted privacy enforcement system. The role of the Trusted Privacy Manager is to orchestrate any disclosure of personal data and to manage the collaboration between entities with the goal of protecting an individual's privacy. Thus, it must provide functionality for the evaluation of service-side privacy policies (data collectors and data consumers), for the enforcement of its users' privacy policies, for guiding the negotiation of agreements on secondary use, and for facilitating the monitoring of disclosed data.

3. Data Consumer: This last pillar holds abstractions for representing data consumers, in particular consumers of context information that depend on external data collectors to implement their respective services, e.g. a Restaurant Finder or a Buddy Finder service. Obviously, these are the main types of emerging services in pervasive computing. However, context providers and service providers are typically decoupled and operate independently from each other and, thus, must be orchestrated.

The privacy enforcement model we present here is built around direct, one-to-one binding agreements between the Trusted Privacy Manager and the other two pillars, as shown in Figure 3. The Trusted Privacy Manager acts as mediator between data collectors and data consumers. It reduces and simplifies potentially complex relationships between them to contractual two-party agreements, on previously negotiated obligations, avoiding direct exchange of data between data collectors and data consumers. The idea behind the use of agreements is twofold: to specify actions allowed on disclosed data and to avoid any uncontrolled disclosure that may lead to unwanted exchanges of sensitive data.

The collaboration between pillars is built around three functional blocks, Bilateral Privacy Access Control, Obligations Negotiation and Management, and Obligations Tracking, as shown in Figure 2. They are complementary and depend on each other to guarantee holistic privacy protection.

The first level of collaboration (Bilateral Privacy Access Control) demands that each party include privacy-aware access control mechanisms. Note that only in the case of the Trusted Privacy Manager should access control be context-aware, since it is the only tier that should have access to any user context data. Bilateral access control means that both the service-side privacy policy and a user's privacy policy should be evaluated before revealing any data. How this is achieved may differ in each case.

The second level, Obligations Negotiation and Management, requires all three pillars to adopt a common model for the specification of obligations and to trust that data collectors and data consumers will handle personal data according to such obligations. The third level, Obligations Tracking, was introduced as an initial measure for trust management and to allow lifecycle awareness of the data. Our approach to establishing a trusted relationship between an enterprise service and the Trusted Privacy Manager is based on the possibility of subscribing to notifications about the use of disclosed data.

In the collaboration of the Data Collector pillar with other parts of the system, as mentioned already, we rely on the idea elaborated by Langheinrich. There, the privacy awareness system for ubiquitous computing [12] allows data collectors to describe their collection policies in a machine-readable format and communicate them to their data subjects. If the user agrees, the services can collect information and users can utilize the services. Thus, we can assume that a user is notified before a new data collection is started (e.g. video surveillance or indoor security tracking); the user then can check the service's privacy policy and either agree to or deny the collection. Due to the implicit nature of this type of data collector, a user in general can only accept or reject the collection of the data in a take-it-or-(physically)-leave fashion.

In such a situation, in our model, we additionally propose the negotiation of an agreement on a set of obligations before any data collection is commenced, to avoid unwanted misuses of the data and, furthermore, to control accesses to the data being collected. Thereby, a bilateral privacy access contract is negotiated.

Symmetrically, for the Data Consumer we assume that any consumer service first has to register with an instance of the Trusted Privacy Manager and upon registration discloses its privacy policy. The service privacy policy (e.g. P3P) shall have at least the following elements: data consumer description, data elements the service requests, and purpose of the request. If the registration is successful, the Trusted Privacy Manager agrees to provide context broker functionality for the subscribed service according to the now established bilateral privacy access contract. From this moment on and for each service request, a Trusted Privacy Manager entity evaluates its user's privacy policies to decide whether or not permission is granted. The data is transmitted only when all the restrictions defined by a user are fulfilled. This avoids unwanted take-it-or-leave-it situations with data consumers.
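To make the flow just described concrete, the following Python sketch outlines the per-request check a Trusted Privacy Manager instance could perform; all class and function names (ServiceRegistration, UserPolicy, evaluate_request) are hypothetical illustrations, not part of the framework described in this paper.

```python
from dataclasses import dataclass

@dataclass
class ServiceRegistration:
    """What a data consumer discloses when registering with the Trusted Privacy Manager."""
    service_id: str
    description: str
    requested_data: set   # e.g. {"location", "activity"}
    purpose: str

class UserPolicy:
    """Toy stand-in for the evaluation of a user's privacy policy."""
    def __init__(self, allowed):
        self.allowed = allowed  # set of (service_id, data_item, purpose) tuples
    def permits(self, service_id, data_item, purpose):
        return (service_id, data_item, purpose) in self.allowed

def evaluate_request(registration, requested_item, user_policy):
    """Bilateral privacy access control: both the service-side policy captured at
    registration and the user's own privacy policy must allow the disclosure."""
    if requested_item not in registration.requested_data:   # service-side policy check
        return "deny"
    if not user_policy.permits(registration.service_id, requested_item, registration.purpose):
        return "deny"                                        # user-side restrictions not fulfilled
    return "permit"

if __name__ == "__main__":
    reg = ServiceRegistration("restaurant-finder", "Restaurant Finder", {"location"}, "recommendation")
    policy = UserPolicy({("restaurant-finder", "location", "recommendation")})
    print(evaluate_request(reg, "location", policy))   # permit
    print(evaluate_request(reg, "calendar", policy))   # deny
```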

    However, bilateral privacy access control only

    addresses situations where information is disclosed

    for the present use. This means that it only provides

    preventative and avoidance measures during

    collection and access. Thus, it does not cover cases

    where information may be stored for future use or

    even be sold in a potentially different situation. Here

    is where the second and third functional blocks that

    define the collaboration between the pillars come

    into play, introducing more mechanisms for

    detection and secondary use control. As a result, this


users. The communication between a CAMS and the UCPF services is realized by the Sentry's PEP component, implementing an Interceptor pattern on messages passing through the system. The Context Handler provides access control to the system repositories and acts as a mediator between the Sentry and external context collectors. It also manages the semantic models used to represent policies and context data in the system.

The interaction of end-users with the UCPF is made possible through the Sentry Manager Interface. It is implemented as an API used to generate, upgrade or delete privacy policies, receive information about the currently applicable policies, or get feedback on potential privacy risks and the state of obligations.

The Noise Module is a modular functional component that injects additional information into the policy matching mechanism, typically for altering the data disclosed. Currently implemented are different levels of obfuscation, called Transformations [16], within the Local Transformation Point (LTP), and a disclosure mechanism for a Virtual Context within the White Lying Generator (WLG) [17].

The Sentry Registry is the only component that is not co-located with the rest of the elements in the gateway. This component is shared among Sentries and thus must be hosted on the Internet. It tracks the availability of people's context and provides the pointer to the appropriate Sentry service instance. Services, organizations and Sentries need to be registered with the UCPF before starting any context request, which is done in the Sentry Registry. The Sentry Registry has an important role in making interactions possible between CAMS and Sentries, and also between different Sentry instances, providing flexibility to deal with distributed and changing entities. Finally, the Obligation Manager, see next section, negotiates agreements and tracks the obligations agreed on by third parties.

4 OBLIGATIONS IN THE UCPF

In the UCPF, obligations are used to enable the integration of measures for prevention, avoidance and detection on secondary use of personal data. The privacy enforcement model presented in Section 3 distributes the enforcement of users' privacy preferences between the UCPF and the enterprise side. In Figure 3 we highlight those measures enabled by the negotiation of agreements and their subsequent enforcement by the enterprise system. Without the use of obligations, the UCPF could only enforce preventative measures during access and collection. Obligations are used to create automatic bindings with CAMS and ensure that data protection requirements are adhered to. Obligations in the UCPF have the following purposes:

- They specify actions that should be performed by a service acting as the recipient of a user's data.

- They are used to automatically exchange users' preferences on secondary use with enterprises. For this reason, they are embedded into agreements to be negotiated before the data is given away.

- They enable the exchange of notifications between the UCPF and third-party services and, with that, post-disclosure life-cycle control.

The Obligation Manager (OM) is the UCPF's component in charge of managing, negotiating and tracking obligations. The OM consists of the Obligation Handler and the Obligation Monitor components, as shown in Figure 4. The first is in charge of creating agreements with the obligations selected by a user and of their negotiation. The Obligation Monitor monitors the state of obligations based on notifications and data logs. It also compiles notifications to be sent by the Sentry to the service, which can simply confirm a received notification, request a data log, or report a violation that has occurred, including sanctions to be carried out, e.g. unregistering the service.

The following two examples illustrate the type of obligations handled by the OM in the UCPF.

Example 1: Bob always allows being tracked by indoor security systems, provided his data is never disclosed and he can access the data generated.

Example 2: Bob requires that all services using his activity and location either do not store his data or delete it within one week, or at most one month. Moreover, Bob wants to get a notification if a service discloses his location to the same recipient more than 5 times a day.

4.1 Model of Privacy Obligations

We created a set of predefined obligations and classified them into system obligations and negotiable obligations. Obligations play a key role in the establishment of the privacy enforcement model presented in this paper, thus not all obligations are negotiable. For instance, the obligation Data MUST NOT be disclosed to any third-party is a mandatory obligation used to avoid uncontrolled retransmissions of data. System obligations are designed to be agreed on during the registration process of a new service, without negotiation. The goals of system obligations are to control disclosures to third parties, monitor changes to a service's privacy policy, and enable access to collected data and data logs. They are mandatory obligations which are established by default, independently of a user's preferences and before any commercial transaction with a service.

Negotiable obligations, on the other hand, are selected by a user and negotiated with the service which is the recipient of the data. Users can specify obligations to be agreed on during the evaluation of a service request, which will be subject to the agreement negotiation protocol. They represent the privacy constraints that a user may impose on an enterprise service when data is disclosed. In our definition, negotiable obligations have two aspects: first, a negotiable obligation is a second-class entity subject to the enforcement of a policy in the UCPF and compiled during the evaluation of a service request; second, when a policy evaluation contains obligations, these obligations are not enforced by the Sentry's PEP but activate a negotiation protocol. Then, obligations are first-class entities used to exchange personal privacy preferences with an enterprise service. Negotiable obligations are used to control the information disclosed among individuals (users of a particular service), authorized purposes, the number of accesses, and the retention time of user resources.

In the representation of obligations, we basically follow the schema adopted by the OMS designed by Casassa Mont [18] to facilitate collaboration with the enterprise privacy enforcement system. Obligations in the UCPF are XML documents with Event, Action, Notification and Metadata elements. Figure 5 shows the template used to create obligations in the UCPF, together with an XML instance of obligation 13. Some tags defined by HP's OMS were left out of our definition (e.g. Target), since they can only be used together with the OMS, and a new tag, the Notification element, was added.

    Figure 5: Obligation Template and XML Example
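Since Figure 5 cannot be reproduced here, the following sketch illustrates what an obligation document with the Event, Action, Notification and Metadata elements described above might look like; the element nesting, attribute names and concrete values are assumptions made for illustration, not the exact UCPF schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical obligation instance following the Event / Action / Notification /
# Metadata structure described in Section 4.1. The nesting, attribute names and
# concrete values are illustrative assumptions, not the exact UCPF schema.
OBLIGATION_XML = """
<Obligation id="S1">
  <Metadata description="Data MUST NOT be disclosed to any third-party" negotiable="false"/>
  <Event>
    <EventType>Disclosure request</EventType>
  </Event>
  <Action>
    <ActionType>Deny disclosure</ActionType>
  </Action>
  <Notification>
    <NotificationType>Leaking</NotificationType>
  </Notification>
</Obligation>
"""

obligation = ET.fromstring(OBLIGATION_XML)
print(obligation.get("id"))                                   # S1
print(obligation.find("Event/EventType").text)                # Disclosure request
print(obligation.find("Notification/NotificationType").text)  # Leaking
```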

Examples of both types of obligations are shown in Figure 6 and Figure 7. The XML instances of Figure 6 represent system obligations; they can be used, as in Example 1, to control that unwanted disclosures to third parties do not occur. On the other hand, the XML instances of Figure 7 belong to negotiable obligations; they can be used, as in Example 2, to limit the retention of the data and the number of disclosures to the same subject.

An obligation is activated at the moment its Event occurs. We have considered eight different events in our implementation, namely: Disclosure request, Data leaking, Policy changed, Data collection, Access request, Storage, Timeout, and Datalog request. An Event is described with two properties: the EventType, which identifies the event that will trigger the execution of the obligation, and the property HasExtParameter, which ties its applicability to a specific value. For instance, in Figure 6, for obligation number 11, the event Disclosure request is constrained by the identity of the recipient (Recipient parameter) and the number of disclosures allowed per day (Times parameter). The values of these parameters are not included in the definition of obligations, but in the agreement document, as we will describe in the next section.

    Figure 6: Example of System Obligations

Once an obligation is activated, and its event parameters hold, its execution has two effects: first, an action must be performed, specified with the property ActionType; and second, a notification must be sent to a Sentry instance. The type of notification that needs to be compiled is identified by the NotificationType property. The predefined obligations are distributed as an XML file. These obligations are common to all data collectors and data consumers. Therefore, we need agreement documents to personalize obligations and bind them with third parties.

    Figure 7: Example of Negotiable Obligations
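As an illustration of a negotiable obligation of the kind shown in Figure 7, the sketch below models a retention-limiting obligation in the spirit of Example 2; the ActionType value and the attribute layout are assumptions, and the concrete deletion date is deliberately absent because it is supplied by the EventParameter of the agreement.

```python
import xml.etree.ElementTree as ET

# Hypothetical negotiable obligation limiting retention time (cf. Example 2). The
# ActionType value and the attribute layout are assumptions; the concrete deletion
# date is NOT part of the obligation, it is supplied later via the EventParameter
# of the agreement that references this obligation.
RETENTION_OBLIGATION = """
<Obligation id="R1">
  <Event>
    <EventType>Timeout</EventType>
    <HasExtParameter>Date</HasExtParameter>
  </Event>
  <Action>
    <ActionType>Delete data</ActionType>
  </Action>
  <Notification>
    <NotificationType>Deletion</NotificationType>
  </Notification>
  <Metadata description="Delete stored data when the agreed retention time expires"/>
</Obligation>
"""

ob = ET.fromstring(RETENTION_OBLIGATION)
# The Date value referenced here is filled in by the agreement, not by the obligation.
print(ob.find("Event/HasExtParameter").text)  # Date
```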

4.2 Agreement Negotiation Protocol

The schema adopted for the definition of agreements is shown on the left-hand side of Figure 8: an agreement is an XML document with Agreement, Resource, URLNotification, Timestamp, Negotiation, ListObligations, and IfViolated elements. Agreements are used to bind a list of obligations between two parties, a UCPF and a third-party service, and allow for their negotiation. The use of agreements allows the personalization of general obligations and, with that, the use of a limited number of predefined obligations, which are reused in each agreement instead of creating new instances.

The Agreement tag identifies the involved parties and the priority of the agreement. The priority property can have one of the following values: Mandatory, Middle, Optional and Request. It applies only if, after finishing all the negotiation rounds, the agreement state is Denied. The Resource element identifies the data owner with the properties Owner and Scope; the scope describes the type of resource (e.g. location, activity, calendar). The Negotiation tag specifies the state and the current round of a negotiation; the OM always sets an agreement to state Negotiating and controls the negotiation round. A service can change the state property from Negotiating to Denied or Accepted, but not the negotiation round.

    Figure 8: Agreement and Notification Template

In an agreement, obligations are associated with the element ListObligations. This element does not contain the specification of obligations but a reference to one or more of the obligations included in the list of common obligations. Each obligation is represented with an ID and a state value. The State property should be changed by the service from its value Pending to Rejected or Accepted. Moreover, an obligation in an agreement has the property EventParameter to personalize the event that should trigger the execution of the obligation. The parameter included depends on the type of obligation: some obligations may add the parameter Recipient (without this parameter, any recipient is assumed), other obligations require the parameter Times, to identify the number of times the data can be accessed, the parameter Date is used to specify when the data should be removed, and the parameter Purpose identifies allowed secondary uses of the data. Finally, agreements include the tag IfViolated to inform about the consequences of violating any of the obligations agreed on during the negotiation.
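A hypothetical agreement instance using the elements named above might look as follows; the attribute names, the obligation references and all concrete values are illustrative assumptions rather than the schema of Figure 8.

```python
import xml.etree.ElementTree as ET

# Hypothetical agreement document using the elements named in Section 4.2
# (Agreement, Resource, URLNotification, Timestamp, Negotiation, ListObligations,
# IfViolated). Attribute names, obligation references and values are illustrative.
AGREEMENT_XML = """
<Agreement parties="Sentry-Bob;restaurant-finder" priority="Middle">
  <Resource Owner="Bob" Scope="location"/>
  <URLNotification>https://ucpf.example.org/notifications</URLNotification>
  <Timestamp>2008-06-01T10:00:00Z</Timestamp>
  <Negotiation state="Negotiating" round="1"/>
  <ListObligations>
    <!-- References to predefined obligations, personalized via EventParameter -->
    <Obligation ref="R1" state="Pending" EventParameter="Date=1 month"/>
    <Obligation ref="11" state="Pending" EventParameter="Recipient=any;Times=5"/>
  </ListObligations>
  <IfViolated>Unregister the service and notify the data owner</IfViolated>
</Agreement>
"""

agreement = ET.fromstring(AGREEMENT_XML)
negotiation = agreement.find("Negotiation")
print(negotiation.get("state"), negotiation.get("round"))     # Negotiating 1
for ob in agreement.find("ListObligations"):
    print(ob.get("ref"), ob.get("state"), ob.get("EventParameter"))
```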

Agreements are negotiated per service and resource; if an agreement exists, there is no need to re-negotiate it for subsequent requests. An agreement needs to be re-negotiated only if the privacy policy of the requesting service changes.

4.2.1 Negotiation Protocol

The agreement negotiation protocol (cf. Figure 9) starts after the policy evaluation of a service request within a Sentry instance concludes. If the compiled policy effect contains obligations, the PEP queries the OM for an agreement over the pending obligations (step 1 in Figure 9). Then, if there is no previous agreement, the OM returns the agreement to be negotiated (step 2). The Sentry launches the negotiation (steps 3 to 14), which is repeated for at most three rounds.

This protocol enables per-service and per-user-resource agreement negotiations that are guaranteed to terminate after at most three negotiation rounds. It is inspired by the work of Walker et al. [19] on the Or Best Offer protocol. The Obligation Manager makes a first proposal in the Negotiating Optimum stage. The enterprise side cannot make any counter-proposal at this stage, since the user should not be involved during the negotiation, for the sake of simplicity and user friendliness. Therefore, the service provider is limited to checking the list of obligations attached and rejecting or binding them. If the agreement is denied by the enterprise, which means that one or more obligations are rejected, the OM issues the second proposal in the Negotiating Acceptable stage. It includes a new set of obligations, in which the rejected obligations of the first set are replaced by their acceptable equivalents. The enterprise service may accept the second proposal, or start the third and last round, the Negotiating Minimum stage, in which a new set of obligations classified as minimum replaces those rejected. The goals of this negotiation strategy are: i) to allow more than take-it-or-leave-it situations, ii) to enable an automatic setup of a user's privacy preferences on secondary use, and iii) to execute the obligation binding process transparently to the user.

    Figure 9: Agreement Negotiation Protocol Sketch
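The three-round strategy can be summarized by the following sketch of the OM-side negotiation loop; the function names and data layout are assumptions, and the actual protocol additionally exchanges the agreement document shown earlier.

```python
# Sketch of the OM-side negotiation loop: at most three rounds (Negotiating
# Optimum, Acceptable, Minimum); in each round the service may only accept or
# reject the proposed obligations, never counter-propose. Function and variable
# names are illustrative, not the paper's implementation.

ROUNDS = ("Optimum", "Acceptable", "Minimum")

def negotiate(obligation_sets, send_proposal):
    """obligation_sets maps a round name to the obligations proposed in that round;
    send_proposal(round_name, obligations) models the exchange with the enterprise
    service and returns 'Accepted' or 'Denied'."""
    for round_no, round_name in enumerate(ROUNDS, start=1):
        proposal = obligation_sets[round_name]
        if send_proposal(round_name, proposal) == "Accepted":
            return {"state": "Accepted", "round": round_no, "obligations": proposal}
    # No agreement after three rounds: the outcome now depends on the agreement's
    # priority (Mandatory, Middle, Optional or Request), as discussed below.
    return {"state": "Denied", "round": len(ROUNDS), "obligations": None}

if __name__ == "__main__":
    sets = {"Optimum": ["retention: 5 days"],
            "Acceptable": ["retention: 1 month"],
            "Minimum": ["retention: 1 year"]}
    # Toy service that only accepts the minimum retention requirement.
    print(negotiate(sets, lambda name, obs: "Accepted" if name == "Minimum" else "Denied"))
```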

In a situation where an enterprise does not accept the third and last proposal, no agreement is reached and the priority of the rejected agreement is taken into account by the OM. Each agreement is labeled with a priority value. Priority Mandatory means that the service (enterprise) MUST accept the agreement, otherwise permission will be denied (step 14). Priority Middle means that the service SHOULD accept the agreement, otherwise the accuracy of the disclosed data will be decreased. A priority equal to Optional means that the service MIGHT accept the agreement and entails the disclosure of the requested data anyway, but the user will be notified that no agreement was reached; a user may then modify his privacy settings based on the rejected obligations. If the priority is set to Request, the user will be asked whether or not he wants to disclose the data anyway.
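The handling of a denied agreement according to its priority could be sketched as follows; the function name and the returned outcome labels are illustrative assumptions summarizing the four cases above.

```python
# Sketch of how a denied agreement could be resolved according to its priority,
# following the four cases described above. The function name and outcome labels
# are illustrative assumptions.

def resolve_denied_agreement(priority, ask_user=None):
    if priority == "Mandatory":
        return "deny permission"                 # the service MUST accept, so access is denied
    if priority == "Middle":
        return "disclose with reduced accuracy"  # data quality is decreased instead
    if priority == "Optional":
        return "disclose and notify user"        # disclose anyway, report that no agreement was reached
    if priority == "Request":
        # Ask the user whether to disclose the data anyway.
        return "disclose" if ask_user is not None and ask_user() else "deny permission"
    raise ValueError(f"unknown priority: {priority}")

print(resolve_denied_agreement("Mandatory"))                       # deny permission
print(resolve_denied_agreement("Request", ask_user=lambda: True))  # disclose
```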

4.3 Notification Mechanism

Our approach to establishing a trusted relationship between an enterprise service and the UCPF is also based on the possibility of subscribing to notifications about the use of disclosed data. Obligations are used to create automatic bindings between both parties and ensure that data protection requirements are adhered to. However, in cases where such bindings cannot be monitored, checking compliance with the obligation is almost impossible. The concept of non-observable obligations is described in the work of Hilty et al. [20]; they suggest that a possible solution is the use of non-technical means, such as audits or legal means. We propose, additionally, the idea of employing observable bindings between our UCPF and enterprise frameworks. This is realized by introducing a notification exchange mechanism, which is activated by the execution of an obligation. This is possible because the enforcement of an obligation in our model follows the sequence Event-Action-Notification, instead of the mere Event-Action used by previous approaches.

    Figure 10: Type of Notifications

We describe here two complementary notification types: notifications used to notify a UCPF and notifications developed to notify a service back. Figure 10 describes and summarizes the different notifications implemented. There are 13 notification types; eight are used for notifying the UCPF (as subscriber), namely: Leaking, Policy, Repository, Datalog, Request, Granted Disclosure, Access and Deletion, and five for notifying the service in return: Reminder, Getdatalog, Confirmation, Response and OnViolation. Notifications, like agreements, are XML documents; the schema used to create new instances of a notification is shown on the right-hand side of Figure 8. The main element of a notification is the Notification tag; it identifies the type of notification and holds relevant information in the form of parameters. Depending on the notification type used, different parameters need to be provided. E.g., a Deletion notification should include the time frame in which some data was removed from the repository. On the other hand, a Granted Disclosure notification should also include the recipient of the data and the number of disclosures that occurred in the period of time considered. The parameters required by each notification are shown in Figure 10. The use of notifications allows for monitoring the status of the active obligations and for communicating actions (penalties) in case of a violation state. We introduced the tag Violation for this case. It describes the sanctions carried out once the OM observes such a violation.
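Two hypothetical notification instances, one for each direction, are sketched below; beyond the notification type names and the Violation tag mentioned in the text, the element and attribute layout is an assumption.

```python
import xml.etree.ElementTree as ET

# Hypothetical notification instances, one per direction. Beyond the notification
# type names and the Violation tag mentioned in the text, the element and
# attribute layout is an assumption.
GRANTED_DISCLOSURE = """
<Notification type="Granted Disclosure">
  <Parameter name="Recipient">buddy-finder</Parameter>
  <Parameter name="Disclosures">3</Parameter>
  <Parameter name="Period">one day</Parameter>
</Notification>
"""

ON_VIOLATION = """
<Notification type="OnViolation">
  <Violation obligation="11">Unregister the service</Violation>
</Notification>
"""

for document in (GRANTED_DISCLOSURE, ON_VIOLATION):
    notification = ET.fromstring(document)
    print(notification.get("type"), [(child.tag, child.text) for child in notification])
```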

In summary, notifications are used for: i) enabling monitoring of active obligations, ii) auditing the enterprise service, iii) getting access to personal data in the service's repository, iv) knowing when the service's obligation policy changes in order to re-negotiate agreements, and v) controlling when an active obligation is violated.

4.4 Obligations Management from the User Perspective

Our work is driven by the vision that (non-expert) users of the UCPF shall be empowered to control their own personal privacy. Especially for the area of obligation management, we addressed its implementation from the perspective of normal users. The main questions were: How can a non-technically-savvy user understand and manage obligations? And how can we assist such users in negotiating obligations with the providers of the services used?

Our solution to the above questions was to implement an application, as part of the UCPF GUI, called Obligations, in which a user can specify his obligations easily. In the design stage we discarded the idea of defining new obligations at the time of adding a new rule, mainly because it would overload the process of adding new permissions to the system, and because obligations, for us, are entities independent of access control. We believe that the natural way of thinking about obligations, from the perspective of users, is as a set of privacy preferences to be enforced by the service provider on disclosed resources. Thus, it follows that obligations should be specified per service and resource and should not depend on a rule specification. Therefore, we implemented a separate application where users can set up obligations per tuple (service, resource).

For the development of the Obligation Manager and its application we followed the rule of providing default settings, so that users do not need to take care of critical situations by themselves. Mandatory obligations, needed for keeping a proper level of privacy, were labeled as system obligations and excluded from the negotiation process. System obligations should be agreed on by service providers during the registration process. The process is transparent to the user, and only services that fulfill the registration requirements are available.

Negotiable obligations were added to offer users a way to negotiate their privacy preferences on secondary use. Both obligation types should be enforced by the enterprise privacy system. Hence, obligations must be defined in advance and known by both parties. A user is then limited to choosing among known obligations, which means that he cannot specify new types of obligations. In our implementation, due to the lack of a standard definition of obligations, we developed our own set of negotiable obligations, listed in Figure 11.

    Figure 11: Predefined Obligations sets

The question remaining at this point obviously is: How can a user set up his obligation policies with respect to the optimal, acceptable, and minimum agreement rounds? For this we grouped negotiable obligations into sets of three obligations, optimal, acceptable and minimum, one for each of the three negotiation rounds allowed. The three obligations of a set do not need to be different; they may differ only in the event parameter, e.g. the timeout to delete data from a repository: 5 days, 1 month or 1 year. The application offers a selection of five predefined sets, shown in Figure 11, namely: 1) Limiting retention time, 2) Controlling data disclosures, 3) Controlling data disclosures to the same subject, 4) Controlling data accesses, and 5) Limiting the number of disclosures per day to the same subject. If a user selects one or more of these sets for a registered service, they need to be negotiated before disclosing any new data to that service.
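A predefined set such as "Limiting retention time" could be represented roughly as follows; the data layout and the obligation reference are assumptions, with only the three-round grouping and the example parameter values (5 days, 1 month, 1 year) taken from the text.

```python
# Sketch of a predefined negotiable obligation set: three entries, one per
# negotiation round, which here differ only in the event parameter (the retention
# timeout: 5 days, 1 month, 1 year). The data layout and the obligation reference
# "R1" are assumptions.

LIMITING_RETENTION_TIME = {
    "name": "Limiting retention time",
    "priority": "Middle",  # Mandatory, Middle, Optional or Request
    "rounds": {
        "Optimum":    {"obligation_ref": "R1", "EventParameter": {"Date": "5 days"}},
        "Acceptable": {"obligation_ref": "R1", "EventParameter": {"Date": "1 month"}},
        "Minimum":    {"obligation_ref": "R1", "EventParameter": {"Date": "1 year"}},
    },
}

def obligations_for_round(obligation_set, round_name):
    """Return the (reference, parameters) pair proposed in the given round."""
    entry = obligation_set["rounds"][round_name]
    return entry["obligation_ref"], entry["EventParameter"]

print(obligations_for_round(LIMITING_RETENTION_TIME, "Acceptable"))
```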

4.5 Obligations Application

Figure 12 shows a screenshot of our application prototype for managing sets of obligations. A user can access this application directly from his Privacy Desktop home window by clicking on the Obligations panel. The access window that is then opened, shown on the left-hand side of Figure 12, guides users in the selection of a service provider and a resource, among location, activity and calendar, before allowing access to the main screen of the obligations application. In the main screen, shown on the right-hand side, the predefined sets of obligations are listed. If one or more obligation sets are selected, they will be included in an agreement that needs to be negotiated before disclosing the resource to the service.

With this application a user can change the parameters of the obligations belonging to a set and set the priority of each selected set. A set always consists of the three mandatory obligations, labeled as Optimum, Acceptable, and Minimum. They can be predefined and re-used, obviously, and do not have to be defined separately each time. The application also offers the possibility of checking existing obligations and editing or canceling them.

The data displayed by this application is gathered from a database instance, implemented using MySQL, where predefined obligations, obligation sets and user-selected obligation sets are saved. Each obligation set, associated with a tuple (service, resource, user), receives an identifier number, which will be referenced by the rule effect sent to the Sentry's PEP for that selected service and user resource.
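The storage just described could be sketched as follows; the paper uses MySQL, whereas this self-contained example uses an in-memory SQLite database, and the table and column names are assumptions.

```python
import sqlite3

# Self-contained sketch of the storage described above. The paper uses MySQL; an
# in-memory SQLite database is used here only to keep the example runnable, and
# the table and column names are assumptions.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE selected_obligation_sets (
        id       INTEGER PRIMARY KEY,   -- referenced by the rule effect sent to the PEP
        service  TEXT NOT NULL,
        resource TEXT NOT NULL,
        user     TEXT NOT NULL,
        set_name TEXT NOT NULL,
        UNIQUE (service, resource, user, set_name)
    )
""")
db.execute(
    "INSERT INTO selected_obligation_sets (service, resource, user, set_name) VALUES (?, ?, ?, ?)",
    ("restaurant-finder", "location", "Bob", "Limiting retention time"),
)
row = db.execute(
    "SELECT id, set_name FROM selected_obligation_sets WHERE service=? AND resource=? AND user=?",
    ("restaurant-finder", "location", "Bob"),
).fetchone()
print(row)  # (1, 'Limiting retention time')
```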

Figure 12: UCPF GUI's Obligations Application

5 CONCLUSIONS AND OUTLOOK

In this paper, we presented how privacy control can be extended in a substantial manner toward holistic privacy management, with the collaboration of personal privacy and enterprise privacy enforcement. We introduced the notion of binding agreements for each privacy-related resource. An agreement is an XML document holding a set of obligations selected by a user. Those selected obligations need to be accepted by a third-party service before permission is granted to access the resource. Obligations describe the rights and requirements for processing, storing or deleting data that the recipient of a user's resource must enforce. In order to avoid situations where obligations may not be accepted by a service and lead to a simple denial of service, we developed a well-defined negotiation protocol that tries to find an agreement based on different classes of obligations, namely the optimum, acceptable and minimum types.

An obligation in the UCPF is defined as an event-action-notification entity, instead of a mere event-action one. It follows that the enforcement of an obligation by the enterprise privacy system activates the exchange of notifications. We concluded that just using agreements on a set of obligations is not far-reaching enough and introduced notifications to enable obligation tracking and, with that, the monitoring of the fulfillment of agreements and of the usage of disclosed data after releasing it. In this paper, we showed that this is an easy but effective way to enable trust between the client and the service. We also presented the first prototype of the Obligations application, a user interface created to be managed by non-expert users and used to add new obligation sets to be negotiated with a registered service.

Our future work includes applying our framework to privacy-sensitive applications in the context of patient monitoring and supervision, as found in integrated hospital IT landscapes. There, we are about to start work in the context of the ITEA2 project AIMES [21].

6 REFERENCES

[1] L. Cranor, M. Langheinrich, M. Marchiori, J. Reagle. The Platform for Privacy Preferences 1.0 (P3P 1.0) Specification. W3C Recommendation, Apr. 2002.

[2] M. Casassa Mont and R. Thyne. A Systemic Approach to Automate Privacy Policy Enforcement in Enterprises. In Proceedings of PET 2006, LNCS 4258, pp. 118-134, Springer, Berlin, 2006.

[3] G. Karjoth, M. Schunter and M. Waidner. The Platform for Enterprise Privacy Practices - Privacy-Enabled Management of Customer Data. In Proceedings of PET 2002, LNCS, 2002.

[4] P. Ashley, S. Hada, G. Karjoth, C. Powers and M. Schunter. Enterprise Privacy Authorization Language (EPAL 1.2). W3C Member Submission, November 2003.

[5] T. Moses. eXtensible Access Control Markup Language (XACML) Version 2.0. OASIS Standard, February 2005.

[6] X. Jiang, J. I. Hong, and J. A. Landay. Approximate Information Flows: Socially-Based Modeling of Privacy in Ubiquitous Computing. In Proceedings of the 4th International Conference on Ubiquitous Computing, 2002.

[7] R. P. van de Riet and J. F. M. Burg. Linguistic Tools for Modelling Alter Egos in Cyberspace: Who is Responsible? Journal of Universal Computer Science, 2(9):623-636, 1996.

[8] N. Damianou, N. Dulay, E. Lupu, and M. Sloman. Ponder: A Language for Specifying Security and Management Policies for Distributed Systems, 2000.

[9] L. Kagal. A Policy-Based Approach to Governing Autonomous Behavior in Distributed Environments. PhD Thesis, University of Maryland Baltimore County, September 2004.

[10] J. Park and R. Sandhu. The UCON_ABC Usage Control Model. ACM Transactions on Information and System Security, 7(1):128-174, 2004.

[11] U. M. Mbanaso, G. S. Cooper, D. W. Chadwick, and A. Anderson. Obligations for Privacy and Confidentiality in Distributed Transactions. In EUC Workshops, pp. 69-81, 2007.

[12] M. Langheinrich. A Privacy Awareness System for Ubiquitous Computing Environments. In Proceedings of the 4th International Conference on Ubiquitous Computing, LNCS 2498, Springer-Verlag, September 2002.

[13] S. Alcalde Bagüés, A. Zeidler, C. Fernandez Valdivielso, I. R. Matias. Sentry@Home - Leveraging the Smart Home for Privacy in Pervasive Computing. International Journal of Smart Home, Vol. 1, No. 2, 2007.

[14] S. Alcalde Bagüés, J. Mitic, E. Emberger. The CONNECT Platform: An Architecture for Context-Aware Privacy in Pervasive Environments. IEEE CS, Workshop on Secure and Multimodal Pervasive Environments, 2007.

[15] Knopflerfish Open Source OSGi, http://www.knopflerfish.org/

[16] S. Alcalde Bagüés, A. Zeidler, C. Fdez-Valdivielso, I. R. Matias. A User-Centric Privacy Framework for Pervasive Environments. In Proceedings of the OTM Workshops (2) 2006, pp. 1347-1356, LNCS.

[17] S. Alcalde Bagüés, A. Zeidler, C. Fdez-Valdivielso, I. R. Matias. Disappearing For A While - Using White Lies in Pervasive Computing. In Proceedings of the 2007 ACM Workshop on Privacy in the Electronic Society, Alexandria, Virginia, USA, 2007.

[18] M. Casassa Mont. Dealing with Privacy Obligations: Important Aspects and Technical Approaches. TrustBus 2004.

[19] D. D. Walker, E. G. Mercer, and K. E. Seamons. Or Best Offer: A Privacy Policy Negotiation Protocol. In Proceedings of the IEEE Workshop on Policies for Distributed Systems and Networks, pp. 173-180, 2008.

[20] M. Hilty, D. A. Basin, and A. Pretschner. On Obligations. In Proceedings of the 10th European Symposium on Research in Computer Security (ESORICS 2005), 2005.

[21] ITEA2 Project AIMES: Project Homepage, http://www.aimes-project.eu/