
Demystifying Privacy In Sensory Data: A QoI based approach

Supriyo Chakraborty, Haksoo Choi and Mani B. Srivastava
University of California, Los Angeles
{supriyo, haksoo, mbs}@ucla.edu

Abstract—There is a growing consensus regarding the emergence of privacy concerns as a major deterrent to the widespread adoption of emerging technologies such as mobile healthcare, participatory sensing and other social-network-based applications. In this paper, we motivate the need for privacy awareness, present a taxonomy of the privacy problems, and survey the various existing solutions. We highlight the tension that exists between quality of service at the receiver and the privacy requirement at the source, and present a linear program formulation to model the tradeoff between the two objectives. We further present the design and architecture of SensorSafe, a framework which allows privacy-aware sharing of sensory information.

Keywords-Privacy; Trust; Information Sharing; Access Control

I. INTRODUCTION

The canonical privacy problem for databases can be summed up as follows: given a collection of personal records from individuals, how do we disclose either the data or "useful" function values, such as correlations and population characteristics computed over the data, without revealing individual information? This notion of absolute privacy is analogous to the principle of semantic security formulated for cryptosystems by Goldwasser et al. [1]. A trivial solution to the above problem is to disclose random data bearing no relation to the actual database. However, the implicit notion of utility associated with the disclosed data prohibits the use of such a scheme. In addition, the utility requirement, in conjunction with adversarial access to auxiliary information, makes it impossible to achieve absolute privacy [4]. Current research in database privacy has thus evolved into a study of the tradeoff between the degradation in the quality of the information shared, owing to privacy concerns, and the corresponding effect on the quality of service [9][10].

While database privacy has been extensively studied over the past decade, the proliferation of mobile smartphones with embedded and wirelessly connected wearable sensors has added a new dimension to the problem. Miniature sensors that can be easily placed on an individual can be used to unobtrusively collect large amounts of personal data. This data, which is richly annotated with both temporal and spatial information, is in turn used by a variety of participatory sensing applications [11], healthcare studies [32][33], and behavioral studies to extract both population- and individual-level inferences. Evidently, sensory data pertaining to an individual's daily life can be used to draw extremely sensitive inferences, and hence adequate privacy measures are essential. Concerns about privacy have the potential to reduce user participation, which is critical for the success of population-based studies, and/or to reduce application fidelity due to random degradation in data quality. Thus, on one hand, from a data source's perspective the fundamental questions while sharing sensory data are: (1) Whom to share the data with? (2) How much data should we share? and (3) How to account for neighborhood influences (collusion) while sharing information? On the other hand, a data receiver is typically concerned about the quality of information (QoI) received and the resulting quality of service (QoS). These seemingly contradictory objectives of the source and the receiver create a tension which is fundamental to the privacy problem.

In this paper, we present a linear program formulation of the above risk mitigation problem. We use the notion of trust to quantify the risk of leakage and use a source-specified tolerance parameter to simultaneously maximize application QoS and minimize information leakage. Varying the tolerance parameter allows the source to compute the QoS for a given privacy level and thus provides greater flexibility of operation. The trust graph provides insight into possible collusion between receivers, which further affects the quality of the data shared. In addition, we also briefly discuss the design and architecture of SensorSafe - a framework which allows users, in a participatory sensing setting, to share data in a private way. It does so by providing users with fine-grained access control primitives and a library of obfuscation algorithms to control information and inference disclosure.

Section II discusses recent privacy attacks and also outlines possible privacy concerns in future applications. The following section categorizes the privacy threats. This is important for two reasons. First, it allows the source to better estimate the possible threats and their consequences. Second, it allows a better selection of the information disclosure mechanism. Section IV discusses the importance of trust establishment and its role as a precursor to privacy concerns. Section V summarizes the current approaches for solving the privacy problem in various application domains. This is followed by Section VI and Section VII, where we present our work. We conclude in Section VIII.



II. PRIVACY: REALITY OR MYTH

Is privacy something that people really care about? A recent survey on location privacy [18] summarized interesting opinions about privacy from multiple independent studies. While people in general were oblivious to privacy violations and amenable to sharing their data, their perception quickly morphed into one of concern when they were apprised of the various sensitive inferences that could be drawn and the resulting consequences.

Some of the recent high-visibility fiascos have further established privacy as an important sharing constraint. Examples include the de-anonymization of the publicly released AOL search logs [13] and the movie-rating records of Netflix subscribers [12]. The large datasets in question were released to enable data-mining and collaborative-filtering research. However, when combined with auxiliary information, the anonymized datasets were shown to reveal the identities of individual users.

Sharing sensory data also presents unique challenges. Due to its high temporal and spatial granularity, sensory data offers great potential for data mining, making it difficult to preserve privacy. For example, households in the U.S. are being equipped with smart meters to collect temporally fine-grained reports of energy consumption. This will allow utilities to better estimate domestic power consumption, leading to optimized distribution and control of the grid. However, as shown in [15], several unintended and sensitive inferences, such as occupancy and lifestyle patterns of the occupants, can be made from the data in addition to total power consumption. In fact, privacy has been identified as a major challenge in fine-grained monitoring of residential spaces [17]. Similarly, in medical research the continuous physiological data collected by wearable sensors can be used to infer potentially sensitive information such as smoking or drinking habits and food preferences. While "informed consent" of the data source is the currently used sharing policy, it can be easily overlooked, causing privacy violations as exemplified in [14]. The DNA information, collected from blood samples of a particular tribe, was originally meant for type-2 diabetes research, but was later used to further research in schizophrenia - a condition stigmatized by the tribe - causing extreme anguish and a sense of dissatisfaction. Similarly, participatory sensing applications [11] require users to voluntarily upload their personal data for the purposes of a study. For example, PEIR [16], a popular tool for computing carbon exposure levels during trips, requires users to upload their travel paths. In the absence of an adequate privacy transformation, the uploaded traces could be used to find frequently traveled paths, workplace, home and other sensitive locations.

Thus, the privacy threat during data disclosure is real, and unless adequate mitigation steps are taken it could delay the adoption of various ubiquitous-sensing-based applications.

III. PRIVACY PROBLEM CHARACTERIZATION

Depending on how data is shared by the source, we can group the various privacy problems into two broad classes:

Identity Violation: The data is syntactically sanitized by stripping it of personally identifiable information (PII) before sharing - a process called anonymization. The shared data is intended for research pertaining to population-level statistics. However, privacy violation occurs when the data, in the presence of auxiliary information, is de-anonymized to reveal identity information. The Netflix [12] and AOL [13] fiascos fall under this category. There are two important challenges in using PII-based anonymization. First, the definition of PII is inadequate for many types of datasets [3], including sensory data. For example, while sharing sanitized location traces, the location data itself can be used to re-identify an individual. Hence, it is hard to clearly demarcate the PIIs from the other shared attributes. Second, it is assumed that the non-PII attributes cannot be linked to an individual record. However, auxiliary information has been used along with non-PII attributes to de-anonymize large datasets [12][2].

Inference Violation: For this class of problems the source's identity is not concealed in the shared data. Instead, there exists a specific set of inferences which the source wants to protect. For example, in a medical study EKG data is needed to monitor variability in heart rate. However, the data should not be used for other sensitive inferences such as the smoking or drinking habits of the individual. The attacks using smart meter data [15], the location traces in PEIR [16], and the unauthorized use of DNA samples in [14], indicated in Section II, fall under this category. A variety of obfuscation techniques have been proposed to prevent inference violations, including removal of sensitive attributes and addition of calibrated noise [20]. A detailed survey of other techniques can be found in [19].

IV. TRUST: THE UNIFYING THREAD

An important aspect of any system with multiple data sources and recipients is the modeling and update of trust relationships between its constituents. The process of defining and interpreting trust is highly subjective [21], and as a result trust has found diversified uses depending on the application domain. For example, in e-commerce applications [22][23] trust is used as a soft security mechanism. In P2P and social networks it establishes well-knit, credible social structures [25], and in sensor networks it is used to assess the quality of information received [24].

Similar to its subjective interpretation, there are different ways to quantify trust. However, the underlying idea behind most of the schemes is illustrated in Fig. 1. The source uploads data, which is rated by the receiver. Trust is then derived as a function of the given ratings.



Figure 1. Computing trust scores from receiver ratings.

The Beta distribution based trust model [8] is a popular way of quantifying trust.
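For concreteness, a minimal sketch in Python of a Beta-style reputation score is given below. This is an illustration only, assuming binary ratings; the function name and the choice of the Beta(s + 1, f + 1) prior are ours and are not taken from [8].

# Minimal sketch: trust as the expected value of a Beta(s + 1, f + 1) distribution,
# where s and f are the counts of positive and negative ratings received so far.
def beta_trust(positive: int, negative: int) -> float:
    return (positive + 1) / (positive + negative + 2)

# Example: 8 positive and 2 negative ratings yield a trust score of 0.75.
print(beta_trust(8, 2))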

From a privacy perspective, trust has different implications for the data source and the receiver. At the source, the heterogeneity in trust scores manifests itself as different privacy requirements. If a receiver is not trustworthy, then strict privacy is desired. At the other end, the receiver's trust in a source is an indicator of the expected quality of information. A trustworthy source is expected to provide high-quality information.

Thus, the notion of trust, which in many ways is the precursor to privacy concerns, also seems to be a unifying entity - describing both the desired obfuscation at the source and the expected QoI at the receiver.

V. EXISTING SOLUTION APPROACHES

Information disclosure can happen in either an interactive or a non-interactive setting. In a non-interactive setting the source publishes a sanitized version of the data, obtained by removing PII and possibly perturbing the released attributes. A popular technique for data sanitization is k-anonymity. It requires that every record in the released dataset be indistinguishable from at least k−1 other records on the released attributes. However, in the presence of auxiliary information, [2][3][12] show that de-anonymization is possible, leading to identity violation. Several variants, such as l-diversity and t-closeness, also suffer from similar flaws. In an interactive setting, the source provides an interface through which users pose queries and get possibly noisy answers. In this setting the source has greater control over the released data. However, both identity and inference violations can occur. A typical privacy policy is to use access control in conjunction with obfuscation by adding controlled noise to the released data [4][20].
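To make the k-anonymity requirement concrete, the following minimal Python sketch (our own illustration; the attribute names are hypothetical) checks whether every combination of released quasi-identifier values occurs at least k times:

from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    # Count how many records share each combination of quasi-identifier values.
    groups = Counter(tuple(r[a] for a in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical records released with ZIP code and age range as quasi-identifiers.
records = [
    {"zip": "900xx", "age": "20-30", "diagnosis": "A"},
    {"zip": "900xx", "age": "20-30", "diagnosis": "B"},
    {"zip": "900xx", "age": "30-40", "diagnosis": "C"},
]
print(is_k_anonymous(records, ["zip", "age"], 2))  # False: one group has a single record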

Participatory-sensing-based data collection can typically be identified with the non-interactive setting. Several techniques have been proposed to preserve user privacy. For example, PoolView [26] adopts a client-server architecture where clients with no existing trust relationship can independently perturb their data using an application-specific noise model. The noise model is designed to preserve privacy against typical reconstruction attacks while allowing computation of community aggregates from the collected data.

Similarly, PriSense [27] uses the concept of data slicing and mixing to preserve privacy, and allows computation of additive and non-additive statistical aggregation functions. Personal Data Vault [28] employs a slightly different model than traditional participatory sensing. It allows users to retain ownership of their data and provides fine-grained access control and trace-audit functions for controlling the information shared with selected content-service providers.

VI. MODELING TRADEOFF: QOS VS. PRIVACY

A critical question that has not been fully explored in the above privacy schemes is the explicit modeling of QoS at a particular privacy level specified by the source. We consider a problem setting where a single source wants to share data with multiple receivers having prior trust relationships. The source has different privacy requirements for each receiver. A receiver, too, has its own QoS requirement, which it advertises to the source along with the requested data resolution (defined below). We assume that there is a cost associated with data and hence receivers locally balance cost with the desired QoS. If the source strives to exactly satisfy every receiver's request, it could violate its own privacy. Similarly, not receiving the appropriate data resolution will result in degraded QoS at the receiver. We want to determine the optimal resolution at which the source should share data with each receiver to "closely" satisfy both the privacy and QoS constraints. An interesting consequence of this setting is that the decision of how much data to share with a particular receiver depends not only on the trust between the source and the receiver, but also on the existing trust network between the receiver and other neighboring receivers. In this section, we formulate the above tradeoff between QoS and privacy as a linear program. We use information from a trust graph to quantify the risk of information leakage leading to privacy violation.

A. Resolution

Information content of data is a function of multiple dimensions such as accuracy, precision, currency and completeness. Depending on the privacy desired, a combination of these dimensions could be appropriately obfuscated. We summarize the effect of these parameters into a single dimensionless number, resolution (r), normalized to values in [0, 1]. The interpretation of r is application and data specific. Also, not every data type retains its utility after obfuscation. Hence, we restrict ourselves to the class of sensory data which offers utility even after changes to resolution.

B. Trust Graph

Fig. 2 shows the trust subgraph (T) that exists between the source s and receivers i, j and k. The edge weights represent trust scores. Thus, 0 ≤ tij ≤ 1 is how much i trusts j. We interpret tij as the propensity of i to share data with j. We assume that T is known to the source.



Figure 2. A trust subgraph. i, j and k are receivers. tij and tik represent the trust that i has in j and k. rα,i, rα,j and rα,k are the resolutions at which the source shares data with i, j, and k respectively.

C. Notation

Let R be the set of receivers. The vector R = {r1, r2, · · · , r|R|} is the set of maximum data resolutions requested by the receivers. The parameter 0 ≤ α ≤ 1 is source specified and represents the tolerance to privacy violation. The vector R∗α = {r∗α,1, r∗α,2, · · · , r∗α,|R|} is the set of optimal resolutions at which the source should share data for a particular value of α.

D. Risk of Leakage

Intuitively, risk is an educated guess of the possible loss or damage that could arise out of a particular decision. There are multiple subjective interpretations of risk. In our work, we quantify risk of disclosure = probability of disclosure × value of information, where the probability of disclosure is a prediction of the behavior of a node based on prior experience and the value of information is the damage sustained by the source of the information due to unauthorized disclosure [29].

For predicting node behavior, we use the edge weights in the trust graph T. For estimating the value of the information leaked, we assume a monotonic relation between QoS and data resolution. Thus, if F(r) is the QoS at resolution r, then r1 ≥ r2 ⇒ F(r1) ≥ F(r2). This is generally true for applications, as better quality data typically yields better results. Thus, the value of the information leaked is proportional to the increase in QoS at a receiver which was otherwise allocated a lower resolution by the source.

Let the trust subgraph and the resolutions at which the source shares data with receivers i, j and k be as shown in Fig. 2. To compute the risk that a source s incurs when it wants to share data with receiver i at resolution rα,i, we consider the following two cases:

• rα,i ≤ rα,j : risk = 0. This is because we assume that information leakage occurs when a receiver obtains information at a resolution higher than that determined by the source.

• rα,i > rα,j : risk = tij × (rα,i − rα,j).

We compute the risk for the pair i and k in a similar way. Combining the two cases above, we define the risk function f(·) for node i as:

f(T, Rα, i) = Σ{j∈R, j≠i} tij × [rα,i − rα,j]+    (1)

where [x]+ = max(x, 0).
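The risk computation in Eqn. 1 can be sketched in a few lines of Python. The trust values and resolutions below are hypothetical, chosen only to mirror Fig. 2:

def risk(trust, r_alloc, i):
    # Eqn. 1: sum over the other receivers j of t_ij * [r_alpha_i - r_alpha_j]^+,
    # where trust[i][j] = t_ij and r_alloc[i] = r_alpha_i.
    return sum(t_ij * max(r_alloc[i] - r_alloc[j], 0.0)
               for j, t_ij in trust.get(i, {}).items() if j != i)

# Hypothetical values: i trusts j and k; i is allocated a higher resolution than j.
trust = {"i": {"j": 0.8, "k": 0.3}}
r_alloc = {"i": 0.9, "j": 0.5, "k": 1.0}
print(round(risk(trust, r_alloc, "i"), 4))  # 0.8 * (0.9 - 0.5) + 0.3 * 0 = 0.32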

E. Formulation

Using the risk function in Eqn. 1 we can define the following numerical optimization problem:

min Σi∈R ( α (ri − rα,i) + (1 − α) Σ{j∈R, j≠i} tij [rα,i − rα,j]+ )    (2)

s.t.  rα,i ≤ ri  ∀i ∈ R    (3)

      rα,i ≥ min{ri | ri ∈ R}    (4)

The objective function in Eqn. 2 has two parts. The first part, weighted by the parameter α, tries to maximize the QoS by allocating a resolution rα,i as close as possible to the resolution ri sought by the receiver. We refer to the difference between the sum of the allocated resolutions and the requested resolutions as the fidelity cost. Thus, a higher fidelity cost yields lower QoS. The second part, weighted by 1 − α, is the risk function derived in Eqn. 1. Constraint (3) ensures that the allocated resolution does not exceed the maximum resolution required by the receiver. Constraint (4) ensures that the allocated resolution is at least as high as the minimum of the application-specific resolutions requested by the receivers. This problem can be easily cast as a Linear Programming problem [30] and the optimal solution R∗α found using standard LP solvers. R∗α contains the sharing constraint for each receiver.
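As an illustration of the LP casting (a sketch of our own, not code from the paper; the trust scores and requested resolutions are hypothetical), each term [rα,i − rα,j]+ can be replaced by an auxiliary variable z with z ≥ rα,i − rα,j and z ≥ 0, and the problem handed to an off-the-shelf solver such as scipy.optimize.linprog:

import numpy as np
from scipy.optimize import linprog

def optimal_resolutions(trust, r_req, alpha):
    # trust[i][j] = t_ij; r_req[i] = r_i requested by receiver i; 0 <= alpha <= 1.
    n = len(r_req)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j and trust[i][j] > 0]
    m = len(pairs)
    # Decision variables: [r_alpha_1 .. r_alpha_n, z_1 .. z_m].
    # The constant alpha * sum(r_i) is dropped from the objective of Eqn. 2.
    c = np.concatenate([-alpha * np.ones(n),
                        (1 - alpha) * np.array([trust[i][j] for i, j in pairs])])
    # z_p >= r_alpha_i - r_alpha_j rewritten as r_alpha_i - r_alpha_j - z_p <= 0.
    A_ub = np.zeros((m, n + m))
    for p, (i, j) in enumerate(pairs):
        A_ub[p, i], A_ub[p, j], A_ub[p, n + p] = 1.0, -1.0, -1.0
    b_ub = np.zeros(m)
    # Eqn. 3 and Eqn. 4 become box constraints; the z variables are non-negative.
    bounds = [(min(r_req), r_req[i]) for i in range(n)] + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

# Hypothetical example with three receivers; receiver 0 trusts receivers 1 and 2.
trust = [[0.0, 0.8, 0.3], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
r_req = [0.9, 0.4, 1.0]
print(optimal_resolutions(trust, r_req, alpha=0.5))

Sweeping α over [0, 1] with such a sketch traces out the fidelity-versus-risk tradeoff discussed in the simulation results below.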

F. Simulation Results

We summarize our simulation results briefly in this section. We used a random connected graph of 50 nodes as our network. We also randomly generated the trust graph T, the receiver set R and the vector R of requested resolutions. The size of R is chosen as five.

Variation of Application Fidelity and Total Risk with α: Fig. 3 shows how the total risk and the fidelity cost change with α. For smaller values of α the risk of leakage is high, whereas for larger values the QoS is compromised.

Contribution to the total cost: The contribution of the fidelity cost and the risk of leakage to the total cost (Eqn. 2) is shown in Fig. 4. The domain of α values is partitioned into three distinct regions, and the source could choose to operate in any of them. If α ≤ 0.33, the source chooses to minimize risk over application fidelity. Similarly, for α ≥ 0.69 the source prefers application fidelity over risk. However, in the middle region, both the fidelity cost and the risk contribute to the total cost. The source could choose α in this range to balance non-zero risk and non-zero application fidelity.



Figure 3. Plot showing the effect of weighing factor α on total risk andfidelity cost.

Figure 4. Plot showing the contribution of risk and fidelity to the total cost. The x-axis is the weighing factor α; the plotted curves are the total cost, (1 − α)·risk, and α·fidelity cost.

This middle region is important because a trade-off between risk and application fidelity is only possible within it.

VII. PRIVACY AWARE DATA SHARING FRAMEWORK

In this section we briefly outline the design and architecture of SensorSafe [31] - a framework for privacy-preserving sharing of a user's sensory information.

Figure 5. SensorSafe Architecture

We envision that the SensorSafe framework could be used to implement the QoS and privacy model described in Section VI.

Mobile healthcare and medical studies require sharing of personal data with various parties such as doctors, medical researchers, and family members [32][33]. While users readily agree to share data for the services provided by applications, they want to protect themselves against additional sensitive inferences that can be drawn from the same data. In addition, the user's identity cannot typically be anonymized because healthcare services are personalized and medical studies need sensor data from specific patients. Therefore, privacy concerns primarily center around inference violations.

The design of SensorSafe is governed by the above observations. First, SensorSafe provides fine-grained access control primitives for the specification of users' privacy rules. Second, our architecture, as illustrated in Fig. 5, uses distributed storage for sensor data. Third, it analyzes the privacy rules associated with the data to estimate its utility for a particular query. Finally, the architecture also provides a library of data perturbation algorithms which can be applied to the output data before sharing. We discuss each of these features below.

Fine-grained Access Control: SensorSafe provides primitives which allow users to define fine-grained temporal and spatial privacy rules over the collected data. In addition, users can also specify rules based on the identity of the data requester, the type of sensor, and sensor values. An illustrative sketch of such a rule is given below.
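This sketch is an illustration only; the concrete SensorSafe rule syntax is not shown in this paper, so the field names below are hypothetical. A rule combining requester identity, sensor type, and temporal, spatial and value-based conditions might look like:

# Hypothetical privacy rule; the actual SensorSafe primitives may differ.
privacy_rule = {
    "requester": "researcher@study.example",            # identity of the data requester
    "sensors": ["ecg", "respiration"],                   # sensor types the rule covers
    "time_window": {"start": "09:00", "end": "17:00"},   # temporal condition
    "exclude_locations": ["home"],                       # spatial condition
    "value_condition": {"heart_rate_max": 120},          # condition on sensor values
    "action": "share_perturbed",                         # share only after perturbation
}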

Remote Data Stores: The SensorSafe architecture in Fig. 5 uses distributed data storage. A remote data store could maintain an individual's data or data from a group participating in a study. Distributed storage further minimizes the damage caused by a possible compromise of a server. However, an additional control server is now required to manage the interactions between the sources and the data requesters. To prevent any additional disclosure, no data goes through the control server. All data uploads and downloads take place directly between the remote data stores and users. Also, the remote data stores are typically owned by the data sources, allowing them greater control over their data.

Framework for Data Perturbation: Medical sensors such as ECG and respiration can be used to measure stress level [34] or diagnose cardiac diseases. However, the associated location and timestamp information can reveal personal life patterns. SensorSafe provides a library of data perturbation algorithms which can be suitably applied to protect sensitive inferences. This approach closely models PoolView [26]. A minimal sketch of one such perturbation is given below.
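For instance, one simple entry in such a library could add calibrated noise to timestamps and sensor values. The sketch below is our own, with an arbitrary noise model; it is not the PoolView or SensorSafe algorithm itself:

import random

def perturb_samples(samples, value_sigma=2.0, time_jitter_s=300):
    # samples: list of (timestamp_seconds, value) pairs.
    # Gaussian noise on values, uniform jitter on timestamps; the noise
    # parameters are placeholders for an application-specific model.
    return [(t + random.uniform(-time_jitter_s, time_jitter_s),
             v + random.gauss(0.0, value_sigma))
            for t, v in samples]

# Example: heart-rate samples taken once a minute.
print(perturb_samples([(0, 72.0), (60, 75.0), (120, 71.0)]))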

Utility Measurement: Access control and data perturbation result in decreased utility of the sensor data. Also, differences in individual preferences imply that the utility varies from one source to another. Therefore, to achieve a certain QoS we need to match data utility to the application requirement. For example, in a medical study, researchers might recruit people to investigate the factors affecting a certain medical condition. For the study to be meaningful, participants should share data at a particular utility level. SensorSafe provides tools to find participants who satisfy the required data utility or to recommend appropriate privacy rules to the participants.
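A minimal sketch of this matching step (our own illustration, with a hypothetical post-obfuscation utility estimate per participant) simply filters participants by the study's required utility level:

def eligible_participants(estimated_utility, required_utility):
    # estimated_utility: participant ID -> estimated data utility in [0, 1]
    # after that participant's privacy rules and perturbations are applied.
    return [pid for pid, u in estimated_utility.items() if u >= required_utility]

estimated_utility = {"p01": 0.9, "p02": 0.55, "p03": 0.75}
print(eligible_participants(estimated_utility, required_utility=0.7))  # ['p01', 'p03']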

VIII. CONCLUSION

In this paper we presented a taxonomy of the privacy problem as encountered in databases and, more recently, in participatory sensing and health monitoring applications. We then proposed a linear program formulation to model the tradeoff between QoS at the receiver and privacy at the source. We defined a tolerance parameter which can be used by the source to balance the tension between the two objectives. We presented simulation results and their interpretation. Finally, we briefly outlined the design and architecture of SensorSafe and summarized how a combination of fine-grained access control and a data obfuscation library is used to provide privacy-aware sharing.

ACKNOWLEDGEMENT

This research was sponsored by the U.S. Army Research Laboratory and the U.K. Ministry of Defense under Agreement Number W911NF-06-3-0001 and by the NSF under award #0910706. The views and conclusions contained in this document are those of the author(s) and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the U.K. Ministry of Defense or the U.K. Government or the NSF. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

REFERENCES

[1] S. Goldwasser and S. Micali, Probabilistic encryption & how to play mental poker keeping secret all partial information, STOC, 1982.

[2] P. Ohm, Broken Promises of Privacy: Responding to the surprising failure of anonymization, Social Science Research Network Working Paper Series, 2009.

[3] A. Narayanan and V. Shmatikov, Myths and fallacies of "personally identifiable information", Communications of the ACM, 2010.

[4] C. Dwork, Differential Privacy, ICALP, 2006.

[5] L. Sweeney, k-anonymity: a model for protecting privacy, Int. J. Uncertain. Fuzziness Knowl.-Based Syst., 2002.

[6] D. Kifer and J. Gehrke, l-Diversity: Privacy Beyond k-Anonymity, ICDE, 2006.

[7] N. Li and T. Li, t-Closeness: Privacy Beyond k-Anonymity and l-Diversity, ICDE, 2007.

[8] A. Josang and J. Haller, Dirichlet Reputation Systems, Int. Conf. on Availability, Reliability and Security, 2007.

[9] T. Li and N. Li, On the tradeoff between privacy and utility in data publishing, KDD, 2009.

[10] L. Sankar, Rajagopalan and V. Poor, A Theory of Utility and Privacy of Data Sources, ISIT, 2010.

[11] J. Burke et al., Participatory sensing, World Sensor Web Workshop, ACM SenSys, 2006.

[12] A. Narayanan and V. Shmatikov, Robust De-anonymization of Large Sparse Datasets, IEEE Symposium on Security and Privacy, 2008.

[13] S. Hansell, AOL removes search data on vast group of web users, New York Times, Aug 8, 2006.

[14] A. Harmon, Where'd You Go With My DNA?, New York Times, Apr 24, 2010.

[15] A. Molina-Markham et al., Private memoirs of a smart meter, BuildSys, 2010.

[16] M. Mun et al., PEIR, the personal environmental impact report, as a platform for participatory sensing systems research, MobiSys, 2009.

[17] Y. Kim, T. Schmid, M.B. Srivastava and Y. Wang, Challenges in resource monitoring for residential spaces, BuildSys, 2009.

[18] J. Krumm, A survey of computational location privacy, Personal Ubiquitous Computing, 2009.

[19] B. Fung, K. Wang, R. Chen and P. Yu, Privacy-Preserving Data Publishing: A Survey on Recent Developments, ACM Computing Surveys, 2010.

[20] C. Dwork, F. McSherry, K. Nissim and A. Smith, Calibrating Noise to Sensitivity in Private Data Analysis, Theory of Cryptography, 2006.

[21] J. Audun, R. Hayward and S. Pope, Trust network analysis with subjective logic, ACSC, 2006.

[22] L. Xiong and L. Liu, A reputation-based trust model for peer-to-peer e-commerce communities, EC, 2003.

[23] S.D. Kamvar, M.T. Schlosser and H. Garcia-Molina, The Eigentrust algorithm for reputation management in P2P networks, WWW, 2003.

[24] S. Ganeriwal, L.K. Balzano and M.B. Srivastava, Reputation-based framework for high integrity sensor networks, ACM Transactions on Sensor Networks, 2008.

[25] F.E. Walter, S. Battiston and F. Schweitzer, Personalised and dynamic trust in social networks, RecSys, 2009.

[26] R.K. Ganti, N. Pham, Yu-En Tsai and T.F. Abdelzaher, PoolView: stream privacy for grassroots participatory sensing, SenSys, 2008.

[27] J. Shi, R. Zhang, L. Yunzhong and Y. Zhang, PriSense: Privacy-Preserving Data Aggregation in People-Centric Urban Sensing Systems, Infocomm, 2010.

[28] M. Mun et al., Personal Data Vaults: A Locus of Control for Personal Data Streams, ACM CoNEXT, 2010.

[29] P. Cheng et al., Fuzzy Multi-Level Security: An Experiment on Quantified Risk-Adaptive Access Control, SP, 2007.

[30] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge Univ. Press, 2004.

[31] H. Choi et al., SensorSafe: Managing Health-related Sensory Information with Fine-grained Privacy Controls, Technical Report, September 2010. (TR-UCLA-NESL-201009-01)

[32] J. Sriram et al., Challenges in Data Quality Assurance in Pervasive Health Monitoring Systems, in Future of Trust in Computing, 2009.

[33] D. Kotz, S. Avancha and A. Baxi, A privacy framework for mobile health and home-care systems, ACM workshop on Security and Privacy in Medical and Home-Care Systems, 2009.

[34] A.B. Raij et al., mStress: Supporting Continuous Collection of Objective and Subjective Measures of Psychosocial Stress on Mobile Devices, Tech. Report No. CS-10-004, Dept. of Computer Science, Univ. of Memphis, July 14, 2010.
