
COMPUTING SCIENCE

Proceedings of the First Trust Economics Workshop

P. Inglesant, M. Machulak, S. Parkin, A. van Moorsel, J. Williams (Eds.)

TECHNICAL REPORT SERIES

No. CS-TR-1153    June, 2009


TECHNICAL REPORT SERIES

No. CS-TR-1153    June, 2009

Proceedings of the First Trust Economics Workshop

P. Inglesant, M. Machulak, S. Parkin, A. van Moorsel, J. Williams (Eds.)

Abstract

This report documents work presented at the First Trust Economics Workshop, held on June 23rd 2009 at University College London (UCL), UK, in conjunction with the Eighth Workshop on the Economics of Information Security (WEIS) 2009.

© 2009 University of Newcastle upon Tyne. Printed and published by the University of Newcastle upon Tyne, Computing Science, Claremont Tower, Claremont Road, Newcastle upon Tyne, NE1 7RU, England.


Bibliographical details

INGLESANT, P., MACHULAK, M., PARKIN, S., VAN MOORSEL, A., WILLIAMS, J. Proceedings of the First Trust Economics Workshop [By] P. Inglesant, M. Machulak, S. Parkin, A. van Moorsel, J. Williams (Eds.). Newcastle upon Tyne: University of Newcastle upon Tyne: Computing Science, 2009. (University of Newcastle upon Tyne, Computing Science, Technical Report Series, No. CS-TR-1153)

Added entries

UNIVERSITY OF NEWCASTLE UPON TYNE Computing Science. Technical Report Series. CS-TR-1153

Abstract

This report documents work presented at the First Trust Economics Workshop, held on June 23rd 2009 at University College London (UCL), UK, in conjunction with the Eighth Workshop on the Economics of Information Security (WEIS) 2009.

About the editors

Philip Inglesant is a post-doctoral research fellow in the Human Centred Systems Group in the Department of Computer Science at UCL. He has recently started on a new project called Trust Economics, which will enhance understanding of the attitudes and responses of employees to organisational security policies. This is a collaboration with researchers at the Universities of Bath and Newcastle, HP Labs, and a finance sector partner. They will model the behavioural consequences of policy interventions, and enable systems designers, implementers, and managers to make choices that promote more effective security cultures. Previously, he worked on an EPSRC-funded project in Grid security, Easy Expression of Authorisation Policies, part of the PERMIS Project, an integrated infrastructure for Grid security. The principal investigators on this project were David Chadwick at the University of Kent (PI) and Angela Sasse in Computer Science at UCL (CoI). He completed his PhD studies at UCL in June 2007.

Maciej Machulak received his MSc in Computing Engineering from Wroclaw University of Technology in Poland in 2007. During his studies he was an Erasmus exchange student at Newcastle University. He additionally completed the Advanced MSc degree in "System Design for Internet Applications" (SDIA) at Newcastle University in 2007, and his thesis was awarded Best 2008 SDIA Thesis. This degree included an industrial placement at Red Hat UK Ltd, where Maciej's main task was to develop a framework for transactional Web Services. Before commencing his PhD studies, Maciej was also employed as an intern at Red Hat and worked on embedded tools for transaction monitoring inside the JBoss Application Server. Maciej is currently a PhD student working with Dr. Aad van Moorsel on the Trust Economics project, funded by the UK Technology Strategy Board (TSB). His project is concerned with building a novel authorization system for the Web environment which allows users to flexibly adapt access control for their Web resources to their particular security requirements.

Simon Parkin is a Post-Doctoral Research Associate working with Dr. Aad van Moorsel as a member of the Trust Economics project, funded by the UK Technology Strategy Board (TSB). Simon completed a BSc Computing Science degree in 2002 and an Advanced MSc degree in "System Design for Internet Applications" (SDIA) in 2003, both at Newcastle University. The latter included an industrial placement at Arjuna Technologies focusing on reliable messaging for Web Services. Between 2003 and 2007 Simon studied for a PhD under the supervision of Dr. Graham Morgan. Research subjects covered during this period included E-Commerce, Service Level Agreements (SLAs) and Distributed Virtual Environments (DVEs). Simon also contributed to the EU-funded "Trusted and QoS-Aware Provision of Application Services" (TAPAS) project during this time.

Aad van Moorsel joined the University of Newcastle in 2004. He worked in industry from 1996 until 2003, first as a researcher at Bell Labs/Lucent Technologies in Murray Hill and then as a research manager at Hewlett-Packard Labs in Palo Alto, both in the United States. Aad received his PhD in computer science from Universiteit Twente in The Netherlands (1993) and has a Masters in mathematics from Universiteit Leiden, also in The Netherlands. After finishing his PhD he was a postdoc at the University of Illinois at Urbana-Champaign, Illinois, USA, for two years. Aad has worked in a variety of areas, from performance modelling to systems management, web services and grid computing. In his last position in industry, he was responsible for HP's research in web and grid services, and worked on the software strategy of the company. His research agenda aims at establishing an intelligent enterprise. The goal is to establish objective, quantitative methods to improve IT-related decision making, eventually fully automated. This involves mathematical modelling and algorithms as well as service-oriented software implementations. DTI-, EPSRC- and EU-funded collaborations are ongoing with Hewlett-Packard, Merrill Lynch, Warwick, Bath, UCL, various universities throughout Europe, and the Business School in Newcastle.


Julian Williams is a lecturer in finance at the University of Aberdeen. Julian joined the Business School in September 2008.

Suggested keywords

TRUST ECONOMICS


The First Trust Economics Workshop

University College London, England

23rd June 2009

In Conjunction with WEIS 2009

Supported by the Technology Strategy Board and by Hewlett-Packard, Merrill Lynch, University of Bath, University College London, Newcastle University and University of Aberdeen.


CONTENTS

Organisation 1

Introduction 2

Programme 3

Invited Presentation

“Privacy by Design Overcomes Negative Externalities Arising from Poor Management of Personal Information” 4

Ann Cavoukian, Information & Privacy Commissioner of Ontario

Session Papers

“Systems Modelling for Economic Analysis of Security Investments: A Case Study in Identity and Access Management” 5-10
A. Baldwin, M. Casassa Mont, D. Pym and S. Shiu

“Total Cost of Security—A Method for Managing Risks and Incentives Across the Extended Enterprise” 11-14
R. Thomas

“How Secure is your Password? Towards Modelling Human Password Creation” 15-18
B. Grawemeyer and H. Johnson


ORGANISATION

Workshop Co-chairs

Aad van Moorsel (Newcastle University School of Computing Science, UK, [email protected])

Julian Williams (University of Aberdeen Business School, UK, [email protected])

Organising Committee

Philip Inglesant (University College London, UK, [email protected])

Simon Parkin (Newcastle University, UK, [email protected])

Maciej Machulak (Newcastle University, UK, [email protected])

Programme Committee

Robert Coles (Merrill-Lynch)

Christos Ioannidis (University of Bath)

Hilary Johnson (University of Bath)

David Pym (HP Labs)

Angela Sasse (UCL)

Simon Shiu (HP Labs)


INTRODUCTION

Welcome to the first Trust Economics workshop, which discusses techniques, methods and tools for security decision making, taking into account economic, business and organisational concerns, human factors and information security technology (in a 'whole-system' view). As a motivating example, enterprises and government face increasingly difficult and important security decisions related to the privacy and confidentiality of data of customers and citizens. How can we improve decision making in such situations? What weaknesses exist in the state of the art? What information do we need? Which mathematical methods and software tools can make a difference?

The workshop has an interesting programme, with a main aim of initiating discussions on fundamentally new methods of security decision making grounded in sound mathematical tools that utilise a deep understanding of business, human and technological aspects. The papers in this technical report are internal publications; their inclusion here does not preclude publication elsewhere.

We are grateful for the support of the UK Technology Strategy Board for the Trust Economics project. This workshop was organised and supported by the project partners: Hewlett-Packard, Merrill Lynch, University of Bath, University College London and Newcastle University, and by the University of Aberdeen. We especially thank the invited and keynote speakers, Ann Cavoukian and Cliff Jones, for their enthusiastic participation, and we hope you enjoy participating in this workshop.


PROGRAMME

10:30 Registration and coffee

11:00 Welcome and opening

11:15 Invited Presentation

Ann Cavoukian, Information & Privacy Commissioner of Ontario
“Privacy by Design Overcomes Negative Externalities Arising from Poor Management of Personal Information”

12:00 Lunch

1:00 Keynote

Cliff Jones, Newcastle University
“Formal Methods in Interdisciplinary Research”

2:00 Technical Session

A. Baldwin, M. Casassa Mont, D. Pym and S. Shiu
“Systems Modelling for Economic Analysis of Security Investments: A Case Study in Identity and Access Management”

R. Thomas
“Total Cost of Security—A Method for Managing Risks and Incentives Across the Extended Enterprise”

B. Grawemeyer and H. Johnson
“How Secure is your Password? Towards Modelling Human Password Creation”

3:30 Coffee

4:00 Panel
“Future Directions for the Economics of Information Security”

5:00 Close


INVITED PRESENTATION

Ann Cavoukian, Information & Privacy Commissioner of Ontario

“Privacy by Design Overcomes Negative Externalities Arising from Poor Management of Personal Information”

Abstract

“Poor management of personal information often results in unintended consequences, or negative externalities. For data-rich organizations, public or private, these can include inefficient operations, fines, penalties and other remedial costs, diminished confidence and trust of customers, partners and other clients, and lost business and competitiveness. From the individual's point of view, these externalities can include lost dignity, loss of control and discrimination, identity fraud and theft, or worse. Poor information accountability impacts everyone, and can result in socially sub-optimal outcomes. In today's opaque Web 2.0 world of ubiquitous, networked data availability, the problem of negative externalities will undoubtedly grow. Come hear Dr. Cavoukian explain how applying her "Privacy by Design" principles in a comprehensive manner to information systems and architectures can transform zero-sum scenarios into positive-sum, win-win outcomes - benefitting both individuals and organizations.”


Systems Modelling for Economic Analyses of Security Investments: A Case Study in Identity and Access Management

Adrian Baldwin, Marco Casassa Mont, David Pym, Simon Shiu
HP Labs, Bristol, England, U.K.

Abstract

Identity and Access Management (IAM) is a key issue for systems security managers such as CISOs. More specifically, it is a difficult problem to understand how different investments in people, process, and technology affect the intended security outcomes. We position this problem within the framework of optimal control models in macroeconomics, and use a process model to understand the dynamics of the utility of possible trade-offs between investment, access, and security incidents (breaches). A utility function is used to express the security manager’s IAM preferences, and the functional behaviour of its components is described via a process model. Executing our process model as Monte Carlo simulations, we illustrate the behaviour of the utility function for varying levels of investment and threat, and so provide the beginnings of a decision-support tool for systems security managers.

1 Introduction

Since CISOs have finite budgets, security investment strategy involves choices between risks and outcomes. Moreover, many of the outcomes and choices are intuitively correlated; that is, they trade off against one another. We are interested in how to help stakeholders (decision makers) better understand these trade-offs, and how to form a better-shared understanding of their preferences.

In this macro-economic style modelling approach, following the style outlined in [2], we can identify the components and preferences between them through utility functions such as:

U(C, A, K, t) = w_1 (C - \bar{C})^2 + w_2 (A - \bar{A})^2 + w_3 (K - \bar{K})^2    (1)

in which C, A, and K and \bar{C}, \bar{A}, and \bar{K} represent, respectively, the actual and target levels of confidentiality, availability, and investment, the w_i represent the appropriate weightings, and t denotes time. Thus the utility function is a weighted function of the deviations from target of the three economic magnitudes whose mutual trade-off is of interest to us, with the weights expressing the decision-maker’s preferences among the magnitudes. It should be noted that the quadratic form of the utility function is not the only choice available. It is, however, a convenient first step, derived from the basics of utility theory and portfolio theory, and provides a simple account of diminishing marginal utility. Richer choices are available, such as the asymmetric Linex functions employed in the work of Barro and Gordon [1], Nobay and Peel [8], and Ruge-Murcia [9]. Asymmetries in the components of the utility function, in contrast to the symmetric form of the quadratic case, express the extent to which the security manager is relatively more or less concerned about deviating above or below target.
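As a minimal illustration of the difference between the symmetric quadratic form and an asymmetric alternative (this sketch is not from the original paper; the function names and parameter values are assumptions made purely for illustration), the following Python fragment evaluates one component of the objective under both a quadratic penalty and a Linex-style penalty exp(a·d) − a·d − 1, where d is the deviation from target:

```python
import numpy as np

def quadratic_penalty(actual, target, weight):
    """Symmetric quadratic component, as in Equation (1): w * (x - x_bar)^2."""
    return weight * (actual - target) ** 2

def linex_penalty(actual, target, weight, a):
    """Asymmetric Linex-style component: w * (exp(a*d) - a*d - 1), with d = actual - target.
    For a > 0, deviations above target are penalised more heavily than deviations below."""
    d = actual - target
    return weight * (np.exp(a * d) - a * d - 1.0)

if __name__ == "__main__":
    target, weight = 10.0, 1.0
    for actual in (7.0, 10.0, 13.0):   # below, on, and above target
        q = quadratic_penalty(actual, target, weight)
        l = linex_penalty(actual, target, weight, a=0.5)
        print(f"actual={actual:5.1f}  quadratic={q:7.2f}  linex={l:7.2f}")
```

Running the fragment shows the quadratic penalty treating the deviations of 7 and 13 identically, while the Linex-style penalty charges more for the deviation above target, which is the asymmetry discussed above.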

The general setting for optimal control models is given in [6]. Having set up such an account of utility, our objective is to maximize it over the space of control variables which govern its dynamics over time.¹


That is, the security strategy problem for a given organization is expressed as a utility maximization problem with respect to the organization’s preferences.

The dynamics of C, A, and K are explored through appropriately constructed system equations in [7]. In line with previous work by some of us and others [3, 2], however, this paper describes how to use a systems modelling approach to explore the components of interest. Systems models are intended to directly capture and represent (potentially multiple) stakeholders’ comprehension of a system; as such, we believe they represent a more meaningful and trusted exploration of the utility components.

In the work presented in this paper, which analyzes trade-offs between security incidents, access, and investment in identity and access management, we do not have a set of system equations for the magnitudes that are of interest to us. Rather, we have a process model: an executable mathematical model of the system in which we are interested that is based on mathematical concepts of environment, location, resource, and process. The process model, when executed as a discrete-event simulation, produces as output numerical and graphical representations of incidents and access in a given investment context. Thus we can illustrate the desired utility function (see Section 3) in this case. Establishing an analytic connection between process models and the system equations required to drive utility maximization, as described above, is a topic for further research. Nevertheless, the illustrations of the utility function of interest that we are able to obtain prove to be a valuable guide in information security investment decision making for IAM.

IAM is a complex ‘people, process, and technology’ problem. It challenges CISOs on how to authenticate and authorize users; whether to centralize and automate processes (such as provisioning); and how to influence and reflect reliance on application and infrastructure security. With risks such as segregation of duties (SoD), IAM is also directly related to business-level security and productivity concerns, and so is a rich and relevant example for studying security strategy.

In our case study, we use a systems modelling approach to construct a mathematical model of a typical IAM system that can be tailored to fit different business and threat environments. We assume two investment instruments: configuration, which covers provisioning and SoD, and enforcement, which covers authentication, authorization, and general infrastructure and application security. We use Monte Carlo-style simulation to show the effect different investment choices will have on the predicted state of the system, and the predicted protection provided against different threat scenarios. Predictive modelling represents explicitly the causal dependencies within deployed security and IAM processes and provides a way to contextualise and calculate the metrics we then use for estimating the utility function.

2 The IAM Systems Model

Our previous security models [3, 10] have been written and run using the Demos2k [5] toolkit. In parallel with these studies, we have been designing and building a new tool chain, Gnosis — which captures the mathematical theory presented in [4], where a prototype implementation is also discussed — for executing the process models, and its associated experiment manager, GXM, to manage the execution of Gnosis models as Monte Carlo simulations and to support the data and statistical analysis. The case study described here has been implemented and run using Gnosis. Separate work will describe the advantages and implications of the Gnosis tool-set. The description of the model provided should enable the reader to reconstruct an equivalent study using Demos2k.

The modelling idiom within which we work decomposes systems into four key conceptual facets: the collection of processes that characterize the behaviour of the system; the resources that are manipulated by the processes as they execute; and the locations around which the system is distributed, logically or spatially; finally, we consider the environment, described stochastically, within which the system exists. This idiom is supported mathematically by the work presented in [4] and is also discussed in [10].

¹ Alternatively, we may consider a loss (the opposite of utility) function and seek to minimize it.


2.1 Basic Structure of the Model

We construct a model with two investment instruments (control variables): one to set the level of investment in access configuration, and one to set the level of investment in enforcing this configuration. The investment in each ranges from 1 to 10, providing 100 different experimental runs, with (1,1) representing minimal investment in both instruments, and (10,10) the maximum. We also have the ability to vary the threat environment, although for the purposes of this study we limited ourselves to two scenarios, one ‘mild’ and another ‘full’, which assumes many more internal and external ‘attacks’. There are no assertions about the cost of moving the configuration instrument from, say, 5 to 6, or about whether an investment of 4 is equivalent to a typical investment in enforcement. At this stage, the aim of the model is to explore the kinds of situations in which diminishing returns are reached on a particular investment instrument, and to differentiate strategies based on assumptions about the threat environment.

The basic components of the model are a series of externalities that trigger IAM-relevant processes; each of these processes affects aspects of the configuration and security state that we are interested in tracking.

Each process in the model captures the effect it has on the IAM state. For example, new starters, staff leaving, job and organizational changes, introduction and retirement of applications, and applying automation to provisioning for an application all affect the configuration state, and so are included in the model. Similarly, application introduction, upgrade, retirement and migration projects each affect the overall enforcement state and so are included in the model.

We are interested only in the effect each of the processes has on the state, and so each process is defined only in these terms. For example, this means the ‘new starter’ process does not capture any of the steps in the provisioning workflow; instead, it simply records the expected side effect this process has on the access configuration. Moving the configuration instrument changes the assumed effect this process has each time it runs.

In parallel with the IAM processes are a series of threat processes, such as internal fraud attempts, external hack attempts, former staff accessing applications, and so on. These processes are also triggered by externalities. We assume more attacks will be thwarted by a good IAM state, and this is reflected in the way threat processes are modelled. Essentially, a good configuration and enforcement state will lower the chances that a threat process will succeed.

In the executable model, each of the processes is spawned in parallel, and the relevant probability distributions are randomly sampled to determine when the external triggers fire and cause an instance of a process to run. Each of these affects the state models (configuration, enforcement and incidents), which we can sample to build a picture of what is going on.

There is not sufficient space to describe the model in detail, but, loosely, the investment instruments adjust the way each process affects the relevant state. For example, if there is heavy investment in configuration then we expect the configuration state to be better than if there is little investment.

The architecture of the model is summarized in Figure 1.

3 Simulation Results

Experiments exploring the states reached for each combination of the instruments have been performed for a ‘benign’ threat environment (i.e., threat instruments all set low) and for a ‘high-risk’ environment (i.e., threat instruments all set high). Each execution simulates a year of IAM activity, and we report the state recorded at the end of the year. The model was executed 100 times for each setting of the instruments, which is sufficient to bring the standard error to acceptable levels. Since there are 10×10 instrument combinations, the total number of executions for the two experiments was 20,000.
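To make the shape of such an experiment concrete, the following Python sketch sweeps a 10×10 grid of enforcement and configuration settings, runs repeated simulated years per setting, and averages an incident count and an Equation (3)-style denied-access metric. It is a generic stand-in only: the actual Gnosis/GXM models are not reproduced here, and every functional form and parameter below is invented for illustration.

```python
import random
import statistics

def simulate_year(config_level, enforce_level, high_risk, seed):
    """Stand-in for one process-model run: returns (incidents, denied_accesses, total_accesses).
    The functional forms below are invented; they only mimic the qualitative assumptions
    (better IAM state thwarts more attacks, tighter configuration denies more accesses)."""
    rng = random.Random(seed)
    attack_rate = 200 if high_risk else 40              # external triggers per simulated year
    incidents = sum(
        1 for _ in range(attack_rate)
        if rng.random() > (config_level + enforce_level) / 25.0
    )
    total_accesses = 1_000
    denied = sum(1 for _ in range(total_accesses)
                 if rng.random() < 0.02 * config_level / 10.0)
    return incidents, denied, total_accesses

def run_experiment(high_risk, runs_per_setting=100):
    """Sweep the 10 x 10 instrument grid, averaging incidents (IC) and the denied-access metric (A)."""
    results = {}
    for x in range(1, 11):                # enforcement instrument
        for y in range(1, 11):            # configuration instrument
            ic_samples, a_samples = [], []
            for r in range(runs_per_setting):
                ic, denied, total = simulate_year(y, x, high_risk, seed=hash((x, y, r, high_risk)))
                ic_samples.append(ic)
                a_samples.append(1000 * denied / total)   # Equation (3)-style metric
            results[(x, y)] = (statistics.mean(ic_samples), statistics.mean(a_samples))
    return results

if __name__ == "__main__":
    mild = run_experiment(high_risk=False, runs_per_setting=10)   # small run count for a quick demo
    print(mild[(1, 1)], mild[(10, 10)])
```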

For each setting of the instruments we derived the expected number of breaches and business access, where

BusinessAccess = 1000 × nonaccess(cross) / (nonaccess(cross) + access(tick))    (2)


Figure 1: Basic Components of the IAM Model

For each setting of the instruments, we derive the expected number of incidents/breaches (IC) and denied business accesses (A), where

IC = number-external-attacks-incidents + number-internal-attacks-incidents + number-ex-workers-attacks-incidents

A = 1000 × nonaccess(cross) / (nonaccess(cross) + access(tick))    (3)

The number of breaches/incidents (IC) is determined by modelling internal, external and ex-workers’ attacks and the likelihood of success based on investments in ‘enforcement’. A is calculated by taking into account the proportion of ‘denied accesses’ to legitimate business users from the overall number of accesses.

More empirical work is required to establish an appropriate cost function for the investment instruments (K), but to show how this could proceed we set the cost function as

K = 50 + 2^x + 1.8^y    (4)

where x represents the enforcement instrument, and y the configuration instrument. The (simple) intent here is to capture the exponential cost of achieving more with enforcement or configuration. The constant value represents the fact that there will be operational costs associated with these activities even if there is zero emphasis placed on them.

When these results are graphed they show (fitting with intuition) that it makes no sense to under-invest in either instrument, as the breaches will be too large, but conversely they also show that over-investing will be punished (by the prohibitive cost). It is also interesting to explore the various points of diminishing returns, e.g. when it makes no sense to invest in configuration without first investing in enforcement.
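Continuing the illustrative sketch above, one could combine the simulated expected outcomes with a cost function in the style of Equation (4) and a quadratic deviation-from-target loss in the style of Equations (1) and (5) to locate low-loss settings and points of diminishing returns. The targets, weights, and the 'expected' dictionary below are placeholders only, not values from the paper.

```python
import math

# Hypothetical expected outcomes per instrument setting, e.g. as produced by a sweep
# like the earlier sketch: {(enforcement x, configuration y): (expected IC, expected A)}.
expected = {(x, y): (120 / (x + y), 2.0 * y) for x in range(1, 11) for y in range(1, 11)}  # invented

# Invented targets and preference weights for the loss (minimisation) view of Equation (5).
IC_BAR, A_BAR, K_BAR = 5.0, 10.0, 100.0
W1, W2, W3 = 1.0, 0.5, 0.01

def cost(x, y):
    """Cost function in the style of Equation (4): exponential growth in each instrument."""
    return 50 + 2 ** x + 1.8 ** y

def loss(ic, a, k):
    """Quadratic deviation-from-target loss, the minimisation counterpart of Equation (5)."""
    return W1 * (ic - IC_BAR) ** 2 + W2 * (a - A_BAR) ** 2 + W3 * (k - K_BAR) ** 2

best = min(expected, key=lambda xy: loss(*expected[xy], cost(*xy)))
print("lowest-loss setting (enforcement, configuration):", best)
```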

4 Discussion

Equation 1, in the introduction, describes the utility of a generic dynamic model illustrating the chosen desired trade-off between confidentiality, availability, and investment. Figure 2 expresses the effect of IAM choices as the aggregated weight between incidents, access, and investment, plotted against investments in configuration and enforcement.


Figure 2: Iso-investments with iso-utilities (investment in configuration on the horizontal axis, investment in enforcement on the vertical axis)

The corresponding dynamic utility model would be

U(IC, A, K, t) = w_1 (IC - \bar{IC})^2 + w_2 (A - \bar{A})^2 + w_3 (K - \bar{K})^2    (5)

where IC, A, K and \bar{IC}, \bar{A}, \bar{K} represent, respectively, the actual and target levels of incidents, access and investment, and the w_i express the weighted preferences given to variance from each of the targets; t is time. Here the targets represent levels of the given quantities that are acceptable to the security manager in the context of the organizational priorities and policies. Generally, the aim of such policies is to minimize disruption of the business process whilst maintaining acceptable levels of security and cost.

More generally, each of the assumptions in the construction of the model, and each of the relationships explored through simulation, approximates the actual relationships between IC, A, and K, the incident count, productivity, and investment parameters in Equation 5. For example, the illustration that there are clear points where investing in one instrument is severely handicapped without investment in the other directly feeds the way we assume K affects both IC and A.

We intend to continue to use the systems model of the IAM strategy, to guide parameterization of the dynamic model, and to explore further the relationship and role of these two styles of model-based decision support.

Acknowledgements. We are grateful to Christos Ioannidis, Julian Williams, Brian Monahan, and Matthew Collinson for their advice on various aspects of this work.

References

[1] R. Barro and D. Gordon. A Positive Theory of Monetary Policy in a Natural Rate Model. Journal of Political Economy, 91:589–610, 1983.

[2] A. Beautement, R. Coles, J. Griffin, C. Ioannidis, B. Monahan, D. Pym, A. Sasse, and M. Wonham. Modelling the Human and Technological Costs and Benefits of USB Memory Stick Security. In M. Eric Johnson, editor, Managing Information Risk and the Economics of Security. Springer, 2008. Preliminary version available in Proc. WEIS 2008: http://weis2008.econinfosec.org/papers/Pym.pdf.

[3] Y. Beres, J. Griffin, S. Shiu, M. Heitman, D. Markle, and P. Ventura. Analysing the performance of security solutions to reduce vulnerability exposure window. In Proceedings of the 2008 Annual Computer Security Applications Conference, pages 33–42. IEEE Computer Society Conference Publishing Services (CPS), 2008.

[4] M. Collinson, B. Monahan, and D. Pym. A Logical and Computational Theory of Located Resource. Journal of Logic and Computation, 2009. To appear (preprint available as HP Labs Technical Report HPL-2008-74R1).

[5] Demos2k. http://www.demos2k.org.


[6] M.P. Giannoni and M. Woodford. Optimal Interest-Rate Rules I: General Theory. Working Paper 9419, National Bureau of Economic Research, 2002. ISSN 0898-2937.

[7] C. Ioannidis, D. Pym, and J. Williams. Investments and trade-offs in the economics of information security. In Roger Dingledine and Philippe Golle, editors, Proceedings of Financial Cryptography and Data Security ’09 (to appear). Springer, 2009. Preprint available at http://www.cs.bath.ac.uk/~pym/IoannidisPymWilliams-FC09.pdf.

[8] R.A. Nobay and D.A. Peel. Optimal Discretionary Monetary Policy in a Model of Asymmetric Bank Preferences. Economic Journal, 113(489):657–665, 2003.

[9] Francisco J. Ruge-Murcia. Inflation targeting under asymmetric preferences. Journal of Money, Credit, and Banking, 35(5), 2003.

[10] M. Yearworth, B. Monahan, and D. Pym. Predictive modelling for security operations economics (extended abstract). In Proc. I3P Workshop on the Economics of Securing the Information Infrastructure, 2006. Proceedings at http://wesii.econinfosec.org/workshop/.


Total Cost of Security – A Method for Managing Risks and Incentives Across the Extended Enterprise

Russell Cameron Thomas
Principal, Meritology

1534 Plaza Lane, Suite 306 Burlingame, CA, 94010

1-650-692-2731

[email protected]

ABSTRACT

This is an extended abstract of the presentation of the same title for the Trust Economics Workshop, 2009.

Categories and Subject Descriptors

K.6.0 [Management of Computing and Information Systems]: General – Economics.

General Terms

Management, Measurement, Economics, Security, Theory.

Keywords

Information Risk Management, Cyber Security, Total Cost of Security, Loss Distribution Approach

1. INTRODUCTION

One of the main challenges facing information technology (IT) managers and business executives is how to map security metrics and performance to business metrics and performance. This is necessary to align business goals and investments with security requirements, and to balance risks against costs and rewards. Lack of such metrics has resulted in a persistent disconnection between business decision-makers and security specialists regarding value and risk of information security [1].

Because the benefits of security are the avoidance of uncertain losses, applying traditional cash flow return on investment (ROI) techniques would be inappropriate, confusing, or misleading. Even variations tailored for security (e.g. Return on Security Investment, ROSI [3]) have fundamental problems. Furthermore, the domain is rife with unruly uncertainty (i.e. ambiguity, incomplete information, contradictory information, intractability, unknown-unknowns, etc.) which makes it difficult or impossible to reliably estimate annualized loss expectation (ALE) or other probabilistic estimates of expected losses.

As a solution, I propose a managerial accounting framework called “Total Cost of Security”. (The name alludes to the Total Quality Management and the concept of “Total Cost of Quality”.)

The proposed method has the following advantages over previous methods:

• It is compatible with both Generally Accepted Accounting Practices (GAAP) and modern ERP software packages.

• It is compatible with enterprise risk management (ERM) frameworks.

• It is compatible with economic theories of the firm and rational decision-making with uncertain and incomplete information.

• It provides a general framework for integrating a variety of “ground truth” security metrics into an economically meaningful composite measure.

• It significantly reduces the data collection burden compared to other approaches (e.g. ALE).

• It makes the most of available information and avoids many of the problems of unruly uncertainty.

• It is robust to changing threat, vulnerability, asset, and organization environments.

• It supports a variety of incentive instruments for stakeholders to manage risks better, minimize externalities, and disclose relevant information.

• It is composable, which allows modular analysis of complex organizations and networks both at a component level and at various levels of aggregation.

• It can be extended to include related risks such as privacy, intellectual property protection, and digital rights.

• It is applicable to a wide variety of organizations, including for-profit, not-for-profit, government, and military. It scales well across organization size and structures, including networks of organizations.

2. PREVIOUS METHODS

There have been previous attempts to quantify the risks associated with information security including Return on Investment (ROI), Discounted Cash Flow (DCF), Return on Security Investment (ROSI), and Annualized Loss Expectancy (ALE) and variants. Each of these has severe or fatal limitations when applied to information security risk. Only the ALE method is consistent from an economic perspective. However, it is not widely implemented because of the difficulty of getting enough historical data to estimate probabilities of loss for each incident or loss type. There are other severe problems with the ALE method, including the lack of any way to account for the dependence structure between incident types.


This leads to significant underestimation of “tail risk”. Given the difficulty of quantifying information security risk, many organizations and analysts rely on qualitative risk assessment methods, including the “Frequency vs. Severity” 3X3 Qualitative Matrix (with “High-Medium-Low” values for each dimension). These are easier to produce and are useful for informing some decisions, but they lack the power of quantitative risk measures. In particular, it’s not easy to use them as a basis for incentive instruments and they don’t compose easily.

3. REQUIREMENTS

The requirements for a risk management framework were listed in Section 1, phrased in the form of “advantages”. More technically, it needs to be based on coherent risk measures, with the properties of translation invariance, subadditivity, positive homogeneity, and monotonicity [4]. In addition, there is the requirement to harmonize two perspectives of economic risk. The first perspective is that of the rational investor, who is focused on short-term returns and volatility of returns. Performance is defined as return on investment, and it is determined by the “fat of the curve”, characterized by the mean and variance of return distributions. The second perspective is that of the insurance actuary, who is focused on long-term funding of a pool of risks. Performance is defined as avoiding “ruin” (i.e. paying out more in claims than you take in as premiums), and is determined by the “tail of the curve”, characterized by parameters that quantify the thickness of the probability distribution at extreme values. Unlike previous methods, the Total Cost of Security framework harmonizes these two perspectives on economic risk to support rational decision-making and incentive instruments.
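For reference, the four coherence properties cited from Artzner et al. [4] can be written, for a risk measure ρ on random losses X and Y, as follows (sign conventions differ between the loss-based and payoff-based formulations; the loss-based form is shown here):

```latex
\begin{align*}
\text{Monotonicity:} \quad & X \le Y \;\Rightarrow\; \rho(X) \le \rho(Y) \\
\text{Translation invariance:} \quad & \rho(X + a) = \rho(X) + a \quad \text{for deterministic } a \\
\text{Positive homogeneity:} \quad & \rho(\lambda X) = \lambda\,\rho(X) \quad \text{for } \lambda \ge 0 \\
\text{Subadditivity:} \quad & \rho(X + Y) \le \rho(X) + \rho(Y)
\end{align*}
```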

4. TOTAL COST OF SECURITY FRAMEWORK

This framework is based on the Loss Distribution Approach (LDA) that has become common in Enterprise Risk Management (ERM), pioneered in the financial services industry. The curve in Figure 1 is a forward-looking probability density function for total cost of security for a given period.

Figure 1. Idealized Total Cost of Security Probability Distribution
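To give a rough sense of how such a forward-looking distribution could be produced under the Loss Distribution Approach, the following sketch (not part of this abstract; all distributions and parameter values are invented) simulates annual total cost as fixed direct spend plus a Poisson number of incidents with lognormal severities, and reports the percentiles that the three-region split described below refers to:

```python
import math
import random

def poisson(rng, lam):
    """Simple Poisson sampler (Knuth's method); adequate for small lambda."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= l:
            return k - 1

def simulate_total_cost(rng, direct_spend=500_000, mean_incidents=6.0,
                        severity_mu=10.0, severity_sigma=1.5):
    """One Monte Carlo draw of annual total cost of security (illustrative only):
    fixed budgeted spend plus a Poisson number of incidents with lognormal severities."""
    n_incidents = poisson(rng, mean_incidents)
    losses = sum(rng.lognormvariate(severity_mu, severity_sigma) for _ in range(n_incidents))
    return direct_spend + losses

if __name__ == "__main__":
    rng = random.Random(42)
    draws = sorted(simulate_total_cost(rng) for _ in range(10_000))
    print("median:", round(draws[len(draws) // 2]))
    print("95th percentile:", round(draws[int(0.95 * len(draws))]))
    print("99.5th percentile:", round(draws[int(0.995 * len(draws))]))
```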

It’s a matter of policy what costs to include or exclude in the Total Cost of Security. The framework is intended to be broad and inclusive, and it can include:

• Direct costs of information security (personnel, security-specific operating and capital expenses, professional services, security training and awareness programs, security measurement and management costs, etc.)

• Indirect costs of information security, allocated proportionately (IT help desk, configuration management, patch management, etc.)

• Direct costs of security breaches, intrusions, losses, and recovery (discovery, damage control, emergency response, system restoration, penalties and/or fines, etc.)

• Indirect costs of security breaches, intrusions, losses, and recovery, including revenue impact, reputation damage, etc.

Our first innovation is to divide security-related or cyber trust costs into three categories: “Budgeted”, “Self-insured”, and “Catastrophic” (Figure 1). Basically, this approach divides the aggregate cost probability distribution into three sections. The fat part of the curve near the mean is "budgeted". The tail section up to some threshold (95% or 99%) is "self-insured". The very far end of the tail is "catastrophic". It’s all-inclusive in that any given incident type, vulnerability type, or threat type could contribute costs into any or all of these categories.

• "Budgeted" region is the “fat” part of the curve that includes costs that are predictable and likely within the budget year. This includes all direct spending on security, plus indirect costs, plus the expected value of all high frequency losses and some small mix of lower frequency losses. It also includes the opportunity costs – business activities that are prevented or inhibited by security.

• "Self-insured" region covers loss magnitudes are potentially big enough to bust the budget (i.e. material to quarterly earnings statements), or could get the firm on the front page of a national newspaper, or could even threaten the firm’s credit rating, but not necessarily threaten firm survival. These losses are low probability, but not close to zero.

• "Catastrophic" region covers the most extreme loss values that are very unlikely and/or very unpredictable, but could threaten firm survival or even more widespread systemic losses. This includes most or all “doomsday” scenarios.

Our second innovation is the treatment of indirect costs, especially indirect costs of security incidents. We advocate a general method of valuation called “Expected Cost of Recovery” – the anticipated cost of restoring the information systems, data, business processes, and business relationships to their previous level of capability and performance. This is more conservative and reliable than other measures which try to estimate the lost business value due to the security incidents, including decline in stock prices and other stakeholder value metrics.


5. TCoS Risk Measure

The general formula for TCoS (short for Total Cost of Security, pronounced “TEE-koss”) is summarized by the following equation:

TCoS = B + SI + C, where

• TCoS is the Total Cost of Security risk measure

• B is the budgeted security costs and losses for the period (i.e. median costs, or within a margin of the median),

• SI is the self-insurance premiums to cover low probability-high impact losses, and

• C is the costs of business continuity to prepare for catastrophic scenarios, allocated according to information security causes and effects.

In plain language, TCoS starts with expected spending on security and security-related costs (losses, etc.) that are reflected in an organization’s budget. Then you add the cost of insurance premiums to cover low probability-high impact losses, but below the level of catastrophe. (Nearly all organizations will carry this risk rather than transfer it, so we call it “self-insurance”.) Finally, you add in the cost of business continuity allocated to information security. Once these three components are added, the result is a TCoS in current dollars for the next time period. A stream of TCoS values over multiple periods can be treated like ordinary cash flows in the standard Discounted Cash Flow (DCF) method. The discount rate, a critical parameter, is very easy to specify – it is the firm’s weighted-average cost of capital, or in other contexts, the risk-free rate. (In ordinary capital budgeting analysis, the discount rate in DCF is adjusted to match the riskiness of the project. “Riskiness of the project” is a tortured concept in the information security context.)
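A minimal numeric sketch of the TCoS = B + SI + C decomposition and the DCF treatment is given below. The abstract does not specify how the self-insurance premium is computed, so the percentile threshold, the premium loading factor, and the continuity cost figure are all assumptions made purely for illustration.

```python
import statistics

def tcos_components(draws, self_insured_q=0.99, premium_loading=1.2, continuity_cost=150_000):
    """Illustrative decomposition of simulated annual cost draws into the three TCoS parts.
    The threshold and the loading factor are assumptions, not the paper's prescribed method."""
    draws = sorted(draws)
    n = len(draws)
    budgeted = statistics.median(draws)                    # "fat" part of the curve, near the median
    cutoff = draws[int(self_insured_q * n)]                # e.g. the 99th percentile
    tail = [x - budgeted for x in draws if budgeted < x <= cutoff]
    expected_tail_loss = sum(tail) / n                     # expected excess loss in the self-insured band
    self_insurance = premium_loading * expected_tail_loss  # notional self-insurance premium
    return budgeted, self_insurance, continuity_cost

def discounted_tcos(tcos_stream, wacc=0.10):
    """Treat a stream of per-period TCoS values like cash flows and discount at the firm's WACC."""
    return sum(t / (1 + wacc) ** (i + 1) for i, t in enumerate(tcos_stream))

if __name__ == "__main__":
    # 'draws' could come from a loss-distribution simulation such as the earlier sketch.
    draws = [520_000 + 1_000 * i for i in range(1_000)]    # placeholder data
    b, si, c = tcos_components(draws)
    tcos = b + si + c
    print("TCoS =", round(tcos), " NPV over 3 years =", round(discounted_tcos([tcos] * 3)))
```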

5.1 Decision Criteria

The most general decision criterion can be simply stated:

• “Minimize TCoS while meeting other business objectives”

It’s also possible to integrate TCoS into ordinary return on investment calculations to get a risk-adjusted return for various business opportunities or investments (e.g. outsourcing a business function, implementing a new intellectual property licensing revenue model for on-line media, etc.) that have significant information security implications. In addition to this general decision criterion, TCoS can inform more complicated decisions and has well-defined methods of composition (i.e. combining TCoS measures from different organization units into a composite measure for the entire organization) using portfolio theory, and also risk budgeting (allocating and prioritizing incentives and constraints to guide business unit managers). Details are outside the scope of this presentation.

5.2 Estimation Methods

Of course, the success of this or any other risk measurement method depends on our ability to estimate the relevant probability distribution curves. If no such method is feasible, either in theory or in practice, then the method should be rejected. In the proposed Total Cost of Security framework, these are still open research questions.

In this presentation I propose a set of methods that seem feasible, or at least promising. (It’s important to note that the Total Cost of Security framework does not depend on any particular estimation or modeling method.) Rather than use a single estimation method for the whole curve (as in the DCF and ALE methods), I propose a piece-wise approach. The probability distribution is then assembled from the pieces. Though each set of methods is different, they can draw from similar data: operational security metrics (a.k.a. “ground truth”), business process metrics, expert opinion, historical data of incidents and losses, and estimates of asset value and other values at risk.

• The “Budgeted” region would be estimated using fairly conventional cost-driver models (i.e. linear relationships between operational metrics and indirect or overhead costs, etc.) and data drawn from accounting information systems.

• The “Self-insured” region would be modeled using rank order or order-of-magnitude approaches, possibly combining stochastic methods with inferential reasoning.

• The “Catastrophic” region would be modeled using scenario analysis and ordinal or nominal scales. Here, the precision of the cost estimate is much less important than its qualitative value in guiding strategy and business continuity planning, for example.

An illustrative example is given for estimating self-insurance costs of data breaches for a mid-sized retailer (13 million credit card records). Source data could include statistics about the IT architecture and operations, security metrics, the company’s breach history, industry surveys and data breach databases, threat models, and business process models. It may be possible to estimate the “Self-insured” region using methods such as Bayesian Networks, the Delphi Method, Predictive Modeling, and Monte Carlo Simulation. This is an open research question. Another illustrative example is given for how TCoS could be used to define incentive instruments in the extended enterprise for the same retailer, focusing on card payment processing. The incentive instruments do not need to be linked to the complete TCoS metric for each party. Instead, contingent payments, pooling, and other incentives can be tied to thresholds and limits for TCoS or its components. There will be opportunities for third parties to support incentive instruments, including risk rating agencies and insurance companies, using facilities such as parametric (indexed) insurance [5] and finite risk insurance [6].

6. RESEARCH RESULTS

Theoretical research on the Total Cost of Security framework and TCoS risk measure is in the very early stages. We are working toward the following theoretical results:

1. Demonstrating that TCoS is a coherent risk measure.

2. Demonstrating that it is feasible to derive a stable, acceptable estimate of the “Budgeted” region of the Total Cost of Security distribution curve using cost driver methods from Activity-Based Costing, plus a formal bargaining game for cost sharing among (competing) stakeholders.

3. Proposing an approach to estimating the “Self-insured” region of the Total Cost of Security distribution curve using a pluralistic competition between diverse models. This method remains to be tested and validated.

4. Using similar methods to #2, demonstrating a method to segment TCoS and its components into three sub-components: “internally-driven”, “partner-driven”, and “externally-driven”. These sub-components can serve as the basis for risk pooling, insurance, cap-and-trade, or other incentive-based mechanisms.

7. DISCUSSION

Of course, confidence in this whole proposal depends on empirical research and on whether available data sets can be used usefully to estimate TCoS. Our claim at this stage of research is that the framework is promising and seems to be viable from a theoretical perspective.

One of the advantages of the proposed Total Cost of Security framework is that it can incorporate any type of information security risk or, more broadly, cyber trust which includes privacy, intellectual property protection, and digital rights management.

It is also flexible enough to handle a wide range of risk profiles. In cases where the Total Cost of Security distribution curve happens to be a normal distribution with relatively modest variance, it would all fall into the "budgeted" category, and thus could be managed using traditional budget and cash flow methods. On the other hand, if the loss distribution has a "fat tail", then the three-part approach becomes very useful to distinguish between what we know with confidence and what we know with less confidence or don't know at all.

The framework makes the most of existing information, aligns with decision-making processes, and avoids the problem of conflating reliable and unreliable estimates. It draws on innovations from Enterprise Risk Management, Activity-based Costing, and qualitative reasoning. The approach is roughly analogous to the Total Cost of Quality concept that helped motivate the Total Quality Management movement. In addition to helping with security cost and performance management, this approach highlights the importance of organization learning and discovery.

Another advantage is that it is compatible with existing methods for enterprise investment and performance management, including “Risk-adjusted Return on Capital” (RAROC) in financial services and “Economic Value-added” (EVA) across various industries. In essence, “self-insurance” adds to the capital required by a project or business unit. Higher levels of information risk mean a larger “self-insurance” pool is required, which lowers return on capital, and vice versa.

It may be possible to standardize these methods within industries and organization types to allow, for the first time, meaningful aggregation of cyber trust cost information to guide government policy and vendor product development decisions. It would also allow meaningful public disclosure of cyber trust risks and risk tolerance in stakeholder reports and regulatory filings.

8. ACKNOWLEDGMENTS

My thanks to Patrick Amon, Bob Austin, Sean Barnum, Jean Camp, Fred Cohen, Eric Dalci, John Delaney, Naomi Fine, Dan Geer, Alex Hutton, Jack Jones, Georgiy Bobashev, Ray Kaplan, John Nye, Elizabeth Nichols, Brent Rowe, and Diglio Simoni for their ideas, support, feedback, and suggestions. Additional thanks go to the members of Securitymetrics.org for their comments, suggestions, and feedback.

9. REFERENCES

[1] Conference Board 2006. Navigating Risk—The Business Case for Security. http://www.conference-board.org/publications/describe.cfm?id=1231

[2] Tuck School of Business – Glassmeyer/McNamee Center for Digital Strategies 2006. Embedding Information Security Risk Management in the Extended Enterprise (Workshop). http://mba.tuck.dartmouth.edu/digital/Programs/CorporateEvents/CIO_RiskManage/Overview.pdf

[3] Berinato, S. 2002. Calculated Risk - Guide to determining security ROI. CSO Magazine. http://www.csoonline.com/article/217727/Calculated_Risk_Return_on_Security_Investment

[4] Artzner, P., Delbaen, F., Eber, J.M., Heath, D. 1999. Coherent measures of risk. Math. Finance 9(3), 203–228.

[5] Skees, J. et al. 2007. “Scaling Up Index Insurance”. Microinsurance Centre, LLC. http://www.microinsurancecentre.org/UploadDocuments/080911a%20Scaling%20Up%20Index%20Insurance%20Final.pdf

[6] Leavitt, R. and Anderson, M. 2008. Finite Risk Insurance: A New Product Based on an Old Standard. http://www.wgains.com/Assets/WhitePapers/finiterisk701.pdf


How Secure Is Your Password? Towards Modelling Human Password Creation

Beate Grawemeyer and Hilary Johnson

Human Computer Interaction Group, Department of Computer Science
University of Bath, Bath, BA2 7AY, UK
{b.grawemeyer,h.johnson}@bath.ac.uk

Abstract. There is a need to devise security policies that protect information from unauthorised access, as well as being responsive to requirements regarding users’ authentication behaviour. This paper contributes by investigating real-life password generation within the context of daily life. It presents the results of an empirical study, where participants completed a password diary over seven days. Factors relevant for profiling different user roles are identified. The paper focuses on a subgroup of participants, undertaking the role of non-IT administrative support staff, and investigates participants’ password generation behaviour. Results indicate that users’ knowledge of information security influences how they create passwords. Possible changes in security policies that address these individual differences are discussed.

1 Introduction

Passwords support system security, but are vulnerable to attack. Thus, there is a need for security policies in the creation of passwords. However, this can sometimes be counterproductive. Research on the effect of security policies upon user behaviour [1] shows that certain policies, enforced to increase security, resulted in less and not more security. For example, enforcement of frequent password change might lead the user to write passwords down. Hence more effective security policies are needed which not only help to ensure that passwords are resilient to attacks, but also take into account user behaviour in relation to passwords.

Our research aims to investigate how people use passwords and authenticate with respect to relevant services in their daily lives, and how to model human authentication behaviour. A password diary study was conducted, intended to provide a record of what happens in reality regarding individual users. This paper aims to identify factors that are relevant for profiling different user roles. It focuses on a subgroup of participants - non-IT administrative support staff - for example, an undergraduate administrator or a finance secretary. The paper looks at this group’s behaviour towards password generation. A high-level aim is to integrate this work on modelling human authentication behaviour with work on model-based design of password security (see, for example, [2]).

2 Password diary study

The research aim is to investigate how people use passwords in their daily lives and identify areas that would benefit from changes in either security policy or user behaviour. The study aims to show the difference in attitude towards passwords and authentication behaviour with regards to an individual’s role.


Twenty-two participants were recruited from the computer science department at the University of Bath and HP Labs in Bristol: 6 non-IT administrative support staff, 4 researchers, 1 lecturer, 1 systems engineer, 7 PhD students and 3 MSc students (13 female/9 male).

Participants were asked to complete a password diary over a seven-day period. The diary consisted of a form that included detailed questions about the authentication process. After participants completed their password diary, a debrief was conducted, in which information about the passwords and their use was recorded. The information gathered included details about: the service used for authentication; participants’ perception of the security-sensitivity of the service; the structure of the password; whether passwords were shared between services; whether passwords were known by a third person; how the password was generated; participants’ estimated level of security of the password; how it was remembered; how often it was used; and finally, how often it was changed. A detailed description of the setup of the password diary study can be found in [3].

3 Results

This paper looks at participants’ behaviour in password generation, gathered through the debrief session only. It focuses on a subgroup of participants - the non-IT administrative support staff (which, from here, we refer to as the ‘admin support group’) - in order to identify factors that are relevant for profiling different user roles when modelling human authentication behaviour. A complete analysis of the password diary study, which includes participants’ authentication behaviour relating to different roles, is given in [3].

3.1 Services

Overall, the admin support group (6 participants) reported 36 passwords, which were used for authentication to various services over the seven-day period (mean=6, min=4, max=8, SD=1.41). Most of these services (42%) involved authentication to a company-internal service. This was followed by services (30%) that required authentication to an external service (e.g. bank, e-commerce). Finally, 28% of the authentications were logins to, or the unlocking of, a computer or domain.

3.2 Passwords

The passwords that the admin support group generated included: passwords that contain a single word (44%), a common word or name (20%), a variation on a meaningful phrase (12%), a number pattern (9%), a meaningful phrase (6%), some other meaningful pattern (6%), and randomly generated passwords (3%). These passwords were sometimes strengthened, either by appending a numeric suffix or prefix (58%) or by replacing/inserting letters with symbols (22%).
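As an illustration only (not tooling used in the study), the two reported strengthening techniques can be sketched as follows; the substitution table and the example passwords are hypothetical.

    # Illustrative sketch of the two reported strengthening techniques:
    # appending a numeric suffix/prefix and replacing letters with symbols.
    SUBSTITUTIONS = str.maketrans({"a": "@", "s": "$", "o": "0", "e": "3"})

    def append_numeric_suffix(base: str, suffix: str = "1") -> str:
        # Technique reported by 58% of the group (suffix or prefix).
        return base + suffix

    def substitute_symbols(base: str) -> str:
        # Technique reported by 22% of the group.
        return base.translate(SUBSTITUTIONS)

    print(append_numeric_suffix("holiday"))   # -> holiday1
    print(substitute_symbols("password"))     # -> p@$$w0rd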

3.3 Security perception on services and passwords

Most services (59%) which required password authentication were perceived to be ‘very sensitive’. This was followed by services perceived as ‘fairly sensitive’ (23%) and ‘sensitive’ (18%).


The passwords were rated mostly as ‘secure’ (hard to crack), 50%; ‘fairly secure’, 23%; ‘highly secure’ (uncrackable), 12%; ‘not very secure’ (crackable), 12%; and ‘insecure’ (easy to guess), 3%.

A bivariate correlation analysis between the security perception of the service and the estimated security of the passwords revealed the following significant correlations:

– A significant negative correlation between a service perceived as very sensitive and passwords perceived as fairly secure (r=-.50, p<.01).

– A significant positive correlation between a service perceived as fairly sensitive and passwords perceived as fairly secure (r=.84, p<.01).

– A significant negative correlation between a service perceived as fairly sensitive and passwords perceived as secure (r=-.51, p<.01).

– A significant positive correlation between a service perceived as not very sensitive and passwords perceived as insecure (r=.56, p<.01).

The results suggest that participants tried to match the perceived security level of the service to the estimated security level of their passwords: for example, matching a not very sensitive service with an insecure password, and a fairly sensitive service with a fairly secure password, while avoiding matching a very sensitive service with only a fairly secure password.
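For readers wishing to reproduce this style of analysis on their own data, the bivariate (Pearson) correlations reported above can be computed along the following lines, assuming each perception category is dummy-coded as 0/1 per password. The data below is invented purely for illustration and is not the study’s data.

    # Illustrative Pearson correlation between dummy-coded perceptions
    # (invented data, not the study's data).
    from scipy.stats import pearsonr

    service_fairly_sensitive = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]
    password_fairly_secure   = [1, 0, 1, 0, 0, 1, 0, 0, 0, 1]

    r, p = pearsonr(service_fairly_sensitive, password_fairly_secure)
    print(f"r = {r:.2f}, p = {p:.3f}")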

Fig. 1. Estimated security level and password nature within admin support group

Looking in more detail at the admin support group participants’ passwords and estimated security levels, Figure 1 shows, interestingly, that all of the passwords estimated as highly secure (uncrackable), and most of the passwords regarded as secure (hard to crack), are passwords that contain a single word. Additionally, most of the passwords estimated as fairly secure are passwords that include a common word or name. However, surprisingly, both these types of password (single word and common word or name) are among those which are insecure and easy to guess.


The bivariate correlation analysis between participants’ estimated security level and the nature of the passwords showed the following significant correlations:

– A significant positive correlation between passwords perceived as highly secure and passwords that contain a single word (r=.41, p<.05).

– A significant negative correlation between passwords perceived as secure and passwords that contain a common word or name (r=-.51, p<.01).

– A significant positive correlation between passwords perceived as fairly secure and passwords that contain a common word or name (r=.58, p<.01).

– A significant positive correlation between passwords perceived as insecure and passwords that entail a number pattern (r=.56, p<.01).

The results suggest that participants within the admin support group might have misconceptions regarding the relationship between the nature of passwords and their security level, including the misconception that passwords which contain a single word are highly secure, or that passwords which consist of a number pattern are insecure. In fact, single-word passwords are insecure, whereas number-pattern passwords are comparatively secure.

4 Discussion and Conclusion

The results indicate that the admin support group tried to match the perceived security level of the service to the estimated security level of the password. However, there appears to be a misconception about the relationship between the nature of passwords and their security level. It appears that the admin support group lacks knowledge about certain password-cracking methods, such as dictionary attacks, which make a password consisting of a single word insecure [4]. Security policies could help to overcome such misconceptions by giving advice on how to generate secure passwords, such as how to create a variation on a meaningful phrase, and/or by highlighting the security implications of insecure passwords.
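To illustrate why a single dictionary word offers little protection, a dictionary attack only has to try each word in a word list, optionally undoing the common suffix and symbol-substitution tricks noted in Section 3.2. The sketch below is ours and is not taken from [4]; the word list and example password are hypothetical.

    # Illustrative dictionary attack: strip a numeric suffix, undo common
    # symbol substitutions, then check the result against a word list.
    from typing import Optional

    COMMON_SUBS = str.maketrans("@$03", "asoe")

    def dictionary_attack(target: str, wordlist: set) -> Optional[str]:
        candidate = target.rstrip("0123456789")       # remove numeric suffix
        candidate = candidate.translate(COMMON_SUBS)  # undo '@'->'a', '$'->'s', etc.
        return candidate if candidate in wordlist else None

    print(dictionary_attack("p@$$w0rd1", {"holiday", "password", "sunshine"}))  # -> password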

The next step in this research is to use Task Knowledge Structures [5] to model human authentication behaviour, including the activity of generating passwords. These models can then be taken into account in the design of new security policies that protect information from unauthorised access while remaining responsive to individuals’ needs, by assisting in overcoming security misconceptions.

References

1. Adams, A., Sasse, M. A.: Users are not the enemy. Communications of the ACM, 42(12), 40–46 (1999)

2. Casassa Mont, M., Baldwin, A., Griffin, J., Shiu, S., Beres, J.: Identity Analytics: Using Modeling and Simulation to Improve Data Security Decision Making. HP Laboratories, HPL-2008-118 (2008)

3. Grawemeyer, B., Johnson, H.: ‘A week to a view’: Empirical results of a password diary study. To be submitted to: Interacting with Computers (2009)

4. Yan, J., Blackwell, A., Anderson, R., Grant, A.: Password Memorability and Security: Empirical Results. IEEE Security & Privacy, 2(5), 25–31 (2004)

5. Johnson, H., Hyde, J.: Towards Modeling Individual and Collaborative Construction of Jigsaws Using Task Knowledge Structures (TKS). ACM Transactions on Computer-Human Interaction, 10(4), 339–387 (2003)
