

Usability and Security

Designers of security-sensitive software applications sometimes speak of a trade-off between achieving strong security and making software easy to use. When we look for ways to adjust an existing design, usability improvements seem to yield more easily compromised software, and adding security measures seems to make software tedious to use or hard to understand. Yet designers cannot afford to neglect either—both security and usability failures can render a product useless.

Conflicts between security and usability can often be avoided by taking a different approach to security in the design process and the design itself. Every design problem involves trading off many factors, but the most successful designs find ways to achieve multiple goals simultaneously. To this end, in this article I discuss when and how we can bring security and usability into alignment through these main points:

• Security and usability elements can't be sprinkled on a product like magic pixie dust. We must incorporate both goals throughout the design process. When security and usability are no longer treated as add-ons, we can design them together to avoid conflicts.

• We can view security and usability as aspects of a common goal: fulfilling user expectations. This involves maintaining agreement between a system's security state and the user's mental model, both of which change over time.

• An essential technique for aligning security and usability is incorporating security decisions into the users' workflow by inferring authorization from acts of designation that are already part of their primary task.

I apply these points to three classes of everyday security problems: worms, cookie management, and phishing attacks.

Scope

Two assumptions frame my discussion. First, I assume a known user, setting aside the problem of authenticating the user. This article will not contribute any insights on authentication.

Second, I assume that the parties we intend to serve have a mutually understood framework of acceptable behavior. (Malicious parties might attempt unacceptable behavior, but we are not interested in serving their needs.) For example, if music distributors wish to impose copying restrictions that music listeners find unreasonable, then software designers cannot serve both distributors and listeners without compromising. In such a situation, the conflict stems not from usability issues but from disagreements between people about policy, which is a different problem. Resolving such disputes is outside the scope of this discussion.

Conflicts in the design process

To understand how security and usability come into conflict during software design, it might be helpful to look at this tension from both points of view.

Pseudosecurity harms usability

As a security practitioner, you might have been asked to take a nearly complete product and make it more secure. You then know firsthand how difficult and ineffective it is to try to add on security at the last minute. Although we

Aligning Security and Usability
Ka-Ping Yee, University of California, Berkeley

48 PUBLISHED BY THE IEEE COMPUTER SOCIETY ■ 1540-7993/04/$20.00 © 2004 IEEE ■ IEEE SECURITY & PRIVACY

Conflicts between security and usability goals can be avoided by considering the goals together throughout an iterative design process. A successful design involves addressing users' expectations and inferring authorization based on their acts of designation.



can find some types of bugs in a code review, real security is a deeper property of the whole design. John Viega and Gary McGraw wrote, "Bolting security onto an existing system is simply a bad idea. Security is not a feature you can add to a system at any time."1

What happens when people try to bolt on security instead of designing it into a system from the ground up? Among other things, usability suffers. After-the-fact security fixes often add configuration settings and prompts. But extra settings and prompts aren't very effective at solving security problems—they just make it easier to blame user error when something goes wrong.

Pseudousability harms security

If you're a usability practitioner, you might have had to take a nearly complete product and make it more usable. Don Norman described this mistake: "[The] assumption that user experience is just another add-on is pretty consistent across the industry."2 Thus, you might know how difficult and ineffective it is to try to add on usability late in the development process. Good usability engineering requires understanding user needs and incorporating the appropriate features throughout the design process, not tacking on superficial features such as flashy widgets, animations, or skins.

What happens when people try to bolt on usability instead of designing it into a system from the ground up? Among other things, security suffers. Usability quick fixes sometimes involve hiding security-related decisions from the user or choosing lax default settings. Flashy, nonstandard interfaces can also decrease a product's security by increasing complexity or confusing users.

Integrated iterative design

Both the security and usability communities have advocated iterative development processes based on repeated analysis, design, and evaluation cycles, rather than linear processes in which security or usability testing occurs at the end. Although many teams have adopted iterative processes, few seem to incorporate security and usability throughout. Not only is it important to examine these issues early and often, it is vital to design the user interface and security measures together. Iterating offers the opportunity to see how security and usability decisions affect each other—in fact, separating security engineering from usability engineering virtually guarantees that conflicts will arise.

Conflicts during use

At first glance, the source of conflict might appear obvious: security usually aims to make operations harder to do, while usability aims to make operations easier. However, it's more precise to say that security restricts access to operations that have undesirable results, whereas usability improves access to operations that have desirable results.

Users designate which results are desirable with their actions in the user interface. When a system accurately interprets these actions, security and usability do not conflict, and the problem becomes merely a matter of functional correctness. Thus, conflicts between the two goals arise when a system lacks the information to determine whether a particular result is desirable.

Presenting security to users as a secondary task promotes conflict. People typically use computers for purposes other than security, such as communicating with friends, using online services, managing time and money, composing and editing creative work, and so on. Asking users to take extra security steps is problematic because the extra steps either interrupt their workflow or end up hidden in configuration panels. Interrupting users with prompts presents security decisions in a terrible context: it teaches users that security issues obstruct their main task and trains them to dismiss prompts quickly and carelessly.

Both arguments suggest that we can help bring security and usability into alignment by seeking ways to extract and use as much accurate information as possible from a user's normal interactions with the interface. As the following examples will illustrate, the tediousness of security features sometimes comes from throwing away that information. The more information about security expectations we can determine from the actions that the user makes naturally in carrying out a primary task, the less we need to insert secondary security tasks.

Mental models

Although software components are often casually labeled trusted, trustworthiness is not a black-and-white issue. The word trusted is meaningless without answers to the questions, "Trusted by whom?" and "Trusted to do what?" We cannot define security policies without asking, "Secure from whom?" and "Secure against what?"3 Attempting to classify all programs as simply trusted or untrusted is not helpful, yet some security experts continue to think along such lines.4 The missing component in this common and dangerous oversimplification is the mental model.

Simson Garfinkel and Gene Spafford wrote that "a computer is secure if you can depend on it and its software to behave as you expect."5 Fulfilling expectations is a matter of keeping behavior and expectations in agreement, and the users' expectations are based on their mental model of the system. Both the security policy and mental model are dynamic; they change in response to user actions.

The basic question of who can do what suggests a framework for mental models based on actors and abilities.6

Actors are entities that the user perceives as independent agents—for example, application programs or other users. At any given moment, each actor possesses a set of abilities—potential actions that can affect the user. (Actors and abilities are analogous to principals and capabilities but are parts of the user's mental model, not the implementation.) The actor–ability state in the user's mental model should accurately bound the actual state of access rights in the system at all times. To use this framework, we perform usability studies to determine which actors and abilities are present in the mental model and how users expect their interactions with the computer to change the actor–ability state.
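The actor–ability framework can be sketched in code. The following is a minimal illustration (all names are hypothetical, not from the article): each perceived actor holds a set of abilities, and the system's actual access rights should always stay within what the user's mental model allows.

```python
class Actor:
    """An entity the user perceives as an independent agent."""
    def __init__(self, name):
        self.name = name
        self.abilities = set()   # potential actions that can affect the user

    def grant(self, ability):
        self.abilities.add(ability)

    def revoke(self, ability):
        self.abilities.discard(ability)


def state_is_consistent(mental_model, system_rights):
    """The actor-ability state in the mental model should bound the
    actual access rights in the system at all times."""
    return all(rights <= mental_model[name].abilities
               for name, rights in system_rights.items())


# Example: the user believes the mail client can only read its own files.
mail = Actor("mail client")
mail.grant("read:own-files")
model = {"mail client": mail}

assert state_is_consistent(model, {"mail client": {"read:own-files"}})
# If the system silently also allows network access, the model is violated:
assert not state_is_consistent(model,
                               {"mail client": {"read:own-files", "net:send"}})
```

The check captures the article's requirement that the mental model *bound* the system state: a mismatch in the other direction (user believes an ability exists that the system denies) is a usability problem, not a security one.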

Admonition and designation

Consider a software component A that a user trusts to constrain the effects of another component B, which affects the user's world by making requests through A. For example, A might be an operating system and B might be an application running on it, or A might be a Web browser and B might be a script or plug-in on a Web page.

Figure 1 shows abstract diagrams of the space of possible actions. In Figure 1a, the red box is a static, pre-established security policy intended to limit B to a set of safe actions in the vicinity of the green region, which represents the actions the user considers acceptable. Because the policy is only finitely detailed, it only roughly approximates the exact set of abilities that the user wants B to have. Because the policy is static, it must also accommodate the various ways that we might want to use B in other situations. Thus, the red box includes large areas not in the green region. (Windows and Unix systems are extreme examples; applications run with the user's full privilege, so an enormous discrepancy exists between the allowed set and the acceptable set of abilities.)

Security by admonition

What happens when B attempts an action unacceptable to the user? One strategy is to help the user avoid letting B exercise the action by explaining what will happen or intervening with a request for confirmation. This approach is security by admonition. Unfortunately, the computer doesn't know exactly what the user considers acceptable, so it can't know when to warn. It can guess what most users would consider acceptable and trade off the inconvenience of prompting against the magnitude of potential damage. Nonetheless, there are bound to be false positives and negatives. Bad prompts make users wonder, "Why is the computer asking if I want this? I already told it to do this." The larger the discrepancy between attemptable and acceptable actions, the more severely we are forced to compromise between security and usability. In other words, violating the principle of least privilege7 forces security and usability into conflict.

Security by designation

Figure 1b acknowledges that the security policy and user expectations change over time and depicts a different strategy: security by designation. In this approach, B starts with a minimal set of abilities. When the user wants B to do various things, the user's actions simultaneously express the command and the extension of authority. As before, the solid arrows are interactions with B's user interface. The dotted arrows are interactions with A's user interface that simultaneously grant additional authority to B while conveying the user's intended command to B. We thereby maintain a close match between the allowed set and the acceptable set of abilities without asking users to establish a detailed security policy beforehand or express their intentions twice.

Software systems employ a mix of these two strategies. For instance, launching an application is a case of designation; users don't need to select an application and then separately grant CPU time to the newly created process because a single action accomplishes both. Sending an email attachment is another example; users don't have to select a file to attach and then separately grant read permissions to the message's recipient because attaching the file conveys both designation and authorization. Security warnings and prompts, including any questions beginning with, "Are you sure...?," or, "Do you want to allow...?," are examples of admonition.


Figure 1. Space of possible actions. (a) Security by admonition: the red rectangle delineates the actions that B can request through A. The green curvy region shows the set of actions acceptable to the user in a particular situation. The black dots represent the actions that B exercises. The solid arrows are the user's interactions with B's user interface that motivate B to take those actions. (b) Security by designation: the dotted arrows are interactions with A's user interface that simultaneously grant additional authority to B while conveying the user's intended command to B.



Security by designation is convenient and straightforward because it embodies the aforementioned ideal of integrating security decisions with the user's primary task. Security by admonition, on the other hand, demands the user's attention to a secondary source of information. With designation, the user grants authority, so the user has the necessary context to know why it should be granted. With admonition, the program initiates the request for authority, so the user might not have the context to decide whether to grant it. Therefore, we should use security by designation whenever feasible.

Implementation

To implement the designation strategy, we have to find the act of designation on which to hang the security decision. When tempted to ask the user, "Is action X acceptable?," we instead ask ourselves, "How was action X specified?" The action might have been conveyed through various software components, so we must follow it back to its origin. When the origin is a user interface interaction, we lift that interaction into a software layer that the user trusts to handle the relevant authority (for example, from B into A) and convey the authority together with the designation of the action.

However, security by designation might not be feasible for various reasons. The authority-granting action might be inaccessible, interoperating with other systems might make the designation of actions untrustworthy, or the effort required to support finer-grained control might be too great. In these cases, we could be forced to fall back on security by admonition.

Admonition is required whenever the user is likely to grant an actor the ability to do something that the user doesn't want. This might happen due to user error, because the harmful action might not be technically preventable, or because the security primitives are too coarse to distinguish what the user wants. When using admonition, be careful about leaping from stating factual consequences to passing judgment on whether an action is good or bad. Information about an action's outcome can be displayed neutrally without interrupting the user's workflow. Intervening with a warning prompt is disruptive and likely to cause frustration. For example, opening any attachment causes Microsoft Outlook to display a warning about the possibility of viruses. The warning offers two choices: to open the file now or save it to disk, whereupon presumably the user will go and open it anyway. Issuing such warnings too frequently reduces the user's trust in admonitions, decreasing their effectiveness in critical situations. Thus, admonition should strive to inform, not interrupt.

Admonition and designation are not mutually exclusive strategies, but admonition should be avoided when security by designation is acceptable.

Design problems

To make these ideas more concrete, let's apply them to some security problems that computer users commonly experience. We will examine how to adjust a design to implement security by designation and how to know when it's necessary to fall back to security by admonition.

Viruses and worms

Self-propagating email attachments have caused widespread havoc in the past few years. Some of them exploit software bugs in email clients. Fixing software bugs is an important step toward eradicating these worms, but bugs are not the whole story. Many email worms, such as MyDoom, Netsky, and Sobig, rely on humans to activate them by opening executable attachments; they would continue to run rampant even with bug-free software.

Viewing and executing. On Windows and Mac OS machines, the act of double-clicking opens documents and launches applications. Documents are usually inert, whereas launching an application grants it complete access to a user's account. Thus, by double-clicking on an attachment, users who intend only to read it instead hand over control of their computer to a nasty worm.

A missing piece of information here is the choice between viewing and executing. The user is given no way to communicate this choice, so the computer has to guess, with potentially dangerous consequences. A poor solution would be to patch this guess by asking the user, "Do you really want to start this application?," every time he or she double-clicks on an application icon. To find a better solution, we must consider how users specify their intent to run a new application.

Let's look at what users already do when they install applications. On a Mac, users install most applications by dropping them into the Applications folder. Suppose that double-clicking icons in the Applications folder launched them, but double-clicking icons elsewhere would only passively view them. This would stop the main propagation method of worms attached to emails with virtually no loss of usability—Mac users usually run applications from the Applications folder anyway. As for Windows users, dragging a single icon is simpler than running a multistep application installer, so switching to a drag-and-drop style of installation would simultaneously improve security and usability.
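The proposed rule is simple enough to state as code. Here is a rough sketch, assuming a Mac-style filesystem layout (the function name and paths are illustrative, not part of any real OS API): double-clicking launches an icon only if it lives under the Applications folder, which the user populated deliberately; everywhere else, the icon is only viewed.

```python
from pathlib import PurePosixPath

APPLICATIONS = PurePosixPath("/Applications")

def action_on_double_click(icon_path):
    """Decide whether a double-click views or executes the icon."""
    path = PurePosixPath(icon_path)
    if APPLICATIONS in path.parents:
        # The user installed it here: dropping it into the folder
        # was the act of designation that conveyed execute authority.
        return "execute"
    # Email attachments and downloads land elsewhere and are only displayed.
    return "view"

assert action_on_double_click("/Applications/TextEdit.app") == "execute"
assert action_on_double_click("/Users/me/Mail/attachment.exe") == "view"
```

The security decision rides on an action (installing into a folder) that users already perform, so no prompt is needed.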

Although distinguishing viewing from execution would be better than what we have now, this distinction is rather blunt. The suggested solution is not much good for threats other than executable attachments. Any program the user actually installs, including downloaded software that could contain viruses or spyware, would still run with full user-level access, exposing the user to tremendous risk.




File and email access. The lack of fine-grained access controls on application programs is an architectural failure of Windows, Mac, and Unix systems. Such control would enable a true solution to the virus problem. Let's consider how to control two major kinds of access that viruses exploit: file and email access.

How do users specify the intent to read or write a particular file? In current GUIs, files are designated by pointing at file icons and selecting file names in dialog boxes. Standard functions in Mac OS and Windows already implement both of these interactions. To perform security by designation, we would stop providing applications with default access to the file system and instead provide them with file access via icon manipulation and the file selection dialog box. Applications would start with access limited to their own program files and a bounded scratch space. Instead of receiving a file name from the file dialog box and then using universal file system privilege to open the associated file by name, an application would receive a file handle directly from the dialog box. Similarly, dropping a file on an application would send the application a message containing the file handle instead of the file name. The user experience would not change, yet security would be vastly improved.
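The handle-versus-name distinction can be illustrated with a small sketch (function names are hypothetical; a real implementation would live in the OS, not the application): the classic dialog returns a string that the application opens with ambient filesystem privilege, whereas the designation-style dialog opens the file itself, in the trusted layer, and hands back only a handle.

```python
import os
import tempfile

def insecure_dialog(chosen_path):
    # Classic design: returns a file *name*; the application then uses
    # its universal filesystem privilege to open it (or anything else).
    return chosen_path

def secure_dialog(chosen_path):
    # Security by designation: the trusted dialog (layer A) opens exactly
    # the file the user designated and returns a handle; the application
    # (B) needs no filesystem privilege at all.
    return open(chosen_path, "r")

# Demonstration with a throwaway file:
fd, path = tempfile.mkstemp(text=True)
os.write(fd, b"hello")
os.close(fd)

handle = secure_dialog(path)
assert handle.read() == "hello"
handle.close()
os.remove(path)
```

The application's code barely changes, which is the article's point: the user experience and most of the programming model stay the same while the ambient authority disappears.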

Certain special programs, such as search tools or disk utilities, do require access to the entire disk, and it would be impractical to designate files individually. Instead, the user could convey access to the entire disk by installing programs in a Disk Tools subfolder of the Applications folder. This would require a bit more effort than dropping everything in the Applications folder, but it is not an alien concept—on Windows systems, disk utilities are already kept in an Administrative Tools subfolder of the Start menu.

How do users specify the intent to send email to a particular person? Users usually indicate recipients by selecting names from an address book or typing in email addresses. To control access to email, we might add a mail handle abstraction to the operating system, which represents the ability to send mail to a particular address just as a file handle represents the ability to read or write a particular file. Then, selecting names from a standard system-wide address book would return mail handles to the application, just as a secure file selection dialog box would return file handles. Mail handles would let us stop providing default network access to all programs, so that worms could not email themselves to new victims. To allow a mail client to implement its own custom address book and send mail to arbitrary email addresses, the user could install it in a Networking Tools subfolder in the Applications folder.
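A mail handle might look like the following sketch (class and transport names are hypothetical): an object that can send to exactly one address, minted by the trusted address book when the user selects a name. An application holding only such handles cannot enumerate or invent new recipients, so a worm cannot mail itself to fresh victims.

```python
class MailHandle:
    """Capability to send mail to one fixed address, like a file handle."""
    def __init__(self, address, transport):
        self._address = address       # fixed at creation by the address book
        self._transport = transport   # e.g. the OS mail service

    def send(self, subject, body):
        # The handle exercises the authority without ever revealing
        # the raw address to the application holding it.
        self._transport(self._address, subject, body)


# A stand-in for the OS mail service, recording what gets sent:
outbox = []
def fake_transport(addr, subject, body):
    outbox.append((addr, subject, body))

# The system-wide address book (trusted) mints a handle when the user
# selects "Alice" — that selection is the act of designation.
handle = MailHandle("alice@example.com", fake_transport)
handle.send("hi", "see attachment")
assert outbox == [("alice@example.com", "hi", "see attachment")]
```

The parallel with file handles is exact: selecting a name grants the authority and expresses the command in one gesture, with no separate permission dialog.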

Through security by designation, these measures would prevent viruses from propagating in files or email messages while minimizing the burden on users.

Cookie management

Cookies are small data records that Web sites ask browsers to retain and present to identify users when they return to the same site. They let Web sites perform automatic login and personalization. However, they also raise privacy concerns about the tracking of user behavior and raise security risks by providing an avenue for circumventing logins.8 It's helpful to allow users some control over when and where cookies are sent.

Many browsers address this problem by prompting users to accept or reject each received cookie. But cookies are so ubiquitous that users are constantly pestered with irritating prompt boxes when they enable this feature. To reduce the user's workload, some browsers provide additional buttons to accept or reject all cookies from the site they're currently visiting. Unfortunately, after users choose a site-wide policy, it can be hard to find and reverse the decision. Moreover, deciding to accept cookies is irrelevant because receiving cookies poses no risks; the real risks lie in sending cookies.

The missing piece of information here is whether the user, when returning to a Web site, wants to continue with the same settings and context where the last session left off. To find a better solution, we should consider how users return to Web sites they've seen before.

The existing user interface mechanism for this purpose is the bookmark list. Users designate the site they want by selecting a bookmark. Therefore, suppose that received cookies were embedded in bookmark records. Users could choose to bookmark the generic view of a


Figure 2. A file selection dialog box. We can achieve much stronger file access protection against viruses and Trojan horses without any change to this user interface.



site (before logging in) or personalized view (after logging in), or even create multiple bookmarks for different sessions. The decision to activate a cookie would be incorporated into the existing task of selecting a bookmark. This isn't the same as the way people browse now, but it would be easy to explain and would not be inherently inconvenient.

Bookmarks containing cookies could be displayed with an icon to show that they are personalized. A similar icon in the browser toolbar could indicate whether the current view is personalized by a cookie. Clicking on the icon would let users select among generic and personalized views of the current site, letting users activate cookies when arriving at a site by means other than bookmarks.

This solution would eliminate the need for prompt boxes or lists of cookie-enabled or cookie-disabled sites to manage. Login cookies would no longer be left lingering on public-access machines. Users' privacy would be better protected. Sites would no longer mysteriously change their appearance depending on hidden state. And users would gain additional functionality: bookmarks would let them manage multiple logins or multiple suspended transactions at the same site. Security and usability would be simultaneously improved.
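The bookmark-carries-the-cookies idea can be sketched as a data structure (all names here are hypothetical, for illustration only): the cookie jar travels with the bookmark record, so selecting a bookmark is the act of designation that activates the session, and a bookmark with no cookies yields the generic view.

```python
class Bookmark:
    def __init__(self, title, url, cookies=None):
        self.title = title
        self.url = url
        self.cookies = dict(cookies or {})   # empty dict = generic view

    @property
    def personalized(self):
        # Shown with a special icon in the bookmark list.
        return bool(self.cookies)


def open_bookmark(bm):
    # The browser sends cookies only because the user picked this
    # particular bookmark — no site-wide accept/reject policy needed.
    return {"url": bm.url, "cookies": bm.cookies}


generic = Bookmark("Example", "https://example.com")
logged_in = Bookmark("Example (me)", "https://example.com",
                     {"session": "abc123"})

assert not generic.personalized
assert logged_in.personalized
assert open_bookmark(generic)["cookies"] == {}
```

Two bookmarks for the same URL with different cookie jars also show how the design yields the multiple-login functionality the article mentions.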

Phishing attacks

Forged email messages and Web sites designed to steal passwords are another serious problem. In a typical phishing attack, a user receives an email message that appears to be from a bank, asking the user to click on a link and verify account information. The link appears to point to the bank's Web site but actually takes the user to an identity thief's Web site, styled to look just like the bank's official site. If an unsuspecting user enters personal information there, the identity thief captures it.

Here, the missing piece of information is the form submission's intended recipient. In one scam, for example, an imitation of PayPal was hosted at http://paypai.com. This problem is trickier than the preceding examples because the system has no way of knowing whether the user intended to go to http://paypal.com and was misdirected to http://paypai.com or whether the user really wanted to visit http://paypai.com. The intention resides only in the user's mind.

The designation at the heart of this security problem is the link address. But the user does not specify the address; the user merely clicks in the email message. The safest solution would therefore be to secure the email channel so users can tell whether an email comes from a trustworthy source. But until we convince most email users to use secure email, we are stuck interfacing with an untrustworthy mail system and have no user act of designation on which to hang a security decision.

In this situation, practical constraints make security by designation infeasible. We could add security by admonition at several points along the path from the email message to the impostor's form:

• The email client could make the link destination clearer by emphasizing the true domain name in the message body.

• The browser could display a user-assigned name in a reserved part of its toolbar to help identify the site hosting the form.

• As the user enters information into form fields, the browser could display the entered text in red if the user hasn't assigned a name to the receiving site before. A label below the current field could appear if the site is unfamiliar or suspicious (see Figure 3). (A similar label could warn against sending secrets over an unencrypted connection.)

• When the mouse pointer passes over a form submission button, the mouse cursor could change into a special icon annotated with the site's user-assigned name.

Although users could overlook them, the warnings would still be more effective than prompting users on every form submission. These admonitions inform the user but don't interrupt workflow. Using forms would not require any additional effort, but phishing for passwords would be significantly less successful.
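The logic behind the third admonition above can be sketched as a small decision function. The petname store and the exact warning text are illustrative assumptions; the idea, keying the warning on whether the user has assigned a name to the site, follows the article:

```python
# Hypothetical store of user-assigned names ("petnames") for sites, keyed by domain.
PETNAMES = {"paypal.com": "My PayPal", "mybank.example": "Checking account"}

def admonition_for_field(form_domain: str, encrypted: bool) -> list[str]:
    """Return warnings to display, without blocking, as the user types into a form."""
    warnings = []
    if form_domain not in PETNAMES:
        warnings.append(f"You have not assigned a name to {form_domain}; "
                        "entered text will be shown in red.")
    if not encrypted:
        warnings.append("This form sends data over an unencrypted connection.")
    return warnings
```

For example, `admonition_for_field("paypai.com", encrypted=True)` yields one warning, while a named site over an encrypted connection yields none, so the common case stays silent and the warning carries signal.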

Real-world implementations
As the preceding discussion illustrates, we can achieve certain kinds of security improvements with minimal changes to the operating system, but truly strong security often requires more fundamental changes to operating systems and applications. CapDesk and Polaris are software projects that aim at different points along this trade-off.


Figure 3. An admonition. As the user fills out a form, the entered text appears in red to indicate that the user has not assigned a name to this site. Without interrupting the workflow, a message informs the user of the hosting site's domain name.



54 IEEE SECURITY & PRIVACY ■ SEPTEMBER/OCTOBER 2004

CapDesk9 is a capability-based desktop shell that implements security by designation, eliminating vulnerability to viruses while letting users run untrusted software in a familiar GUI environment. It explores what we can achieve by building everything from the ground up with security and usability in mind. Installing an application endows it with minimal default authorities, such as access to its own program files. Users can convey additional file access to applications by manipulating file icons and selecting files in file dialog boxes. In an earlier article, I proposed 10 guidelines for secure interaction design (see the "Guidelines for secure interaction design" sidebar for a complete list and explanation).6 Throughout most aspects of its design, CapDesk meets all but two of these guidelines (visibility and revocability).
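The essence of conveying file access through a file dialog can be sketched in a few lines. This is a toy model, not CapDesk's actual implementation: the class and function names are invented, and a real capability system would enforce the restriction at a lower level than Python can:

```python
class FileCapability:
    """Authority over exactly one file, granted by an act of designation."""
    def __init__(self, path: str):
        self._path = path

    def read_text(self) -> str:
        with open(self._path) as f:
            return f.read()

def trusted_file_dialog(user_choice: str) -> FileCapability:
    """Trusted shell component: the user's selection in the dialog *is* the grant."""
    return FileCapability(user_choice)

def untrusted_editor(doc: FileCapability) -> str:
    # The editor holds only the capability it was handed; it has no ambient
    # authority with which to name or open any other file.
    return doc.read_text()
```

The security decision (which file the application may touch) rides on an act the user performs anyway, picking a file, so no separate permission prompt is needed.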

On the other end of the spectrum, researchers at HP Labs are developing Polaris, a safe environment for running Windows applications. Polaris explores what we can achieve by applying the concept of security by designation without changing the operating system or the applications. It provides effective protection against viruses, email worms, and spyware. In this environment, virus-scanning software's chief value is to inform the user of failed attacks. Polaris is in the early stages of prototyping, but user tests are already showing promising results. Of the 24 users who have tried Polaris, 20 found it comfortable enough to use in their daily work with the Microsoft Office suite and other applications.

Although security and usability practitioners must learn to work together to create truly secure systems, they already have more in common than might be initially obvious. Both recognize the importance of incorporating their concerns throughout the design cycle and acknowledge the need for an iterative rather than a linear design process. I have argued one step further: that security and usability not only must be considered early and iteratively, but also together.

Many obstacles prevent a perfect solution to today's problem of security and usability, such as the installed base of operating systems and the requirement for interoperation with untrustworthy domains such as email. However, I hope the arguments and examples I present here convince readers that we can achieve significant improvements by designing security and usability together, maintaining agreement with users' mental models, and applying security by designation wherever possible.

Guidelines for secure interaction design
These 10 design guidelines are based on the actor–ability framework. Readers might find them helpful in designing and evaluating user interfaces for secure systems.

General principles
• Path of least resistance. The most natural way to do a task should also be the safest.
• Appropriate boundaries. The interface should draw distinctions among objects and actions along boundaries that matter to the user.

Maintaining the actor–ability state
• Explicit authorization. A user's authority should only be granted to another actor through an explicit user action understood to imply granting.
• Visibility. The interface should let the user easily review any active authority relationships that could affect security decisions.
• Revocability. The interface should let the user easily revoke authority that the user has granted, whenever revocation is possible.
• Expected ability. The interface should not give the user the impression of having authority that the user does not actually have.

Communicating with the user
• Trusted path. The user's communication channel to any entity that manipulates authority on the user's behalf must be unspoofable and free of corruption.
• Identifiability. The interface should ensure that identical objects or actions appear identical and that distinct objects or actions appear different.
• Expressiveness. The interface should provide enough expressive power to let users easily express security policies that fit their goals.
• Clarity. The effect of any authority-manipulating user action should be clearly apparent to the user before the action takes effect.

Figure 4. The user opens a file in an editor running under CapDesk.

Acknowledgments
I thank Morgan Ames, Nikita Borisov, Tyler Close, Linley Erin Hall, Marti Hearst, Mark S. Miller, Kragen Sitaker, and Marc Stiegler for their help with this article. Many of the ideas expressed here originated in discussions with them.

References
1. J. Viega and G. McGraw, Building Secure Software, Addison-Wesley, 2002, p. 14.
2. D. Norman, The Invisible Computer, MIT Press, 1998, p. 205.
3. B. Schneier, Secrets and Lies: Digital Security in a Networked World, Wiley, 2004, p. 12.
4. B. Lampson, "Computer Security in the Real World," Computer, vol. 37, no. 6, 2004, pp. 37–46.
5. S. Garfinkel and G. Spafford, Practical UNIX and Internet Security, 2nd ed., O'Reilly & Associates, 1996, p. 6.
6. K.-P. Yee, "User Interaction Design for Secure Systems," Proc. 4th Int'l Conf. Information and Communications Security, R. Deng et al., eds., LNCS 2513, Springer, 2002, pp. 278–290; http://zesty.ca/sid.
7. J.H. Saltzer and M.D. Schroeder, "The Protection of Information in Computer Systems," Proc. IEEE, vol. 63, no. 9, 1975, pp. 1278–1308.
8. B. McWilliams, "Hotmail at Risk to Cookie Thieves," Wired News, 26 Apr. 2002; http://wired.com/news/technology/0,1282,52115,00.html.
9. D. Wagner and D. Tribble, A Security Analysis of the Combex DarpaBrowser Architecture, http://combex.com/papers/darpa-review/index.html.

Ka-Ping Yee is a PhD student in computer science at the University of California, Berkeley. His research interests include system security, usability, and decision support systems. He is a member of the ACM, the Foresight Institute, the Python Software Foundation, the Electronic Frontier Foundation, and the Free Software Foundation. Contact him at [email protected].

