A Comparison of Commercial and Military Computer Security Policies
David D. Clark*    David R. Wilson**

*Senior Research Scientist, MIT Laboratory for Computer Science,
545 Technology Square, Cambridge, MA 02139

**Director, Information Security Services, Ernst & Whinney,
2000 National City Center, Cleveland, OH 44114
ABSTRACT
Most discussions of computer security focus on control of disclosure. In particular, the U.S. Department of Defense has developed a set of criteria for computer mechanisms to provide control of classified information. However, for that core of data processing concerned with business operation and control of assets, the primary security concern is data integrity. This paper presents a policy for data integrity based on commercial data processing practices, and compares the mechanisms needed for this policy with the mechanisms needed to enforce the lattice model for information security. We argue that a lattice model is not sufficient to characterize integrity policies, and that distinct mechanisms are needed to control disclosure and to provide integrity.
INTRODUCTION
Any discussion of mechanisms to enforce computer security must involve a particular security policy that specifies the security goals the system must meet and the threats it must resist. For example, the high-level security goals most often specified are that the system should prevent unauthorized disclosure or theft of information, should prevent unauthorized modification of information, and should prevent denial of service. Traditional threats that must be countered are system penetration by unauthorized persons, unauthorized actions by authorized persons, and abuse of special privileges by systems programmers and facility operators. These threats may be intentional or accidental.
Imprecise or conflicting assumptions about desired policies often confuse discussions of computer security mechanisms. In particular, in comparing commercial and military systems, a misunderstanding about the underlying policies the two are trying to enforce often leads to difficulty in understanding the motivation for certain mechanisms that have been developed and espoused by one group or the other.
This paper discusses the military security policy, presents a security policy valid in many commercial situations, and then compares the two policies to reveal important differences between them.
The military security policy we are referring to is a set of policies that regulate the control of classified information within the government. This well-understood, high-level information security policy is that all classified information shall be protected from unauthorized disclosure or declassification. Mechanisms used to enforce this policy include the mandatory labeling of all documents with their classification level, and the assigning of user access categories based on the investigation (or "clearing") of all persons permitted to use this information. During the last 15 to 20 years, considerable effort has gone into determining which mechanisms should be used to enforce this policy within a computer. Mechanisms such as identification and authorization of users, generation of audit information, and association of access control labels with all information objects are well understood. This policy is defined in the Department of Defense Trusted Computer System Evaluation Criteria [DOD], often called the "Orange Book" from the color of its cover. It articulates a standard for maintaining confidentiality of information and is, for the purposes of our paper, the "military" information security policy. The term "military" is perhaps not the most descriptive characterization of this policy; it is relevant to any situation in which access rules for sensitive material must be enforced. We use the term "military" as a concise tag which at least captures the origin of the policy.
CH2416-6/87/0000/0184$01.00 © 1987 IEEE
In the commercial environment, preventing disclosure is often important, but preventing unauthorized data modification is usually paramount. In particular, for that core of commercial data processing that relates to management and accounting for assets, preventing fraud and error is the primary goal. This goal is addressed by enforcing the integrity rather than the privacy of the information. For this reason, the policy we will concern ourselves with is one that addresses integrity rather than disclosure. We will call this a commercial policy, in contrast to the military information security policy. We are not suggesting that integrity plays no role in military concerns. However, to the extent that the Orange Book is the articulation of the military information security policy, there is a clear difference of emphasis in the military and commercial worlds.

While the accounting principles that are the basis of fraud and error control are well known, there is yet no Orange Book for the commercial sector that articulates how these policies are to be implemented in the context of a computer system. This makes it difficult to answer the question of whether the mechanisms designed to enforce military information security policies also apply to enforcing commercial integrity policies. It would be very nice if the same mechanisms could meet both goals, thus enabling the commercial and military worlds to share the development costs of the necessary mechanisms. However, we will argue that two distinct classes of mechanism will be required, because some of the mechanisms needed to enforce disclosure controls and integrity controls are very different.
Therefore, the goal of this paper is to defend two conclusions. First, there is a distinct set of security policies, related to integrity rather than disclosure, which are often of highest priority in the commercial data processing environment. Second, some separate mechanisms are required for enforcement of these policies, disjoint from those of the Orange Book.
MILITARY SECURITY POLICY
The policies associated with the management of classified information, and the mechanisms used to enforce these policies, are carefully defined and well understood within the military. However, these mechanisms are not necessarily well understood in the commercial world, which normally does not have such a complex requirement for control of unauthorized disclosure. Because the military security model provides a good starting point, we begin with a brief summary of computer security in the context of classified information control.
The top-level goal for the control of classified information is very simple: classified information must not be disclosed to unauthorized individuals. At first glance, it appears the correct mechanism to enforce this policy is a control over which individuals can read which data items. This mechanism, while certainly needed, is much too simplistic to solve the entire problem of unauthorized information release. In particular, enforcing this policy requires a mechanism to control writing of data as well as reading it. Because the control of writing data is superficially associated with ensuring integrity rather than preventing theft, and the classification policy concerns the control of theft, confusion has arisen about the fact that the military mechanism includes strong controls over who can write which data.
Informally, the line of reasoning that leads to this mechanism is as follows. To enforce this policy, the system must protect itself from the authorized user as well as the unauthorized user. There are a number of ways for the authorized user to declassify information. He can do so as a result of a mistake, as a deliberate illegal action, or because he invokes a program on his behalf that, without his knowledge, declassifies data as a malicious side effect of its execution.

This class of program, sometimes called a "Trojan Horse" program, has received much attention within the military. To understand how to control this class of problem in the computer, consider how a document can be declassified in a noncomputerized context. The simple technique involves copying the document, removing the classification labels from the document with a pair of scissors, and then making another copy that does not have the classification labels. This second copy, which physically appears to be unclassified, can then be carried past security guards who are responsible for controlling the theft of classified documents. Declassification occurs by copying.

To prevent this in a computer system, it is necessary to control the ability of an authorized user to copy a data item. In particular, once a computation has read a data item of a certain security level, the system must ensure that any data items written by that computation have a security label at least as restrictive as the label of the item previously read. It is this
mandatory check of the security level of all data items whenever they are written that enforces the high-level security policy.
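The check just described can be sketched as a small "high-watermark" routine. This is an illustrative sketch of the idea, not a mechanism taken from the Orange Book itself; the four-level ordering and the names `Computation`, `high_watermark`, and `can_write` are our own assumptions.

```python
# Illustrative sketch of the mandatory write check: once a computation has
# read data at some level, it may write only to items labeled at least as
# restrictively. The levels and class names are invented for the example.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

class Computation:
    def __init__(self):
        # Most restrictive level read so far (the "high watermark").
        self.high_watermark = "UNCLASSIFIED"

    def read(self, item_level):
        if LEVELS[item_level] > LEVELS[self.high_watermark]:
            self.high_watermark = item_level

    def can_write(self, item_level):
        # Mandatory check: the output label must dominate every label read.
        return LEVELS[item_level] >= LEVELS[self.high_watermark]

c = Computation()
c.read("SECRET")
c.can_write("UNCLASSIFIED")  # denied: copying would declassify the data
c.can_write("TOP SECRET")    # permitted: label is at least as restrictive
```

Note that the check applies to the computation as a whole, not the user: a Trojan Horse running on the user's behalf is constrained in exactly the same way.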
An important component of this mechanism is that checking the security level on all reads and writes is mandatory and enforced by the system, as opposed to being at the discretion of the individual user or application. In a typical time sharing system not intended for multilevel secure operation, the individual responsible for a piece of data determines who may read or write that data. Such discretionary controls are not sufficient to enforce the military security rules because, as suggested above, the authorized user (or programs running on his behalf) cannot be trusted to enforce the rules properly. The mandatory controls of the system constrain the individual user so that any action he takes is guaranteed to conform to the security policy. Most systems intended for military security provide traditional discretionary control in addition to the mandatory classification checking to support what is informally called "need to know." By this mechanism, it is possible for the user to further restrict the accessibility of his data, but it is not possible to increase the scope in a manner inconsistent with the classification levels.

In 1983, the U.S. Department of
Defense produced the Orange Book, which attempts to organize and document mechanisms that should be found in a computer system designed to enforce the military security policies. This document stresses the importance of mandatory controls if effective enforcement of a policy is to be achieved within a system. To enforce the particular policy of the Orange Book, the mandatory controls relate to data labels and user access categories. Systems in division C have no requirement for mandatory controls, while systems in divisions A and B specifically have these mandatory maintenance and checking controls for labels and user rights. (Systems in Division A are distinguished from those in B, not by additional function, but by having been designed to permit formal verification of the security principles of the system.)

Several security systems used in the commercial environment, specifically RACF, ACF/2, and CA-TopSecret, were recently evaluated using the Orange Book criteria. The C ratings that these security packages received would indicate that they did not meet the mandatory requirements of the security model as described in the Orange Book. Yet, these packages are commonly used in industry and viewed as rather effective in meeting industry requirements. This would suggest that industry views security requirements somewhat differently than the security policy described in the Orange Book. The next section of the paper begins a discussion of this industry view.
COMMERCIAL SECURITY POLICY FOR INTEGRITY
Clearly, control of confidential information is important in both the commercial and military environments. However, a major goal of commercial data processing, often the most important goal, is to ensure integrity of data to prevent fraud and errors. No user of the system, even if authorized, may be permitted to modify data items in such a way that assets or accounting records of the company are lost or corrupted. Some mechanisms in the system, such as user authentication, are an integral part of enforcing both the commercial and military policies. However, other mechanisms are very different.
The high-level mechanisms used to enforce commercial security policies related to data integrity were derived long before computer systems came into existence. Essentially, there are two mechanisms at the heart of fraud and error control: the well-formed transaction, and separation of duty among employees.
The concept of the well-formed transaction is that a user should not manipulate data arbitrarily, but only in constrained ways that preserve or ensure the integrity of the data. A very common mechanism in well-formed transactions is to record all data modifications in a log so that actions can be audited later. (Before the computer, bookkeepers were instructed to write in ink, and to make correcting entries rather than erase in case of error. In this way the books themselves, being write-only, became the log, and any evidence of erasure was indication of fraud.)
Perhaps the most formally structured example of well-formed transactions occurs in accounting systems, which model their transactions on the principles of double entry bookkeeping. Double entry bookkeeping ensures the internal consistency of the system's data items by requiring that any modification of the books comprises two parts, which account for or balance each other. For example, if a check is to be written (which implies an entry in the cash account) there must be a matching entry on the accounts payable account. If an entry is not performed properly, so that the parts do not match, this can
be detected by an independent test (balancing the books). It is thus possible to detect such frauds as the simple issuing of unauthorized checks.
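As a concrete illustration of this internal-consistency rule, the balancing check can be sketched in a few lines. The ledger representation and the function name are our own; real accounting systems are of course far more elaborate.

```python
# Sketch of a well-formed transaction under double entry bookkeeping:
# an entry is posted only if its parts balance (sum to zero), so an
# unmatched entry, such as an unauthorized check, is rejected outright.

def post_transaction(ledger, entries):
    """entries: list of (account, amount) pairs that must balance."""
    if sum(amount for _, amount in entries) != 0:
        raise ValueError("unbalanced transaction rejected")
    for account, amount in entries:
        ledger[account] = ledger.get(account, 0) + amount

ledger = {}
# Writing a check: the cash entry is matched by an accounts payable entry.
post_transaction(ledger, [("cash", -100), ("accounts_payable", 100)])
```

An entry such as `[("cash", -50)]` alone would be refused, which is exactly the property the independent balancing test exploits.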
The second mechanism to control fraud and error, separation of duty, attempts to ensure the external consistency of the data objects: the correspondence between the data object and the real world object it represents. Because computers do not normally have direct sensors to monitor the real world, computers cannot verify external consistency directly. Rather, the correspondence is ensured indirectly by separating all operations into several subparts and requiring that each subpart be executed by a different
person. For example, the process of purchasing some item and paying for it might involve subparts: authorizing the purchase order, recording the arrival of the item, recording the arrival of the invoice, and authorizing payment. The last subpart, or step, should not be executed unless the previous three are properly done. If each step is performed by a different person, the external and internal representation should correspond unless some of these people conspire. If one person can execute all of these steps, then a simple form of fraud is possible, in which an order is placed and payment made to a fictitious company without any actual delivery of items. In this case, the books appear to balance; the error is in the correspondence between real and recorded inventory.

Perhaps the most basic separation of
duty rule is that any person permitted to create or certify a well-formed transaction may not be permitted to execute it (at least against production data). This rule ensures that at least two people are required to cause a change in the set of well-formed transactions.

The separation of duty method is
effective except in the case of collusion among employees. For this reason, a standard auditing disclaimer is that the system is certified correct under the assumption that there has been no collusion. While this might seem a risky assumption, the method has proved very effective in practical control of fraud. Separation of duty can be made very powerful by thoughtful application of the technique, such as random selection of the sets of people to perform some operation, so any
proposed collusion is only safe by chance. Separation of duty is thus a fundamental principle of commercial data integrity control.
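The purchasing example above can be sketched as a simple check. The step names, and the simplification that the payment authorizer must merely differ from everyone who performed an earlier step, are our own illustrative assumptions.

```python
# Sketch of separation of duty for the purchasing example: payment may be
# authorized only after the three earlier steps are done, and only by a
# person who performed none of them.

REQUIRED_STEPS = ("purchase_order", "item_arrival", "invoice_arrival")

def may_authorize_payment(record, payer):
    """record maps each completed step name -> person who performed it."""
    all_done = all(step in record for step in REQUIRED_STEPS)
    payer_is_distinct = payer not in record.values()
    return all_done and payer_is_distinct

record = {"purchase_order": "alice", "item_arrival": "bob",
          "invoice_arrival": "carol"}
may_authorize_payment(record, "dave")   # allowed: all steps done by others
may_authorize_payment(record, "alice")  # denied: alice already took a step
```

The fictitious-company fraud now requires collusion: no single person can both place the order and release the payment.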
Therefore, for a computer system to be used for commercial data processing, specific mechanisms are needed to enforce these two rules. To ensure that data items are manipulated only by means of well-formed transactions, it is first necessary to ensure that a data item can be manipulated only by a specific set of programs. These programs must be inspected for proper construction, and controls must be provided on the ability to install and modify these programs, so that their continued validity is ensured. To ensure separation of duties, each user must be permitted to use only certain sets of programs. The assignment of people to programs must again be inspected to ensure that the desired controls are actually met.
These integrity mechanisms differ in a number of important ways from the mandatory controls for military security as described in the Orange Book. First, with these integrity controls, a data item is not necessarily associated with a particular security level, but rather with a set of programs permitted to manipulate it. Second, a user is not given authority to read or write certain data items, but to execute certain programs on certain data items. The distinction between these two mechanisms is fundamental. With the Orange Book controls, a user is constrained by what data items he can read and write. If he is authorized to write a particular data item he may do so in any way he chooses. With commercial integrity controls, the user is constrained by what programs he can execute, and the manner in which he can read or write data items is implicit in the actions of those programs. Because of separation of duties, it will almost always be the case that a user, even though he is authorized to write a data item, can do so only by using some of the transactions defined for that data item. Other users, with different duties, will have access to different sets of transactions related to that data.
MANDATORY COMMERCIAL CONTROLS
The concept of mandatory control is central to the mechanisms for military security, but the term is not usually applied to commercial systems. That is, commercial systems have not reflected the idea that certain functions, central to the enforcement of policy, must be designed as a fundamental characteristic of the system. However, it is important to understand that the mechanisms described in the previous section, in some respects, are mandatory controls. They are mandatory in that the user of the system should not, by any sequence of operations, be able to modify the list of programs permitted to manipulate a particular data item or to modify the
list of users permitted to execute a given program. If the individual user could do so, then there would be no control over the ability of an untrustworthy user to alter the system for fraudulent ends.
In the commercial integrity environment, the owner of an application and the general controls implemented by the data processing organization are responsible for ensuring that all programs are well-formed transactions. As in the military environment, there is usually a designated separate staff responsible for assuring that users can execute transactions only in such a way that the separation of duty rule is enforced. The system ensures that the user cannot circumvent these controls. This is a mandatory rather than a discretionary control.
The two mandatory controls, military and commercial, are very different mechanisms. They do not enforce the same policy. The military mandatory control enforces the correct setting of classification levels. The commercial mandatory control enforces the rules that implement the well-formed transaction and separation of duty model. When constructing a computer system to support these mechanisms, very different low-level tools are implemented.
An interesting example of these two sets of mechanisms can be found in the Multics operating system, marketed by Honeywell Information Systems and evaluated by the Department of Defense in Class B2 of its evaluation criteria. A certification in Division B implies that Multics has mandatory mechanisms to enforce security levels, and indeed those mechanisms were specifically implemented to make the system usable in a military multilevel secure environment [WHITMORE]. However, those mechanisms do not provide a sufficient basis for enforcing a commercial integrity model. In fact, Multics has an entirely different set of mechanisms, called protection rings, that were developed specifically for this purpose [SCHROEDER]. Protection rings provide a means for ensuring that data bases can be manipulated only by programs authorized to use them. Multics thus has two complete sets of security mechanisms, one oriented toward the military and designed specifically for multilevel operation, and the other designed for the commercial model of integrity.
The analogy between the two forms of mandatory control is not perfect. In the integrity control model, there must be more discretion left to the administrator of the system, because the determination of what constitutes proper separation of duty can be done only by a comparison with application-specific criteria. The separation of duty determination can be rather complex, because the decisions for all the transactions interact. This greater discretion means that there is also greater scope for error by the security officer or system owner, and that the system is less able to prevent the security officer, as opposed to the user, from misusing the system. To the system user, however, the behavior of the two mandatory controls is similar. The rules are seen as a fundamental part of the system, and may not be circumvented, only further restricted, by any other discretionary control that exists.
COMMERCIAL EVALUATION CRITERIA
As discussed earlier, RACF, ACF/2, and CA-TopSecret were all reviewed using the Department of Defense evaluation criteria described in the Orange Book. Under these criteria, these systems did not provide any mandatory controls. However, these systems, especially when executed in the context of a telecommunications monitor system such as CICS or IMS, constitute the closest approximation the commercial world has to the enforcement of a mandatory integrity policy. There is thus a strong need for a commercial equivalent of the military evaluation criteria to provide a means of categorizing systems that are useful for integrity control.
Extensive study is needed to develop a document with the depth of detail associated with the Department of Defense evaluation criteria. But, as a starting point, we propose the following criteria, which we compare to the fundamental computer security requirements from the "Introduction" to the Orange Book. First, the system must separately authenticate and identify every user, so that his actions can be controlled and audited. (This is similar to the Orange Book requirement for identification.) Second, the system must ensure that specified data items can be manipulated only by a restricted set of programs, and the data center controls must ensure that these programs meet the well-formed transaction rule. Third, the system must associate with each user a valid set of programs to be run, and the data center controls must ensure that these sets meet the separation of duty rule. Fourth, the system must maintain an auditing log that records every program executed and the name of the authorizing user. (This is superficially similar to the Orange Book requirement for accountability, but
the events to be audited are quite different.)
In addition to these criteria, the military and commercial environments share two requirements. First, the computer system must contain mechanisms to ensure that the system enforces its requirements. And second, the mechanisms in the system must be protected against tampering and unauthorized change. These two requirements, which ensure that the system actually does what it asserts it does, are clearly an integral part of any security policy. These are generally referred to as the "general" or "administrative" controls in a commercial data center.
A FORMAL MODEL OF INTEGRITY
In this section, we introduce a more formal model for data integrity within computer systems, and compare our work with other efforts in this area. We use as examples the specific integrity policies associated with accounting practices, but we believe our model is applicable to a wide range of integrity policies.
To begin, we must identify and label those data items within the system to which the integrity model must be applied. We call these "Constrained Data Items," or CDIs. The particular integrity policy desired is defined by two classes of procedures: Integrity Verification Procedures, or IVPs, and Transformation Procedures, or TPs. The purpose of an IVP is to confirm that all of the CDIs in the system conform to the integrity specification at the time the IVP is executed. In the accounting example, this corresponds to the audit function, in which the books are balanced and reconciled to the external environment. The TP corresponds to our concept of the well-formed transaction. The purpose of the TPs is to change the set of CDIs from one valid state to another. In the accounting example, a TP would correspond to a double entry transaction.
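The vocabulary just introduced can be made concrete with a small sketch. The "books must balance" condition stands in here for a real integrity specification, and all names are invented for illustration.

```python
# Sketch of the model's three kinds of parts. The CDIs are the accounts,
# the IVP checks the valid-state condition (the books balance), and the
# TP is a double entry transaction moving the CDIs between valid states.

cdis = {"cash": 500, "accounts_payable": -500}  # Constrained Data Items

def ivp(cdis):
    """Integrity Verification Procedure: confirm the CDIs are valid."""
    return sum(cdis.values()) == 0

def tp_pay(cdis, amount):
    """Transformation Procedure: a well-formed (balanced) transaction."""
    cdis["cash"] -= amount
    cdis["accounts_payable"] += amount

ivp(cdis)          # True: the initial state is valid
tp_pay(cdis, 200)
ivp(cdis)          # still True: the TP preserved validity
```

Note the division of labor: `ivp` verifies a state, while `tp_pay` is trusted (after certification) to carry one valid state to another.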
To maintain the integrity of the CDIs, the system must ensure that only a TP can manipulate the CDIs. It is this constraint that motivated the term Constrained Data Item. Given this constraint, we can argue that, at any given time, the CDIs meet the integrity requirements. (We call this condition a "valid state.") We can assume that at some time in the past the system was in a valid state, because an IVP was executed to verify this. Reasoning forward from this point, we can examine the sequence of TPs that have been executed. For the first TP executed, we can assert that it left the system in a valid state as follows. By definition it will take the CDIs into a valid state if they were in a valid state before execution of the TP. But this precondition was ensured by execution of the IVP. For each TP in turn, we can repeat this necessary step to ensure that, at any point after a sequence of TPs, the system is still valid. This proof method resembles the mathematical method of induction, and is valid provided the system ensures that only TPs can manipulate the CDIs.[1]

While the system can ensure that
only TPs manipulate CDIs, it cannot ensure that the TP performs a well-formed transformation. The validity of a TP (or an IVP) can be determined only by certifying it with respect to a specific integrity policy. In the case of the bookkeeping example, each TP would be certified to implement transactions that lead to properly segregated double entry accounting. The certification function is usually a manual operation, although some automated aids may be available.
Integrity assurance is thus a two-part process: certification, which is done by the security officer, system owner, and system custodian with respect to an integrity policy; and enforcement, which is done by the system. Our model to this point can be summarized in the following three rules:
C1: (Certification) All IVPs must properly ensure that all CDIs are in a valid state at the time the IVP is run.
C2: All TPs must be certified to be valid. That is, they must take a CDI to a valid final state, given that it is in a valid state to begin with. For each TP, and each set of CDIs that it may manipulate, the security officer must specify a "relation," which defines that execution. A relation is thus of the form: (TPi, (CDIa, CDIb, CDIc, ...)), where the list of CDIs defines a particular set of arguments for which the TP has been certified.

E1: (Enforcement) The system must maintain the list of relations specified in rule C2, and must ensure that the only manipulation of any CDI is by a TP, where the TP is operating on the CDI as specified in some relation.

----------
[1] There is an additional detail which the system must enforce, which is to ensure that TPs are executed serially, rather than several at once. During the mid-point of the execution of a TP, there is no requirement that the system be in a valid state. If another TP begins execution at this point, there is no assurance that the final state will be valid. To address this problem, most modern data base systems have mechanisms to ensure that TPs appear to have executed in a strictly serial fashion, even if they were actually executed concurrently for efficiency reasons.
The above rules provide the basic framework to ensure internal consistency of the CDIs. To provide a mechanism for external consistency, the separation of duty mechanism, we need additional rules to control which persons can execute which programs on specified CDIs:
E2: The system must maintain a list of relations of the form: (UserID, TPi, (CDIa, CDIb, CDIc, ...)), which relates a user, a TP, and the data objects that TP may reference on behalf of that user. It must ensure that only executions described in one of the relations are performed.
C3: The list of relations in E2 must be certified to meet the separation of duty requirement.
Formally, the relations specified for rule E2 are more powerful than those of rule E1, so E1 is unnecessary. However, for both philosophical and practical reasons, it is helpful to have both sorts of relations. Philosophically, keeping E1 and E2 separate helps to indicate that there are two basic problems to be solved: internal and external consistency. As a practical matter, the existence of both forms together permits complex relations to be expressed with shorter lists, by use of identifiers within the relations that use "wild card" characters to match classes of TPs or CDIs.
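The enforcement side of the E2 relations, including the wild-card shorthand, might be sketched as follows. The relation contents, the identifier names, and the choice of Python's `fnmatch` pattern syntax for the wild cards are all illustrative assumptions.

```python
# Sketch of rule E2 enforcement: the system holds (UserID, TP, CDI-list)
# relations and permits an execution only if some relation covers it.
# "*" patterns play the role of the wild-card identifiers in the text.
import fnmatch

relations = [
    ("clerk_*", "tp_post_invoice", ("invoices", "ledger")),
    ("*",       "tp_read_report",  ("ledger",)),
]

def permitted(user, tp, cdis):
    return any(
        fnmatch.fnmatch(user, user_pat)
        and fnmatch.fnmatch(tp, tp_pat)
        and set(cdis) <= set(cdi_list)
        for user_pat, tp_pat, cdi_list in relations
    )

permitted("clerk_alice", "tp_post_invoice", ("ledger",))  # allowed
permitted("manager_bob", "tp_post_invoice", ("ledger",))  # denied
```

The design point is that the check is on the (user, TP, CDI) triple as a whole, not on user-to-data access alone.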
The above relation made use of UserID, an identifier for a user of the system. This implies the need for a rule to define these:
E3: The system must authenticate the identity of each user attempting to execute a TP.
Rule E3 is relevant to both commercial and military systems. However, those two classes of systems use the identity of the user to enforce very different policies. The relevant policy in the military context, as described in the Orange Book, is based on level and category of clearance, while the commercial policy is likely to be based on separation of responsibility among two or more users.
There may be other restrictions on the validity of a TP. In each case, this restriction will be manifested as a certification rule and an enforcement rule. For example, if a TP is valid only during certain hours of the day, then the system must provide a trustworthy clock (an enforcement rule) and the TP must be certified to read the clock properly.
Almost all integrity enforcement systems require that all TP execution be logged to provide an audit trail. However, no special enforcement rule is needed to implement this facility; the log can be modeled as another CDI, with an associated TP that only appends to the existing CDI value. The only rule required is:

C4: All TPs must be certified to write to an append-only CDI (the log) all information necessary to permit the nature of the operation to be reconstructed.
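The append-only log of rule C4 can be sketched directly; the record layout and function name are invented for the example.

```python
# Sketch of the audit trail as a CDI: the only TP defined for the log
# appends a record; nothing ever rewrites or deletes an existing entry.

audit_log = []  # the log, modeled as just another CDI

def tp_append_log(log, user, tp_name, details):
    """The single certified operation on the log CDI: append only."""
    log.append((user, tp_name, details))

tp_append_log(audit_log, "alice", "tp_pay", "check #1042 issued")
```

Because no TP is defined that overwrites the log, the write-only property of the bookkeeper's inked ledger is recovered: evidence of tampering would itself be an integrity violation.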
There is only one more critical component to this integrity model. Not all data is constrained data. In addition to CDIs, most systems contain data items not covered by the integrity policy that may be manipulated arbitrarily, subject only to discretionary controls. These Unconstrained Data Items, or UDIs, are relevant because they represent the way new information is fed into the system. For example, information typed by a user at the keyboard is a UDI; it may have been entered or modified arbitrarily. To deal with this class of data, it is necessary to recognize that certain TPs may take UDIs as input values, and may modify or create CDIs based on this information. This implies a certification rule:

C5: Any TP that takes a UDI as an input value must be certified to perform only valid transformations, or else no transformations, for any possible value of the UDI. The transformation should take the input from a UDI to a CDI, or the UDI is rejected. Typically, this is an edit program.
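An edit program of the kind rule C5 describes might look like the following sketch; the whole-number amount format is an invented example policy, and the function name is our own.

```python
# Sketch of rule C5: a TP that takes an Unconstrained Data Item (raw user
# input) and either transforms it into a CDI value or rejects it. The TP
# is certified to behave validly for ANY possible value of the UDI.

def tp_edit_amount(udi):
    """Validate an untrusted input string; return a CDI value or raise."""
    text = udi.strip()
    if not text.isdigit():
        raise ValueError("UDI rejected: not a valid amount")
    return int(text)  # now constrained: a well-formed amount

tp_edit_amount(" 250 ")  # accepted: becomes the CDI value 250
```

The essential property is the "for any possible value" clause: every input path through the TP either yields a well-formed CDI or ends in rejection, never in an unvalidated write.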
For this model to be effective, the various certification rules must not be bypassed. For example, if a user can create and run a new TP without having it certified, the system cannot meet its goals. For this reason, the system must
190
ensure certain additional constraints.Most obviously:
E4: Only the agent permitted to certify entities may change the list of such entities associated with other entities: specifically, the list of TPs associated with a CDI and the list of users associated with a TP. An agent that can certify an entity may not have any execute rights with respect to that entity.
This last rule makes this integrity enforcement mechanism mandatory rather than discretionary. For this structure to work overall, the ability to change permission lists must be coupled to the ability to certify, and not to some other ability, such as the ability to execute a TP. This coupling is the critical feature that ensures that the certification rules govern what actually happens when the system is run.
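The coupling described above can be sketched as a small reference monitor: only the certifying agent may edit the authorization relations, and that agent holds no execute rights. All names below are illustrative assumptions, not part of the paper's model.

```python
# Sketch of rule E4: the certifier alone edits (user, TP, CDI) relations,
# and the certifier itself may never execute a TP. Names are illustrative.

class Kernel:
    def __init__(self, certifier: str):
        self.certifier = certifier
        self.allowed = set()            # authorized (user, tp, cdi) triples

    def grant(self, agent, user, tp, cdi):
        if agent != self.certifier:     # E4: only the certifier changes lists
            raise PermissionError("only the certifier may change relations")
        self.allowed.add((user, tp, cdi))

    def execute(self, user, tp, cdi):
        if user == self.certifier:      # E4: certifier holds no execute rights
            raise PermissionError("certifier may not execute TPs")
        if (user, tp, cdi) not in self.allowed:
            raise PermissionError("triple not authorized")
        return f"{tp} run by {user} on {cdi}"
```

Because permission changes flow only through `grant`, and `grant` answers only to the certifier, the scheme is mandatory in the sense the text describes: no ordinary user can widen their own access, whatever execute rights they hold.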
Together, these nine rules define a system that enforces a consistent integrity policy. The rules are summarized in Figure 1, which shows the way the rules control the system operation. The figure shows a TP that takes certain CDIs as input and produces new versions of certain CDIs as output. These two sets of CDIs represent two successive valid states of the system. The figure also shows an IVP reading the collected CDIs in the system in order to verify the CDIs' validity. Associated with each part of the system is the rule (or rules) that governs it to ensure integrity.
Central to this model is the idea that there are two classes of rules: enforcement rules and certification rules. Enforcement rules correspond to the application-independent security functions, while certification rules permit the application-specific integrity definitions to be incorporated into the model. It is desirable to minimize certification rules, because the certification process is complex, prone to error, and must be repeated after each program change. In extending this model, therefore, an important research goal must be to shift as much of the security burden as possible from certification to enforcement.
For example, a common integrity constraint is that TPs are to be executed in a certain order. In this model (and in most systems of today), this idea can be captured only by storing control information in some CDI, and executing explicit program steps in each TP to test this information. The result of this style is that the desired policy is hidden within the program, rather than being stated as an explicit rule that the system can then enforce.
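The work-around the paragraph describes looks like the following sketch: the required sequence lives in a control CDI, and each TP opens with an explicit test of it. The step names are hypothetical.

```python
# Sketch of the style the text criticizes: TP ordering enforced by hand,
# via a control CDI that every TP must test and update. Names illustrative.

ORDER = ["open_batch", "post_entries", "close_batch"]

def run_tp(state: dict, tp_name: str) -> str:
    """Each TP begins with an explicit program step testing the control CDI."""
    expected = ORDER[state["next"]]
    if tp_name != expected:
        raise RuntimeError(f"out of order: expected {expected}, got {tp_name}")
    state["next"] += 1                  # advance the control CDI
    return tp_name

state = {"next": 0}                     # control information stored in a CDI
```

Note that the ordering policy exists only inside these program steps; the system itself never sees "open before post before close" as a rule it could enforce, which is precisely the shortcoming the text identifies.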
Other examples exist. Separation of duty might be enforced by analysis of sets of accessible CDIs for each user. We believe that further research on specific aspects of integrity policy would lead to a new generation of tools for integrity control.
OTHER MODELS OF INTEGRITY
Other attempts to model integrity have tried to follow more closely the structure for data security defined by Bell and LaPadula [BELL], the formal basis of the military security mechanisms. Biba [BIBA] defined an integrity model that is the inverse of the Bell and LaPadula model. His model states that data items exist at different levels of integrity, and that the system should prevent lower level data from contaminating higher level data. In particular, once a program reads lower level data, the system prevents that program from writing to (and thus contaminating) higher level data.
Our model has two levels of integrity: the lower level UDIs and the higher level CDIs. CDIs would be considered higher level because they can be verified using an IVP. In Biba's model, any conversion of a UDI to a CDI could be done only by a security officer or trusted process. This restriction is clearly unrealistic; data input is the most common system function, and should not be done by a mechanism essentially outside the security model. Our model permits the security officer to certify the method for integrity upgrade (in our terms, those TPs that take UDIs as input values), and thus recognizes the fundamental role of the TP (i.e., trusted process) in our model. More generally, Biba's model lacks any equivalent of rule E1 (CDIs changed only by authorized TP), and thus cannot provide the specific idea of constrained data.
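For contrast, Biba's strict-integrity discipline can be sketched in a few lines: a subject's effective level drops to that of the lowest data it has read, after which writes to higher-integrity data are blocked, with no certified upgrade path. The two-level lattice and the class are illustrative assumptions.

```python
# Sketch of Biba strict integrity, for contrast with the TP-based model:
# reading low-integrity data bars a subject from later writing high.
# Levels and class names are illustrative.

LOW, HIGH = 0, 1

class BibaSubject:
    def __init__(self):
        self.level = HIGH               # current effective integrity level

    def read(self, data_level: int):
        # Reading lower-integrity data drags the subject's level down.
        self.level = min(self.level, data_level)

    def write(self, data_level: int):
        if data_level > self.level:
            raise PermissionError("no write up: would contaminate higher data")
```

In this scheme the only escape is a trusted process outside the model, whereas the model in the text makes the upgrade path (a certified TP taking UDIs as input) an ordinary, auditable part of the system.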
Another attempt to describe integrity using the Bell and LaPadula model is Lipner [LIPNER]. He recognizes that the category facility of this model can be used to distinguish the general user from the systems programmer or the security officer. Lipner also recognizes that data should be manipulated only by certified (production) programs. In attempting to express this in terms of the lattice model, he is constrained to attach lists of users to programs and data separately, rather than attaching a list of programs to a data item. His model thus has no way to express our rule E1. By combining a lattice security model with the Biba integrity model, he more closely approximates the desired model,
Figure 1: Summary of System Integrity Rules

[Figure: a TP takes the CDIs of the system in some state as input and produces new versions of those CDIs as output, while an IVP reads the collected CDIs; users invoke TPs. Labels: C1: IVP validates CDI state; C2: TPs preserve valid state; C5: TPs validate UDIs; E1: CDIs changed only by authorized TP; E4: authorization lists.]
but still cannot effectively express the idea that data may be manipulated only by specified programs (rule E1).
Our integrity model is less related to the Bell and LaPadula model than it is to the models constructed in support of security certification of systems themselves. The iterative process we use to argue that TPs preserve integrity, which starts with a known valid state and then validates incremental modifications, is also the methodology often used to verify that a system, while executing, continues to meet its requirements for enforcing security. In this comparison, our CDIs would correspond to the data structures of the system, and the TPs to the system code. This comparison suggests that the certification tools developed for system security certification may be relevant for the certifications that must be performed on this model.
For example, if an Orange Book for industry were created, it also might have rating levels. Existing systems such as ACF/2, RACF, and CA-TopSecret would certainly be found wanting in comparison to the model. This model would suggest that, to receive higher ratings, these security systems must provide: better facilities for end-user authentication; segregation of duties within the security officer functions, such as the ability to segregate the person who adds and deletes users from those who write a user's rules, and restriction of the security function from user passwords; and much better rule capabilities to govern the execution of programs and transactions.
The commercial sector would be very interested in a model that would lead to and measure these kinds of changes. Further, for the commercial world, these changes would be much more valuable than taking existing operating systems and security packages to the B or A levels defined in the Orange Book.
CONCLUSION
With the publication of the Orange Book, a great deal of public and governmental interest has focused on the evaluation of computer systems for security. However, it has been difficult for the commercial sector to evaluate the relevance of the Orange Book criteria, because there is no clear articulation of the goals of commercial security. This paper has attempted to identify and describe one such goal, information integrity, a goal that is central to much of commercial data processing.

In using the words commercial and military in describing these models, we do not mean to imply that the commercial world has no use for control of disclosure, or that the military is not concerned with integrity. Indeed, much data processing within the military exactly matches commercial practices. However, taking the Orange Book as the most organized articulation of military concerns, there is a clear difference in priority between the two sectors. For the core of traditional commercial data processing, preservation of integrity is the central and critical goal.
This difference in priority has impeded the introduction of Orange Book mechanisms into the commercial sector. If the Orange Book mechanisms could enforce commercial integrity policies as well as those for military information control, the difference in priority would not matter, because the same system could be used for both. Regrettably, this paper argues there is not an effective overlap between the mechanisms needed for the two. The lattice model of Bell and LaPadula cannot directly express the idea that manipulation of data must be restricted to well-formed transformations, and that separation of duty must be based on control of subjects to these transformations.

The evaluation of RACF, ACF/2, and CA-TopSecret against the Orange Book criteria has made clear to the commercial sector that many of these criteria are not central to the security concerns in the commercial world. What is needed is a new set of criteria that would be more revealing with respect to integrity enforcement. This paper offers a first cut at such a set of criteria. We hope that we can stimulate further effort to refine and formalize an integrity model, with the eventual goal of providing better security systems and tools in the commercial sector.

There is no reason to believe that this effort would be irrelevant to military concerns. Indeed, incorporation of some form of integrity controls into the Orange Book might lead to systems that better meet the needs of both groups.
ACKNOWLEDGMENTS
The authors would like to thank Robert G. Andersen, Frank S. Smith, III, and Ralph S. Poore (Ernst & Whinney, Information Security Services) for their assistance in preparing this paper. We also thank Steve Lipner (Digital Equipment Corporation) and the referees of the paper for their very helpful comments.
REFERENCES

[DoD] Department of Defense, Trusted Computer System Evaluation Criteria, CSC-STD-001-83, Department of Defense Computer Security Center, Fort Meade, MD, August 1983.

[Bell] Bell, D. E. and L. J. LaPadula, "Secure Computer Systems," ESD-TR-73-278 (Vol. I-III) (also Mitre TR-2547), Mitre Corporation, Bedford, MA, April 1974.

[Biba] Biba, K. J., "Integrity Considerations for Secure Computer Systems," Mitre TR-3153, Mitre Corporation, Bedford, MA, April 1977.

[Lipner] Lipner, S. B., "Non-Discretionary Controls for Commercial Applications," Proceedings of the 1982 IEEE Symposium on Security and Privacy, Oakland, CA, April 1982.

[Schroeder] Schroeder, M. D. and J. H. Saltzer, "A Hardware Architecture for Implementing Protection Rings," Comm. ACM, Vol. 15, 3, March 1972.

[Whitmore] Whitmore, J. C. et al., "Design for Multics Security Enhancements," ESD-TR-74-176, Honeywell Information Systems, 1974.