
Expert Systems With Applications, Vol. 5, pp. 351-357, 1992. 0957-4174/92 $5.00 + .00. Printed in the USA. © 1992 Pergamon Press Ltd.

Expert Systems and Audit Planning: Evolving Research Issues

ANTHONY WENSLEY

University of Toronto, Toronto, Ontario, Canada

Abstract--This paper investigates some of the key research issues which arise when expert system methodology is used as an approach to developing models of domains as complex as auditing. It also discusses the types of research which can be conducted using expert systems methodology in auditing and argues that, although useful, the distinction between research and development may have been overworked. Central areas of concern and inquiry which are addressed in this paper are knowledge acquisition, knowledge representation, and, of particular concern to the author and his co-researchers at the present time, validation and verification.

1. INTRODUCTION

THIS PAPER INVESTIGATES some of the issues which have emerged as a result of extensive research into developing and applying expert systems methodology to the domain of audit planning. In addition to investigating the nature of the research project(s) which can be undertaken using expert systems, the author investigates knowledge acquisition, knowledge representation, and verification and validation.

These issues, and others, will be discussed in the following paper, with particular emphasis being placed on the experience that the author and his co-researcher, Efrim Boritz of the University of Waterloo, have gained from building an audit planning expert system, CAPEX, over the past seven years. CAPEX consists of an extensive knowledge base containing detailed knowledge of auditing and a dynamic model for processing this knowledge to generate audit program plans which attempt to satisfy a large number of audit objectives for the audit of a particular client company. Interested readers will find detailed descriptions of the system in Boritz and Wensley (1991a, b, c, 1989) and Wensley (1989).

2. THE STATUS OF EXPERT SYSTEM RESEARCH

Questions have been raised as to whether the construction of expert systems constitutes a valid research activity (see, for instance, O'Leary (1987)). However, to the extent that an expert system either implements a particular theory about (or model of) a particular domain or attempts to investigate the nature and structure of the problem-solving process used by a particular individual, or set of individuals, it may represent a valuable research contribution. In particular, the validity of a theory of the domain or of individual/group problem solving may be subject to a variety of tests using expert systems. Such tests may well lead to increased knowledge about either the domain or about problem solvers and would therefore constitute research activities.

Revised version of Wensley, A., Expert systems and audit planning: Evolving research issues, pp. 158-166, from Liebowitz: Expert Systems World Congress Proceedings, copyright 1991, with permission from Pergamon Press Ltd., Headington Hill Hall, Oxford, OX3 0BW, United Kingdom.

Requests for reprints should be sent to Anthony Wensley, Faculty of Management, University of Toronto, Toronto, Ontario, Canada M5S 1V4.

In the early stages of modelling using expert systems methodology, the principal concern of the researcher will typically be directed towards ensuring that the model is an adequate representation of reality. In later stages, further analysis of the model's performance may lead to new insights into the nature of reality and its constituents. Thus, modelling can help to refine some knowledge items and/or rules for combining/operating on particular entities. For instance, it has been suggested that the construction of the knowledge base for the expert system DENDRAL helped to identify a variety of different types of inconsistencies in experts' "knowledge" and subsequently led to the development of an algorithmic model to solve the problem for which the system was originally constructed (for a discussion of this issue, refer to Dreyfus and Dreyfus (1986)).

The nature of the research questions that can be addressed in constructing and experimenting with expert systems depends in part on the specifications that have been established for the system prior to its construction. Issues relating to the specification of expert systems have been discussed by the author elsewhere (Boritz & Wensley, 1991a, b). However, it is important to distinguish between two different types of specification:



• Type 1: An expert system exhibits expert "behaviour" without necessarily using expert process.

• Type 2: An expert system exhibits expert "behaviour" AND generates this behaviour through the use of expert process.

Obviously, to be useful, such specifications need to be "fleshed out" considerably. Such fleshing out should consider the types of inputs with which the system should deal, the process by which such inputs are converted into outputs, the standards by which behaviour should be judged, etc.

Failure to adequately discuss system specifications has led many critics to argue that the only knowledge expert systems can provide us with is knowledge about expert decision making; otherwise they are simply clever toys: simulators of behaviour, not models of behaviour. This would seem to be too simplistic a view of either the present status of expert systems research or the future promise of such research. As we have noted above, and will discuss further in the case of CAPEX, a variety of expert systems have moved away from modelling a particular expert's perception of a particular domain towards developing models of the domain which are more rooted in domain concepts and theories. Compare, if you like, an expert system which could predict the motion of bodies using Aristotelian expertise to one based on Newtonian mechanics. The first tells you a great deal about the way human beings see and interpret the world; the second tells you very little. From the standpoint of scientific research, it is clear which tells you more about mechanics. There are, of course, situations where there are no well-developed theories of a particular domain and reliance has to be placed on the perceptions of an expert or a limited number of experts. In these cases expert systems provide a research tool which can be used to investigate the nature of individual expertise and the structuring and combination of knowledge and, as indicated previously, may facilitate the development of models and testable theories of the domain.

In the context of auditing, expert systems can be used to address the following research questions:

• Is the present state of knowledge of the domain of auditing complete and consistent?

• Does the auditing knowledge which has been represented interact in expected ways? (This question is particularly interesting when a domain is particularly complex and multilayered, as in the case of audit planning.)

• How do different models of the auditing process compare with each other given similar initial inputs and assumptions?

The dialectic which is established during the construction of expert systems has a variety of important consequences. In the first place, it defines the context within which the research takes place and a set of values and common understandings which are necessary to provide sufficient richness to the system. It is also likely that the dialectic will lead to an alteration of the perceptions of both the expert system builder(s) and the knowledge providers. Such changes need to be investigated with the types of tools which are now available from anthropology and cognitive psychology. In addition, the dialectic provides the basis for limited verification and validation tests. These issues are discussed in the section entitled "Verification and Validation" and in more detail in Stamper (1984), Boritz and Wensley (1991a, b), and Wensley (1991a, 1990a, b).

In the context of auditing expert systems, the dialectic which is engaged in during construction of the expert system can provide an opportunity for the structuring and exchange of auditing knowledge. Although many firms make some aspects of their auditing knowledge explicit, it is often difficult to represent such knowledge in sufficient detail. The dialectic also results in the surfacing of assumptions and values which would otherwise remain implicit.

In addition to considering the construction of expert systems as a possible source of answers to research questions, their subsequent use also represents a fertile field of research. Relatively little research has been conducted concerning the impact of expert systems on either the nature of an individual's cognition and decision-making process or the nature and quality of the outputs of that process (though see Lamberti (1987) and Lamberti and Wallace (1990) for some interesting new work in this area). Some of the research questions which would be interesting to investigate here are:

1. To what extent does interaction with the system lead to changes in the user's perception of the domain?

2. Does interaction with the system improve the quality of the user's decision making?

3. Does interaction with the system improve the user's confidence in his/her decisions?

Since the overall objective in constructing many expert systems is expert augmentation, this type of research would seem to be of critical importance. One of the reasons for the lack of research in this area is that experiments which provide sufficiently rich results would be extremely time consuming (hence expensive) and exhausting for participants.

In the context of auditing, the above questions are becoming increasingly important to ask, since many large and small audit firms are placing increasing reliance on expert systems or, at least, relatively complex knowledge-based systems.

Another important contribution of expert systems research is in the development and testing of methodologies for verifying and validating expert systems in particular, and complex information systems in general. As more and more expert systems are used, it becomes necessary to develop sufficiently robust verification and validation methodologies to ensure that their use does not result in the multiplication of risks and/or the dramatic increase of exposures faced by their users and society in general. These issues are discussed in more detail in Wensley (1991b).

3. KNOWLEDGE ACQUISITION

Knowledge acquisition is a dialectical process. At the very least, expert system builder(s) and the expert(s) or knowledge providers attempt to construct a shared reality in the guise of an expert system. Thus, as Wensley (1989) notes, much so-called knowledge acquisition is, in fact, knowledge construction. The traditional metaphor of "mining knowledge" was probably never appropriate even for the simplest expert systems and is certainly not appropriate for the more complex systems. Once we enter into the world of knowledge construction, the notion of encoding an expert's understanding of a domain becomes increasingly vague.

In the domain of auditing there are a variety of tools available for providing an initial structure for knowledge construction. Many of these tools are paper based and rely on worksheets and matrices. In some cases, however, spreadsheets and other automated, interactive tools have been developed by firms as a means of eliciting and storing knowledge. With respect to the subdomain of audit planning, one of the structuring tools which is available is the so-called assertion-based approach. This approach may be conceptualized as a way of elaborating the nature of the detailed objectives which an external auditor seeks to achieve and a way of analyzing the nature of the evidence potentially provided by audit evidence-gathering procedures.

CAPEX, as a knowledge-based system, was built using knowledge obtained from a variety of different sources. One source of knowledge was expert auditors. However, the knowledge which they provided was not considered to be necessarily superior to that provided by other sources. CAPEX is, to put it simply, a tool for integrating and investigating the nature of audit knowledge. The acquisition of knowledge for CAPEX required the development of a variety of tools. Some of these tools required that the auditors in question perform considerable analysis. For instance, a matrix was developed which allowed for the mapping of evidential support from audit procedures to audit assertions. Another tool used for knowledge construction was the expert system itself. As the knowledge base became richer, it was possible to investigate the impact of different judgments concerning the strength of audit evidence.
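To make the idea of such a mapping concrete, the following is a minimal, hypothetical sketch in Python of a procedure-to-assertion support matrix of the kind described above. The procedure names, assertion labels, and strength values are invented for illustration; they are not CAPEX's actual knowledge base.

```python
# Hypothetical sketch of a procedure-to-assertion support matrix.
# Each cell records the maximum assurance (0.0-1.0) an auditor judged a
# procedure could contribute toward a financial statement assertion.
SUPPORT_MATRIX = {
    "confirm_receivables_existence": {"AR.existence": 0.6, "AR.valuation": 0.2},
    "confirm_receivables_amount":    {"AR.valuation": 0.5},
    "test_subsequent_cash_receipts": {"AR.existence": 0.3, "AR.valuation": 0.4},
}

def assertions_supported(procedure: str) -> dict:
    """Return the assertions a procedure bears on, with judged strengths."""
    return SUPPORT_MATRIX.get(procedure, {})

def procedures_supporting(assertion: str) -> list:
    """Return procedures judged to provide some evidence for an assertion."""
    return [p for p, cells in SUPPORT_MATRIX.items() if assertion in cells]

if __name__ == "__main__":
    print(assertions_supported("test_subsequent_cash_receipts"))
    print(procedures_supporting("AR.existence"))
```

A structure of this kind can be filled in jointly by the auditors and the knowledge engineers, and later queried in both directions during analysis.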

When an expert system itself is used as a tool for investigating the dynamic impact of knowledge items, some care must be taken. There is a danger that the expert will "fine tune" his/her judgments to obtain the desired behaviour from the system rather than provide judgments which most accurately represent his/her perceptions of reality. Problems may also arise in developing validation tests, since extensive interaction with the system during its construction may result in experts developing markedly different viewpoints of the domain to those who have not participated in its construction. This type of problem is discussed in more detail later in this paper and in Boritz and Wensley (1991a, d).

There is a need to conduct research into the appropriateness of various different tools which may be used for knowledge acquisition. Though considerable research effort has been directed towards investigating the reliability and validity of protocol elicitation and analysis, it is important that the reliability and validity of other knowledge acquisition tools be researched with equal thoroughness.

As experts and knowledge engineers work on extending and refining a knowledge base, learning often takes place. For instance, in a real sense CAPEX required that the knowledge providers learn more about the domain of audit planning. Some of this learning may be essentially "negative" in the sense that it offers the possibility of demonstrating that knowledge may not be structured or processed in a particular manner. In other cases, gaps or inconsistencies in knowledge may be identified.

4. KNOWLEDGE REPRESENTATION

Knowledge representation is intimately associated with knowledge acquisition. Knowledge representation may be considered to be a two-stage process. The first stage involves recording the acquired knowledge in some way so as to preserve the structure of the knowledge. The second stage involves mapping the recorded knowledge into a computable knowledge structure or set of structures.

CAPEX represents the limited "audit" world in terms of accounting objects, management assertions, and audit procedures. Accounting objects include financial statements and financial statement items such as accounts receivable. These accounting objects have a variety of characteristics including dollar value, variability, type of account, etc. Management assertions relate to each financial statement item and address, for example, its completeness and its valuation. Finally, audit procedures may involve substantive analysis or tests of details, provide information concerning the presence, or absence, of errors in a particular financial statement item, set of items, or the financial statements as a whole, and have a variety of other characteristics including cost, time taken to complete, reliability, and relevance.

One of the interesting problems which arose during the construction of CAPEX related to the characterization of such seemingly obvious objects as auditing procedures. In the end, it became necessary to relate audit procedures directly to the messages they provide. An audit procedure is thus defined as an activity which yields a single message. For example, the procedure which involves the positive confirmation of accounts receivable is actually at least two procedures: one which provides a message concerning the existence of accounts receivable and one concerning the dollar value of those accounts receivable. Though this issue may seem fairly trivial, it demonstrates the degree to which building expert systems of the complexity of CAPEX forces attention to be paid to a number of very detailed questions, the answers to which may be difficult, or in some cases impossible, to obtain.
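As an illustration of this style of representation, the following sketch shows one way accounting objects, assertions, and single-message audit procedures might be encoded. The field names, and the values used for the accounts receivable example, are assumptions made for illustration only; they are not CAPEX's actual schema.

```python
# Hypothetical sketch of the object / assertion / procedure representation
# discussed above; not CAPEX's actual schema.
from dataclasses import dataclass

@dataclass
class AccountingObject:
    name: str                 # e.g. a financial statement item
    dollar_value: float
    account_type: str

@dataclass
class Assertion:
    obj: AccountingObject     # the accounting object the assertion refers to
    kind: str                 # e.g. "existence", "completeness", "valuation"

@dataclass
class AuditProcedure:
    """An activity defined so that it yields exactly one message."""
    name: str
    target: Assertion         # the assertion its single message bears on
    cost: float
    reliability: float        # judged reliability of the message it yields

receivables = AccountingObject("accounts_receivable", 1_250_000.0, "asset")
ar_existence = Assertion(receivables, "existence")
ar_valuation = Assertion(receivables, "valuation")

# Positive confirmation of receivables becomes (at least) two procedures,
# one per message, as described in the text.
confirm_existence = AuditProcedure("confirm_AR_existence", ar_existence, 800.0, 0.7)
confirm_valuation = AuditProcedure("confirm_AR_valuation", ar_valuation, 800.0, 0.6)
```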

Broadly speaking, the interpretation of the meaning of the messages provided by audit evidence-gathering procedures relates to the notion of the relevance of the message generated by a procedure, while concerns with respect to the confidence that may be placed in such interpretations relate to the reliability of the message generated by a procedure. As noted above, work which is directed towards representing audit knowledge, and in particular knowledge concerning audit procedures, may well result in elaboration and refinement of that knowledge. This is particularly true when considering the reliability of the evidence which is generated by audit procedures.

Since the knowledge we have available to encode in a particular knowledge base is often incomplete, it is also important to consider how such concepts as ignorance, uncertainty, ambiguity, and vagueness should be represented. Since the overall objectives for CAPEX were stated in terms of specific audit risk targets, it was vitally necessary to select an appropriate formalism with which to represent a variety of different types of uncertainty associated with an external audit.

There are a variety of possible approaches to analyzing and representing the types of uncertainty which arise in audit planning. Traditionally, risk has been explicitly analyzed at a fairly high level. For instance, risk has often been defined as the uncertainty associated with an assertion of the form:

The set of financial statements does not contain any material errors.

It has often been the case that, rather than making reference to risk, the concept of assurance has been used. Thus, an external auditor is concerned to achieve a certain target level of assurance rather than a target level of risk. The choice of perspective is not trivial and may well lead to the types of framing problems which have been discussed by Kahneman and Tversky (1982). The impact of biases due to framing might alter assessments of the characteristics of audit evidence-gathering procedures and also potentially alter the way in which audit program plans are constructed.

Traditionally uncertainties associated with audit planning have been modelled as classical probabilities.

This has proved problematic for a number of reasons, the most important of which are the fact that it is unlikely that such probabilities can be known and that the classical approach provides no principled way of taking into account the revision of uncertainties following the receipt of new evidence. Bayesian approaches have gained some favour. The principal problems which face the use of these approaches relate on the one hand to the semantics of Bayesian uncertainty measures, and on the other to the problem of ensuring that somewhat stringent independence conditions obtain between different items of evidence. An additional problem of the Bayesian approach, which it shares with the classical approach, is that it is not possible to distinguish between uncertainty which arises from randomness and uncertainty which arises from lack of knowledge.

Recently, Shafer and Srivastava (1987) and Shafer, Shenoy, and Srivastava (1989) have proposed an approach to modelling uncertainties in audit planning using belief functions. Belief function approaches address some of the problems identified above but bring with them a number of equally difficult problems, not the least of which relate to the semantics of belief functions. This issue has been addressed by Shafer and Tversky (1985) but needs to be elaborated considerably to provide an adequate specification in the context of the uncertainties inherent in audit planning. The topic of uncertainty modelling in auditing is discussed extensively in Boritz and Wensley (1990).

In its present incarnation, CAPEX can represent uncertainties as either classical probabilities, subjective Bayesian probabilities, or Dempster-Shafer belief functions. Audit evidence-gathering procedures are considered to provide evidence which reduces the uncertainty associated with the truth of a variety of different types of assertions which are considered to be relevant to a variety of accounting objects.
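For readers unfamiliar with the belief-function option, the following is a minimal sketch of Dempster-Shafer combination over a two-element frame ("error" versus "no_error") for a single assertion. It is a generic textbook implementation with invented mass values, offered only to illustrate how evidence from two procedures might be pooled; it is not CAPEX's code.

```python
# Minimal Dempster-Shafer combination over a two-element frame for one
# assertion; generic implementation, illustrative mass values.
from itertools import product

FRAME = frozenset({"error", "no_error"})

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for mass functions keyed by frozensets."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def belief(m: dict, hypothesis: frozenset) -> float:
    """Belief = total mass committed to subsets of the hypothesis."""
    return sum(v for s, v in m.items() if s <= hypothesis)

# Evidence from an analytical review: some support for "no material error",
# the rest left uncommitted (ignorance) rather than assigned to "error".
m_analytics = {frozenset({"no_error"}): 0.4, FRAME: 0.6}
# Evidence from confirmations: stronger support for "no material error".
m_confirm = {frozenset({"no_error"}): 0.7, FRAME: 0.3}

m_total = combine(m_analytics, m_confirm)
print(belief(m_total, frozenset({"no_error"})))   # 0.82, an assurance-like measure
```

Unlike a single probability, the mass left on the whole frame keeps uncertainty due to ignorance distinct from evidence against the assertion, which is one of the attractions noted above.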

The encoding of a variety of different approaches to uncertainty modelling in CAPEX will allow the researchers to investigate the impact of the selection of one particular approach on the nature of audit plans, the process by which such plans are constructed, and the resources consumed.

5. PLANNING AND SIMULATION

CAPEX is an expert system designed to generate audit program plans. As such, it provides a medium for investigating aspects of planning problems in general and audit planning in particular. Planning involves the construction of sequences of actions which are expected to lead to the achievement of predefined goals. In order to develop plans it is necessary to be able to represent:

1. The state of the world in sufficient detail to be able to represent the impact of actions.

2. The characteristics of actions in sufficient detail to be able to identify unique actions to the user.


Audit program plans specify sets of information-gathering actions. The nature of these actions is investigated further in the following section.

5.1. Information Gathering Actions

In the context of audit planning, we may think of the actions which are being performed as a set of information-gathering actions. Such actions potentially provide information concerning possible states of the system being investigated. Given perfectly reliable information-gathering procedures which can be carried out at zero cost, it will always be possible to construct some set of information-gathering actions which, taken together, will provide as accurate a specification of the state of the system as is possible at minimum cost. However, procedures are neither perfectly reliable nor costless. Thus, we are plunged into a familiar world where heuristics are required to:

1. Determine the tradeoff between different characteristics, say, accuracy and reliability.

2. Reduce the complexity of the problem-solving domain.

One way of conceptualizing the construction of plans which consist of information-gathering actions is as a set of actions which constrain the possible states of the world which could possibly obtain. As more actions are performed, the set of possible states of the world becomes smaller and hence the likelihood of one possible state obtaining becomes higher. This way of looking at the problem of information gathering is useful because it tends to focus our attention on a number of important features:

1. In a world of uncertainty it is never possible to know, unambiguously, that a particular state of affairs obtains.

2. Our attention is directed towards collecting both information which tends to support the existence of a particular set of states of affairs and also information which tends to rule out other states of affairs.

Strategically, such an approach may not adequately reinforce the requirement to collect negative evidence with respect to the existence of a particular state of affairs--evidence which may be likened to the "failure of the dog to bark" in the Sherlock Holmes story "Silver Blaze." It is difficult to know whether this type of strategic injunction can be motivated purely through a particular way of conceptualizing the problem or if it is necessary to make it an explicit meta rule concerning the collection of evidence.

One central issue relates to the descriptions which are used to describe the possible states of the world. Indeed, these descriptions are vital since they must encompass all the relevant characteristics of the world which we might be interested in. If they do not so characterize the world, we will not be able to construct an appropriate plan. Consider, for example, a murder investigation. One would expect that the initial state of affairs could be represented by a set of statements, each of which expresses the guilt of one party or another. As information is collected it would bear differentially on this set of statements. This situation is particularly clear cut in the sense that the state(s) of the world which actually obtain are represented by either a single-member set or, at least, a set with relatively few members.

Consider a different example drawn from audit planning. With no information about a particular client, it is possible that all possible types of error occur in the client's financial statements or that no errors are actually present. We assume that we are absolutely in the dark as to the accuracy of the financial statements. In order to provide a suitable characterization of the state of affairs which obtains in this case, it is necessary to describe the state of affairs in terms of statements referring to the presence of all the different types of errors which can logically occur in the client's financial statements. One way of doing this is presented by the assertion-based approach to audit planning.
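A toy sketch of this idea follows: starting from complete ignorance, every (item, error type) statement is a live possibility, and information-gathering actions shrink that set. The items, error types, and the treatment of "ruling out" as a simple binary elimination are illustrative simplifications (the text treats this probabilistically); they are not part of CAPEX.

```python
# Toy sketch: enumerate every logically possible error statement, then let
# evidence-gathering actions narrow the set of possibilities.
from itertools import product as cartesian

ITEMS = ["accounts_receivable", "inventory", "sales"]
ERROR_TYPES = ["existence", "completeness", "valuation"]

# The initial state of ignorance: every (item, error type) statement may hold.
possible_errors = set(cartesian(ITEMS, ERROR_TYPES))

def apply_evidence(possible: set, ruled_out: set) -> set:
    """An information-gathering action rules out some error statements (to an
    acceptable level of assurance), narrowing the possible world states."""
    return possible - ruled_out

# e.g. confirmations judged sufficient to rule out existence errors in AR
possible_errors = apply_evidence(
    possible_errors, {("accounts_receivable", "existence")}
)
print(len(possible_errors))  # 8 of the original 9 statements remain open
```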

One of the central problems which we faced was to provide CAPEX with a heuristic, called the "evidential power" heuristic, to enable the selection of audit evidence-gathering procedures for inclusion in a particular audit program plan.

5.2. Evidential Power

CAPEX is concerned with differentiating between a number of information-gathering procedures. One could conceive of a complex function being built to determine the expected costs associated with different types of failures to detect errors and then determining the reduction in the expected losses which would accrue as a result of receiving information from a particular audit evidence-gathering procedure. It would then be possible, in theory, to identify the most cost-effective way of constructing a set of audit evidence-gathering procedures to achieve the predefined audit objectives. Clearly this approach requires that a large body of assessments be obtained from expert users which they may well be unable to provide. In the initial implementation of CAPEX we chose a somewhat less information-intensive approach by developing a heuristic based on evidential power.

This heuristic attempts to determine the cost effectiveness of audit evidence-gathering procedures. Essentially, it attempts to assess the total quantity of evidence provided by a particular audit evidence-gathering procedure:

Evidential power = (sum of maximum derivable assurances) / cost.   (1.1)


In this formulation, the maximum derivable assurances measure the maximum extent to which the uncertainty associated with each relevant financial statement item assertion can be reduced. One of the drawbacks of the simplistic evidential power heuristic is that it does not take into account whether a particular procedure is providing evidential support which will actually be needed in the future to achieve the predefined objectives associated with a particular audit. A modified form of the evidential power heuristic would make this type of comparison:

Evidential power = (sum of useable assurances) / cost.   (1.2)

There are obviously problems with refining the evidential power heuristic in this way. One of these problems is that the more sophisticated versions of the heuristic are computationally more demanding. They are, of course, less demanding than exhaustive search strategies but, nonetheless, given the complexity of the audit planning problem they increase very considerably the time taken to construct an audit program plan.
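To show how a heuristic of this kind could drive plan construction, here is a minimal greedy selection sketch based on Eqs. (1.1) and (1.2). The data structures, the way "useable" assurance is capped at what is still needed, and the simple subtraction of assurances are assumptions made for illustration; they are not a description of CAPEX's actual planner.

```python
# Greedy plan construction driven by an evidential-power-style heuristic.
def evidential_power(procedure, targets=None):
    """Eq. (1.1) when targets is None (all derivable assurance counts);
    Eq. (1.2) when targets caps assurance at what is still needed."""
    total = 0.0
    for assertion, max_assurance in procedure["assurances"].items():
        if targets is None:
            total += max_assurance                                    # maximum derivable
        else:
            total += min(max_assurance, targets.get(assertion, 0.0))  # useable
    return total / procedure["cost"]

def build_plan(procedures, targets):
    """Greedily add procedures until every target assurance level is met
    or no remaining procedure contributes useable evidence."""
    remaining = dict(targets)
    plan = []
    candidates = list(procedures)
    while any(v > 1e-9 for v in remaining.values()) and candidates:
        best = max(candidates, key=lambda p: evidential_power(p, remaining))
        if evidential_power(best, remaining) <= 0.0:
            break                                   # nothing useable left
        plan.append(best["name"])
        candidates.remove(best)
        for assertion, amt in best["assurances"].items():
            if assertion in remaining:
                remaining[assertion] = max(0.0, remaining[assertion] - amt)
    return plan, remaining

procedures = [
    {"name": "confirm_AR", "cost": 800.0,
     "assurances": {"AR.existence": 0.6, "AR.valuation": 0.3}},
    {"name": "analytical_review", "cost": 200.0,
     "assurances": {"AR.existence": 0.2, "AR.valuation": 0.2}},
]
print(build_plan(procedures, {"AR.existence": 0.7, "AR.valuation": 0.4}))
```

The greedy loop avoids any search over combinations of procedures, which is precisely the limitation discussed next.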

It is unlikely that the present strategy of using a heuristic to essentially avoid a search can be sustained. At present the computational overhead involved in even a very restricted search is excessive. It is hoped that future versions of CAPEX will allow both the use of more sophisticated planning heuristics and greater search.

6. VERIFICATION AND VALIDATION

From relatively humble beginnings, research into the verification and validation of expert systems has burgeoned. There is often some confusion as to the precise meaning of the terms. As a first cut, let us consider the following definitions:

Verification: Does the knowledge base meet the specifications which were developed for it, and does it have relevant characteristics such as consistency, completeness, etc.?

Validation: Does the system perform as required?

One of the key issues concerning the verification of expert systems relates to the extent to which the behaviour of the system has been specified. Typically, both user requirements and system specifications are either not stated or stated very vaguely. From a research standpoint, a number of related issues need to be investigated:

• How should system specifications and user requirements be developed for expert systems?

• Given that system specifications and user requirements have been adequately developed, what types of verification and validation tests are appropriate?

O'Leary (1986, 1987, 1988) was one of the first researchers to recognize the importance of research which addressed the verification and validation of expert systems. Unfortunately, there is insufficient space available in this paper to investigate general theoretical and practical aspects of this important topic further. The interested reader is directed towards O'Leary's papers, Boritz and Wensley (1991a), and Wensley (1990a, b, 1989). However, specific issues with respect to validating CAPEX will be discussed in the following paragraphs.
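As a concrete, if simplified, illustration of the verification side of this distinction, the following sketch checks a toy procedure-to-assertion knowledge base for two of the properties mentioned above: consistency (no procedure refers to an undefined assertion) and completeness (every assertion is supported by at least one procedure). The toy knowledge base, including its deliberate misspelling, is invented for illustration and is not CAPEX's verification suite.

```python
# Toy verification checks over a procedure-to-assertion knowledge base.
ASSERTIONS = {"AR.existence", "AR.valuation", "AR.completeness"}

PROCEDURE_SUPPORT = {
    "confirm_AR": {"AR.existence", "AR.valuation"},
    "cutoff_tests": {"AR.completeness"},
    "vouch_invoices": {"AR.existance"},   # deliberate misspelling to trip the check
}

def check_consistency(support: dict, assertions: set) -> list:
    """Flag procedures that refer to assertions the knowledge base never defines."""
    return [(proc, a) for proc, targets in support.items()
            for a in targets if a not in assertions]

def check_completeness(support: dict, assertions: set) -> list:
    """Flag assertions for which no procedure provides any evidence."""
    covered = set().union(*support.values())
    return sorted(assertions - covered)

print(check_consistency(PROCEDURE_SUPPORT, ASSERTIONS))  # [('vouch_invoices', 'AR.existance')]
print(check_completeness(PROCEDURE_SUPPORT, ASSERTIONS)) # [] (every assertion covered here)
```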

In many traditional planning systems which have been addressed in the artificial intelligence literature, it has been relatively straightforward to determine whether a plan achieves a particular goal, or set of goals. Thus, one approach to validation has been simply to carry out the sequence of actions which have been specified and see if the desired state is achieved.

In the case of CAPEX, the planning situation is more complex since it is not possible to objectively determine whether the goal state has been achieved. Generally speaking, the only way we can know whether the goal state has been achieved is by consulting knowledgeable auditors. There is no independent way of determining the effect of the evidence provided by audit evidence-gathering procedures on reducing the uncertainties faced by the auditor. If we add to this the fact that the set of all possible combinations of actions which can possibly result in the achievement of any desired state is very large, if not infinite, both the validation and verification stages become very difficult.

Considerable attention was directed towards developing case materials for validating CAPEX which elicited differential behaviours from individuals with different levels of expertise. Since audit program plans and the objectives they are designed to achieve are complex, it was necessary to develop and pretest an extensive assessment questionnaire. Finally, one of the interesting aspects of the validation tests which were performed on CAPEX was that its performance was assessed by independent experts both absolutely and relative to plans which had been developed by the writers of the validation case material.

7. CONCLUSION

It has been argued that, contrary to the opinion of some critics, expert systems are suitable instruments for conducting a variety of different types of research projects, both generally and in the field of auditing. The types of research questions which can be answered depend, in part, on the initial specifications which have been established for the expert system. On the one hand, if the principal intention of the researcher is to model an expert or a group of experts, it may be possible to provide answers to questions relating to how experts solve particular problems. On the other hand, if the intention is to model the domain in question, it may be possible to provide answers relating to questions about the nature of the domain itself.


The construction and refinement of expert systems requires considerable interaction to take place between the system and the various individuals involved in its construction and refinement. This interaction can provide answers to questions relating to the nature of such interactions and the effect of these interactions on individuals' perceptions of other individuals or of the domain itself.

To the extent that knowledge of any domain is incomplete, as it certainly is in the case of audit planning, it has been argued that it becomes necessary to analyse and choose an appropriate formalism for representing and processing uncertainties. Such an analysis may provide insights both into the domain of audit planning in particular and into the behaviour of uncertainty modelling formalisms in general.

Finally, the domain of audit planning provides a rich and complex set of knowledge-based problems which can stimulate research concerning all stages of the development of expert systems from knowledge acquisition to verification and validation.

REFERENCES

Boritz, J.E., & Wensley, A.K.P. (1991a). Validating expert systems with complex outputs: The case of audit planning. Unpublished manuscript, School of Accountancy, University of Waterloo, Waterloo, Ontario, Canada.

Boritz, J.E., & Wensley, A.K.P. (1991b). Structuring the assessment of audit evidence. Auditing: A Journal of Practice and Theory, 9(Suppl.), 49-87.

Boritz, J.E., & Wensley, A.K.P. (1991c). CAPEX: An expert systems approach to substantive audit planning. Expert Systems With Applications, 3, 27-49.

Boritz, J.E., & Wensley, A.K.P. (1991d). Validating complex information systems: An expert systems perspective. Unpublished manuscript, Faculty of Management, University of Toronto, Toronto, Ontario, Canada.

Boritz, J.E., & Wensley, A.K.P. (1990). Evidence, uncertainty modelling and audit risk. Unpublished manuscript, School of Accountancy, University of Waterloo, Waterloo, Ontario, Canada.

Boritz, J.E., & Wensley, A.K.P. (1989). CAPEX technical manual 1.0. Unpublished manuscript, School of Accountancy, University of Waterloo, Waterloo, Ontario, Canada.

Dreyfus, H., & Dreyfus, S. (1986). Mind over machine: The power of human intuition and expertise in the era of the computer. New York: Free Press.

Kahneman, D., & Tversky, A. (1982). In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.

Lamberti, D.M. (1987). Impact of abstract versus concrete information presentation in expert systems on the expert-novice user performance. Unpublished doctoral dissertation, Rensselaer Polytechnic Institute.

Lamberti, D.M., & Wallace, W.A. (1990). Intelligent interface design: An empirical assessment of knowledge presentation in expert systems. MIS Quarterly, 14(3), 271-311.

O'Leary, D. (1988). Methods of validating expert systems. Interfaces, 18(6), 72-79.

O'Leary, D. (1987). Validation of expert systems. Decision Sciences, 18(3), 468-486.

O'Leary, D. (1986). Validation of business expert systems. Paper presented at the USC Symposium of Expert Systems and Audit Judgements, School of Accountancy, University of Southern California.

Shafer, G., & Srivastava, R. (1987). The Bayesian and belief-function formalisms I: A general perspective of auditing. Paper presented at the University of Waterloo Symposium on Research on Auditing, School of Accountancy, University of Waterloo, Ontario, Canada.

Shafer, G., & Tversky, A. (1985). Languages and designs for probability judgment. Cognitive Science, 9, 309-339.

Shafer, G., Shenoy, P.P., & Srivastava, R. (1989). Audit risk: A belief function approach. Working Paper No. 212, School of Business, University of Kansas.

Stamper, R. (1984). Management epistemology: Garbage in, garbage out. Paper presented at the IFIP Working Group Meeting 8.4, Durham, UK.

Wensley, A.K.P. (1991a). Developing specifications for expert systems--Implications for validation and verification. Unpublished paper, Faculty of Management, University of Toronto, Ontario, Canada.

Wensley, A.K.P. (1991b). Software reliability: Thoughts about the nature of failure. Unpublished paper, Faculty of Management, University of Toronto, Ontario, Canada.

Wensley, A.K.P. (1990a). Expert systems validation: Some issues. Unpublished paper, Faculty of Management, University of Toronto, Ontario, Canada.

Wensley, A.K.P. (1990b). Validating complex audit planning systems: Instruments, experiments and results. Unpublished paper, Faculty of Management, University of Toronto, Toronto, Ontario, Canada.

Wensley, A.K.P. (1989). The feasibility of developing an assertion-based approach to audit planning using expert systems methodology. Unpublished doctoral dissertation, Department of Management Science, University of Waterloo, Ontario, Canada.