
    Journal of Automatic Chemistry, Vol. 17, No. 1 (January-February 1995), pp. 1-15
    © 1995 The exclusive copyright in any language and country is vested in the International Federation of Clinical Chemistry (IFCC).

    Use of artificial intelligence in analytical systems for the clinical laboratory

    John F. Place, Alain Truchaud, Kyoichi Ozawa, Harry Pardue and Paul Schnipelsky
    IFCC Committee on Analytical Systems: Chairman A. Truchaud
    Correspondence to John F. Place, DAKO A/S, Produktionsvej 42, 2600 Glostrup/Copenhagen, Denmark.

    The incorporation of information-processing technology into analytical systems in the form of standard computing software has recently been advanced by the introduction of artificial intelligence (AI), both as expert systems and as neural networks.

    This paper considers the role of software in system operation, control and automation, and attempts to define intelligence. AI is characterized by its ability to deal with incomplete and imprecise information and to accumulate knowledge. Expert systems, building on standard computing techniques, depend heavily on the domain experts and knowledge engineers that have programmed them to represent the real world. Neural networks are intended to emulate the pattern-recognition and parallel processing capabilities of the human brain and are taught rather than programmed. The future may lie in a combination of the recognition ability of the neural network and the rationalization capability of the expert system. In the second part of the paper, examples are given of applications of AI in stand-alone systems for knowledge engineering and medical diagnosis and in embedded systems for failure detection, image analysis, user interfacing, natural language processing, robotics and machine learning, as related to clinical laboratories.

    It is concluded that AI constitutes a collective form of intellectual property, and that there is a need for better documentation, evaluation and regulation of the systems already being used in clinical laboratories.

    THEORY

    Introduction

    Automation was introduced fairly late in the field of clinical chemistry. In the 1930s, increasing use was made of simple instruments based on classical principles of analytical chemistry. Instrumental analysis became more important because of a switch to physical and physico-chemical measurement approaches. The latter methods were often more sensitive and could be used to quantify smaller amounts of analytes; they were frequently simpler and faster than more conventional methods [1]. Spectrometers, flame spectrometers, instrumentation using electrochemical electrodes and instrumental separation methods (centrifugation, electrophoresis) were particularly important in this respect.

    Until 1950 it was still common for clinical chemists to do determinations manually, using instruments for the measurement step only. The mechanization of sample-processing functions was greatly influenced by the segmented-flow system, starting with the auto-analysers of Skeggs [2] that became commercially available at the end of the 1950s. This process has continued through many phases, with the most recent advances being in the area of robotics [3].

    Mechanization and automation of most of the functional components of analytical systems [4] has allowed clinical chemistry to grow and develop into a systems approach in which all aspects of clinical determinations have become integrated. Software, which was originally used primarily to sequence the mechanical processes and process raw measurement data, has become so sophisticated and generally available that it can now be used to monitor and control processes and to interpret information. This development has made it possible to incorporate intelligence into advanced analytical instrumentation. This form of machine intelligence is also called Artificial Intelligence (AI). An attempt is made later in this paper to define intelligence, and examples of intelligent systems will be given.

    AI is being used increasingly in the clinical laboratory, both in the form of stand-alone expert systems for clinical decision making and as knowledge-based systems embedded within laboratory instrumentation. In addition to AI systems that can be implemented in standard computers, progress is being made in the development of software architectures that mimic human intelligence. Research in this area is leading to a better understanding of human (natural) intelligence. This approach is so different that it is sometimes termed 'naturally intelligent' to distinguish it from other artificial intelligence approaches [5]. In this paper no such distinction will be made and all machine intelligence will be described as artificial.

    Despite these advances, or perhaps because of them, the general level of understanding of AI is very low; users are not aware of what AI really is and what it can or cannot do. One of the main reasons for this is that the terminology is confusing and even ambiguous, and that the concept of AI for many users has overtones of science fiction. The aim of this paper is to present the concepts and potential applications of AI in a format that is more easily understood by clinical chemists.

    Automation

    The earliest forms of automatic machines had mechanical forms of feedback and were capable of repeated action: an example is a mechanical clock. Technically, this type of system is called an automaton. With the development of convenient sources of power, automation was expanded during the Industrial Revolution.

    The most significant advances that have facilitated the evolution of automation in more recent years include the development of electronics and digital computing, advances in software storage capacity and in sensor technology, and the derivation of theories of control systems including AI.

    System operation and control

    Before trying to define intelligence and the position of AI in the clinical laboratory, and in order to place AI in a wider perspective, it is useful to look at the higher level functional processes of system operation and control in clinical laboratory automation. These are illustrated in figures 1 to 4.

    Figure 1 shows the functional elements of an instrumental component such as the incubation block of a clinical instrument. In this example the electrical elements in the incubation block (the effectuators) produce heat and warm the block (output). Heat dissipates through the block (feedback) and the temperature of the block (input) is measured by a built-in thermocouple and compared with a set standard. When the temperature rises above a set value, power to the heating element is switched off. The block cools by losing heat to its surroundings. When the temperature falls below a set value, power is applied to the heater again until the measured input value again rises above the set value. This sort of feedback system is a central idea in cybernetics, defined by the American mathematician Norbert Wiener in 1947 as 'the science of communication in the animal and in the machine'.

    Figure 1. Functional elements of an instrumental component (input, measurement, comparison with a standard, effectuator, feedback, output).

    In figure 2 the elements from input to output are summarized in the single function 'process' and feedback is generalized into the wider term 'control'. Any form of automation or decision-making system can be regarded very simply as an input giving rise directly or indirectly via a process system to an output, which then gives rise to some feedback or control.

    Figure 2. The control loop in process control.
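
    The on/off control cycle just described can be condensed into a short sketch. The following Python fragment is purely illustrative: the set value, heating and cooling rates are invented numbers for the example and are not parameters of any instrument discussed in this paper.

```python
# Illustrative sketch of the on/off feedback loop described above.
# The set value and the heating/cooling rates are invented for the example.

def control_step(temperature, heater_on, set_value=37.0):
    """Compare the measured input with the set value and switch the effectuator."""
    if temperature > set_value:
        heater_on = False          # power to the heating element is switched off
    elif temperature < set_value:
        heater_on = True           # power is applied to the heater again
    return heater_on

def simulate(steps=20):
    temperature, heater_on = 25.0, False
    for _ in range(steps):
        heater_on = control_step(temperature, heater_on)
        # crude physical model: the block warms when heated, loses heat otherwise
        temperature += 1.5 if heater_on else -0.5
        print(f"T = {temperature:4.1f} degC, heater {'on' if heater_on else 'off'}")

if __name__ == "__main__":
    simulate()
```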

    Figure 3. The man-machine interface and process control.

    Figure 3 introduces the concept of decision as a mainly human control function and feedback as a mainly machine control function. As in figure 2, the input is on the left-hand side of the figure and the output on the right-hand side. This figure also illustrates the difference between a system that is embedded within an instrument (bottom half of the figure) and the open or stand-alone system (total figure). In the stand-alone system, direct and interactive human control is possible as output is displayed for human evaluation. In the embedded system, cycles of processes are evaluated automatically by intelligent controllers below the surface of the man-machine interface.

    Figure 4. Classification of simple instruments into four categories: meter, computer, automatic tool and programmable tool.

    On the basis of whether the input is from man (for example via a keyboard) or machine (by sensor and associated circuitry) and whether output is to man (for example as a display) or to machine (by effectuator), one can develop a basic classification of simple instruments. This is illustrated in figure 4. In a meter, the machine (sensor) input provides data that are processed by software and relayed as information to a display, while in a personal computer it is the human operator (for example via a keyboard) that provides input to the software. Effectuators (for example a motor or some other form of tool) can be monitored automatically by a sensor system (for example in robotics), or programmed via a keyboard by a human operator.

    Figure 4 also shows that software is the central feature of processing and control in automation. This is of course a grossly simplified illustration. Real instrumentation is very complex, integrating many such simple units into a complex whole.

    Intelligence and learning mechanisms

    With the introduction of programmable software it became possible to introduce great flexibility into the system, both in the operational processes and in the feedback loop. As more flexibility and higher level control is introduced into the system it can acquire the capacity to become 'intelligent'.

    However, it is not always possible or appropriate to introduce machine intelligence into system operation and control. When the decision sequence is clear, an algorithm can be defined, and a conventional software programme may be more appropriate.

    Intelligence is very difficult to define. It has something to do with the faculty of understanding and the capacity to know or to learn, and much of the basic work on AI is to do with the mechanisms of learning. The major advantage of AI is that it allows learning via the accumulation of knowledge.

    However, learning includes a large proportion of insight, characterized by good classification of the stimulus and 'silent' trial and error to reinforce the response.

    Knowledge engineering

    In an attempt to get away from science fiction connotations, AI technologies are now commonly termed 'knowledge engineering' and the intelligent computer software that embodies knowledge is called the 'expert system'. Because it has been rather difficult to develop practical applications of automatic learning, expert systems often do not include the ability to learn by themselves. Nevertheless, such expert systems are able to make decisions based on the accumulated knowledge with which they are programmed and are therefore commonly included within the definition of AI systems. An excellent introduction to expert systems is given by Bartels and Weber [6], and Wulkan [7] provides a recent review of the state of the art in relation to clinical chemistry.

    Expert systems possess high-quality, specific knowledge about some problem area and build upon the problem-solving capabilities of human experts. Human expertise is characterized by the fact that it is perishable, difficult to transfer and to document, hard to predict and expensive to obtain, and one of the main aims of expert systems is to overcome these frailties by acquiring, representing, documenting and preserving expertise in a form that then becomes widely available at lower cost. The very process of realizing and documenting this human expertise may well improve upon it, and in some cases expert systems use exhaustive or complex reasoning that individual human experts would be incapable of using. However, the expert system, like its makers, is not necessarily infallible.

    Expert systems, like their human counterparts, should provide a high level of skill with adequate robustness. Unlike most conventional programs, expert systems usually have the ability to work with represented and uncertain knowledge and the ability to handle very difficult problems using complex rules. On the other hand, if the problem requires creative invention, it is really not possible to develop an expert system to solve it. Similarly, if the solution requires a significant input of general broad common-sense knowledge, the expert system may not be able to deal with the problem in a satisfactory way. Differences between conventional software programs and expert systems are listed in table 1. Some of the terms used in table 1 are discussed below under the section 'the knowledge base'.

    The distinction between a conventional software program and an expert system may not be as clear as table 1 might suggest. Not all existing expert systems are able to deal with uncertainty, and some operational expert systems use conventional Bayesian probabilistic methods to deal with uncertainty.

    Table 1. Conventional programs and expert systems compared.

    Conventional programs: represent and use data; algorithmic; iterate repetitively and exhaustively; use a decision sequence; must have all decision rules; stop if data missing; derive one solution consistently; use numbers and equations; present a result with confidence limits; appropriate for known, accepted ways to solve a problem, especially with only one solution.

    Expert systems: represent and use knowledge; heuristic; use inferential processes; may have numerous ways to reach conclusion; allow for missing information; derive best decisions possible; can also use concepts; provide uncertain results; rank results by likelihood; appropriate for imprecise information and conceptual problems.

    Figure 5. Elements of an expert system (knowledge engineer, domain expert, user, knowledge base, inference engine).

    A complex AI system may also contain a series of conventional programs embedded within it.

    An expert system consists of at least four structural components (figure 5): the knowledge base that contains the expert knowledge (the facts), the rules and the heuristics, i.e. experience and as much of the intuition as can be represented; the inference engine that contains the reasoning and problem-solving strategies; the working memory that holds data about the particular situation the system is evaluating and records the steps of the reasoning sequence; and the user interface by which the user (or another expert system, or the main memory of an information system) can communicate with the expert system. An expert system without the knowledge base is called a 'shell'.
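
    These four components can be pictured in a few lines of Python. The sketch below is purely illustrative: the fact, the single rule and its certainty factor are invented for the example and are not drawn from any system described in this paper.

```python
# Minimal sketch of the four structural components of an expert system.
# The fact and the single rule (with its certainty factor) are invented.

knowledge_base = {
    "facts": {"specimen_is_haemolysed": True},
    "rules": [
        # premise -> conclusion (with a heuristic certainty factor)
        ({"specimen_is_haemolysed": True}, ("potassium_result_unreliable", 0.8)),
    ],
}

working_memory = {"session": []}          # holds the data of the current case

def inference_engine(case):
    """Apply every rule whose premise matches the case and record the steps."""
    conclusions = []
    for premise, conclusion in knowledge_base["rules"]:
        if all(case.get(k) == v for k, v in premise.items()):
            conclusions.append(conclusion)
            working_memory["session"].append((premise, conclusion))
    return conclusions

def user_interface():
    case = dict(knowledge_base["facts"])  # in practice the user would enter this
    for statement, certainty in inference_engine(case):
        print(f"{statement} (certainty {certainty})")

if __name__ == "__main__":
    user_interface()
```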

    The expert system may have additional modules, for example one that can provide the user with an explanation of how and why it arrived at the conclusion, or a learning module with evaluation and feedback systems that can update data and even the rules of the expert system.

    There are two components in expert systems that the reader may not be familiar with: the knowledge base and the inference engine. These are described in the following sections.

    The knowledge base

    The knowledge base is developed by the knowledge engineers who query the domain experts. An important distinction is made between data (which consist of numerical representations, equations and algorithmic procedures) and information (which encompasses facts, concepts, ideas, previous experience and rules of thumb). The knowledge engineer feeds the system base with details from manuals and test procedures etc. (facts) and interviews the domain experts to learn how and when they reach a decision (rules) and to define the way experience and intuition play a role (heuristics).

    Table 2. Representations used in MYCIN.

    ’Definitely not’ 1-0’Almost certainly not’ -0"8Probably not’ 0"6’No evidence’ -0"2 to 0"2’Slight evidence’ 0"3’Probably’ 0"6’Almost certain’ 0"8’Definite’ 1"0

    The way in which knowledge engineers interpret real-world knowledge in terms of computer data structures is a critical component of an AI system and is called 'knowledge representation'.

    A major characteristic of expert systems is their ability to accept uncertain data and to work with concepts. There are various sources of this uncertainty of information. A variety of techniques can be applied to allow uncertain facts and imprecise knowledge to be included and to give the software the ability to work with both quantitative and qualitative information.

    Uncertainty of facts arises, for example, in sampling a population. The uncertainty of sampling is a known statistical quantity and can be dealt with using classical statistical methods.

    Uncertainty also arises in the reasoning processes, in which the same data can give rise to several alternative conclusions, or in vagueness of description, so that different observers will classify their observations in different ways, giving rise to 'fuzzy sets'. Fuzzy sets are a particular branch of mathematics founded in 1965 by Lotfi A. Zadeh, now Professor of Computer Sciences at the University of California at Berkeley. Fuzzy logic is so important that the Japanese Industry Ministry, together with the Education Ministry and private industry, have set up a Fuzzy Logic Institute in Fukuoka Province.

    Early expert systems used certainty factors to deal with this sort of uncertain information. MYCIN [8], one of the earliest expert systems in the clinical disciplines and originally developed to recommend antibiotic therapy for bacterial infections of the blood, used the representations shown in table 2, with certainty factors scaled from -1.0 to +1.0.
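
    The qualitative terms of table 2 map onto numerical certainty factors. The sketch below shows that mapping and combines two factors supporting the same conclusion using the classical MYCIN combination rule; that rule is not spelled out in this paper and is included here only as an illustration.

```python
# Certainty-factor sketch based on table 2. The combination rule is the
# classical MYCIN rule and is shown here purely for illustration.

TERMS = {
    "definitely not": -1.0, "almost certainly not": -0.8, "probably not": -0.6,
    "no evidence": 0.0, "slight evidence": 0.3, "probably": 0.6,
    "almost certain": 0.8, "definite": 1.0,
}

def combine(cf1, cf2):
    """Combine two certainty factors bearing on the same conclusion."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# two independent pieces of supporting evidence reinforce each other
print(combine(TERMS["probably"], TERMS["slight evidence"]))   # 0.72
```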

    Expert systems can deal with concepts by defining them using attributes and then representing them in the form of symbols, frames, rules etc. For example, in symbolic processing the symbol can be a string of characters (for example words describing the concept) or a bit pattern (for example an icon on a computer screen). These symbols can then be manipulated by matching them, assembling sets etc. and subjected to straightforward logical decisions and heuristic operations. For the reasoning capacity itself, it is immaterial what the knowledge representation instance (symbol, frame, rule etc.) stands for and hence what the concept actually is.

    Knowledge representation

    Several basic knowledge representation techniques are known: these are listed in table 3.

    Table 3. Knowledge representation techniques.

    Fact lists
      - no uncertainty
    Symbols
      - representing defined concepts
      - uncertainty included in the definition
    Production rules
      - premise + action part
      - certainty factor given to conclusion
    Structured objects and object-oriented structures
      - descriptive statements
      - symbolic, used in morphology and anatomy
    Associative networks
      - entities (nodes)
      - relationship between entities (arcs)
    Frames
      - complex nodes allowing data entry
      - contain slots with attributes
      - organized in a class hierarchy

    Fact lists are the simplest form of knowledge representation. They are used to list data and equations, and the uncertainty attached to the facts can be precisely defined. Symbols are used to define items or concepts. As described above, their uncertainty lies in the definition itself and not in the symbol.

    Production rules consist of a set of preconditions (the premise) and an action part that leads to a conclusion (for example, see O'Connor and McKinney [9]). If the premise is true, the action part is justified and a certainty factor is given to the conclusion.

    Structured objects are used, for example by Bartels et al. [10], in morphological or anatomical representations that are best dealt with in the form of descriptive, declarative statements. In associative (or semantic) networks both the entities themselves and the relations between the entities are represented, the former by nodes and the latter by arcs connecting these nodes.

    Frames are analogous to complex nodes in a network (for example Du Bois et al. [11]). They allow a more precise and comprehensive description of an entity. In a frame, a data object has slots that contain attributes of the object, and a slot can activate data-gathering procedures if the information it contains is incomplete. Frames are usually organized in a class hierarchy, with information in higher classes inherited by lower classes. Each problem requires its own knowledge representation and there seems to be no consensus concerning the use of the various knowledge representation techniques. Hybrid techniques abound. An example of a hybrid frame and rule architecture is given by Sielaff et al. [12] in ESPRE, a knowledge-based system to support decisions on platelet transfusion.
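
    A frame of this kind can be sketched in a few lines of Python: slots holding attributes, inheritance from a higher class, and a data-gathering procedure triggered when a slot is empty. The frame names and slot values below are invented purely for illustration.

```python
# Illustrative frame sketch: slots with attributes, a class hierarchy with
# inheritance, and a slot that triggers data gathering when it is empty.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, dict(slots)

    def get(self, slot):
        if slot in self.slots and self.slots[slot] is not None:
            return self.slots[slot]
        if self.parent is not None:                  # inherit from the higher class
            return self.parent.get(slot)
        value = input(f"value for '{slot}' of {self.name}? ")  # gather missing data
        self.slots[slot] = value
        return value

cell = Frame("cell", nucleus=True)
lymphocyte = Frame("lymphocyte", parent=cell, diameter_um=8)

print(lymphocyte.get("nucleus"))      # inherited from the 'cell' frame -> True
print(lymphocyte.get("diameter_um"))  # stored locally in the frame -> 8
```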

    Inference engine

    The inference engine implements the problem-solving strategies built up by the knowledge engineer. A common technique in developing the expert system is to identify the 'primitives' or independent variables first and then devise a system in which these can be redefined to represent higher level or dependent knowledge. Each knowledge-based system is thus built of 'primitive' knowledge bases linked by nodes to higher level knowledge bases. At each node, a reasoning process is designed, and all the reasoning procedures together form the inference engine. The inference engine is independent of the contents of the knowledge base.

    The way in which the inference engine reasons can be illustrated by looking at a typical rule. The condition part (the premise) has three components: the object, a logic operator and an attribute. The action part then specifies what will be done if the condition part is found to be true; for example the rule:

    IF (nuclear size is > 40 µm²)
    THEN (compute the total absorbance [A]).

    At the start, the system is pointing at the location where the input 'nuclear size' is written into the working memory. As this follows an IF statement it is identified as a component of the condition part of the rule. The object 'nuclear size' is matched with an object table. The logical operator 'is >' is identified and the value to be attributed is requested from the user. The value is entered through the user interface into the working memory. The inference engine retrieves this value and compares it with the value of 40 µm² specified in the rule. If the rule is triggered, the action part is interpreted by the inference engine and output is provided to execute the THEN part. Finally, the inference engine writes the completed sequence into the working memory, to add to the session history.
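
    The cycle just described can be sketched as follows. The nuclear-size rule is taken from the text; the object table, the stand-in for the absorbance computation and the input value are invented for the illustration.

```python
# Minimal sketch of the rule-evaluation cycle described above, using the
# nuclear-size rule from the text. Everything else is invented for the example.

working_memory = {"history": []}
object_table = {"nuclear size": "morphometry"}          # objects the system knows

def compute_total_absorbance():
    return "total absorbance computed"                   # stands in for the THEN part

def evaluate_rule(user_value_um2):
    obj, operator, threshold = "nuclear size", ">", 40.0  # condition part of the rule
    assert obj in object_table                            # match object against table
    working_memory[obj] = user_value_um2                  # value entered via interface
    triggered = working_memory[obj] > threshold           # inference engine compares
    action = compute_total_absorbance() if triggered else "rule not triggered"
    working_memory["history"].append((obj, operator, threshold, triggered, action))
    return action

print(evaluate_rule(52.0))   # 52 > 40 um^2, so the THEN part is executed
```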

    It is well known that the higher level of performance of experts in any domain is related to their ability to make a structured analysis, rather than trying all the alternatives. Especially in medical diagnosis, powerful hierarchical structures lead from symptoms to diagnosis.

    The inference engine typically operates using two sets of rules, the first concerned with the search space(s) among knowledge representation instances within which to look for solutions and the second concerned with which nodes to evaluate and in which order.

    There are two basic ways of searching. If the routing decisions at each node are highly certain and unambiguous, a depth-first strategy is indicated. If such a strategy turns out to give the wrong answer, the program returns to the start and follows a new in-depth search. In the breadth-first strategy the search space is searched exhaustively at each level to avoid going deep into a blind path. A reasonable (in relation to time taken and use of memory space) focus on the problem is retained in a so-called bounded, depth-first strategy.
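
    The two search styles, and the idea of bounding the depth to stay focused, can be sketched over a small decision tree. The tree below is invented for the example.

```python
# Illustrative breadth-first and bounded depth-first search over a small
# decision tree; the tree itself is invented for the example.
from collections import deque

tree = {"start": ["A", "B"], "A": ["A1", "A2"], "B": ["B1"],
        "A1": [], "A2": [], "B1": []}

def breadth_first(goal):
    queue = deque(["start"])
    while queue:
        node = queue.popleft()                 # examine each level exhaustively
        if node == goal:
            return node
        queue.extend(tree[node])
    return None

def bounded_depth_first(goal, node="start", depth=0, bound=2):
    if node == goal:
        return node
    if depth == bound:                         # stay focused: do not go deeper
        return None
    for child in tree[node]:
        found = bounded_depth_first(goal, child, depth + 1, bound)
        if found:
            return found
    return None

print(breadth_first("B1"), bounded_depth_first("A2"))
```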

    Once the strategy is indicated, the inference engine will compare observed facts and rules, keep a record of the sequence and outcome of rule use, resolve conflicts between rules, and present the user with a set of alternative results including degree of certainty and with a detailed explanation of the reasoning sequence. It is important to stress that problem-solving strategies are heavily dependent on the representation, and inappropriate representation can make an expert system fail.

    A number of inference (problem-solving) mechanisms can be recognized and some are listed in table 4.

    Table 4. Some inference (problem-solving) mechanisms.

    Heuristic search
      - rules of thumb
    Generate/test
      - defining search spaces as required
    Forward/backward chaining
      - pruning decision trees
    Recognize/act
      - special conditions requiring specific actions
    Constraint-directed search
      - main skeleton of acceptability is known
    Metarules
      - exhaustive mapping

    Heuristic searching is based on experience or 'rules of thumb'. Some people, for example in maintenance and failure detection, are able to diagnose problems more quickly and effectively than others, because of domain-specific knowledge acquired as a result of experience. The search space for a particular problem can be rapidly narrowed down by this sort of knowledge.

    In some cases the search space is not provided, but generated as the system proceeds and evaluated shortly thereafter. This is known as 'generate/test'.

    Forward chaining is testing preconditions for the truth of a given hypothesis for given data, rather like pruning a decision tree. Forward chaining is useful when the number of possible outcomes is limited.

    Backward chaining, with the system seeking data to justify the selected hypothesis, is the reverse and is more appropriate when there are a large number of possible outcomes. Should a hypothesis turn out to be false, the system can undo conclusions that led to or followed from the false hypothesis.
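
    A minimal forward-chaining loop can make the idea concrete: rules fire whenever their preconditions are already in the fact base, until nothing new can be derived. The rules and facts below are invented for the illustration and are not from any system cited in this paper.

```python
# Minimal forward-chaining sketch; rules and facts are invented for the example.

rules = [
    ({"glucose_high", "fasting_sample"}, "suspect_diabetes"),
    ({"suspect_diabetes"}, "recommend_hba1c"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise <= facts and conclusion not in facts:
                facts.add(conclusion)          # the action part adds a new fact
                changed = True
    return facts

print(forward_chain({"glucose_high", "fasting_sample"}))
```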

    Some sets of features that occur during problem solving require specific actions. These are usually solved by IF (feature pattern) THEN (take action) rules. This type of inference mechanism is called 'recognize/act'.

    In a constraint-directed search the main skeleton of what is acceptable is known, but not exactly what it is.

    Metarules are part of the procedural knowledge of expert systems. They are used to evoke programs and to guide the inference process and thus define how to use the rules in the system. They are basic blocks of (usually) IF-THEN rules for exhaustive mapping.

    The inference engine may make use of one or a number of these problem-solving mechanisms.

    Problems with expert systems

    Since the expert system is built using knowledge from the expert, it may perform more or less like an expert. However, as an expert system does not get tired, it will give more consistent results than its human counterparts. A good expert system may even give better results than human experts performing at their best.

    Unless knowledge already exists and is accepted (for example in the form of maintenance manuals), knowledge acquisition, i.e. what to put into the knowledge base and more particularly what to leave out, is a problem. It may be difficult to ensure that all the relevant knowledge has been obtained from the human expert, and indeed the knowledge engineer may be in doubt about what is relevant.

    Domain experts will always provide their knowledge in a specific context, whereas the expert system attempts to apply this knowledge to all cases. Knowledge acquisition tends to increase the size of the knowledge base dramatically. For example, in a thyroid report interpretation expert system, the knowledge base doubled in size to move from 96% to 99.7% agreement with experts (Compton [13]). For this reason the function of a new expert system should be carefully defined. If the expert system is to be used for teaching, it will contain a large amount of explanatory functions that are not needed if the expert system is to be used to provide expert judgements in consultation. If the system is to be used to screen input and select cases that need detailed attention, its knowledge base should be broad, but without great depth. On the other hand, if the system is to function as an expert in complicated cases, the knowledge base should be large but limited to a defined field of expertise.

    One of the most important features of an expert system is that it can be improved by the addition of further information from users. Provision should be made for maintaining the knowledge base. Because of rule overlaps and the complex interrelationships between rules, maintaining the knowledge base can become complex. Tools exist for knowledge base validation, test and maintenance. Guidelines and references are given by Pau [14].

    A further question is one of acceptance and compliance. Will the intended users enter the (correct) information requested by the expert system and will they accept the advice offered by the expert system? Many medical diagnostic expert systems seem to fail to acquire common use because they do not address these sorts of problems.

    Programming languages and hardware

    The first hardware architecture widely used in computers consisted of a central processor connected by a single data bus to memory. The processor, acting on instructions from the software program stored in its memory, dictates what is to be done at any given time, sends data to and receives data from memory, and performs the bit operations that are the basis of computations and logic operations at the machine (semiconductor) level. As well as holding the stored program, the memory stores initial data and intermediate and final results of computations. Even very complex computers still build on this simple (von Neumann) architectural unit.
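
    The essence of this architectural unit, a processor fetching instructions from the same memory that holds the data and executing them one at a time, can be shown with a toy fetch-execute loop. The three-instruction 'machine' below is invented for the illustration.

```python
# Toy sketch of the von Neumann cycle: fetch an instruction over the bus,
# execute it, write results back to the shared memory. Invented example.

memory = {
    0: ("LOAD", 100), 1: ("ADD", 101), 2: ("STORE", 102), 3: ("HALT", None),
    100: 7, 101: 5, 102: 0,            # data shares the memory with the program
}

def run():
    accumulator, pc = 0, 0
    while True:
        op, addr = memory[pc]          # fetch the next instruction
        pc += 1
        if op == "LOAD":
            accumulator = memory[addr]
        elif op == "ADD":
            accumulator += memory[addr]
        elif op == "STORE":
            memory[addr] = accumulator
        elif op == "HALT":
            return memory[102]

print(run())    # 12
```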

    As the system becomes more complex, external memories are added, together with input/output units to allow the system to communicate with the outside world. This requires an increase in the number of pathways for communication.

    Table 5. Computer languages.

    Characteristics                          Examples
    Object-oriented                          Smalltalk, C++
    Variable unification, backtracking       Prolog, LISP
    Recursion                                Pascal, Modula, ADA, C
    Parameter procedures                     Fortran, Cobol
    Macros, 'go to'                          Basic
    Machine code                             Assembler

    Digital signals representing memory locations, control instructions and data are passed along such buses. An internal clock is required to pace the operations and to ensure that all the parts of the computer stay synchronized. In fact, software control and synchronization (housekeeping functions) become a major concern as complexity increases, mainly because the computer operates in a series of sequential steps and each operation must wait for its predecessors to be completed.

    Many computer languages (table 5) have been developed over the years to deal with this increasing complexity. They range from the machine-code languages, like Assembler, to object-oriented languages, such as Smalltalk. Most of these computer languages have been largely constrained by the von Neumann computer architecture and have been written for serial processing.

    The increase in computer capacity building on the von Neumann architecture has been made possible by the development of surface (quasi two-dimensional) technology in the 1970s and 1980s. As a result, computing capacity per dollar has been able to double every three years since the early 1950s [15]. At first it looked as though the increase in hardware capacity would meet the increased demands for power and speed that would be needed to solve real problems. However, in 1973, Sir James Lighthill suggested that real problems would require too great an increase in the power and speed of conventional computer hardware to be feasible [16]. He called this the 'combinatorial explosion'. Building intelligence on the base of slavish searching through data by serial processors was just not practical, especially not for problems like pattern recognition that characterize intelligence.

    The human brain does not succumb to this combinatorial explosion in order to identify, for example, a face. In fact, we find it relatively easy to do this, while artificial image recognition systems find this task particularly difficult.

    Some capacity problems can be solved by parallel processing. Human intelligence is of a concurrent nature and therefore many applications of artificial intelligence concern processes that occur concurrently and are best solved by processing in parallel. Actually, it is possible, and not unusual, to emulate parallel processing using serial processors, but this requires extremely complex control mechanisms to interrupt and synchronize the concurrently running processes.

    Attempts have been and are being made to design systems more suited to solving the mixed sequential and concurrent nature of real-world problems, in which the software and hardware architectures are superimposable. One such design is the transputer chip designed to run the software language Occam [17]. The transputer is a single-chip computer with a processor, local memory, reset, clock and the necessary interfaces for communication. Such a chip avoids most of the communication problems of conventional computers and can be built into arrays and networks that resemble more closely the sort of structure found in the human brain. The Occam language has an architecture that corresponds to the transputer, so that hardware and software are compatible in a structural way. Theoretically, at least, this makes the system more robust and would allow it to continue to perform even if a part is removed.

    Neural networks

    The human brain does not solve problems by using a well-defined list of computations or operations. Decisions are based almost exclusively on what has been learned by experience. The concept of modelling a system on the structure of the human brain was introduced by McCulloch and Pitts as the neural model in 1943 [18]. In neural networks (figure 6), individual processing elements communicate via a rich set of interconnections with variable weights. Just like biological neurones, processing elements exist in a variety of types that operate in different ways. One or more inputs are regulated by the connection weights to change the stimulation level within the processing element. The output of the processing element is related to its activation level and this output may be non-linear or discontinuous. The network is taught rather than programmed, and teaching results in an adjustment of interconnect weights, depending on the transfer function of the elements, the details of the interconnect structure and the rules of learning that the system follows. Thus memories are stored in the interconnection network as a pattern of weights. Information is processed as a spreading, changing pattern of activity distributed across many elements. A short review of neural networks and their potential applications in biotechnology is given by Collins [19]. This includes a table of software tools that are available for neural net development.

    Figure 6. Elements of a neural network.
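
    A single processing element of this kind, with weighted inputs, a non-linear output and one weight-adjusting learning step, can be sketched as follows. The weights, inputs and learning rate are invented for the illustration.

```python
# Sketch of a single processing element: weighted inputs, a non-linear
# transfer function, and a learning step that adjusts the interconnect weights.
import math

def processing_element(inputs, weights):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-activation))      # non-linear transfer function

def learn_step(inputs, weights, target, rate=0.5):
    output = processing_element(inputs, weights)
    error = target - output
    # teaching adjusts the weights; the 'memory' lives in the weight pattern
    return [w + rate * error * x for w, x in zip(weights, inputs)]

weights = [0.1, -0.2, 0.05]
for _ in range(20):
    weights = learn_step([1.0, 0.5, -1.0], weights, target=1.0)
print(processing_element([1.0, 0.5, -1.0], weights))   # output moves towards 1.0
```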

    The neural network is thus a third type of information-processing technology (the other two being conventional software languages and expert systems). The neural network is good at doing the sort of things the human brain is good at, in particular pattern classification and functional synthesis. These will be described in the section on applications. The neural network acts as an associative memory, it is good at generalizing from specific examples, it can tolerate faults in the network and it can be self-organizing. Differences between neural networks and expert systems are given in table 6.

    In theory, part of the intelligent performance of neural networks relies on the physical structure of the network. In practice, neural networks are usually simulated in computers using conventional hardware, so that the software structure and the hardware structure are not physically related.

    Constructing networks in hardware is a problem. The reason is that semiconductor technology is essentially two dimensional, whereas the physical embodiment of neural networks is basically three dimensional. All integrated circuits become problematic when they are miniaturized. Connections act as antennas over minute distances, leading to cross talk; electrical signals are slowed by inductance and capacitance; the 'dwell' time of electrons passing rapidly may not be sufficient to effect a change in the circuit; there may be problems with the random distribution of circuits on the microscopic scale; electrons may even cause breaks by electromigration of atoms. Apart from this there are significant problems with the slow speed of off-chip connections and physical limits to the number of connections that can be made to a chip.

    Table 6. Expert systems and neural networks compared.

                     Expert system                 Neural network
                     Software defined              Hardware defined
    Structure        Knowledge base,               Processing elements,
                     inference engine              interconnections
    Memory           Standard computer             Interconnect weight pattern
                     (transputer)
    Learning         Programmed                    Trained; can generalize
    Action mode      Depends on knowledge          Processes time-dependent spatial
                     engineer                      patterns; self-organizing
    Faults           Small tolerance               Very high tolerance

    Such physical constraints of structure have led researchers to look for alternatives to the semiconductor approach to hardware. Optical signals have several advantages over electrical signals as a basis for computing. For example, they may allow speeds three orders of magnitude faster. Light beams do not need carriers, can even cross without interacting and are thus more amenable to three-dimensional structures. The main limitations to this approach are related to the materials used. Some progress has been made, for example using holographic images as the connectors between opto-electronic neurones in photoreactive crystals [20]; still, optical computing is probably more science fiction than fact. Attempts are also being made to develop hardware on a biological or molecular basis.

    A model of human intelligence?

    While the expert system can justify its reasoning strategy, declaring which search space was used, explaining decisions taken and the certainties involved, this is not possible with a neural network. As indicated above, neural networks process time-dependent spatial patterns and have to be trained rather than programmed.

    Just as we have no difficulty recognizing a face, but a great deal of difficulty trying to explain how or why we recognized the face, a trained neural network can recognize the solution to a problem but cannot really explain why. A recent paper by Pau and Götzsche [21] addresses this question. Future AI systems may come to rely on a combination of the experienced neural network to recognize the solution to a problem and an expert system to rationalize the solution given. This combination is termed a cognitive expert system and provides an interesting model of human intelligence.

    APPLICATIONS

    Introduction

    In the clinical laboratory, improvements in instrumental systems as a result of the intensive use of microprocessors and better sensors have increased dramatically the rate at which measurement data (both quantitative and qualitative) can be obtained. This has been termed 'data affluence'. However, such complete and complex data do not represent information in a really useful form.

    Several tools, including expert systems, are available for the interpretation of data or basic information into useful information, using either conventional data processing or neural networking systems that are adequately trained.

    A major distinction can be made between, on the one hand, stand-alone systems (examples in table 7) that interact with the user, the main purpose of which is to replace human experts, and embedded systems that are integrated into large electrical or mechanical systems and function autonomously.

    Table 7. Examples of stand-alone expert systems used in medicine.

    Expert system      Domain                      Reference
    ABEL               Acid-base electrolytes      Patil et al. [34]
    GARVAN-ES          Thyroid                     Compton [13]
                       Colon cancer                Weber et al. [67, 68]
    PONJIP             Blood gas                   Leng et al. [69]
    CASNET             Glaucoma                    Weiss et al. [30]
    MYCIN              Infectious diseases         Shortliffe [8]
    INTERNIST-1        Internal medicine           Miller et al. [35]
    AI/COAG            Haemostasis/coagulation     Lindberg et al. [32]
    MYCIN              Bacterial infection         Buchanan and Shortliffe [29]
                       Erythrocyte enzymes         Wiener et al. [70]
    cadi-yac           Antibiotic sensitivity      Peyret et al. [23]
    DIAG               Dermatology                 Landau et al. [56]
    SESAM-DIABETE      Diabetes                    Levy et al. [22]
    MEDICIS            Diagnosis generally         Du Bois et al. [11]
    ESPRE              Transfusion                 Sielaff et al. [12]
    OVERSEER           Drug treatment              Bronzino et al. [24]
    Braindex           Brain death                 Pfurtscheller et al. [71]
    Audex HM           Audiology instruments       Bonadonna [72]
    Foetos             Foetal assessment           Alonso-Betanzos et al. [73]

    The classical illustration of this is two different chess-playing systems. The first is a computer program with the potential to beat experts: players communicate with the system via a keyboard terminal and text display. The second plays poor chess, but it inspects the board via a TV camera, makes its own moves for itself with a computer-driven hand, explains its moves in passable English, improves its play with practice and can accept strategic hints and advice from a tutor. Both of these systems represent fairly advanced projects and both would probably be regarded as intelligent by most people. The stand-alone system is intelligent in a theoretical sense, while the second instrument is an example of what can be called an integrated system with embedded AI. Developing a stand-alone computer expert system that is capable of beating international grand masters would be a major task for the knowledge engineer. However, chess programs operating at a popular level are now fairly widespread. The chess-playing machine would be more challenging in many areas such as image processing, mechanical manipulation, and natural language interfacing.

    Stand-alone systems

    Stand-alone expert systems have interactive user interfaces and are designed to be used like experts. Three types of stand-alone systems are discussed below. Expert system development tools have been developed to allow users to build their own knowledge-based systems. Medical diagnosis expert systems are developed using input from domain experts and are designed primarily for interactive use by medical staff and students.

    Development tools

    An expert system can be built from scratch using a structured language such as Pascal or C, or a fifth-generation AI language such as LISP (for example Levy et al. [22]) or Prolog (Peyret et al. [23], Bronzino et al. [24]). In fact, expert systems are eminently suitable for the development of complex decision sequences because the knowledge is separated from the program. Therefore, it is relatively simple to reuse the system shell for a different knowledge base. Despite the development of AI languages that are more efficient in declaring goals and acquiring knowledge, most expert systems are still written in conventional languages (for example C and Fortran).

    Many expert system development tools are therefore derived directly from existing expert systems simply by removing the knowledge base. The classic example is the EMYCIN tool developed at Stanford University (Van Melle et al. [25]) from the MYCIN expert system. The newer tools for development applications are better than EMYCIN. Using such development tools, rules and decisions can be changed with a minimum of inconvenience and the system iterated until it performs at least as well as the human expert would.

    Some examples of these development tools are given in table 8. In some tools, the knowledge base is entered in the form of production rules. In others, the knowledge is entered either as a matrix of attributes and outcomes from which the system derives rules, as the rules themselves, or as networks of interactions. Purely rule-based systems have the limitation that the structural and strategic concepts must be incorporated in the rules on entry.

    Table 8. Expert system (language) development tools.

    Tool                          Source
    EMYCIN (Lisp)                 Stanford
    EXPERT (Fortran)              Rutgers
    KEE (Lisp)                    Intellicorp
    VP-Expert
    PC-Plus (Lisp)                Texas Instruments
    GoldWorks (Lisp)              Bolesian Systems
    ART (Lisp)
    EXSYS
    APEX (fault diagnosis shell)
    NEXPERT (C)                   Neuron Data
    KAPPA                         Intellicorp
    DELFI (TurboPascal)           Delft University
    Pro.M.D. (Prolog)

    References: Weiss and Kulikowski [74]; Gill et al. [75]; O'Connor [9]; Pfurtscheller et al. [78]; Marchevsky [76]; Merry [77]; de Swaan et al. [78]; Pohl and Trendelenburg [79].

    The evaluation of tools for developing expert systems is discussed by Rothenberg et al. [26]. Predictably, the choice of tool depends on the domain, the problems to be solved and the expertise available. The target environment may impose restraints, for example the need to integrate the shell into existing hardware and with existing software. Clearly the user interface must be tailored to the user.

    Such development tools allow the domain expert, together with the knowledge engineer, to create and debug a knowledge base and integrate it into an expert-system framework that provides the necessary inference engine and user interface.

    Software tools for neural net development are also readily available. Collins [19] lists 15 such tools, some of which (for example BrainMaker from California Scientific Software) can be implemented on a standard personal computer.

    Medical diagnosis

    There is a clear need to be able to deal optimally with the affluence of data and explosion of knowledge still occurring in the medical field. Computer-supported medical decision making (CMD) systems may be the solution and there is evidence that clinical performance may be improved by the use of CMD (see Adams et al. [27]).

    Perry [28] reviewed knowledge bases in medicine, looking at three basic models. MYCIN and its domain-independent version EMYCIN are rule-based systems and are the basis of expert systems such as PUFF for lung disease (Buchanan and Shortliffe [29]) and ONCOCIN for cancer therapy (used by Stanford University). Causal models such as CASNET (Weiss et al. [30]), which was generalized and extended to form the shell system EXPERT, support expert systems like AI/RHEUM (Porter et al. [31]) and AI/COAG (Lindberg et al. [32]). Present Illness Program, PIP (Pauker et al. [33]), Acid-Base Electrolytes, ABEL (Patil et al. [34]), and INTERNIST-1 (Miller et al. [35]) are hypothesis-based systems (Bouckaert [36]).

    Many medical problems, especially those related only to laboratory data, can be solved using conventional algorithms as they contain data of known imprecision. It has been suggested that only solutions requiring between 10 min and 3 h of clinician time are suitable for expert systems (Frenzel [37]). The upper limit may be the boundary for knowledge representation.

    In a review by Winkel [38], 13 expert systems that emphasize the use of laboratory data were listed. Winkel points out that for practical reasons (for example entering data), it may not be meaningful to introduce an expert system into the laboratory or elsewhere unless it is integrated into the hospital or laboratory information system, and this may be a difficult task. The main purpose of the OpenLabs project (with 28 participants from 12 countries) under the European Community's AIM Research and Development Programme is to integrate knowledge-based systems and telematics with laboratory information systems and equipment.

    Trendelenburg [39] makes a clear distinction between internal medicine, with its large knowledge domains, where there is little computer-supported data flow and not much familiarity with medical computer applications; and laboratory medicine, where the domains are clear cut, staff are familiar with medical computer applications and the data flow is computerized. More than 50 European clinical laboratories are applying or developing knowledge-based systems for generating reports on special findings in an instructive and interpretative way. Trendelenburg emphasizes that discussions about the computer-based nature of the reports produced are best avoided, and the responsibility for using the report must be with the user.

    Comparisons have been made of the performance of standard computing programs, expert systems and neural networks in medical diagnostic situations. For example, Wied et al. [40] discuss the ability of discriminant function analysis and neural networks to classify DNA ploidy spectra. Dawson et al. [41] discuss the use of nuclear grade as a prognostic indicator for breast carcinoma. Advances in image analysis can reduce the effect of inter-observer variability. Both Bayesian analysis and neural networking agreed with the human observer for low-grade lesions, whereas nuclear heterogeneity in high-grade tumours resulted in poor agreement (only 20% successful classification). Wolberg and Mangasarian [42] discuss a more successful comparison using 57 benign and 13 malignant samples, in which a multi-surface pattern recognition system scored one false negative, a neural network scored two false positives, and a decision-tree approach scored three false positives. Other examples of neural networks used in medical diagnosis are given in table 9.

    A successful neural network will show a balance between the error that it exhibits with its training set and its power to generalize based on the training set. Reduction of training error should thus be treated with caution, as overtraining can seriously impair generalization power (Astion et al. [43]). It is known that the number and size of hidden neuron layers influences the representation capabilities and the generalization power of the neural network, yet these parameters are often chosen quite arbitrarily.
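
    One common way of guarding against overtraining is to monitor the error on a held-out validation set and to stop training when that error no longer improves. The sketch below illustrates the idea; the error curves are invented numbers, not data from the cited studies.

```python
# Illustrative early-stopping sketch: training error keeps falling, but the
# error on a held-out validation set is used to decide when to stop, to
# preserve the power to generalize. The error curves are invented.

training_error   = [0.40, 0.30, 0.22, 0.15, 0.10, 0.06, 0.03, 0.02]
validation_error = [0.42, 0.33, 0.27, 0.24, 0.23, 0.25, 0.29, 0.34]

def early_stop(val_errors, patience=1):
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited > patience:      # validation error no longer improves
                break
    return best_epoch

stop = early_stop(validation_error)
print(f"stop after epoch {stop}: training error still falling "
      f"({training_error[stop]}), validation error at its minimum "
      f"({validation_error[stop]})")
```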

Table 9. Examples of neural networks used in medical diagnosis.

Domain                          Reference
Myocardial infarction           Furlong et al. [80]
ECG                             Dassen et al. [81]
Neonatal chest radiographs      Gross et al. [82]
Dementia                        Mulsant [83]
IR spectroscopy                 Wythoff et al. [84]

Expert systems are used widely in clinical laboratories for the validation of biochemical data. For example, Valdiguié et al. [44] have used VALAB, an expert system incorporating 4500 production rules and the forward-chaining inference engine KHEOPS, to validate more than 50 000 laboratory reports. The system runs as a silent partner in the laboratory organization and is able to dramatically reduce the work involved in the routine validation of data. Neural networks are also being applied to clinical laboratory data. Reibnegger et al. [45] show that neural networks are capable of extracting hidden features, but emphasize that validation is even more important when neural networks are used for medical diagnosis based on laboratory information.
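To make the idea of rule-based validation concrete, the sketch below applies a handful of invented production rules by simple forward chaining to a single report. The analyte names, limits and rule names are illustrative only and are not taken from the VALAB/KHEOPS rule base.

# Illustrative toy, not the actual VALAB/KHEOPS system: a few made-up
# production rules applied by simple forward chaining, used to decide whether
# a report can be validated automatically or must be flagged for human review.

# A rule is (name, condition); conditions read the accumulated facts.
RULES = [
    ("potassium_out_of_range",
     lambda f: not (3.5 <= f["K"] <= 5.0)),
    ("haemolysis_suspected",           # high K together with high LDH
     lambda f: f["K"] > 5.5 and f.get("LDH", 0) > 400),
    ("implausible_delta_check",        # large jump from the previous result
     lambda f: "K_previous" in f and abs(f["K"] - f["K_previous"]) > 1.5),
]

def validate(report: dict) -> dict:
    """Forward chaining over the report: fire every rule whose condition holds."""
    fired = []
    facts = dict(report)
    for name, condition in RULES:
        if condition(facts):
            fired.append(name)
            facts[name] = True         # derived fact, available to later rules
    return {"auto_validated": not fired, "flags": fired}

print(validate({"K": 6.1, "LDH": 520, "K_previous": 4.2}))
# All three rules fire, so the report is not auto-validated but routed to the clinical chemist.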

    Embedded systems

Embedded knowledge-based systems (i.e. AI systems with no human interaction) are used in laboratory instrumentation at a variety of levels (see table 10). Unfortunately they are not as commonly published as stand-alone systems for medical diagnosis.

The following is a brief description of six different applications of embedded systems that illustrate increasing degrees of compatibility with neural networks; these are: failure detection, user interfacing, image processing, natural language processing, robotics and machine learning.

Applications to failure detection, testing and maintenance

Failure detection, testing and maintenance are knowledge-intensive tasks that rely on experience [46]. Apart from using test procedures, skilled maintenance staff use heuristics and an understanding of how the system works to solve a problem. The high level of performance of maintenance experts suggests that they use powerful hierarchical structures in problem solving. The goal of expert systems in this area is to exploit this procedural knowledge in conjunction with failure detection to generate self-testing procedures and to ease the operator workload in the man-machine interface.

The requirements for an ideal troubleshooting system include minimizing the probability of false alarms and of not detecting failures, minimizing the time taken, effective integration of knowledge, ease of maintenance and expandability, broad application, the ability to handle uncertainties and a large explanation capability.

Table 10. Typical applications of embedded expert systems.

Instrument self-testing and diagnosis, including calibration of sensors and displays and troubleshooting of the instrument.

Handling signal acquisition: transduction of signals from sensors and fast storage of measurement data.

Treatment of measurement data from signal acquisition and extraction of features for further interpretation; conversion of data to primary information.

Image processing in microscopy, for the sorting of biological structures by size, absorption and colour.

Timing and sequencing of mechanical functions (especially for multichannel analysers), for example keeping track of position sensors, clock timing, parallel movements, and prioritization by queuing theory.

Storage and retrieval of information for statistics, record comparisons, calibration and standardization.

Interpretation and presentation of results: layout of man-machine interface screens, generation and presentation of reports and explanations in customized formats.

A major task is to devise schemes of knowledge representation to allow failure events to be analysed by the merging of very diverse information sources, for example analogue/digital signals, logical variables and test outcomes, texts from verbal reports and inspection images. Embedding the expert systems in a distributed reasoning structure shortens execution times and allows for easy updating.

Groth and Modin [47] describe a system providing real-time quality control for multitest analysers using a relational database management system. This includes a knowledge-based troubleshooter and advisor and was designed for the PRISMA multichannel analysis system. Another example is Woodbury [48], who developed an expert system for troubleshooting a Hitachi analyser.

Image understanding vision system

Bartels et al. [6] describe an image-understanding machine vision system for histology which is based on three expert systems. A scene segmentation expert system extracts information from the image. This is an important function and it has been shown [49] that machine vision systems are able to detect subtle differences in morphological architecture that the human observer is generally unable to discriminate beyond a certain lower limit. The knowledge representation is in the form of a semantic net with associated frames and the knowledge is written in declarative statements. In the case of gland imaging, for example, the system might look for a lumen (that is defined in terms of an ellipse ratio of more than 1.50 and an area of less than 650 pixels) with glandular nuclei grouped within a certain distance around the periphery (that are defined under object groups), using a statement such as:

Gland IS A SINGLE Lumen WITH a SET OF Object Groups

The different features found by this system are listed and called by the second, interpretative expert system that relates diagnostic conceptual knowledge to them. The 'degree of cribriformity' in prostate glands is, for example, measured by tracing the circumferences of the lumina and relating this to the total area to obtain a form factor ratio. The structure of the interpretative expert system is very similar to the scene segmentation system.

The third, diagnostic expert system is in the form of a spreadsheet, with diagnostic categories, clues, clue values and certainty values. This expert system can run autonomously as an embedded system or can be run interactively as an expert system.
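As a rough illustration of the kind of declarative object definitions used by the scene segmentation expert system, the sketch below encodes the lumen criteria quoted above (ellipse ratio greater than 1.50, area less than 650 pixels); the class names, the nuclei-distance threshold and the minimum group size are invented for the example and are not taken from the cited system.

# Hedged sketch of frame-like object definitions for gland recognition.
from dataclasses import dataclass

@dataclass
class Region:
    area: float            # pixels
    ellipse_ratio: float   # major/minor axis of the fitted ellipse
    centroid: tuple

def is_lumen(r: Region) -> bool:
    # Criteria quoted in the text: ellipse ratio > 1.50 and area < 650 pixels.
    return r.ellipse_ratio > 1.50 and r.area < 650

def is_gland(lumen: Region, nuclei: list, max_distance: float = 40.0) -> bool:
    """'Gland IS A SINGLE Lumen WITH a SET OF Object Groups': a lumen with
    glandular nuclei grouped within a given (here invented) distance of it."""
    if not is_lumen(lumen):
        return False
    near = [n for n in nuclei
            if ((n.centroid[0] - lumen.centroid[0]) ** 2 +
                (n.centroid[1] - lumen.centroid[1]) ** 2) ** 0.5 < max_distance]
    return len(near) >= 3          # illustrative minimum size of the object group

lumen = Region(area=520, ellipse_ratio=1.8, centroid=(100, 100))
nuclei = [Region(30, 1.1, (100 + dx, 95)) for dx in (-20, 0, 20, 35)]
print(is_gland(lumen, nuclei))     # True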

User interfaces

The success of knowledge-based instrumentation systems depends to a great extent on the man-machine interface, both during development, when the domain expert and knowledge engineer provide the system with a representation of real knowledge and human reasoning, and later, when the user tries to implement the system.


Table 11. Generic functions for man-machine interfaces (based on Pau [50]).

Data handling and presentation (acquisition, coding, compression, presentation, recording)

Extracting features from measurements for interpretation

Control of instrument functions and utilities (troubleshooting and repair are excluded from the man-machine interface)

Calibration of sensors and displays

Development of procedures and experiments (for example sequencing experimental steps)

Generation of reports and explanations

Retrieval of user information/documentation (for example about the experimental set-up).

Embedded expert systems may not require a user interface. Many embedded systems are controlled by a computer system, or are designed to communicate with other expert systems, for example through what is called a 'blackboard' mechanism.
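The following minimal sketch illustrates the blackboard idea in general terms (it is not modelled on any particular cited system): two cooperating knowledge sources communicate only by reading from and posting to a shared store, rather than by interacting with a user.

# Assumed, simplified blackboard structure for cooperating knowledge sources.
class Blackboard:
    def __init__(self):
        self.entries = {}

    def post(self, key, value):
        self.entries[key] = value

    def get(self, key, default=None):
        return self.entries.get(key, default)

def signal_processor(bb: Blackboard):
    """Knowledge source 1: turn a raw signal trace into a peak height."""
    raw = bb.get("raw_signal", [])
    if raw:
        bb.post("peak_height", max(raw))

def result_interpreter(bb: Blackboard):
    """Knowledge source 2: convert the peak height into a flagged result."""
    peak = bb.get("peak_height")
    if peak is not None:
        bb.post("flag", "recheck" if peak > 2.0 else "ok")

bb = Blackboard()
bb.post("raw_signal", [0.1, 0.8, 2.3, 1.1])
for knowledge_source in (signal_processor, result_interpreter):
    knowledge_source(bb)           # each source acts only on what is on the board
print(bb.entries["flag"])          # 'recheck'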

Pau [50] has described generic functions for man-machine interfaces (see table 11). These functions can be backed up by knowledge-based techniques to render them intelligent. Apart from the knowledge bases themselves, such techniques would incorporate search and inference to suggest feasible actions to the user, and rule-based spreadsheets or other presentation systems for display, control and explanation.

The potential of user interfaces goes far beyond simple screen interaction. Tactile and vocal interaction can be added to allow freedom of movement, with a possibility of communication and delegation. This type of system (known as virtual reality) is being developed for commercial purposes, for example in the entertainment industry. It is possible to imagine how such systems could be used, for example, for training surgeons, and there is a challenge now for architects and designers to invent new forms of knowledge representation based on these new technologies that have become available.

    Natural language processing

Landau et al. [51] argue that computer-aided medical diagnosis will not attain widespread acceptance among physicians until it can include automatic speech recognition, and they proposed a prototype natural language interface. Speech recognition is not just a matter of words in a series. Not surprisingly, intelligent use of language depends on an understanding of the subject. Context, guessed meanings and memories of speech fragments have a lot to do with being able to hear what is being said. This is a pattern-recognition problem and typical of the human brain.

Kohonen [52] developed a phonetic typewriter using a competitive-filter neural network. This system achieves 92 to 97% accuracy with continuous speech and an unlimited vocabulary for the Finnish and Japanese languages. One hundred words of each new speaker are taken as training samples. The network automatically classifies speech sounds and stores related sounds spatially near each other in the network. The fact that the system does not achieve greater accuracy is due to the many strange rules of languages, for example co-articulation of syllables. Japanese and Finnish may actually be rather more logical in this respect than other languages, for example English.
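A flavour of the competitive learning that underlies such networks can be given in a few lines. The sketch below is a plain winner-take-all scheme, not Kohonen's phonetic typewriter; the two-dimensional 'spectral' clusters and all parameters are invented for illustration.

# Winner-take-all competitive learning on invented phoneme-like clusters.
import numpy as np

rng = np.random.default_rng(1)

# Three artificial clusters in a 2-D feature space.
data = np.vstack([rng.normal(c, 0.1, size=(100, 2))
                  for c in [(0.2, 0.8), (0.8, 0.2), (0.8, 0.8)]])
rng.shuffle(data)

n_units = 6
weights = rng.uniform(0, 1, size=(n_units, 2))    # one reference vector per unit

lr = 0.1
for epoch in range(20):
    for x in data:
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))  # closest unit
        weights[winner] += lr * (x - weights[winner])             # move it toward x
    lr *= 0.9    # shrink the learning rate as training proceeds

# After training, similar inputs activate nearby reference vectors, so the
# units act as an unsupervised classifier of the 'sounds'.
print(np.round(weights, 2))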

In what might be an imitation of nature, Kurogi [53] used the anatomical and physiological findings on the afferent pathway from the ear to the cortex to develop a neural network that recognizes spoken words, allowing for changes in pitch, timing etc.

    Robotics

The mechanical functions of a robot mimic those of a human, but a robot remains relatively unintelligent if it does not include sensor or recognition systems that allow it to know its environment. Robots are particularly useful for the control and handling of samples in sterile or hazardous environments, in situations where a human operator either cannot or should not be present [3].

Robotic arms are usually programmed to perform one specific task at a time. An intelligent robotic arm would be able to move across a variety of pathways carrying a variety of loads. The adaptive robotic arm is a typical application area for expert systems and neural networks. For example, Kawato and coworkers in Osaka have developed an adaptive robot arm controller with three degrees of freedom, based on a model of human motor control and neural networks.
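For orientation, the sketch below gives the forward kinematics of a planar three-joint arm, the kind of mapping whose inverse an adaptive neural controller has to learn from examples; the link lengths and angles are arbitrary illustrative values and are not taken from the cited work.

# Forward kinematics of a planar 3-DOF arm (illustrative values only).
import numpy as np

def forward_kinematics(angles, lengths=(1.0, 0.8, 0.5)):
    """Joint angles (radians) -> end-effector (x, y) for a planar three-joint arm."""
    x = y = 0.0
    total = 0.0
    for theta, length in zip(angles, lengths):
        total += theta                 # angles accumulate along the chain
        x += length * np.cos(total)
        y += length * np.sin(total)
    return x, y

print(np.round(forward_kinematics([0.3, -0.2, 0.5]), 3))

# A neural controller can be trained on (angles, position) pairs so that, given
# a desired position, it proposes joint angles: the network in effect learns the
# inverse of this mapping from examples rather than from an explicit model.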

    Machine learning

As early as 1950, Turing [54] foresaw the potential of systems capable of autonomous learning and admitted that he would regard a machine that could program itself as intelligent.

Bartels et al. [55] discuss the issues involved in the design of autonomous-learning expert systems in histopathology. The autonomous learning process must have a built-in control function to keep the process goal-oriented (and by implication the process must have a goal, and criteria defined by which progress towards the goal can be assessed).

A decisive element in expert systems is the structuring of knowledge and the interrelationships between knowledge components. In histopathology, facts are obtained from the human observer or from the image-processing and image-extraction programs. One type of learning consists of updating a knowledge base with new blocks of data that can fine-tune or stabilize decision rules. However, real autonomy is first reached in systems that can augment their own rule sets or, like neural networks, are self-organizing.
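The simplest kind of learning mentioned above, re-tuning a decision rule as new blocks of confirmed data arrive, might look like the following sketch; the rule, the threshold update and the numbers are all invented for illustration.

# Hedged sketch: a decision threshold re-tuned from accumulated blocks of confirmed cases.
import numpy as np

class ThresholdRule:
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.history = []              # accumulated (value, is_positive) cases

    def decide(self, value: float) -> bool:
        return value > self.threshold

    def update(self, block):
        """block: iterable of (measured value, confirmed positive?) pairs.
        The threshold is moved to the midpoint between the class means."""
        self.history.extend(block)
        pos = np.array([v for v, p in self.history if p])
        neg = np.array([v for v, p in self.history if not p])
        if len(pos) and len(neg):
            self.threshold = (pos.mean() + neg.mean()) / 2

rule = ThresholdRule(threshold=10.0)
rule.update([(12.1, True), (13.0, True), (8.2, False), (7.5, False)])
print(round(rule.threshold, 2), rule.decide(9.0))   # threshold fine-tuned by the new block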

    CONCLUSIONS

    Introduction

This review has looked at the development of programmed software, the flexibility this has allowed in system operation and control, and how this has led to intelligent clinical laboratory systems.


The knowledge base of expert systems contains the collective expertise of domain experts (in the clinical laboratory and related medical domains) and the knowledge engineers who represent this expertise. An unresolved question concerns the ownership and exploitation rights of such knowledge bases.

Since this expertise will be used to guide medical decision making, it will be subject to regulation by the appropriate authorities. Just like any other medical device, expert systems need to be evaluated and documented. Responsibility for the consequences of any inappropriate use of knowledge bases and AI rests with the professional user of such systems.

Regulatory affairs

Kahan [56] has reviewed US Food and Drug Administration (FDA) regulation of medical device software. In the US, the FDA regulates medical device software via a draft GMP Guidance for software [57] and a draft reviewer guidance [58]. This is supported by a technical report [59] used by FDA investigators.

The 1987 Draft Policy on Software [60] is applied to computer products that are intended to affect the diagnosis and treatment of patients. It was intended to be primarily applicable to stand-alone medical devices, rather than to software that is a component of a medical device (embedded software). It does apply to some stand-alone AI products now entering the US market. For example, in 1988 the FDA decided to include stand-alone software for blood banks under regulation (these products were previously exempt). The 1987 Draft Policy will probably not be finalized, and stand-alone software will probably be treated as exempt from premarket approval except when a risk is perceived.

Embedded systems are a more difficult question. In November 1990 a Canadian company issued a safety bulletin because their embedded software for calculating patient dose in radiation therapy was flawed. This sharpened FDA interest. The draft GMP guidance for software is specific about the analysis of software and, for example, stipulates that software functions used in a device must perform as intended and that testing should verify this. Such software must be tested separately by simulation testing in the environment in which the software will be used.

The Health Industry Manufacturers' Association (HIMA) has expressed concern about the FDA Guidance, for example arguing that source codes should not be copied during inspection by the FDA and that confidential algorithm specifications should not be revealed.

In Europe, a recent EC Medical Device Directive to Trade Associations includes software in the definition of medical devices for the first time. Previously the definition only included software that was incorporated into a device, but now it includes software itself (which presumably means stand-alone systems). This has caused some confusion among European manufacturers.

In May 1991 the EC issued a Directive [61] on the legal protection of computer programs, requiring member states to protect such programs by copyright under the Berne Convention for the Protection of Literary and Artistic Works.

    Evaluation of expert systems

Miller [62] distinguishes three levels of evaluation of expert systems: evaluation of the research contribution of a development prototype, validation of knowledge and performance, and evaluation of the clinical efficacy of an operational system.

Evaluation of developmental prototypes is an iterative process and the requirements of a particular system will change as the system matures towards the final operational system. One of the most interesting questions is how to evaluate and limit a neural network that can continue to learn.

Knowledge should be assessed in terms of accuracy, completeness and consistency, and the performance of the system should be matched against that of experts. A critical aspect is the capacity of the system to know its own limitations and, for example, to decline to make a diagnosis when presented with an unfamiliar case. At best, expert systems have performed just as well as the experts themselves (Yu et al. [63]; Quaglini and Stefanelli [64]). This situation is changing as validation, verification and maintenance tools are introduced (Pau [14]).
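One simple way for a system to 'know its own limitations' is to attach a rejection option to its output, as in the hedged sketch below; the confidence and distance thresholds and the function name are illustrative assumptions, not taken from the cited evaluations.

# Sketch of a rejection option: refuse to diagnose weak or unfamiliar cases.
def diagnose(posteriors: dict, case_distance: float,
             min_confidence: float = 0.80, max_distance: float = 3.0):
    """posteriors: diagnosis -> probability; case_distance: distance of the case
    from the system's familiar domain (for example a Mahalanobis distance)."""
    best = max(posteriors, key=posteriors.get)
    if case_distance > max_distance:
        return None, "case outside familiar domain - refer to an expert"
    if posteriors[best] < min_confidence:
        return None, "insufficient confidence - no diagnosis made"
    return best, f"{posteriors[best]:.0%} confidence"

print(diagnose({"A": 0.55, "B": 0.45}, case_distance=1.0))
# (None, 'insufficient confidence - no diagnosis made')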

Evaluation of an AI system in operation depends on the domain and the clinical role it is expected to play. There are a number of issues of interest that are very difficult to assess, for example the impact of the system on the quality of health care and the user's subjective reaction to the system. Mainly because of this, there are few evaluations of operational systems.

    The need for transparency

We would like greater transparency in the application of artificial intelligence in the clinical laboratory. It is particularly difficult in the clinical laboratory environment to evaluate embedded expert systems as they generally appear as black boxes to the user. Wulkan [7] asks the question: who is to be held responsible if the system contributes faulty information that could result in a decision that ultimately proves fatal to the patient? Hoffmann [65] suggests that responsibility should lie with those who apply such systems to provide information for the clinic.

There is a danger that expert systems may lend authority and even prejudice to shallow decisions by not disclosing the exact nature of the inference mechanisms, not identifying the human experts who have provided the expertise or the knowledge engineers who have interpreted this expertise for the expert system, or not disclosing the learning set used to train a neural network. Neural networks in particular are less amenable to logical explanation, although they can be evaluated in terms of the training data and the network performance on test data (Hart and Wyatt [66]). There are positive signs that this situation is beginning to change, with the introduction of explanation facilities that are also implemented on neural networks (Pau [14]).

It would seem ironic if the mechanisms of artificial intelligence were applied ineffectively, indeed unintelligently, to clinical laboratory information. Concerns have been expressed that applications of artificial intelligence could lead to a false uniformity and discourage dissenting opinion by crystallizing the majority opinion and current state of the art in a more permanent form. Again the situation seems to be improving, and it now seems more likely that minority opinions can be included in the accumulation of incremental knowledge as the knowledge base is built up.

We believe that it would be useful to look at the knowledge base in an AI system as a form of 'collective intellectual property' and to elaborate a set of recommendations for its exploitation.

    Items for future debate

This review could be supplemented with a survey, in the form of a questionnaire, concerning applications of embedded systems in laboratory instrumentation.

We recommend that a Working Group should be set up to draft guidelines for applications of artificial intelligence in the clinical laboratory. For expert systems, recommendations should be considered for the identification of the domain experts who have contributed to the expert system, the representation used (and its authorship) and the inference strategy used. For neural networks, we believe that the Working Group should consider recommendations for the training data and network performance on test data.

In addition, we would like to see the Working Group debate the question of intellectual property rights of authors of expert systems and of additional knowledge that may be added to an existing knowledge base. Finally, the Working Group might debate the question of copy protection of commercial systems in relation to publication and the need for greater transparency.

    References

1. BÜTTNER, J. and HABRICH, C., XIII International Congress of Clinical Chemistry (GIT Verlag, Darmstadt, 1987).
2. SKEGGS, L. T., Analytical Chemistry, 38 (1966), 31A.
3. OZAWA, K., SCHNIPELSKY, P., PARDUE, H. L., PLACE, J. F. and TRUCHAUD, A., Annales de Biochimie Clinique, 49 (1991), 528.
4. PARDUE, H. L., OZAWA, K., PLACE, J. F., SCHNIPELSKY, P. and TRUCHAUD, A., Systematic top-down approach to clinical chemistry, Annales de Biologie Clinique, 52 (1994), 311-320.
5. CAUDILL, M. and BUTLER, C., Naturally Intelligent Systems (MIT Press, Boston, 1990).
6. BARTELS, P. H. and WEBER, J. E., Analytical and Quantitative Cytology and Histology, 11 (1989).
7. WULKAN, R. W., Thesis: Expert systems and multivariate analysis in clinical chemistry (Erasmus University, Rotterdam, 1992).
8. SHORTLIFFE, E. H., Computer-Based Medical Consultations: MYCIN (Elsevier, New York, 1976).
9. O'CONNOR, M. L. and McKINNEY, T., Archives of Pathology and Laboratory Medicine, 113 (1989), 985.
10. BARTELS, P. H., BIBBO, M., GRAHAM, A., PAPLANUS, S., SHOEMAKER, R. L. and THOMPSON, D., Analytical Cellular Pathology, (1989), 195.
11. DU BOS, P., HERMANUS, J. P. and CANTRAINE, F., Medical Informatics, 14 (1989), 1.
12. SIELAFF, B. H., CONNELLY, D. P. and SCOTT, E. P., IEEE Transactions on Biomedical Engineering, 36 (1989), 541.
13. COMPTON, P., Clinical Chemistry, 36 (1990), 924.
14. PAU, L. F., Guidelines for Prototyping, Validation and Maintenance of Knowledge Based Systems Software (Electromagnetics Institute, Technical University of Denmark, 1990).
15. TESLER, L. G., Scientific American, 265 (1991), 54.
16. ALEKSANDER, I., in M. Reeve and S. E. Zenith (Eds.), Parallel Processing and Artificial Intelligence (John Wiley & Sons, Chichester, 1989).
17. FORNILX, S. L., Research and Development (Cahners Publishing, Hoofddorp, 1989).
18. McCULLOCH, W. S. and PITTS, W., Bulletin of Mathematical Biophysics, 5 (1943), 115.
19. COLLINS, M., Bio/Technology, 11 (1993), 163.
20. PSALTIS, D., BRADY, D., GU, X. G. and LIN, S., Nature, 343 (1990), 325.
21. PAU, L. F. and GØTZSCHE, T., Journal of Intelligent and Robotic Systems (April 1992).
22. LEVY, M., FERRAND, P. and CHIRAT, V., Computers in Biomedical Research, 22 (1989), 442.
23. PEYRET, M., FLANDROIS, J. P., CARRET, G. and PICHAT, C., Pathologie Biologie, 37 (1989), 624.
24. BRONZINO, J. D., MORELLI, R. A. and GOETHE, J. W., IEEE Transactions on Biomedical Engineering, 36 (1989), 533.
25. VAN MELLE, W., SHORTLIFFE, E. H. and BUCHANAN, B. G., Machine Intelligence, Infotech State of the Art Report, Series 9 (1989), No. 3.
26. ROTHENBERG, J., PAUL, J., KAMENY, I., KIPPS, J. R. and SWENSSON, M., Evaluating Expert System Tools: A Framework and Methodology (The RAND Corporation, Santa Monica, 1989).
27. ADAMS, I. D., CHAN, M. and CLIFFORD, P. C., British Medical Journal, 293 (1986), 800.
28. PERRY, C. A., Bulletin of the Medical Library Association, 78 (1990), 271.
29. BUCHANAN, B. G. and SHORTLIFFE, E. H. (Eds.), Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project (Addison-Wesley, Stanford, 1984).
30. WEISS, S. M., KULIKOWSKI, C. A., AMAREL, S. and SAFIR, A., Artificial Intelligence, 11 (1978), 145.
31. PORTER, J. F., KINGSLAND, L. C., LINDBERG, D. A. B. and SHAH, I., Arthritis and Rheumatism, 31 (1988), 219.
32. LINDBERG, D. A. B., GASTON, L. W., KINGSLAND, L. C. and VANIER, A. D., in Proceedings of the Fifth Annual Conference on Computer Applications in Medical Care (November 1981).
33. PAUKER, S. G., GORRY, G. A., KASSIRER, J. P. and SCHWARTZ, W. B., American Journal of Medicine, 60 (1976), 981.
34. PATIL, R. S., SZOLOVITS, P. and SCHWARTZ, W. B., in Szolovits, P. (Ed.), Artificial Intelligence in Medicine (Westview Press, Boulder, 1982).
35. MILLER, R. A., POPLE, H. E. and MYERS, J. D., New England Journal of Medicine, 307 (1982), 468.
36. BOUCKAERT, A., International Journal of Biomedical Computing, 20 (1982), 123.
37. FRENZEL, L. E., Understanding Expert Systems (Howard W. Sams & Co., Indianapolis, 1987).
38. WINKEL, P., Clinical Chemistry, 35 (1989), 1595.
39. TRENDELENBURG, C. and POHL, B., Annales de Biologie Clinique, 51 (1993), 226.
40. WIED, G. L., BARTELS, P. H., BIBBO, M. and DYTCH, H. E., Human Pathology, 20 (1989), 549.
41. DAWSON, A. E., AUSTIN, R. E. and WEINBERG, D. S., American Journal of Clinical Pathology, 95 (1991), 29.
42. WOLBERG, W. H. and MANGASARIAN, O. L., Analytical and Quantitative Cytology and Histology, 12 (1990), 314.
43. ASTION, M. L., WENER, M. H., THOMAS, R. G., HUNDER, G. G. and BLOCH, D. A., Clinical Chemistry, 39 (1993), 1998.
44. VALDIGUIÉ, P. M., ROGARI, E. and PHILIPPE, H., Clinical Chemistry.
45. REIBNEGGER, G., WEISS, G., WERNER-FELMAYER, G., JUDMAIER, G. and WACHTER, H., Proceedings of the National Academy of Sciences, 88 (1991), 11426.
46. PAU, L. F., Expert Systems, 3 (1986), 100.
47. GROTH, T. and MODIN, H., Computer Methods and Programs in Biomedicine, 34 (1991), 175.
48. WOODBURY, W. F., Laboratory Medicine, 20 (1989), 176.
49. PAPLANUS, S. H., GRAHAM, A. R., LAYTON, J. M. and BARTELS, P. H., Analytical and Quantitative Cytology and Histology, 7 (1985), 32.
50. PAU, L. F., Jinko Chino Gakkaishi, 5 (1990), 106.
51. LANDAU, J. A., NORWICH, K. H. and EVANS, S. J., International Journal of Biomedical Computing, 24 (1989), 117.
52. KOHONEN, T., in I. Aleksander (Ed.), Neural Computing Architectures (MIT Press, 1989).
53. KUROGI, S., Biological Cybernetics, 64 (1991), 243.
54. TURING, A. M., Mind, 59 (1950), 433.
55. BARTELS, P. H., WEBER, J. E. and DUCKSTEIN, L., Analytical and Quantitative Cytology and Histology, 10 (1988), 299.
56. KAHAN, J. S., Clinica, 463 (1990), 14.
57. Medical Device GMP Guidance for Investigators: Application of Medical Device GMPs to Computerized Devices and Manufacturing Processes (US FDA Draft, November 1990).
58. Reviewer Guidance for Computer Controlled Medical Devices Undergoing 510(k) Review (US FDA, Second Draft, February 1991).
59. Software Development Activities: Reference Materials and Training Aids for Investigators (Office of Regulatory Affairs, US FDA, July 1987).
60. Draft Policy for the Regulation of Computer Programs (US FDA, reissued November 1989).
61. European Council Directive 91/250/EEC (May 1991).
62. MILLER, P. L., Computer Methods and Programs in Biomedicine, 22 (1986), 5.
63. YU, V. L., FAGAN, L. M. and WRAITH, S. M., Journal of the American Medical Association, 242 (1979), 1279.
64. QUAGLINI, S. and STEFANELLI, M., Computers in Biomedical Research, 21 (1988), 307.
65. HOFFMANN, Arbeitsgruppe Geräte-Evaluation: Interaktion zwischen mechanisierten Analysegeräten und wissensverarbeitenden Systemen. Mitteilung Deutsche Gesellschaft für Klinische Chemie (1990), 25.
66. HART, A. and WYATT, J., Medical Informatics, 15 (1990), 229.
67. WEBER, J. E., BARTELS, P. H., GRISWOLD, W., KUHN, W., PAPLANUS, S. H. and GRAHAM, A. R., Analytical and Quantitative Cytology and Histology, 10 (1988), 150.
68. WEBER, J. E. and BARTELS, P. H., Analytical and Quantitative Cytology and Histology, 11 (1989), 249.
69. LENG, Y. W., PAU, L. F., SIGGAARD-ANDERSEN, O. and GØTHGEN, I. H., Proceedings of the AAAI Symposium on AI in Medicine (Stanford, CA, March 1990).
70. WIENER, F., DE VERDIER, C.-H. and GROTH, T., Scandinavian Journal of Clinical Laboratory Investigation, 50 (1990), 247.
71. PFURTSCHELLER, G., SCHWARZ, G., MOIK, H. and HAASE, V., Biomedizinische Technik, 34 (1989), 3.
72. BONADONNA, F., Medical Informatics, 15 (1990), 105.
73. ALONSO-BETANZOS, A., MORET-BONILLO, V. and HERNANDEZ-SANDE, C., IEEE Transactions on Biomedical Engineering, 38 (1991), 199.
74. WEISS, S. M. and KULIKOWSKI, C. A., in Proceedings of the Sixth International Conference on Artificial Intelligence (Information Processing Society of Japan, Tokyo, 1979).
75. GILL, H., LUDWIGS, U., MATELL, G., RUDOWSKI, R., SHAHSAVAR, R., STROM, C. and WIGERTZ, O., International Journal of Clinical Monitoring and Computing, 7 (1990), 1.
76. MARCHEVSKY, A. M., Analytical and Quantitative Cytology and Histology, 13 (1991), 89.
77. MERRY, M., GEC Journal of Research (1983), 39.
78. DE SWAAN, A. H., GROEN, A., HOVLAND, A. G. and STOOP, J. C., Delft Progress Report, 9 (1984), 63.
79. POHL, B. and TRENDELENBURG, C., Methods of Information in Medicine, 27 (1983), 111.
80. FURLONG, J. W., DUPUY, M. E. and HEINSIMER, J. A., American Journal of Clinical Pathology, 96 (1991), 134.
81. DASSEN, W. R., MULLENEERS, R., SMEETS, J., DEN DULK, K. and CRUZ, F., Journal of Electrocardiology, 23 (1990), 200.
82. GROSS, G. W., BOONE, J. M., GRECO-HUNT, V. and GREENBERG, B., Investigative Radiology, 25 (1990), 1017.
83. MULSANT, B. H., MD Computing, 7 (1990), 25.
84. WYTHOFF, B. J., LEVINE, S. P. and TOMELLINI, S. A., Analytical Chemistry, 62 (1990), 2702.
