
PROMOTING PRODUCTIVITY BY PROPAGATING THE PRACTICE OF "PLUG-COMPATIBLE" PROGRAMMING

Robert A. Greenes, M.D., Ph.D.

Decision Systems Group, Harvard Medical School
Brigham and Women's Hospital, Boston, Massachusetts

Information needs of health care professionals or managers include the ability to "horizontally integrate" a variety of data and knowledge access and manipulation tasks, to support their problem solving, decision making, and educational activities. This introduces a complexity into application environments not present in more traditional, vertical, task-oriented applications. As a consequence, applications are facing new demands to draw upon and utilize externally developed data and knowledge resources and tools in a consistent fashion, and to support human interface paradigms suited to the particular needs of the user. To enable these capabilities requires (a) partitioning of information requirements into their component data and knowledge access and manipulation tasks; and (b) environments for combining these into problem-focused applications. This has created a need for software engineering practices that (a) facilitate separate development, maintenance, and update of data and knowledge resources and tools, and (b) compositional methods for supporting particular views, organizations, or uses of these data and knowledge resources. Such capabilities are becoming available, and have major implications, not only for application development but for group, collaborative work.

Standards are not Sufficient

Standardization is a double-edged sword. On the one hand, it encourages compatibility. On the other, it discourages innovation and exploration. Furthermore, standardization alone will not allow us to build complex integrative applications. What we need in addition is a methodology for combining components, and for utilizing and building on the work of others, in the development of our applications.

Standardization efforts have to a large extent focused on (a) terminology [1] and (b) format of messages between computers [2-5]. We focus here on a different problem: how to build complex applications out of modular components. This task will be greatly aided by the kinds of standards resulting from the above efforts, but they are not enough.

What we also need are modularity of software, separability of data and knowledge from the applications, "plug compatibility" communication protocols, and methods for composition, organization, and viewing of data and knowledge that are easily adapted to specific problems and needs. This may be viewed as a form of standardization itself, but it is also a development framework or methodological approach that allows internal standardization to evolve gradually. For example, we do not insist on a single format for hypermedia records (which does not exist, but is being explored [6]), or for expert system rule logic, or for data of a variety of other types. We isolate the access to these components through adaptor entities that know how to access them. Variations of this idea are the "knowbots" proposed by Cerf and Kahn in [7] and the "mediator" proposed by Wiederhold in [8]. If another data format needs to be accommodated, we specialize or adapt an existing adaptor entity for this.

Eventually, one would hope that the number of variants of data or knowledge formats for specific purposes would converge on "standards", but we are able to proceed to integrate disparate components without such convergence. This is, in a sense, similar to the network data transfer standards, except that it works at a much smaller data or knowledge component level.
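For illustration, the adaptor-entity idea might be rendered as in the following sketch (a modern Python rendering, not the paper's implementation; the format names and record layouts are invented for the example). Each adaptor knows how to access one storage format, so applications never touch raw formats directly, and a new format is accommodated by adding or specializing an adaptor rather than changing the applications.

```python
# Illustrative sketch of "adaptor entities": each adaptor encapsulates
# knowledge of one (hypothetical) storage format behind a common interface.

class Adaptor:
    """Base adaptor: uniform access to a data or knowledge entity."""
    def fetch(self, record):
        raise NotImplementedError

class TabDelimitedAdaptor(Adaptor):
    """Handles a hypothetical tab-delimited format: title<TAB>body."""
    def fetch(self, record):
        title, body = record.split("\t", 1)
        return {"title": title, "body": body}

class KeyValueAdaptor(Adaptor):
    """Handles a hypothetical key = value line format."""
    def fetch(self, record):
        pairs = (line.split("=", 1) for line in record.splitlines())
        return {k.strip(): v.strip() for k, v in pairs}

# Accommodating another format means registering a new or specialized
# adaptor here; consumers of access() are unaffected.
ADAPTORS = {"tsv": TabDelimitedAdaptor(), "kv": KeyValueAdaptor()}

def access(fmt, record):
    return ADAPTORS[fmt].fetch(record)
```

The point of the sketch is that convergence on a single record format is not required before integration can proceed; disparity is absorbed at the adaptor layer.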

Demands for Distributed Data and Decision Aids

Whether one realizes it or not, a paradigm shift has occurred in terms of the kinds of applications that we are building and that users are demanding [9,10]. This has been engendered in part by a shift in the user population, from being primarily focused on the specialized power users in the past to also including the managers and professionals in the present. The former were able to have their needs met by tailored, powerful, "vertical" applications. The latter require different kinds of applications that support an integrated, "horizontal" approach to problem solving, utilizing many different components as needed, supporting the "cut and paste" metaphor, and providing a broad range of capabilities.

Inhibitory Impact of Isolated Implementations

The vertical application development approach, while meeting specific needs, may serve to impede the development of comprehensive applications.

Reinventing the wheel, incompatibility

While many developments that have been carried out in independent, isolated fashion are of very high quality, they are typically incompatible with one another, and do not relate to or take advantage of each other. Because of their narrow focus and incompatibility, as well as the difficulty of incorporating existing components, there is much reinventing of the wheel.

Embedded data and knowledge

A number of developments embed the data or knowledge they use in the application; i.e., it is not set up in a way that facilitates, or even permits, access and utilization by another program, or incorporation in an application with another purpose.

0195-4210/90/0000/0022$01.00 © 1990 SCAMC, Inc.

Tool-centered rather than problem-centered focus

Another difficulty is that much work in the past has been focused on the building of tools, and on then seeking to find ways in which that tool can be applied. In that sense, the emphasis has been on technology in search of problems. Problems, on the other hand, arise in real life, for which a variety of tools may be needed, perhaps at different stages in the problem's exploration or solution. Thus, to support problem solving, we need a problem-based focus, not a tool-oriented focus. We need to select the appropriate tools, as needed, during the solution process.

Difficulty in building complex applications, or reaching critical mass

Building complex, comprehensive applications is very difficult without the ability to build on the work of others, and to do cooperative work. As a result of the above problems, cooperative work and sharing do not occur to any great degree. We rarely reach a critical mass of capability, where an application is truly robust, comprehensive, and useful over a wide range of tasks and needs.

Gaining Ground with "Groupware"

Given the above difficulties, how do we address the problem? The approach we advocate, which we are pursuing in our own laboratory [11,12] with the DeSyGNER software architecture we have been developing, involves the following components:

Functional decomposition of program tasks

Complex applications may involve a number of separate component activities. These may utilize discrete data or knowledge elements, and involve displaying or manipulating them in specific ways. For example, a patient workup consultation application may involve: (a) identification of the clinical problem, (b) selection of necessary data elements from the patient's computer-based medical record, (c) review and modification by the physician user, (d) analysis of patient data and production of recommendations, (e) invocation of explanation or other detail to support recommendations, (f) retrieval of the abstract of a referenced article, and (g) ordering of the indicated diagnostic procedure, with conveying of relevant clinical history. Each of the above tasks can be viewed as involving a particular kind of user-computer, or computer-computer, interaction and a specific set of data or knowledge, and the problem-solving process moves from one task to another as the interaction proceeds, passing relevant data from one task to the next as needed.
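The decomposition just described can be pictured as a chain of independent task functions passing a shared context forward. The following sketch is purely illustrative (the task names and data fields are invented, and only a few of the lettered tasks are shown), but it captures the control and data flow:

```python
# Sketch of the workup-consultation decomposition: each task is a separate
# modular unit; relevant data is passed from one task to the next via a
# shared context. All task names and field values here are hypothetical.

def identify_problem(ctx):
    ctx["problem"] = "chest pain"                     # task (a)
    return ctx

def select_record_data(ctx):
    ctx["data"] = {"age": 61, "ecg": "ST depression"} # task (b)
    return ctx

def analyze(ctx):
    ctx["recommendation"] = "exercise stress test"    # task (d)
    return ctx

def order_procedure(ctx):
    # task (g): ordering conveys relevant clinical history along with it
    ctx["order"] = (ctx["recommendation"], ctx["problem"])
    return ctx

PIPELINE = [identify_problem, select_record_data, analyze, order_procedure]

def run(ctx=None):
    ctx = ctx or {}
    for task in PIPELINE:  # control moves task to task as interaction proceeds
        ctx = task(ctx)
    return ctx
```

Because each step is a self-contained unit operating on the shared context, any step could in principle be replaced by an externally developed module with the same interface.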

Identification of modular tools

As applications are developed that perform functions such as the above, it is clear that certain kinds of data or knowledge interaction tend to be needed over and over again. These can be supported by having a library of tools that are able to support the specific classes of interactions, e.g., display and browsing through hypertext; retrieval and manipulation of medical images (such as x-rays); tabular or graphical display of data; modeling, which might include display of data as above, but also supporting dynamic manipulation of input quantities and viewing of results; and inference procedures for probabilistic or heuristic evaluation. Each of these tools operates on specific classes of data or knowledge in a particular way, but each instance of use operates on specific data or knowledge "entities".

Archiving reusable content (data and knowledge)

In addition to the tools library, therefore, there is a need for storage and retrieval of the data and knowledge entities to which the tools will be applied, in a particular domain. This constitutes the database or knowledge base for the application. Each data or knowledge entity contains, as part of its database record, an indication of the tools necessary to store, retrieve, and otherwise manipulate it. This is a form of "object-oriented" database structure [13], in which the tool is actually a prototype object, and the data elements are instances of the object. Note that we emphasize here the approach of organizing data and knowledge resources in a fashion that lets them be incorporated in applications as needed, rather than an approach in which the data and knowledge are embedded within the applications.

Data and control structure independence: Development of compositional paradigms, views, and organizational methods

The view of data and knowledge as separable entities can only be pursued to a limited extent. What gives applications their power is often the ability to superimpose on individual entities an organizational framework, a view, a sequence, or other method of composition of the entities that suits the problem or task [14]. For example, individual knowledge entities can be composed into page layout displays, book layout hierarchical organizations, webs of hyperlink connectivity, entities collected on the basis of the semantic concepts by which they are indexed, or conditional sequences of displays guided by a protocol or tutorial strategy.

Given a set of data or knowledge content elements, links may also be superimposed on them in the context in which they are displayed, allowing "hyperlinking". A different set of links may be superimposed on the same data or knowledge in different contexts. These superimpositions of control information constitute higher level entities. Other compositions, e.g., in page layout or book layout form, can recursively combine more elemental entities. Other methods for composition of data and knowledge elements may also be used, e.g., semantic indexing of them, protocol-based linking of them, or combination in a question/testing mode. We consider data and knowledge to constitute a hierarchy, ranging from raw data and knowledge elements, to their formatting into elemental entities, to their composition into higher level groupings, presentations, and control sequences. These composition methods are supported by a variety of compositional tools, analogous to the tools used to support elemental data and knowledge entities.
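The hierarchy sketched here, with elemental entities recursively combined into higher-level compositions and context-specific links superimposed without altering the entities themselves, is essentially what later literature calls a composite structure. A schematic rendering follows; all entity names and link sets are invented for illustration:

```python
# Schematic of the composition hierarchy: elemental entities at the leaves,
# compositions (page, book, protocol sequence) recursively combining them,
# and per-context link sets superimposed without changing the entities.

class Elemental:
    def __init__(self, name):
        self.name = name
    def entities(self):
        return [self]

class Composition:
    """Recursively combines elemental entities or other compositions."""
    def __init__(self, *parts):
        self.parts = parts
    def entities(self):
        return [e for part in self.parts for e in part.entities()]

intro, detail, summary = Elemental("intro"), Elemental("detail"), Elemental("summary")
page = Composition(intro, detail)        # page-layout composition
book = Composition(page, summary)        # book-layout composition of compositions

# Different link sets superimposed on the SAME content in different contexts:
links_tutorial = {"intro": ["detail"]}   # guided, protocol-like sequence
links_browse   = {"detail": ["intro"]}   # free hyperlinked browsing
```

Note that `book` never copies its constituents; composition is pure control information layered over shared content, matching the text's separation of data from control structure.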

Development of "plug compatibility" conventions

Within an application, the ability to create a link to another program module can be implemented in a variety of ways. To be generic, this capability needs to be able to (a) dynamically link during run time, (b) link to separately developed, non-integrated applications, (c) have common access to needed data and control information, and (d) utilize common display areas, ideally being able to update existing contents. Given that we can hyperlink from one data or knowledge content entity to another, for example, we need to not only invoke the target entity, but optionally indicate the position within the entity, and pass parameters to the entity, for all types of entities that support hyperlinking.
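One way to read requirements (a) through (d) and the hyperlink case is as a single invocation signature that every linkable module agrees to accept: a target entity, an optional position within it, and optional parameters. The sketch below is a hypothetical rendering of such a convention; the module names and behaviors are invented.

```python
# Sketch of a "plug compatibility" convention: every linkable module exposes
# the same invocation signature (target entity, optional position within it,
# optional parameters). Module names and outputs are hypothetical.

from typing import Optional

class LinkableModule:
    def invoke(self, entity_id: str, position: Optional[str] = None,
               params: Optional[dict] = None) -> str:
        raise NotImplementedError

class HypertextBrowser(LinkableModule):
    def invoke(self, entity_id, position=None, params=None):
        where = f" at {position}" if position else ""
        return f"browse {entity_id}{where}"

class RecordViewer(LinkableModule):
    def invoke(self, entity_id, position=None, params=None):
        fields = ",".join((params or {}).get("fields", []))
        return f"show {entity_id} fields={fields}"

def follow_link(module: LinkableModule, entity_id, **kw):
    # Common entry point: any module honoring the convention can be the
    # target of a link, resolved at run time.
    return follow_target(module, entity_id, **kw)

def follow_target(module, entity_id, **kw):
    return module.invoke(entity_id, **kw)
```

Because the signature is uniform, separately developed modules can be linked dynamically without the linking application knowing their internals, which is precisely requirement (b).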

Extensibility and adaptability to unique formats

While the above capabilities can be provided for any number of anticipated data and knowledge types and presentation formats, there will invariably be others. In addition, given the lack of fixed standards, external databases will typically be in their own unique formats. Rather than being unable to utilize them or unable to respond to specialized needs, the entity-based approach to data and knowledge storage is one that allows extensibility to accommodate those needs within a generic plug compatibility framework.

Evolution to consistent formats

Over time, it can be expected that those data and knowledge formats that are most frequently used will become de facto standards. Other standards will be decreed, or explicitly developed. These can be accommodated as they become available, but the development of useful applications need not wait for them.

Development of "shells" or "authoring and user environments"

To support the various compositions, organizations, and control structures we may wish to use in building applications, we need "shell" or "author environment" tools that facilitate creation, composition, and structuring of individual entities into applications. User environments also need to be provided for interpreting the structure and navigating it; these can usually be implemented as a reduced-functionality version of the authoring environment.
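The relationship between the two environments can be sketched directly: the user environment is the authoring environment with the creation and structuring capabilities disabled. The class names below are illustrative, not taken from any actual system.

```python
# Sketch of the authoring-vs-user environment relationship: the user
# (navigation-only) environment reuses the authoring environment's
# interpretation/navigation machinery but disables editing.

class AuthoringEnvironment:
    def __init__(self):
        self.structure = []
    def compose(self, entity):
        # Creation/structuring capability: only authors may do this.
        self.structure.append(entity)
    def navigate(self):
        # Interpreting and traversing the structure: shared by both.
        return list(self.structure)

class UserEnvironment(AuthoringEnvironment):
    """Reduced-functionality version: navigation only."""
    def compose(self, entity):
        raise PermissionError("user environment is read-only")
```

Implementing the user environment as a restriction of the authoring one, rather than as a separate program, means the interpretation logic is written once, consistent with the reuse theme of the paper.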


The distribution of responsibility for development and support of individual data and knowledge entities, and the separation of this activity from that of constructing applications, are depicted in Fig. 1.

Implications of Integrative Initiatives

Re-examination of what a program is

We present this model based on our experience with it, and the power we believe it imparts to developers. The most important aspect of the process is the functional decomposition of an application into modular units, the recognition of the similarity of these and of the ability to support them with tools, and the ease of recombining them through the use of compositional tools into the functional applications desired. A consequence of this view is that the perspective on what needs to be implemented, given a functional problem to solve, may be changed, once a tool box of modules exists. An author may either combine existing modules, or do so with the addition of a limited number of additional modules.

[Figure 1 graphic: recoverable panel labels include "Protocol-based tutorial," "Hyperlinked browsing," and "Composition of Entities to Create Applications."]

Figure 1. Separation of responsibility for (a) entity development and support via partitioning, and (b) construction of applications through use of composition methods


Valuation of individual contribution vs. collective tool box and data pool

A consequence of this approach is that the contribution of an individual author will become less and less at the tool level, and more and more at the content and compositional levels. To the extent that data and knowledge entities become collected in reusable archives, furthermore, the need for de novo content development may also decrease.

Methods for collection, distribution, dissemination

Several kinds of "products" result from this view: the kernel tools, libraries of additional tools and extensions, shells or authoring environments, data or knowledge content entity libraries, and application knowledge bases. The libraries of tools and of data and knowledge content entities are somewhat unique to this model, in that they become part of a growing body of resources that are ideally available to all developers on some basis.

Financial, legal, proprietary factors

How should libraries of tools or content be offered? What constitutes a copy [15]? How should communal data and knowledge bases be updated and maintained? Who are most likely to take on these tasks: publishing companies, academic institutions, the government [16], consortia, or private industry? Should a royalty meter be incremented each time a knowledge element is accessed, as in Nelson's "Docuverse" model [17]? Or should individuals give up rights to their contributions in exchange for the ability to utilize a much larger library of communal resources? What are the rules for developing proprietary products that utilize communal resources?

The idea of "groupware" is currently gaining ground in the business community as the needs of professionals and managers to collect, assimilate, manipulate, and communicate data from diverse sources are being recognized. This is resulting in a change in the nature of products and services being offered by software vendors [18]. The change is being enabled by the increased power of workstations, with graphical user interfaces and multitasking operating systems, as well as the increasing availability of network communications and access to specialized servers. In turn, it will invariably result in new software architectures and development tools.

Social/organizational impact, and enabling strategies

We could go so far as to say that if we had the ability to create applications by assembling modular tools and content from a vast menu of them, it may have major impact on our organizations, channels of communication, and professional interactions. We need to consider these in terms of strategies for promoting and enabling communal exchange and cooperation.

Acknowledgment

Supported in part by grants LM 04572 and LM 07037 from the National Library of Medicine, and by a grant from the Science and Technology Division, NYNEX, Inc.

References

1. Humphreys BL, Lindberg DAB. Building the Unified Medical Language System. Proc Thirteenth Annual Sympos on Computer Applications in Medical Care (SCAMC), Washington, DC. New York: IEEE Computer Society Press. November, 1989; 475-480.

2. McDonald C, Hammond W. Standard formats for electronic transfer of clinical data. Ann Int Med, 1989; 110: 333-335.

3. Health Level Seven Standards: An Application Protocol for Electronic Data Exchange in Health Care Environments, Version 2, HL7, Philadelphia, PA, 1988; 1-15.

4. ACR-NEMA Digital Imaging and Communications Standard, publication No. 300-1985, available from the National Electrical Manufacturers Association, 2101 L St, NW, Suite 300, Washington, DC 20037.

5. Open Systems Interconnection (OSI): Standard Architecture and Protocols. Folts HC, des Jardins R. (eds): Proc IEEE. Washington, DC, IEEE Computer Society Press, 1983.

6. Gallagher L, ed. Proc Hypertext Standardization Workshop. Gaithersburg, MD, January, 1990. Gaithersburg: National Institute of Standards and Technology, 1990.

7. Lederberg J, Uncapher K. Towards a National Collaboratory, NSF, March 1989.

8. Wiederhold G. Future architectures for information processing systems. Rishe, Navathe, Tal (eds): Proc Parbase-90, Int Conf on Databases, Parallel Architectures, and Their Implementation, IEEE Computer Society Press, March 1990; 160-176.

9. Greenes RA. Medical informatics: Academic and institutional perspectives of an emerging specialty. (Keynote address). Proc Computer in Health Sciences Sympos, Newark, NJ, November, 1988; 1-7.

10. Greenes RA, Shortliffe EH: Medical informatics: An emerging academic discipline and institutional priority. JAMA, 1990; 263: 1114-1120.

11. Greenes RA: "Desktop knowledge": A new focus for medical education and decision support. Proc First IMIA Medical Informatics and Education International Symposium, Victoria, BC, Canada, May, 1989; 89-96. Also published in: Meth Inf Med, 1989; 28(4): 332-339.

12. Greenes RA, Deibel SRA: The DeSyGNER knowledge management architecture: A building block approach based on an extensible kernel. Artif Intell in Med, 1990; in press.

13. Wiederhold G. Views, objects, and databases. Computer, 1986; 19(12): 37-44.

14. Approaches to sharing and collaboration through modular system design: A focus on knowledge management. Proc SEMI: IMIA Working Conf on Software Engineering in Medical Informatics, Amsterdam, The Netherlands, October, 1990; in press.

15. Fisher FD. The electronic lumber yard and builders' rights: Technology, copyrights, patents, and academe. Change. 1989; 21(3): 13-21.

16. Kahin B. Toward a public information infrastructure: Information policy and the Internet. Report prepared for U.S. Congress Office of Technology Assessment, Contract No. L3-5445.0, May 15, 1990.


17. Nelson TH. Literary Machines. (Edition 87.1) San Antonio: Project Xanadu, 1987.

18. Software: It's a new game. Business Week, June 4, 1990; 3162: 102-105.
