[IEEE 2013 Fourth International Conference on Information and Communication Technology and Accessibility (ICTA), Hammamet, Tunisia, 24–26 October 2013]

Intelligent System to support SOAQs Assessment

S. Ben Salem, L. Cheniti Belcadhi, R. Braham

PRINCE Research Unit, ISITCom, Sousse University, Tunisia

[email protected]

Abstract—Concept mapping has been applied in several fields, including curriculum development, learning, and assessment. The validity of the concept mapping process has been proven by many empirical studies. The present paper describes a concept map based system conceived for learners’ knowledge assessment through the process-oriented evaluation of short and open answer questions (SOAQs). The presentation of the system focuses on its architecture in terms of modules, their interactions and their functions. The system comprises three modules: the learner module, the tutor module and the administrator module.

Keywords—concept mapping, computer-assisted assessment, SOAQs.

I. INTRODUCTION

Educational assessment is a process of drawing reasonable inferences about what students know on the basis of evidence derived from observation of what they say, do or make in selected situations [14]. Educational assessment does not exist in isolation, but should be aligned with instruction in order to support and enhance learning. Assessment functions may vary, ranging from the need to identify students’ prior knowledge to the need to draw conclusions about their overall understanding of the subject matter [8].

Educational assessment helps learners to identify what they have already learned, to observe their personal learning progress and to decide how to further direct their learning process. Tutors, on the other hand, can exploit the assessment results to give appropriate feedback and support to the learner during instruction, to formulate judgments about the quality and effectiveness of the provided educational material, and to modify the curriculum, their instruction and their teaching practices and strategies.

In educational settings where assessment is aligned with instruction, concept maps are considered a valuable tool in the assessment toolbox, as they provide an explicit and overt representation of learners’ knowledge structure and promote meaningful learning [12].

There is an extensive and growing literature on the use of concept maps as an alternative form of assessment [15]. Jonassen and Wang propose that well-developed structural knowledge (the knowledge of the structural inter-relationships of knowledge elements) of a content area is necessary in order to use that knowledge flexibly; concept maps may be an appropriate approach for assessing structural knowledge [3].

Concept maps facilitate creative thinking, enabling the user to make relationships between concepts. Apart from being used in educational environments, concept maps can be used in business. Some of their uses include note taking, summarizing, idea generation and knowledge creation [12].

A concept map is a graphical tool that allows learners to visualize their own knowledge and express it in a comprehensible form [11].

In educational assessment, concept maps are usually evaluated to reveal the user’s understanding or misunderstanding of the concepts of a given topic. This is done by comparing the user’s concept map to an expert’s concept map. Further, the relationship between learner essays and concept maps has been used for high-stakes assessment.

Questions with short and open answers (SOAQs) are a subcategory of free-text answers or essays. They are open-ended questions that require students to create an answer. They are commonly used in examinations to assess the basic knowledge and understanding (low cognitive levels) of a topic before more in-depth assessment questions are asked on the topic [1].

Short and open answer questions do not have a generic structure. They can be used as part of both formative and summative assessment; since their structure is very similar to that of examination questions, learners are more familiar with the practice and feel less anxious.

Questions may call for short descriptive or qualitative answers, among others. The answer is usually short, ranging from one word to a few lines [1]. Unlike MCQs, there is no guessing: students must supply an answer.

The main objective of this research work is to conceive the architecture of a system for SOAQs examination assessment. It must provide a result, such as a final score, that is close to the one the tutor would produce manually. This knowledge assessment system will therefore not exclude or substitute the tutor, but rather help him in web-based SOAQs assessment. This type of examination (SOAQs) is quite difficult to mark automatically because we are marking a developed natural language response written by the learner. This is why we introduce the use of concept maps as a novel technique to assess SOAQs examinations. A concept map is another way to reproduce the knowledge presented in the written answers, so we compare only the learner’s concept maps with the model concept maps produced by the tutor as the correct answers to the examination.

The use of concept maps in the assessment process of SOAQs is the first contribution of this research work.

The remainder of this paper is organized as follows. Related work is discussed in Section 2. Section 3 presents the underlying concepts of the developed system, such as process-oriented assessment and concept maps. The architecture of the system and the scenario of interaction between the system and its users are described in Section 4. Finally, Section 5 presents conclusions and outlines some directions for future work related to the further development of the system.

II. STATE OF THE ART

The automated assessment of learners’ open responses can be seen as the top level of a hierarchy in which two sub-categories can be identified: automatic assessment of short open-ended responses and automatic assessment of free text or essays.

Until now, research has focused on these two main sub-tasks of computer-aided assessment and has produced several systems that differ in the type of assessment but share one goal: to assist tutors and to produce an assessment that is correct and fairly close to the tutor’s.

The idea of computerizing the assessment of questions with short and open answers is not new at all: commercial and non-commercial tools have existed since 1966. Each of them has a different approach, different aims and a different background.

Most short answer assessment systems are situated in an educational context. Some focus on GCSE (General Certificate of Secondary Education) tests, others target university assessment tests in the medical domain [21].

Another thread of approaches focuses on language teaching and learning. All these approaches share one theme: they assess short texts written by students. These may be answers to questions that ask for knowledge acquired in a course, e.g. in computer science, or reading comprehension questions in second language learning. While thematically related, short answer assessment is different from essay grading: short answers are formulated by students in a much more controlled setting. Not only are they short, they usually are supposed to contain only a few facts that answer a single question.

Another common theme of these systems is that they compare the student answers to one or more previously defined correct answers that are either given in natural language as target answers or as a list of concepts in an answer key.

We consider some systems, such as the Facets system [7], Texas [18], CoMiC-EN [5], CoMiC-DE [6] and CoSeC-DE [17]. They constitute recent and interesting instances of their respective fields.

Nielsen et al. [7] base their system on what they call facets. These facets are meaning representations of parts of sentences, constructed automatically from dependency and semantic parses of the model responses. Each facet of the model response for a question is then looked up in the corresponding learner response and given one of five labels, ranging from unaddressed (the student did not mention the fact in this facet) to expressed (the student named the fact). This labeling step is performed via machine learning [21].

Mohler et al. [18] describe another recent approach. Learner responses and model responses are annotated using a dependency parser. Subgraphs of the dependency structures are then constructed in order to map one response onto the other; these alignments are generated using machine learning. Dealing with subgraphs allows for variation in word order between the two responses being compared. In order to account for meaning, Mohler et al. combine lexical semantic similarity with the aforementioned alignment. They make use of several WordNet-based measures and two corpus-based measures, namely Latent Semantic Analysis (LSA) and Explicit Semantic Analysis (ESA) [21].

While almost all short answer assessment research has targeted answers written in English, there are two recent approaches dealing with German answers.

The CoMiC-EN [5] reimplementation of CAM [22] was motivated by the need for a modular architecture supporting a transfer of the system to German, resulting in its counterpart, CoMiC-DE [6]. The German system uses the same strategies as the English one, but with the language-dependent processing modules replaced. The authors of [6] evaluated CoMiC-DE on a subset of the Corpus of Reading Comprehension Questions in German (CREG) [19], collected in collaboration with the German programs at The Ohio State University and the University of Kansas [21].

Hahn and Meurers [17] present the CoSeC-DE approach, based on Lexical Resource Semantics (LRS) [10]. In a first step, they create LRS representations from POS-tagged and dependency-parsed data. These underspecified LRS representations of student responses and target responses are then aligned. Using A* as a heuristic search algorithm, a best alignment is computed and given a numeric score representing the quality of the alignment of the formulae. If this best alignment scores higher than a threshold, the system judges the student response and the target response to convey the same meaning. The alignment and comparison mechanism does not use any linguistic representations other than the LRS semantic formulae. These semantic representations abstract away from surface features, e.g. by treating active and passive voice equally [21].

The answers to SOAQs are written and developed by learners in natural language. Most automatic assessment systems for questions with short and open answers rely on techniques such as natural language processing, machine learning and information extraction applied to the text of the learner’s response.

Questions with short and open answers, a favorite tool for tutors, can be computer-marked by natural language based assessment engines that aim to imitate human marking of free text. Short and open answer questions are traditionally used throughout the learning process because they are believed to reinforce learning and help develop cognitive skills, and they are the preferred instrument of the examiner because they effectively assess understanding without offering prompts or clues.

However, we note that the use of concept maps has not yet been introduced into the assessment process of SOAQs. This is the contribution of our system, which is based on comparing the learner’s concept map with the tutor’s (model) concept map.

III. UNDERLYING CONCEPTS OF SOAQ ASSESSMENT SYSTEM

A. Questions with short and open answers

The first element that must be prepared when teaching an online course is assessment. It is a vital part of determining learner achievement: it is used to measure the knowledge gained by learners and to determine whether adjustments need to be made to either the teaching or the learning process.

The assessment process is used primarily to measure cognitive abilities, demonstrating what has been learned after a particular educational event, such as the end of an instructional unit or chapter.

There are two forms of assessment: formative and summative. In our approach we are interested in the second one. Summative assessment measures attainment, understanding or achievement at a particular time and contributes to the grade the learner receives at the end of the test.

There are several techniques for assessing learners’ knowledge. Among them we find questions with short and open answers (SOAQs), where the learner is expected to develop a response in natural language ranging from a few words up to five lines [2].

SOAQs or short answer free-text questions are traditionally used in the classroom, in class tests, in revision, and of course in real examinations, but only now can they be embedded in eLearning and e-Assessment.

Marking short and open answer questions by computer is analogous to the human marking process. Both the human and the computer will: (i) make use of a pre-defined mark scheme which lists acceptable model answers for each question, (ii) analyze each learner response in turn to determine whether it matches any of the model answers, and (iii) award marks based on how many model answers, if any, were matched [23].
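As a minimal illustration of this three-step loop, the Python sketch below marks a learner response against a hypothetical mark scheme; the normalisation and the naive containment-matching rule are assumptions for illustration only, not the described system’s implementation.

```python
import re

def normalize(text: str) -> set:
    """Lowercase a response and reduce it to a set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def mark_response(response: str, mark_scheme: list) -> int:
    """Award one mark per model answer whose words all appear in the
    learner response (steps ii and iii of the marking process)."""
    words = normalize(response)
    return sum(1 for model_answer in mark_scheme
               if normalize(model_answer) <= words)

# Hypothetical mark scheme for one question (step i).
scheme = ["volatile memory", "loses data without power"]
print(mark_response("RAM is volatile memory: it loses data without power.",
                    scheme))  # prints 2
```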

B. Concept maps

Assessment in the proposed system is based on the notion of concept maps.

Formally, a concept map is a graph consisting of nodes and labeled lines [13]. The nodes correspond to important terms (representing concepts) in a domain. The connecting lines denote a directional relationship between a pair of concepts (nodes). The label on the line (explanation) conveys how the two concepts are related. The combination of two nodes and a labeled line is called a proposition. A proposition is the basic unit of meaning in a concept map and the smallest unit used to judge the validity of the relationship drawn between two concepts [9].
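To make this definition concrete, the following minimal Python sketch models a proposition and a concept map as a set of propositions; the class and field names are illustrative assumptions, not taken from the system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    """The basic unit of meaning in a concept map: two concepts (nodes)
    joined by a labeled, directed link."""
    source: str  # first concept (node)
    link: str    # linking phrase, usually containing a verb
    target: str  # second concept (node)

# A concept map is then simply a set of propositions.
concept_map = {
    Proposition("concept map", "is a", "graph"),
    Proposition("graph", "consists of", "nodes"),
    Proposition("graph", "consists of", "labeled lines"),
}
```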

When constructing a concept map, one needs to be careful that every two concepts together with their linking phrases form a unit of meaning, a claim, a short sentence. Thus a concept map consists of a graphical representation of a set of propositions about a topic.

Concept maps, then, purport to represent some important aspects of a learner’s declarative knowledge as found in the short answer written by the learner. Each concept in the map consists of the minimum number of words needed to express the object or event, and linking words are as concise as possible and usually include a verb. The concept words and the linking-phrase verbs are extracted from the learner’s answers to the examination questions [4].

Within any domain of knowledge, there is a hierarchy of concepts, where the most general concepts are at the "top" of the hierarchy and the more specific, less general concepts are arranged hierarchically below. Concept maps tend to be drawn in a graphically hierarchical fashion following this conceptual hierarchy.

The technology for developing, using, and evaluating concept maps as an assessment tool is currently being investigated. Concept map assessment can be thought of as a set of procedures used to measure important aspects of the structure of a learner’s declarative knowledge. The term “assessment” reflects the belief that reaching a judgment about an individual’s achievement in a domain requires the integration of several pieces of information [16].

A concept map assessment is composed of two parts:
• a concept mapping task;
• a concept map evaluation.

The concept mapping task is defined by those procedures that result in the construction of a concept map representing a learner’s knowledge.

There are two types of task: the “construct-a-map” task and the “fill-in-the-map” task [20].

For instance, a map may be constructed by the learner based on his/her responses to an assessment activity such as an examination test. Alternatively, learners may be asked to construct a concept map themselves using a provided list of concepts and verbs extracted from their text answers. These concepts and verbs are obtained through natural language processing of the learner’s text response, as sketched below.
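As an illustration of this extraction step, the sketch below uses NLTK’s off-the-shelf tokenizer and part-of-speech tagger to pull candidate concepts (nouns) and linking verbs from a learner’s answer; the choice of tooling is an assumption, not the system’s actual pipeline.

```python
import nltk  # assumes the 'punkt' tokenizer and POS-tagger models are installed

def extract_candidates(answer: str):
    """Reduce a learner's free-text answer to candidate concepts (nouns)
    and linking verbs, as raw material for building the concept map."""
    tagged = nltk.pos_tag(nltk.word_tokenize(answer))
    concepts = [word for word, tag in tagged if tag.startswith("NN")]
    verbs = [word for word, tag in tagged if tag.startswith("VB")]
    return concepts, verbs

concepts, verbs = extract_candidates("A concept map is a graph that consists of nodes.")
print(concepts)  # e.g. ['concept', 'map', 'graph', 'nodes']
print(verbs)     # e.g. ['is', 'consists']
```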

As this second type of task seems most practical for applications, it is the type of concept mapping task used in the assessments evaluated in our research study. We help learners construct their own concept maps by giving them a blank area with some circles and edges. They must put concepts on the circles and verbs on the edges as relations between concepts.

A concept map evaluation involves an examination of the content and structure of a concept map. The evaluation may involve qualitative and/or quantitative observations. The research reported here compares the learner’s concept map, which represents the learner’s answer to a test question, with the tutor’s concept map, which is the map of a model answer to the same question.

C. Similarity measures

The assessment method presented in this system is intended to use similarity measures. Such measures are constructed from the analysis of word occurrences in text and rest on Harris’s distributional assumption: the more often two words are used in the same contexts, the closer they are semantically [24].

Determining the degree of similarity between two concepts attached to the terms of a document is a problem that arises in many applications: disambiguation, automatic summarization, information extraction, automatic indexing, etc. We use similarity measures to quantify the degree of similarity between the learner’s concept map and the tutor’s (model) concept map. This comparison technique is especially well suited to assessing questions with short and open answers.
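As an illustration, the following sketch computes a WordNet-based similarity between two concept labels using NLTK; the choice of path similarity is an assumption, since the paper does not commit to a particular measure.

```python
from nltk.corpus import wordnet as wn  # assumes the WordNet corpus is installed

def word_similarity(word_a: str, word_b: str) -> float:
    """Best WordNet path similarity over all noun senses of the two words:
    1.0 for identical or synonymous concepts, lower for distant ones."""
    scores = [sa.path_similarity(sb) or 0.0
              for sa in wn.synsets(word_a, pos=wn.NOUN)
              for sb in wn.synsets(word_b, pos=wn.NOUN)]
    return max(scores, default=0.0)

print(word_similarity("graph", "diagram"))  # related concepts: well above 0
print(word_similarity("graph", "banana"))   # unrelated concepts: close to 0
```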

IV. SYSTEM’S ARCHITECTURE

The conceived intelligent system consists of an intelligent module for the assessment of learners’ responses in a test composed of SOAQs. It acts as a virtual intelligent tutor.

The following scenario describes the interaction between the system and its two users, the tutor and the learner (Fig. 1).

Using the system, the tutor selects some key words and some useful action verbs, and the system can generate questions with short and open answers. Once the test is ready, the tutor creates a concept map for each question’s answer. This concept map is called the reference or model concept map on which the assessment activity is based.
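The paper does not specify how questions are generated from the key words and action verbs; the following is a purely hypothetical, template-based sketch of what such a generator could look like.

```python
def generate_soaqs(keywords, action_verbs):
    """Yield one short open question per (action verb, keyword) pair,
    using a single illustrative template."""
    for keyword in keywords:
        for verb in action_verbs:
            yield f"{verb.capitalize()} the notion of {keyword} in a few lines."

for question in generate_soaqs(["concept map"], ["define", "explain"]):
    print(question)
# Define the notion of concept map in a few lines.
# Explain the notion of concept map in a few lines.
```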

The learner, faced with the assessment activity, develops his/her answers in natural language; these must conform to the SOAQ response format. After automatic natural language processing, the learner creates a concept map for each answer in the test.

After finishing the concept map, the learner confirms his/her solution and the ‘virtual intelligent tutor’ performs its analysis, comparing the concept maps of the learner and of the ‘real tutor’. The virtual intelligent tutor produces a final comparison score and gives feedback to the learner. Fig. 1 displays the described scenario.

Fig. 1 The scenario of the system’s operation

The tutor, learner and administrator modules make up the intelligent virtual tutor system’s architecture. Each module is named after the category of system users to which it provides a set of operations.

The modules act together, sharing a database which stores data about the system’s users (tutors, learners and the administrator), test feedback and the SOAQs themselves.

The administrator maintains the system; his/her crucial responsibilities are to supervise the data about the different types of users (learners and tutors), test feedback and SOAQ tests, using administrator module functions such as data input, deletion and editing (Fig. 2).

The tutor’s module supports the tutor in the development of the SOAQ test and of a concept map for each question in the test. Its main operations are the following: selecting key words from the learning course’s keyword list and some useful action verbs to design the questions that will appear in the test, and developing a concept map for each question’s answer.

A graphical user interface is used for writing the learner’s responses and for concept map development and editing.

There is another tutor, called the virtual intelligent (VI) tutor, which assists the real tutor in producing a final score and feedback for the learner. The VI tutor’s main function is to compare the learner’s concept maps for a test with the model concept maps given by the real tutor at the beginning of the assessment activity. The comparison process is based on similarity measurement techniques and on WordNet as an external resource.
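To make the comparison step concrete, here is a minimal sketch of how such a scoring could be assembled from the Proposition structure and the WordNet-based word_similarity helper sketched earlier; the averaging and best-match aggregation rules are assumptions, not the system’s specified algorithm.

```python
def proposition_similarity(p, q) -> float:
    """Similarity of two propositions: mean WordNet similarity of their
    source and target concepts (linking phrases are ignored here)."""
    return (word_similarity(p.source, q.source)
            + word_similarity(p.target, q.target)) / 2.0

def score_map(learner_map, model_map) -> float:
    """For each model proposition, keep its best match in the learner map;
    the final score is the mean best-match similarity, in [0, 1]."""
    if not model_map or not learner_map:
        return 0.0
    best_matches = [max(proposition_similarity(lp, mp) for lp in learner_map)
                    for mp in model_map]
    return sum(best_matches) / len(model_map)
```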

We note that this virtual intelligent tutor does not exclude the real tutor, but rather helps him to assign a final score to the learner and to provide appropriate feedback on the answers in the assessment activity.

Fig. 2 The architecture of the system

V. CONCLUSION AND FUTURE WORK

This paper describes an intelligent tutoring system for the assessment of learners’ knowledge at the end of a learning course, as a summative assessment activity. The underlying concepts of the system are questions with short and open answers, concept maps and similarity measures. The core system is composed of three interdependent modules (administrator, learner and tutor).

In future work, we will detail the core of the assessment process, namely the virtual intelligent tutor. It will be able to assess learners’ responses and to produce a final score and feedback.

REFERENCES

[1] C. Chan, “Assessment: Short Answer Questions”, University of Hong Kong, 2009. [http://ar.cetl.hku.hk]

[2] D. Callear, J. Jerrams-Smith, and V. Soh, “CAA of short non-MCQ answers”, in Proceedings of the 5th International Computer Assisted Assessment Conference, 2001.

[3] D. H. Jonassen and S. Wang, “Acquiring structural knowledge from semantically structured hypertext”, in Proceedings of Selected Research and Development Presentations at the 14th Annual Convention of the Association for Educational Communications and Technology, 1992.

[4] D. Jennings, “The Use of Concept Maps for Assessment”, UCD Teaching and Learning / Resources, 2012.

[5] D. Meurers, R. Ziai, N. Ott, and S. Bailey, “Integrating parallel analysis modules to evaluate the meaning of answers to reading comprehension questions”, IJCEELL, Special Issue on Automatic Free-text Evaluation, pp. 355–369, 2011a.

[6] D. Meurers, R. Ziai, N. Ott, and J. Kopp, “Evaluating answers to reading comprehension questions in context: Results for German and the role of information structure”, in Proceedings of the TextInfer 2011 Workshop on Textual Entailment, Edinburgh, Scotland, UK, Association for Computational Linguistics, 2011b.

[7] R. D. Nielsen, W. Ward, and J. H. Martin, “Recognizing entailment in intelligent tutoring systems”, Natural Language Engineering, pp. 479–501, 2009.

[8] E. Gouli, A. Gogoulou and M. Grigoriadou, “A coherent and integrated framework using concept maps for various educational assessment functions”, Journal of Information Technology Education, vol. 2, 2003.

[9] F. J. R. C. Dochy, “Assessment of domain-specific and domain-transcending prior knowledge: Entry assessment and the use of profile analysis”, in Alternatives in Assessment of Achievements, Learning Processes and Prior Knowledge, Boston, MA: Kluwer Academic Publishers, pp. 93–129, 1996.

[10] F. Richter and M. Sailer, “Basic concepts of lexical resource semantics”, in A. Beckmann and N. Preining (eds.), ESSLLI 2003 – Course Material I, vol. 5, pp. 87–143, 2003.

[11] A. J. Cañas, R. Carff, G. Hill, M. Carvalho, M. Arguedas, T. C. Eskridge, J. Lott and R. Carvajal, “Concept maps: Integrating knowledge and information visualization”, in S.-O. Tergan and T. Keller (eds.), Knowledge and Information Visualization, vol. 3426 of Lecture Notes in Computer Science, pp. 205–219, Springer, 2005.

[12] J. Novak and D. Gowin, Learning How to Learn, New York: Cambridge University Press, 1984.

[13] J. Novak, D. Gowin and J. T. Johansen, “The use of concept mapping and knowledge vee mapping with junior high school science students”, Science Education, pp. 625–645, 1983.

[14] J. Pellegrino, N. Chudowsky and R. Glaser, Knowing What Students Know: The Science and Design of Educational Assessment, National Academy of Sciences, Washington, DC: National Academy Press, 2001.

[15] K. M. Markham, J. J. Mintzes and M. G. Jones, “The concept map as a research and evaluation tool: Further evidence of validity”, Journal of Research in Science Teaching, vol. 31, no. 1, pp. 91–101, 1994.

[16] L. J. Cronbach, Essentials of Psychological Testing, New York: Harper and Row, 1990.

[17] M. Hahn and D. Meurers, “Evaluating the meaning of answers to reading comprehension questions: A semantics-based approach”, in Proceedings of the 7th Workshop on Innovative Use of NLP for Building Educational Applications (BEA-7) at NAACL-HLT, Montreal, 2012.

[18] M. Mohler, R. Bunescu, and R. Mihalcea, “Learning to grade short answer questions using semantic similarity measures and dependency graph alignments”, in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, Oregon, USA, Association for Computational Linguistics, pp. 752–762, 2011.

[19] N. Ott, R. Ziai, and D. Meurers, “Creation and analysis of a reading comprehension exercise corpus: Towards evaluating meaning in context”, in T. Schmidt and K. Wörner (eds.), Multilingual Corpora and Multilingual Corpus Analysis, Hamburg Studies in Multilingualism (HSM), Amsterdam: Benjamins, 2012.

[20] J. R. McClure, B. Sonak and H. K. Suen, “Concept map assessment of classroom learning: Reliability, validity, and logistical practicality”, Journal of Research in Science Teaching, vol. 36, no. 4, pp. 475–492, 1999.

[21] R. Ziai, N. Ott and D. Meurers, “Short answer assessment: Establishing links between research strands”, in Proceedings of the 7th Workshop on Innovative Use of NLP for Building Educational Applications, Montreal, Canada, pp. 190–200, 2012.

[22] S. Bailey, “Content Assessment in Intelligent Computer-Aided Language Learning: Meaning Error Diagnosis for English as a Second Language”, Ph.D. thesis, The Ohio State University, 2008.

[23] T. Mitchell, N. Aldridge and P. Broomhead, “Computerised marking of short-answer free-text responses”, 2003.

[24] Z. Harris, M. Gottfried, T. Ryckman, P. Mattick, A. Daladier, T. N. Harris and S. Harris, The Form of Information in Science: Analysis of an Immunology Sublanguage, Dordrecht: Kluwer Academic Publishers, 1989.