

Biologically Inspired Cognitive Architectures (2016) 16, 19–33

Available at www.sciencedirect.com

ScienceDirect

journal homepage: www.elsevier.com/locate/bica

RESEARCH ARTICLE

An architecture of narrative memory

http://dx.doi.org/10.1016/j.bica.2016.04.002
2212-683X/© 2016 The Author. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

E-mail address: [email protected]

Carlos León

Department of Software Engineering and Artificial Intelligence, Computer Science Faculty, Universidad Complutense de Madrid, Spain

Received 1 March 2016; received in revised form 31 March 2016; accepted 5 April 2016

KEYWORDS
Narrative; Cognitive architecture; Episodic memory; Semantic memory; Procedural memory; Script

Abstract

Narrative is ubiquitous. According to some models, this is due to the hypothesis that narrative is not only a successful way of communication, but a specific way of structuring knowledge. While most cognitive architectures acknowledge the importance of narrative, they usually do so from a functional point of view and not as a fundamental way of storing material in memory. The presented approach takes one step further towards the inclusion of narrative-aware structures in general cognitive architectures. In particular, the presented architecture studies how episodic memory and procedures in semantic memory can be redefined in terms of narrative structures. A formal definition of narrative for cognition and its constituents are presented, and the functions that an implementation of the architecture needs are described. The relative merits and the potential benefits with regard to general cognitive architectures are discussed and exemplified.
© 2016 The Author. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Introduction

Narrative phenomena are essentially ubiquitous. All societies, and in general all members of these societies, can produce and understand narratives naturally. Several authors defend the thesis that human cognition and narrative are tightly related. Schank and Abelson, for instance, claim that narrative is not just an activity or a sophisticated form of communication, but a concrete underlying topology of memory (Schank & Abelson, 1977). Herman (2002) proposes a Story Logic in which, instead of identifying narrative with stories, it is considered as a full logic, a way of reasoning on experience. Bruner goes one step further, arguing that experience and memory happen "in the form of narrative", extending the role of narrative well beyond literary aspects (Bruner, 1991). Szilas defends this idea, which he calls the narrative hypothesis (Szilas, 2015). He does so by remarking that, despite criticism (Ryan & Ryan, 2010), narrative happens together with narrative-related cognitive processes (emotions or chronological ordering). This narrative hypothesis is assumed all through this paper, and it is used as scaffolding to build the presented cognitive architecture of narrative knowledge.


This work does not dig into the physiological aspects causing narrative behavior, nor into the source of this phenomenon. Whether narrative happens in cognition because of learning or because of innate capabilities is, from the point of view of the functional description of average human cognition, not strictly relevant. This work focuses on the implications of the hypothetical narrative characteristics of some functions of human cognition and not on what produces them. This narrative hypothesis has strong implications for the way we understand cognition in humans, and as such a cognitive architecture accepting it must describe its components including narrative properties not only through additional modules, but as an in-depth review of the functional characteristics of more general aspects of memory and processing.

To the best of our knowledge, a general architecture for narrative cognition has not yet been proposed. This is probably due to a number of issues. The fact that narrative behavior is certainly complex and many times assumed to be a literary phenomenon keeps it away from the focus of general cognition research. Besides, achieving shallow narrative behavior is doable without explicit models of narrative, since procedural information about how to build a useful story is probably enough for most practical cases.

However, given the recent advances in cognitive architectures and the growing interest in narrative and its relation to general cognition, proposing an architecture for narrative cognition can be relevant and useful in general as a first step towards the study of the working hypothesis and its influence on practical architectures, both for Cognitive Science and Artificial Intelligence.

Most modern cognitive architectures are complex enough as to consider different types of memory. This paper only addresses two main parts of explicit memory, namely episodic and semantic. This is due to practical and theoretical aspects. First, analyzing the narrative properties of all proposed kinds of memory would require a more extensive work, outside reasonable scope. Second, it is unlikely that all cognitive models of memory present narrative aspects, at least as they are conceived in this paper. Therefore, it has been considered illustrative and useful to propose a model of narrative cognition restricted to two main, generally accepted types of memory where, hypothetically, narrative properties can be clearly observed.

The current proposal makes no claims about the applicability of the proposed solution to other domains, for instance, non-narrative messages. However, it is hypothesized that some features of this architecture could be adapted to fit other non-narrative sources of information, as discussed in Section ‘Conclusions and future work’.

This paper is organized as follows: Section ‘Definition of narratives for the cognitive architecture’ proposes a formal definition of narratives suitable as a unit of information in the narrative memory, and Section ‘Representation of narratives as cognitive objects’ specifies the constituents of narrative objects. Section ‘Episodic memory as a narrative memory’ provides an architectural description of episodic memory as narrative memory, and Section ‘Procedural–semantic memory and narrative’ does the same for processes stored in semantic memory.

An overall architecture of narrative memory is described in Section ‘Overall architecture of narrative memory’, and the corresponding examples are explained in Section ‘Examples’. Sections ‘Discussion’ and ‘Conclusions and future work’ provide discussion and conclusions, respectively.

Definition of narratives for the cognitive architecture

Over the years, several definitions of what a narrative is have been proposed. All of them have some aspects in common, but they in general depart from each other when the focus is set on literary aspects. This work, while not leaving literature and artistic behavior aside, is centered on a more general conceptualization of narrative, more in line with cognitive structures.

For the sake of this work, the definition of narrative proposed by Finlayson will be used (Finlayson & Corman, 2013). Briefly, this definition states that a narrative:

1. is a sequence of events,
2. that are causally related,
3. involving specific characters and times,
4. and displays a certain level of organization beyond basic coherence.

The particular difference with regard to a plain sequence of events is made by point 4, which actually highlights a fundamental property with regard to the proposed architecture. This account tries to describe the common activity of narrative behavior, and as such it identifies the particular characteristics that differentiate it from the simple description of an event list (which could also be a narrative).

The fact that narratives have a certain organization beyond pure coherence and order is what makes this definition useful for the architecture that this work is about to describe. This organization is commonly assumed to happen at several levels, both from the point of view of structure (discourse, content) and of cognitive effect (focus, salience, inference).

From here on, this study will be based on the assumption that:

• From a functional point of view, part of human cognition is structured with narrative properties.

• Narratives have certain properties that make them different from plain sequences of events.

Representation of narratives as cognitive objects

While this architecture for narrative memory offers a framework and not a final implementation, it is relevant to describe the main properties that a narrative must address in order to be suitable for the model. This section proposes a formal definition of narrative information, while still leaving room for particular instantiations in specific implementations.

As previously pointed out in Section ‘Definition of narratives for the cognitive architecture’, the number of definitions of what constitutes a narrative is vast, and some aspects bring conflict between authors. For the sake of designing a useful architecture, it is mandatory to establish the range of application of the model. In this case, the architecture contemplates two subsystems of a general cognitive architecture: the episodic memory and the procedural information in the semantic memory, the explicit "know-how", in contrast with the implicit information in the procedural memory. Given that this work focuses on a subset of the semantic memory, from here on the text will refer to this subset as procedural–semantic memory.

For simplicity, simple models of both kinds of memory will be assumed. Episodic memory stores events experienced by the cognitive entity, and procedural–semantic memory stores explicit information about how to perform different activities. The focus is set on functional aspects and their suitability for a computational implementation. Details about the particularities of how the model manages episodic and procedural–semantic memory will be given in Sections ‘Episodic memory as a narrative memory’ and ‘Procedural–semantic memory and narrative’ respectively. In general, this paper will from here on refer to narrative memory, defined as follows:

Narrative memory is the subset of episodic and semantic memory in a cognitive system that stores information presenting narrative features.

In the architecture explained in this paper, the narrative memory is composed of the episodic memory and the procedural–semantic memory, as these will be used to exemplify the relative improvements that the narrative memory provides. In order to make the model suitable for architectures in which episodic memory and procedural–semantic memory also include non-narrative information, narrative is assumed to be a subset of these parts of general memory. However, it would be possible to simplify the model and include the whole episodic and procedural memory in it.

The narrative memory handles a certain type of object: the narrative object, which is defined next for the purposes of this paper:

A narrative object is any cognitive object stored in the narrative memory, therefore having narrative properties.

According to the previous definition, all information stored in or retrieved from the narrative subset of the episodic and procedural–semantic memory is assumed to be a narrative object. With these definitions in mind, we can take one more step and describe what a narrative object contains. A narrative object includes the information to be stored, but it is structured in terms of kernels, satellites and narrative relations.

Figure 1 Example of kernel and satellites. In this example, nodes with arrows pointing to the kernel are satellites. Top nodes represent causes and bottom nodes represent effects in a hypothetical causality network.

Narrative objects: Kernels and Satellites

The kernel is the main part of a narrative object: the most salient part, main objective or conflict. The current architecture has borrowed both the concept and the name from Chatman's work (Chatman, 1978). The semantics this work assigns to the term are notably different, since Chatman focuses on structure and this work describes a cognitive process.

The satellites, again using Chatman's nomenclature, are those constituents of a narrative object that are not central to the narrative itself, but are connected to the kernel and needed or useful for a variety of reasons, namely causation, contextualization, explanation of the effects or …

Let n_i be a narrative object describing the event i. Then,

n_i = (i, k_i, S_i, C_i)    (1)

where i is the information to be processed by the narrative memory (event, procedure), k_i is the kernel of the information, S_i is the set of satellites of the information, and C_i is the set containing a number of labeled directed relations between the kernel and the satellites.
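As an illustration, the tuple in Eq. (1) can be sketched as a small data structure. This is only one possible encoding of the open definition; all class and field names (NarrativeObject, Relation, and the coffee example) are illustrative assumptions, not part of the paper's formalism.

```python
from dataclasses import dataclass

# A minimal sketch of the tuple in Eq. (1): n_i = (i, k_i, S_i, C_i).
# Names and the example content are illustrative assumptions.

@dataclass(frozen=True)
class Relation:
    label: str      # e.g. "cause", "before", "same-agent"
    source: str     # satellite event
    target: str     # kernel event

@dataclass
class NarrativeObject:
    information: str        # i: the raw event or procedure
    kernel: str             # k_i: the most salient event
    satellites: frozenset   # S_i: supporting events
    relations: tuple        # C_i: labeled directed links to the kernel

no = NarrativeObject(
    information="John wakes up and drinks coffee",
    kernel="drink coffee",
    satellites=frozenset({"wake up"}),
    relations=(Relation("before", "wake up", "drink coffee"),),
)
```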

Graphically, the relation between satellites and kernel creates a directed graph that resembles the work by Trabasso et al. regarding narrative causality (Trabasso & Sperry, 1985). Figure 1 depicts an example.

Narrative memory

A narrative memory, N, is then a set of narrative objects and relations between them, as formally expressed in Eq. (2).

N = ({o_1, o_2, …, o_m}, {R_1(o_a, o_b), …, R_i(o_y, o_z)})    (2)

where o_1, …, o_m are narrative objects and R_1(o_a, o_b), …, R_i(o_y, o_z) are relations between these narrative objects.
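A small container can make Eq. (2) concrete: a narrative memory holds a set of objects and a set of labeled relations between them. Narrative objects are reduced to plain strings here for brevity; the class and method names are illustrative assumptions.

```python
# A sketch of Eq. (2): a narrative memory N as a pair of a set of
# narrative objects and a set of labeled relations between them.

class NarrativeMemory:
    def __init__(self):
        self.objects = set()        # {o_1, ..., o_m}
        self.relations = set()      # {R(o_a, o_b), ...} as (label, a, b)

    def add_object(self, obj):
        self.objects.add(obj)

    def relate(self, label, a, b):
        # Relations may only link objects already stored in the memory.
        if a in self.objects and b in self.objects:
            self.relations.add((label, a, b))

    def related_to(self, obj):
        # All objects reachable from obj through one outgoing relation.
        return {b for (_, a, b) in self.relations if a == obj}

mem = NarrativeMemory()
mem.add_object("boy meets girl")
mem.add_object("boy loses girl")
mem.relate("before", "boy meets girl", "boy loses girl")
```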

In Section ‘Narrative objects: Kernels and Satellites’ the concepts of narrative kernel and satellite have been defined. In the case of external material to be learnt, however, information is input to our cognitive system which, ideally, stores everything. It is at this point where the narrative memory starts playing a role in cognition.

A narrative-aware cognitive architecture takes raw information as input. This input must be converted to symbols and aggregated into events in order to be available to the rest of the cognitive system. Events, however, do not form full narratives. A sequence of plain events must be restructured into a narrative (i.e., narrative properties must be added to the events) to produce narrative objects and full narratives from them.

The architecture for narrative memory defines a function that accepts a linear sequence of events or processes and returns a set of narrative objects. Eq. (3) describes this function formally.

n([ev_1, ev_2, …, ev_n]) = ({o_1, o_2, …, o_m}, {R_1(o_a, o_b), …, R_i(o_y, o_z)})    (3)

where n is a narrative, ev_1, …, ev_n are events coming from an external environment and o_1, …, o_m are narrative objects that are produced by the function creating the narrative and correspond to the narrative-processed version of the input. The second set in the output contains relations between different narrative objects; for instance, R_i(o_y, o_z) contains the existing relation between the narrative objects o_y and o_z. The number of events and the number of narrative objects, n and m in Eq. (3), need not be equal: n > m, n = m and n < m are all valid. This is so because an episode can be divided into several narrative objects, some events in the episode can be discarded, and some events can appear in more than one narrative object. The function, then, creates content for a narrative memory from external input.
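A toy instance of the function in Eq. (3) can make the n versus m cases concrete. The segmentation rule used here (closing a narrative object whenever a salient event appears) is an assumption made purely for illustration, not the architecture's prescribed mechanism:

```python
# A toy instance of the function in Eq. (3): a linear sequence of events
# is mapped to narrative objects plus relations between them.

def build_narrative(events, salience):
    """events: list of strings; salience: dict event -> float in [0, 1]."""
    objects = []
    current = []
    for ev in events:
        current.append(ev)
        if salience.get(ev, 0.0) >= 0.7:   # a salient event closes an object
            kernel = ev
            sats = [e for e in current if e != kernel]
            objects.append((kernel, tuple(sats)))
            current = []
    # Relative-time ("before") relations between consecutive objects.
    relations = [("before", objects[i][0], objects[i + 1][0])
                 for i in range(len(objects) - 1)]
    return objects, relations

events = ["wake up", "drink coffee", "walk", "arrive at work"]
sal = {"drink coffee": 0.9, "arrive at work": 0.8}
objs, rels = build_narrative(events, sal)
```

Here four events yield two narrative objects (n > m), exemplifying why the two counts need not coincide.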

The function must be able to:

1. Build an appropriate set of kernels, filtering out irrelevant content and identifying important material to build a coherent network of satellites for each kernel.

2. Identify relations between the different kernels and build a full narrative using the particular narrative objects as constituents.

Sections ‘Building the narrative network between Kernel and Satellites’ and ‘Creating relations between kernels’ detail the architectural approach taken to address these objectives.

Building the narrative network between Kernel and Satellites

One of the relative contributions of the proposed approach is the inclusion of narrative knowledge into the architecture as a cognitive process. Narratology, especially Cognitive Narratology, can offer useful insight on how narrative phenomena are managed by human cognition. Given that this work follows the hypothesis that there exist underlying narrative properties in several cognitive processes, a model of identification of relevant information based on narrative properties is proposed.

Two cognitive processes of narrative are addressed: narrative focus and narrative inference. Narrative focus is the salience of some element or elements of the narrative in some entity's mind. Studies show that the salience of elements impacts interpretation in such a way that salient elements are more easily comprehended by the human receiving the narrative.

Narrative inference is a process by which humans tend to create both knowledge and relations between facts in order to give coherence to some discourse, which is usually conveyed in an incomplete form (for instance, omitting details). For instance, there is evidence suggesting that readers perform narrative inference while they read. What is really interesting is that this process is carried out with almost no conscious effort by the reader (Graesser, Singer, & Trabasso, 1994). It seems that there are two different kinds of narrative inferences: elaborative and necessitated. Elaborative inferences are those that, while augmenting the contextual information of the world described in the narrative, are not strictly needed for pure comprehension. On the other hand, necessitated inferences are created to satisfy comprehension requirements. This architecture proposes a generic definition of the process of identifying kernels and their associated satellites based on narrative focus and inference.

A narrative kernel is a salient element of some narrative. Therefore, any narrative event with a particularly high level of salience is a candidate for being encoded as a kernel. Thus, the architecture assumes that the implementation will provide a function l assigning a given narrative event a probability of being represented as a kernel that is directly proportional to the cognitive salience that the cognitive entity experiences, as formalized in Eq. (4).

l(event) ∝ cognitive salience of event    (4)

After identifying the most salient events of the input material in the form of narrative, the kernels for the narrative objects are selected. In order to complete them it is necessary to find the satellites. The model proposes narrative inference to identify them and link them to their corresponding kernel or kernels.

In order to do so, the model defines a function m that, given a narrative event and a kernel, returns both the probability that the event is linked to the kernel as a satellite, and the kind of link that makes it part of the narrative object. The probability is directly proportional to the "strength" of the inference. The relation defines the kind of the link and conveys the semantics. Eq. (5) shows a formal definition of the m function.

m(event, kernel) = (P_link, R(event, kernel))    (5)

where P_link ∝ strength of the relation.

The specific definitions of l and m are left to the implementation of the architecture. The feasibility of the approach is evidenced by the existence of computational approaches implementing salience and inference (Niehaus & Young, 2014).
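Since l and m are left open, a minimal sketch of what an implementation might provide can still be useful. The scores, labels and signatures below are assumptions for illustration only, not the architecture's prescribed definitions:

```python
# Illustrative stand-ins for the two open functions: l maps an event to
# a probability of being a kernel, proportional to salience (Eq. (4));
# m maps an (event, kernel) pair to a link probability plus a relation
# label (Eq. (5)). Concrete scores and labels are assumptions.

def l(event, salience):
    # Probability of being encoded as a kernel, proportional to salience.
    return min(1.0, salience.get(event, 0.0))

def m(event, kernel, inference_strength, label="cause"):
    # Probability that `event` links to `kernel` as a satellite, together
    # with the kind of link that makes it part of the narrative object.
    p_link = min(1.0, inference_strength)
    return p_link, (label, event, kernel)

salience = {"drink coffee": 0.9, "wake up": 0.2}
p_kernel = l("drink coffee", salience)
p_link, relation = m("wake up", "drink coffee", 0.6, label="before")
```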

Creating relations between kernels

The relations potentially present in narrative objects, those linking satellites and kernels, must capture properties gluing the constituents together in such a way that overall narrative properties are explicitly stored and used by the architecture. These relations are complex, and there is a high degree of coupling between them. The architecture defines the relations at a certain granularity: specific enough to capture common narrative semantics and general enough to leave room for implementation-specific details. The level of granularity has been chosen with computability in mind. That is, the proposed relations try to serve the purpose of describing narrative structures that can be processed by computational means.

The final implementation of the functions used to compute the relations and create the actual network of relations is not covered in this approach. Nonetheless, there exist computational approaches proposing partial solutions for every proposed relation. Restricting the architecture to a fixed set of functions would narrow the general approach of this architecture. References to existing approaches are, nevertheless, provided in each case.

Time

According to the definition of narrative used for this work (Section ‘Definition of narratives for the cognitive architecture’), narrative is a sequence linked by, among other things, time relations. It is hard to think of a narrative not involving some sense of time. As sequences of events usually happen over time, even the most simple narrative requires time-aware structures. For this reason, time relations are included in the architecture.

An in-depth study of temporal logics and formal time representation is outside scope, but many approaches exist (Allen, 1991). It is however important to remark that time can be absolute or relative. Absolute time relations happen when an exact time is given (i.e. "at 7 pm") and relative time occurs when events are related by their order (i.e. "later on", "before").

While relative time can be well represented with classical relations linking two events, absolute time is slightly different. An absolute time is a property of an event, and therefore it is modeled in the architecture as a special relation, as shown in Figure 2. In this case, the absolute time would be part of the event, and not a satellite. Graphically, this is depicted by a dotted edge. While technically part of the event, it has been considered useful to include absolute information in the graph.
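The absolute/relative distinction can be sketched as two different encodings, assuming a simple event-graph representation: relative time becomes a labeled edge between two events, while absolute time is stored as a property of the event itself (the "dotted edge" of Figure 2). The names and structures are illustrative assumptions:

```python
# Relative time: a labeled edge between a satellite and a kernel.
# Absolute time: a property attached to the event, not a satellite.

relative_edges = set()   # (label, satellite, kernel)
absolute_time = {}       # event -> wall-clock description (a property)

def relate_before(earlier, later):
    relative_edges.add(("before", earlier, later))

def set_absolute_time(event, clock):
    absolute_time[event] = clock

relate_before("wake up", "drink coffee")
set_absolute_time("drink coffee", "7 pm")
```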

Causality

Causality is crucial in narrative. Causal relations are the basis of narrative understanding. Some authors claim that narrative understanding happens, among other things, when a coherent causal network has been built (Trabasso & Sperry, 1985). Many computational approaches base their process on causality as the main narrative relation (Riedl, 2004).

Figure 2 Time relations can be absolute or relative. Relations between events and absolute time are represented with a dotted edge, while relations between satellites and kernels are represented with a solid edge.

Figure 3 Causal relations can represent different aspects of causality.

Causality is a very complex concept and it usually encapsulates many specific aspects of preconditions and effects, as exemplified in Figure 3. Most approaches, especially computational ones, assume a straightforward definition, basically accepting a broad sense of causal relations. These are then aggregated into a single kind of causality link.

Location

Locations are also fundamental in narratives because they represent the setting and context where related action happens. Due to obvious physical aspects, events occurring in the same place form the same scene. Entities performing together influence each other if they are in the same location. Therefore, a full representation of narratives can never leave location out.

In the same way as with time, locations can be absolute or relative. Absolute locations refer to specific places ("the cinema", "the table"), while relative ones actually link events together as satellites and kernels (see Figure 4).

Most of the time, location relations are implicit in narrative discourses. Scenes are described, and actions happening in them are just assumed to be related by a relation expressing proximity. However, the underlying representation of a narrative (the fabula, as opposed to the sjuzhet) must include location information explicitly.

Agency and objects

Narratives are usually complex sequences of actions produced by agents and received by objects (objects can be agents too). Most stories are structured around a character or a set of characters, many times humans or entities playing the role of humans. The events happening to or being produced by characters are the backbone of most narratives.

Objects also play a predominant role when structuring narratives. A magical ring, or the ball in a football game being told, are examples of entities not able to produce events but actually causing and receiving most actions in a narrative.

Agents and objects, then, are the constituents triggering narrative relations. Again, they are usually properties of events (as shown in Figure 5), and narrative relations such as "same agent" or "carried out by enemy" are the actual links used to connect kernels and satellites.

Figure 4 Location relations in narratives. The kernel (bold) has one satellite connected by a solid line. This satellite happens at the same place. The dotted edge represents event information.

Figure 5 Agency relations in narratives. The kernel (steal) has a satellite linked by a character relation (desire). Agents and objects themselves, while part of the events, are what makes the relation hold.

Figure 7 A narrative considered from different levels of detail.

Composition

Narrative objects constructed by the previously described means are not meant to be isolated in the narrative memory. Instead, narrative objects actually take part in longer narrative chains, creating full narratives. Therefore, the relation between narrative objects must be included in the architecture.

Composing narrative objects into narratives is formalized by letting kernels be satellites of other kernels. Connecting kernels to other kernels is then carried out in the same way as connecting plain satellite events, except for the fact that these satellites can act as kernels too. Figure 6 shows an example of a temporal relation in which there is a relation (before) between a kernel (drink coffee) and a satellite (wake up). In this case, wake up acts as a satellite for the kernel, but as a kernel for "alarm goes off".
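Under the same illustrative assumptions as before, composition can be sketched as a link table in which an event may appear on both sides, reproducing the Figure 6 example. The graph encoding and the helper name are assumptions for illustration:

```python
# Composition: a kernel may itself act as a satellite of another kernel,
# so narrative objects chain into full narratives (Figure 6 example).

links = {
    # (satellite, kernel): relation label
    ("wake up", "drink coffee"): "before",
    ("alarm goes off", "wake up"): "before",
}

def roles(event):
    """Return the roles an event plays in the composed narrative."""
    out = set()
    if any(k == event for (_, k) in links):
        out.add("kernel")
    if any(s == event for (s, _) in links):
        out.add("satellite")
    return out
```

With this encoding, "wake up" plays both roles at once, which is exactly what lets narrative objects compose.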

Abstraction

Narratives, and the presented architectural model for narrative memory, are composed of interconnected elements. The kernels created through the processes described in Section ‘Building the narrative network between Kernel and Satellites’ form a coherent set of narrative objects. In order to build a network of cognitive relations between them, the architecture requires explicit modeling of the kinds of relations.

Narratives can be managed from different perspectives. In particular, it is natural for humans to abstract a narrative and handle it with different levels of detail. Figure 7 shows an example of a narrative told from different levels of abstraction.

Abstractions are defined in the same terms as non-abstracted narratives, that is, as narratives themselves. In the example shown in Figure 7, there would be one story, "boy meets girl"; then a second one, "John meets Mary, John and Mary argue, John and Mary get married"; and the third story would contain the other facts. In this case, only the kernels are shown, although there would be satellite information attached to them. The stories would then be linked by a new family of relations, abstraction, linking the specific kernels as satellites of the abstracted ones. Figure 8 shows this graphically.

Figure 6 Composition of narrative objects.

The functions for constructing narratives (l and m) and the relations (time, location, causality, agency, abstraction and composition) are the building blocks for creating narrative objects. These constituents play a fundamental role in the architecture, since the main modules (to be explained in Sections ‘Episodic memory as a narrative memory’ and ‘Procedural–semantic memory and narrative’) will rely on their existence to perform.

Episodic memory as a narrative memory

Episodic memory is the memory of autobiographical events (Tulving, 1972, 1983). The relation between episodic memory and narrative structures is hypothetically very important since, according to the hypothesis by Schank and others (detailed in Section ‘Introduction’), narrative schemas play a fundamental role in human cognition.

It then follows that, when storing episodic content, the memory units are linked by narrative relations such as, for instance, the ones proposed in Section ‘Creating relations between kernels’ (time, causality, location, composition, abstraction and agency). When producing a narrative for thinking, reasoning or telling, these relations play an important role too. In this way, this work proposes the definition of episodic memory as a narrative memory. Episodic memory can then be described in terms of the formalization previously explained in Section ‘Narrative memory’, in particular Eq. (2), where a narrative memory was defined as a set of narrative objects and relations between them. Understanding and producing narrative-episodic objects are then the main processes tackled by this model of episodic memory. Sections ‘Understanding episodic narratives’ and ‘Producing episodic narratives’ describe how the current architecture carries out these functions.

Understanding episodic narratives

Modern perspectives of narratology consider narrative as a cognitive phenomenon (Herman; Herman, 2013, chap. Cognitive). This, in contrast to more classical structuralist approaches (Propp, 1928; Barthes, 1977; Genette, 1979), shifts the perspective and offers a more psychological view, which gives useful insight into what the underlying processes are. This change of focus from the story itself to the cognitive system suggests that both the creator and the audience of an episode play an active role (either consciously or unconsciously) to build a representation of the narrative and to connect that narrative to other information in the cognitive system.

Figure 8 Narratives linked as abstractions.

There exist formal and computational models of narrative understanding (Mueller, 2002). In general, most of them acknowledge the active role of the cognitive system in understanding. Understanding is also generally assumed by these models to consist in building a coherent network connecting the received information as a meaningful graph-like structure. This includes creating a structure linking all relevant data of the perceived narrative and connecting parts of this new narrative with previous knowledge.

Referring back to the descriptions of narrative focus and narrative inference (Section ‘Building the narrative network between Kernel and Satellites’), it can be seen that the cognitive narrative model left the final definition of λ and μ open (Eqs. (4) and (5) respectively).

Narrative salience is proportional to recency and relatedness (Zwaan, Langston, & Graesser, 1995; Kintsch, 1991; Langston, Trabasso, & Magliano, 1999). Recency refers to the evidence that an event that has been recently mentioned has a higher salience than another that was mentioned further in the past. Additionally, events semantically related to the current event also have stronger salience. This information can be used to implement the function for identifying the kernel, λ.

For the satellites, the implementation has to define the μ function. This must create semantic relations between the kernel and the potential satellites for each narrative object composing the full narrative. A computational system implementing the narrative architecture needs some source of knowledge to discover these relations. However, the probability of a link between two events seems to be directly proportional to the number of shared dimensions of the two events (Zwaan et al., 1995).
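A minimal sketch of these two heuristics follows. Since the model deliberately leaves λ and μ open, the equal weighting of recency and relatedness and the overlap-over-union measure for shared dimensions are assumptions:

```python
def salience(event_index: int, current_index: int, relatedness: float) -> float:
    """λ heuristic: salience grows with recency and with semantic
    relatedness to the current event (both contributions in [0, 1])."""
    recency = 1.0 / (1 + (current_index - event_index))
    return 0.5 * recency + 0.5 * relatedness

def link_probability(dims_a: set, dims_b: set) -> float:
    """μ heuristic: the probability of a link between two events is
    proportional to the number of situation dimensions (time, space,
    protagonist, ...) they share (Zwaan et al., 1995)."""
    union = dims_a | dims_b
    return len(dims_a & dims_b) / len(union) if union else 0.0

# The event maximizing λ becomes the kernel candidate.
events = [("wakes up", 0.1), ("drives to work", 0.6), ("hits a sign", 0.9)]
scores = [salience(i, len(events) - 1, rel) for i, (_, rel) in enumerate(events)]
kernel = events[scores.index(max(scores))][0]  # "hits a sign"
```

The relatedness values above are supplied by hand; a real implementation would derive them from the semantic memory.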

An implementation of these ideas is proposed by Niehaus and Young (2014). In that implementation, a computational model of narrative discourse understanding was tested. Results evidence the plausibility of the model.

Producing episodic narratives

The task of narrative production has been addressed from many perspectives (Bringsjord & Ferrucci, 1999; Perez y Perez, 1999; Riedl & Young, 2010; Leon & Gervas, 2014). Since the 1970s, computational story generation systems have been created, all of them addressing different features from a more or less cognitive perspective. Many times, these systems have been created with an engineering objective in mind and have not addressed general aspects of narratives. While from the point of view of Artificial Intelligence story generation systems represent a relevant advance, they shed little light on the more general problem of formalizing narrative cognition.

Gervas et al. propose a model of the story generation task, ICTIVS (Gervas & Leon, 2014; Gervas & Leon, 2015). In this model, the entity producing the story models the audience and iteratively cycles through a set of steps, refining the draft and updating the internal constraints. These steps involve invention, composition, transmission, interpretation and validation of stories, putting strong focus on the task as a creative process. The ICTIVS model partially refines other models, like the account of writing proposed by Sharples (1999). Sharples proposes a model of writing based on a dual cycle he calls engagement and reflection. During the engagement step, the author creates content and produces new material related to the story. In the reflection stage, the material just added is reviewed, and both the material and the constraints that led to it are evaluated and changed accordingly.

Figure 9 Episodic memory in the narrative architecture. CD stands for Content Determination, DP stands for Discourse Planning and RM stands for Reader Model.

The translation from a cognitive psychological model to a computational cognitive architecture is not straightforward, because the high-level concepts are so far difficult to map into a computer algorithm. Story production, however, can be modeled computationally without the requirement of mimicking human behavior. The architecture, while focusing on a model inspired by human cognition, encapsulates episodic production inside a module that can be implemented computationally.

This module is designed following a commonly used division of tasks for automatic language generation, including Content Determination (deciding what to include) and Discourse Planning (deciding the order and the explicit relations for creating a discourse) (Reiter & Dale, 2000). The actual realization into language is left to the Realization module.

The generation process includes a reader model. This reader model, R, captures the expected reaction of an external cognitive system receiving the output of the narrative production. R could be used as a specific objective (i.e. by targeting a specific system) or a general one (if the audience is unknown or formed by several systems). Moreover, the reader model could even be incomplete, and then the cognitive system producing the narrative would have to complete the model with its own understanding mechanisms, if any. An in-depth study of the reader model would be too extensive for the purposes of this paper, but a general idea is used. The literature offers a number of systems which give more insight about how a reader model could be implemented (Mueller, 2002).

When a reader model R is available, the generation process can follow any computational algorithm executing against R. For instance, a generate-and-test pattern could try to produce narrative messages until R is satisfied. A smarter search based on narrative properties can produce good results in tractable time, as evidenced in Niehaus and Young (2014).
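The generate-and-test pattern can be sketched as follows. The reader model below is a trivial stand-in written for the example; a real R would simulate the audience's understanding process:

```python
import random

def reader_satisfied(draft: list) -> bool:
    """Stand-in reader model R: accepts any draft that leads with the
    kernel. A real R would model the audience's inferences and interest."""
    return bool(draft) and draft[0] == "A hits a traffic sign"

def generate_and_test(events: list, max_tries: int = 1000, seed: int = 0):
    """Propose candidate discourse orderings until R is satisfied
    (or give up after max_tries attempts)."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        draft = rng.sample(events, len(events))  # a candidate ordering
        if reader_satisfied(draft):
            return draft
    return None

story = generate_and_test(["A wakes up", "A drives", "A hits a traffic sign"])
```

Blind sampling is of course the weakest possible strategy; the point is only the control loop, which any smarter narrative-driven search would share.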

From a cognitive point of view, it is more than likely that humans do not perform planning on each story to be narrated, at least not as we understand planning in computers. Information is functionally linked, and pure planning is probably not a full model of how information retrieval works. The narrative model can also learn connections between narrative episodes (formally, connected sets of narrative objects), and use them as pre-made schemas in generation. More formally, a schema would be an ordered sequence of narrative objects in which the kernels are connected by a follows relation, which would be a temporal relation.
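Such a schema can be sketched as pairwise `follows` links over an ordered sequence of kernels; the triple format below is an illustrative choice, not prescribed by the model:

```python
def learn_schema(kernels: list) -> list:
    """Store a successful generation as a schema: an ordered sequence of
    kernels linked pairwise by a temporal `follows` relation."""
    return [(earlier, "follows", later)
            for earlier, later in zip(kernels, kernels[1:])]

schema = learn_schema(["boy meets girl", "boy and girl argue", "they marry"])
# Three kernels yield two `follows` links.
```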

These schemas can be learnt after a successful planning process, and be stored as a particular abstraction of a narrative. The abstraction would be included in the system as explained in Section ‘Creating relations between kernels’.

Since, as previously explained, this abstraction would be a narrative object, it requires a kernel. The kernel would be the narrative objective, the main concept or idea that the cognitive system would try to convey in the form of a narrative. This idea, an objective as a kernel, links with Section ‘Procedural–semantic memory and narrative’, which explains how the procedural–semantic memory can be modeled as a narrative memory too.

Figure 9 shows a diagram of the architecture of the episodic memory, including the generation module.

Procedural–semantic memory and narrative

Procedural memory is the cognitive subsystem storing how to perform particular types of action (Squire, 2004). In general, this memory has been assumed to be implicit, and thus the kind of procedures it stores hardly fall in the category of narrative knowledge, since narratives are easily managed by conscious structures and language. Semantic or declarative memory, in contrast, stores explicit information. We define procedural–semantic memory as the part of the declarative memory including ‘‘how things are done”. For example, the process of buying a ticket on an on-line platform follows certain steps that form a short narrative: accessing the website, logging in, choosing the seats, paying, receiving the bill, etc. The proposed relation between the procedural–semantic memory and the procedural memory is discussed in Section ‘Procedural memory and procedural–semantic memory’.

In contrast with the account of the episodic memory as a narrative memory, the procedural–semantic memory implies a stronger abstraction when creating narratives. The kind of material stored in it does not refer to a specific ticket but to any ticket, following the previous example. Additionally, the kinds of relations used for this narrative memory do not exclude any of those used in the episodic memory, but necessitate additional explicit relations for expressing the procedural nature of the stored information.

Storing and carrying out processes as narrativescripts

Section ‘Representation of narratives as cognitive objects’ described how narrative objects are built from kernels and satellites in the proposed architecture. The way in which kernels are identified when understanding and constructing a process to be stored in the procedural–semantic memory is modeled against the same principles previously explained, but salience and inference are influenced by different aspects.

From here on we will borrow the term script from the literature (Schank & Abelson, 1977; Bower, Black, & Turner, 1979). In short, a script is a specific narrative describing a certain process. In the proposed architecture, a script is a narrative in the procedural–semantic memory and, in order to learn a script, a cognitive system must experience it. The case of a system receiving the script through language instead of through experience will be omitted, since it is considered a special case involving language, not in the scope of the current study. For procedural–semantic memory, then, the script becomes the narrative, a set of narrative objects.

Analogously to what was described for the narrative properties of the episodic memory in the presented architecture, we define the functions λ and μ for the procedural–semantic memory. The λ function captures the salience of a certain event, thus making it more likely to form a kernel. In the case of the procedural–semantic memory, an objective is needed: when carrying out a number of actions, there is an objective to be fulfilled by the cognitive entity. The objective is assumed to be computed in an external module, not necessarily part of the narrative memory. The architecture, then, considers that there is an external module in the system computing a specific objective, and this objective is borrowed by the narrative memory, in this case by the procedural–semantic submodule. Eq. (6) formalizes λ.

λ(e) = 1 / distance(e → objective)    (6)

where e is any element coming from the conceptualization module, objective is the objective given by the external module that computes it, and distance() is an implementation-dependent function that computes the distance between the objective and the current event. The distance can be semantic or structural. A semantic distance could be the difference between buying and renting a house: both objectives have many aspects in common. Structural distance would represent the relation between buying a house and applying for a mortgage loan: these two things do not have much in common, but they are structurally related, since there is a preconditional link between them and they usually appear in the same narrative.
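Under Eq. (6), an implementation only has to supply distance(). One sketch blends the two kinds of distance just mentioned; the blend weight and the toy distance functions in the usage example are assumptions:

```python
def blended_distance(event, objective, semantic_d, structural_d, w=0.5):
    """Implementation-dependent distance for Eq. (6): a weighted blend of
    a semantic distance and a structural (preconditional) distance, each a
    caller-supplied function returning a value in [0, 1]."""
    return (w * semantic_d(event, objective)
            + (1 - w) * structural_d(event, objective))

def salience(event, objective, semantic_d, structural_d):
    """λ(e) = 1 / distance(e → objective); an event exactly matching the
    objective (distance 0) gets unbounded salience."""
    d = blended_distance(event, objective, semantic_d, structural_d)
    return float("inf") if d == 0 else 1.0 / d

# Toy distances: identical events are semantically identical, everything
# else is equidistant; structural distance is constant.
sem = lambda e, o: 0.0 if e == o else 0.5
struct = lambda e, o: 0.2
```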

The μ function computes the strength and the category of the relation between a satellite and a kernel. For procedural–semantic memory, the most important family is that of causal relations. It is fundamental to note that, as we commonly use it, causality has several levels of meaning and strong implications for coherence and understanding. In narrative, this has been deeply studied (Goldman, Graesser, & van den Broek, 1999). For example, if a glass falls and breaks, the cause could be the existence of gravity, the reason why the glass fell, or both. Analyzing the many semantic options is not the target of this research; it is just assumed that there is a global, general notion of causality and that human cognition has the functional ability of learning and inferring it.

In order to model this high-level sense of causality, we define a particular narrative relation, the precondition. A precondition of an event A is a narrative event B that must happen for A to be possible. The probability that the μ function finds a precondition between events e1 and e2 is computed from the distance between those two events. It is formally described in Eq. (7). The definition of the distance is the same as the one used for Eq. (6). In this case, computing the strength of the precondition takes into account the average of the distances between e1 and e2 for all previously learnt scripts. This tries to capture the fact that causality is not script-specific, but the result of many instances of the pair (e1, e2). Therefore, previous experiences affect the way in which cognitive systems learn causal rules.
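This averaging can be sketched by representing scripts as ordered event lists and using positional gaps as distances; the inverse-distance strength is an assumption, since the paper leaves the distance function open:

```python
def mu(e1: str, e2: str, learnt_scripts: list):
    """μ per Eq. (7): returns a (strength, relation) pair. The strength is
    based on the average distance between e1 and e2 over every previously
    learnt script containing both, so causality is learnt across
    experiences rather than from a single script."""
    gaps = [abs(script.index(e1) - script.index(e2))
            for script in learnt_scripts if e1 in script and e2 in script]
    if not gaps:
        return (0.0, None)               # never co-occurred: no precondition
    avg = sum(gaps) / len(gaps)
    strength = 1.0 / avg if avg else 1.0  # closer on average = stronger
    return (strength, "precondition")

coffee = ["take cup", "put coffee", "add milk", "add sugar", "stir", "drink"]
tea = ["take cup", "add tea", "add sugar", "stir", "drink"]
strength, rel = mu("stir", "drink", [coffee, tea])  # adjacent in both scripts
```

Because "stir" and "drink" are adjacent in both learnt scripts, the pair gets maximal strength; "add sugar" and "drink", two steps apart, get a weaker precondition.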

μ(e1, e2) = (distance(e1 → e2), precondition(e1, e2))    (7)

Other relations computed by μ would be created in the same way as shown for the episodic memory. This would include time relations, location, agency and so on.

Analogously to the proposed model for generation in the episodic memory, the process for performing an action using the procedural–semantic memory is based on a computational approach, focusing on the functional aspects rather than the structural ones.

The architecture models the execution of a script as a planning process. Instead of using a reader model, as the architecture does for episodic memory, the model relies on a planner. Given that performing an action has an objective and narrative objects can be used as operators, the architecture can apply a more or less complex planning algorithm. This algorithm can be driven not only by the scripts but also by external information like the current state of the cognitive entity.

Overall architecture of narrative memory

Figure 10 shows a diagram of the overall narrative architecture. As can be seen, it is strongly simplified with respect to other, more general cognitive architectures, due to the fact that this architecture for narrative memory only addresses certain subsystems and procedures (Anderson et al., 2004; Cai, Goertzel, & Geisweiller, 2011; Snaider, McCall, & Franklin, 2011). As such, only the relevant aspects for the narrative memory are covered in this architecture.

The Input and Output nodes represent narrative content. They are assumed to be physical, raw content. For this content to be usable in the cognitive architecture, it must be promoted to symbolic information. This task is carried out by the Preprocessing module. Once the narrative information has been produced in the narrative memory of the cognitive system, it is taken out as raw output through the Realization module.

The Preprocessing module feeds the episodic memory and the procedural–semantic memory. These are the data that will become the constituents of events, and therefore kernels and satellites as well.

The intentions of the cognitive entity are summarized in the Objectives module. This module controls narrative production and understanding by taking into account the entity’s specific objectives for each process (generation of a narrative, understanding of a procedural narrative, and so on).

The Realization module captures narrative objectives and the specific output (tasks to be performed, stories to be told) and realizes the narrations in several ways. This realization is performed as the output of the system.

Figure 10 Overall architecture of the narrative memory. PS stands for procedural–semantic memory and NG stands for narrative generation.

The narrative memory is embedded in the general cognitive system and connected with the preprocessed input and the objectives. The modules for the episodic memory and the procedural–semantic memory (whose functionality has been described in Sections ‘Episodic memory as a narrative memory’ and ‘Procedural–semantic memory and narrative’ respectively) are its main constituents. A reader model is also included and used by the episodic memory. The output of these two modules is connected to the Realization module, producing the actual output.

Examples

This section describes example interactions involving the narrative memory and the narrative processing module. Although theoretical, they are meant to highlight both the role and the need of each part of the architecture.

Episodic memory: telling and understanding eventsas a story

Let us assume that a cognitive entity A (a person) tells a story about her morning to another entity B. For this example, let us just assume that the sequence of events is the following one:

1. A wakes up.
2. A has breakfast.
3. A gets out.
4. A drives her car to the office.
5. A looks at a bird.
6. A hits a traffic sign.
7. A calls the police.
8. The police arrive.

Events [1–8] are not fed into the system as proper narrative events, but as low-level physical stimuli. These physical stimuli are received, discretized and made symbolic by the Preprocessing module. The symbolic representation is then passed to the episodic memory. Leaving sequentiality details out to make the explanation simpler, let us assume that the episodic memory has already received all the information and the narrative can be constructed with the whole material available. Otherwise, the process would be analogous, but several iterations would have to take place in a specific implementation.

Before telling the story, A must have created a narrative structure (connected kernels and satellites) storing the sequence of events in the episodic memory. We can assume that the implementation would find the main kernel of the events, probably corresponding to point 6: A hits a traffic sign. Choosing point 6 as a kernel would be the result of computing the value of the λ function (Section ‘Building the narrative network between Kernel and Satellites’). The particular details are left to a valid implementation. The relevant part is identifying the cognitive salience of each one of the events and then picking the most salient one. A function that computes salience and takes unexpectedness into account would most likely choose point 6 as the most salient element. Then, λ(6) = maximum(λ(1), λ(2), ..., λ(8)).

This would be the base for creating the network of satellites. An abstraction relation would be created in which the kernel would be point 6 and the full detailed story would be the satellite, as shown in Figure 11. The μ function in this case would return one single satellite (the whole narrative) and the relation abstraction. The implementation should assign it a high probability.

Temporal order could then be constructed. Since points 1–8 happened in order, a simple next relation could be added. However, narratives usually convey broader information. First, all temporal information must be constructed around the kernel: events 1–5 happen before the kernel and events 7 and 8 happen later on. Additionally, some information is usually relevant enough to make it necessary to remember the exact time. Assuming that A would not be looking at the clock at the time of the accident, probably the only absolute time available would be the time at which the police arrived. Figure 12 depicts the time relations graphically.

Causality relations would also be produced by the μ function. This time, not all events take part in the creation of the relations. Relevant satellites are looking at a bird, driving, calling the police and the arrival of the police. The fact that A woke up does not really produce a car accident. Although it is possible to find a causal link (if A was asleep, no car accident would have happened), we can assume an implementation of the μ function omitting this aspect. Of course, other implementations could have included it in the set of causal relations. Figure 13 shows the graphical representation of the causality network.

Figure 11 Example abstraction relations.

Figure 12 Example time relations.

Figure 14 Example location relations.

Figure 15 Example agency relations.


Location graphs would be constructed in terms of two places: A’s home and the street. That would create two scenes and divide events accordingly, as shown in Figure 14.

Agency focuses on A. The μ function would identify the main character A plus the police. The bird would also be an important character, but only as a passive agent. Figure 15 represents the corresponding graph.

The narrative object would have been created and would be available for retrieval. In order to tell the story, the entity would have to rely on its reader model. A computational model would generate a story omitting the parts that the reader model would easily infer. It would also omit those events that do not causally affect the episode.

Figure 13 Example causal relations.

Additionally, a proper narrative communication would emphasize the most relevant events. In this case, the kernel would receive the attention, probably being transmitted first. The narration would then become a story in a classical sense:

1. (6) A hits a traffic sign.
2. (4) A drives her car to the office.
3. (5) A looks at a bird.
4. (7) A calls the police.
5. (8) The police arrive.



Procedural–semantic memory: telling andunderstanding a process

In order to illustrate the way in which material is stored in the so-called procedural–semantic memory, the following script will be used:

1. take a cup from the cupboard,
2. put coffee in the cup,
3. add milk,
4. add sugar,
5. stir,
6. drink coffee.

It is the script for ‘‘drinking a coffee”. As explained in Section ‘Storing and carrying out processes as narrative scripts’, the kernel of the narrative object is set by the main objective. Obviously, the objective (coming from another module of the general cognitive architecture) is to drink a coffee. In order to compute the λ function, we would have to measure the relative distance of each one of the events of the script to the main objective. As such, we can hypothesize that any reasonable implementation would provide a λ function satisfying Eq. (8):

λ(6) = maximum(λ(1), λ(2), ..., λ(6))    (8)

where λ(1), ..., λ(6) correspond to the salience of points 1–6 in the previous list. This would have been computed using Eq. (6). Point 6 would then be the kernel.

Computing the value for μ would follow the same process previously explained, so in this case it makes sense to concentrate on the creation of the preconditions. Eq. (7) provides a definition in which the average of the distance between two events in different scripts sets the probability of the relation. That is, if adding sugar and stirring are usually close, it implies some sense of causality; in this case, it is a kind of precondition. By assuming that the cognitive entity has successfully repeated the process many times (i.e. it has drunk the coffee, which was the original objective), it makes sense to assume that there would be a high probability that the steps were caused in order.

Figure 16 Example causal network for a script describing how to drink a coffee.

Moreover, the definition would not only assign preconditional or causal relations between consecutive events, but also between close ones. This means that points 5 and 6 are strongly connected, but points 3 and 6 would be strongly connected as well. The satisfaction of the objective is also part of the narrative object, because it is actually part of the story.

The causal network could be represented as depicted in Figure 16.

After learning the script and having it stored in the procedural–semantic memory, the cognitive architecture can use the information to satisfy the objective. Since the objective is included in the narrative, retrieval is simple. Let us assume a computational implementation of the narrative architecture. If the entity has a desire, the system must only search for a narrative in which the desire has been satisfied, which can be done very efficiently (effectively mimicking biological behavior). Then, a simple backtracking process could give the cognitive system the set of events, now considered as instructions, to perform in order to satisfy the objective.
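The backtracking step can be sketched as a depth-first walk over the learnt precondition links: starting from the satisfied objective, every precondition is visited before the action that needs it. The link table below is a hypothetical rendering of the coffee script's causal network:

```python
def backtrack(objective: str, preconditions: dict) -> list:
    """From the objective's kernel, recursively satisfy preconditions
    first, yielding the events as an executable sequence of instructions."""
    plan, seen = [], set()

    def visit(action: str) -> None:
        if action in seen:
            return
        seen.add(action)
        for pre in preconditions.get(action, []):
            visit(pre)
        plan.append(action)  # all preconditions done; action can run

    visit(objective)
    return plan

# Hypothetical precondition links learnt from the coffee script.
links = {"drink coffee": ["stir"],
         "stir": ["add milk", "add sugar"],
         "add milk": ["put coffee in the cup"],
         "add sugar": ["put coffee in the cup"],
         "put coffee in the cup": ["take a cup from the cupboard"]}
steps = backtrack("drink coffee", links)
```

Shared preconditions (putting coffee in the cup is required by both milk and sugar) are performed only once, which is exactly what the `seen` set guarantees.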

Discussion

The present architecture proposes a model of narrative memory. As pointed out all through the paper, the objective is neither to provide a replacement for existing cognitive architectures, nor to include specific modules in them. The approach taken tries to provide a specific architectural improvement for knowledge representation and information paths, assuming the ‘‘narrative hypothesis” stating that many human cognitive structures present narrative properties.



There are a number of aspects that are worth some discussion. The architectural nature of this proposal leaves certain aspects open, and in order to draw general conclusions, implementations must prove the validity of the proposed changes and their relative benefits. The next sections elaborate several of these aspects in detail.

Procedural–semantic memory as narrativebehavior

The working hypothesis leading this design states that narratives are not only the product of a certain generation process, but are rooted in the basic structures of our cognition, especially in the way we store sequences of events (hence this proposal links it to the episodic memory). According to this, the inner structure of procedural–semantic memory, the way in which humans store not only episodic memory but procedural–semantic memory as well, has these narrative properties. That is, producing narratives is not only a more or less natural ability, but a specific topology for storing information. If the hypothesis is true, a cognitive model of the narrative memory in humans would also have to include the procedural–semantic memory. An in-depth study of other memory structures in humans could also inform about which ones show narrative characteristics.

General architectures not specifically addressing production and understanding of narrative objects can still represent narrative behavior, probably in the form of procedural–semantic knowledge in the cognitive entity. While in functional terms an implementation of that family of architectures could be fully equivalent, the presented architecture proposes an explicit module for narrative memory and processing.

The presented approach can potentially allow for a more natural representation of behavior in terms of narrative. Intentions, desires, objectives, constraints and outcomes have long been studied in Narratology as specific units and not only as more or less inter-dependent parts of human behavior. Abstract representations of narrations made explicit in architectures like the one presented in this work can make it easier to explain, for instance, the hypothesis that humans do not perform searches for each step of a long activity. Instead, the process can be formalized as the steps of a short narrative script.

Procedural–semantic memory has been defined in terms of abstract narratives used to learn and perform. While information vignettes have been proposed for a long time, the fact that these have narrative properties enriches the procedures and informs the algorithms for identifying these scripts.

Procedural memory and procedural–semanticmemory

Procedural memory is assumed to store implicit functions like tying a shoe. This is certainly implicit, but it is possible for humans to analyze the process and create a narrative over it. This narrative is what the proposed architecture situates in the procedural–semantic memory.

At this point, formalizing the actual process by which this is carried out by humans is too difficult. However, the definition of the narrative architecture makes it possible to hypothesize that the process implies a narrative. It makes sense to think of a cognitive entity that mentally reproduces the steps, as if it were imagining itself tying a shoe, and stores the mental image in the episodic memory. The human (assuming that only humans can perform narratively in the proposed sense) would then retrieve the narration from the episodic memory, abstract it as proposed in the previous sections, and store it again in the semantic memory as a script.

In this way, the narrative architecture could provide a scaffolding for representing how narratives are produced from the implicit procedural memory into the explicit procedural–semantic memory. The narrative architecture can provide this behavior if embedded in a general cognitive architecture capable of internally reproducing an episode. If this mental episode is fed back into the procedural–semantic memory, it is possible then to describe an explicit analysis of information stored in the procedural memory.

Multiple narrative objects for the same events

The architecture has not explicitly addressed the possibility of constructing more than one narrative for a single event sequence. This, however, is quite plausible from a cognitive point of view. As previously pointed out, experiments on narrative processing suggest that humans produce explanations for narratives when trying to understand them, which could be modeled as creating many plausible narrative objects for a concrete event list.

The current state of the architecture allows for an implementation producing more than one narrative per event sequence. However, it is not clear what the best approach for relating these narratives is. Candidates include connecting them together with a special kind of relation or creating a meta-structure containing every plausible narrative object, but the relative merits of each approach must be examined in detail before proposing an improvement to the narrative architecture.

Additionally, it makes sense to consider on-line schema creation based on a ‘‘canonical” narrative object. Storing one single narrative object per event sequence could be enough if a module were added to the architecture that adapts the canonical object for different purposes, modifying it accordingly. Future work will contemplate this.

More narrative relations

While the overall approach for defining narrative objects is considered to be general enough to provide valid descriptions for most narrative schemas, the proposed relations linking satellites and kernels could be insufficient for describing all commonly used narrative aspects. This could seem like a limitation of the current state of the narrative architecture, but the model leaves the set of relations open to further refinement and particular implementation.

An exhaustive analysis of all types of relations exceeds the scope of this paper. Additionally, there are many layers of narrative relations. Whereas at the presented level of detail the focus has been put on the most basic knowledge relations, narrative can be much richer. It is possible to think of very elaborate schemas involving emotions, narrative arcs, and other high-level aspects.

On the other hand, not every narrative construct can make it into the model of narrative memory. While the hypothesis that a part of our common knowledge is structured in narrative terms is supported in this proposal, we do not claim that every literary aspect is involved in memory. This point is crucial for our discussion, since it sets a threshold between what we call narrative knowledge and literary narrative. We hypothesize that many aspects of narrative are actually learnt procedures and properties, and as such do not take a fundamental role in information storage.

Conclusions and future work

This paper has described a cognitive architecture for narrative processing (understanding and generation), and a formal representation of narratives and its relation to the broader function of episodic memory. Episodic and semantic memories have been addressed as the main parts involving narrative properties inside a general cognitive system. Examples have shown the architecture's plausibility for real use, namely its connection to a bigger, more general cognitive architecture.

More specifically, the architecture proposes specific functions to be implemented in particular instantiations. These functions compute values from the events reaching the narrative memory and help to organize those events in terms of their narrative properties, producing narrative objects. Full narratives are then assembled from these narrative objects as constituents. Finally, narratives are the main elements of the narrative-aware episodic memory and procedural–semantic memory.
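The pipeline just summarized (events → computed values → narrative objects → narrative) can be sketched in code. This is a minimal illustration under assumed representations, not the paper's formal definition: the `Event` structure, the salience function, and the kernel/satellite threshold are hypothetical stand-ins for the architecture's functions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    """Hypothetical event representation: a label plus a computed value."""
    label: str
    salience: float = 0.0

@dataclass
class NarrativeObject:
    """Wraps an event together with its narrative role."""
    event: Event
    kind: str  # "kernel" (structurally essential) or "satellite"

def build_narrative(events: List[Event],
                    salience_fn: Callable[[Event], float],
                    threshold: float = 0.5) -> List[NarrativeObject]:
    """Compute a value for each incoming event and organize the events
    into narrative objects, the constituents of a full narrative."""
    narrative = []
    for ev in events:
        ev.salience = salience_fn(ev)
        kind = "kernel" if ev.salience >= threshold else "satellite"
        narrative.append(NarrativeObject(ev, kind))
    return narrative

# Usage: a toy salience function that marks surprising events as kernels.
events = [Event("wake up"), Event("house on fire"), Event("brush teeth")]
story = build_narrative(events, lambda e: 1.0 if "fire" in e.label else 0.1)
```

The resulting list of narrative objects would be what the narrative-aware episodic memory stores and retrieves.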

According to this, the present work proposes a specific way of structuring a subset of common-sense knowledge. The main contribution of the approach consists in applying semantic and structural information identified and formalized by Narratology and Computational Narrative to knowledge representation in a cognitive system. In particular, the paper explores the use of narrative structures in different types of memory.

This opens up many new paths. For instance, there is an important set of information stored in structures in the semantic memory that is very relevant for narrative: information about expected impact, linguistics, attention, impact realization and narrative construction. Moreover, knowledge about narration (i.e. explicit information about what a narrative is) can actually be used as meta-information when processing stories.

Emotions also play a fundamental role in narrative. While this paper has only addressed storage and retrieval of narrative structures, it is hypothesized that emotional states play a role in structure creation. Sad or happy contexts can make the cognitive agent build a correspondingly sad or happy narrative arc, thus modifying the output of the l and m functions. These and other aspects will be addressed as part of the future work for improving the presented model for a narrative-aware cognitive architecture.

Acknowledgements

This paper has been partially supported by the project WHIM 611560, funded by the European Commission under Framework Programme 7, the ICT theme, and the Future and Emerging Technologies (FET) programme.

References

Allen, J. F. (1991). Time and time again: The many ways to represent time. International Journal of Intelligent Systems, 6, 341–355. <citeseer.ist.psu.edu/allen91time.html>.

Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111(4), 1036–1060. http://dx.doi.org/10.1037/0033-295X.111.4.1036.

Barthes, R. (1977). The death of the author. In Image, music, text. New York, NY, USA: Hill and Wang.

Bower, G. H., Black, J. B., & Turner, T. J. (1979). Scripts in memory for text. Cognitive Psychology, 11(2), 177–220. http://dx.doi.org/10.1016/0010-0285(79)90009-4.

Bringsjord, S., & Ferrucci, D. (1999). Artificial intelligence and literary creativity: Inside the mind of brutus, a storytelling machine. Hillsdale, NJ: Lawrence Erlbaum Associates.

Bruner, J. (1991). The narrative construction of reality. Critical Inquiry, 18(1), 1. http://dx.doi.org/10.1086/448619.

Cai, Z., Goertzel, B., & Geisweiller, N. (2011). OpenPsi: Realizing Dorner's Psi cognitive model in the OpenCog integrative AGI architecture. In Artificial general intelligence – 4th international conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Proceedings (pp. 212–221). http://dx.doi.org/10.1007/978-3-642-22887-2_22.

Chatman, S. (1978). Story and discourse. Cornell University Press.

Finlayson, M. A., & Corman, S. R. (2013). The military interest in narrative. Sprache und Datenverarbeitung, Special Issue on Formal and Computational Models of Narrative, 37(1–2), 173–191.

Genette, G. (1979). Narrative discourse: An essay in method. Cornell University Press.

Gervas, P., & Leon, C. (2015). When reflective feedback triggers goal revision: A computational model for literary creativity (pp. 32–39). <http://ceur-ws.org/Vol-1407/AInF2015paper5.pdf>.

Gervas, P., & Leon, C. (2014). Reading and writing as a creative cycle: The need for a computational model. In 5th international conference on computational creativity, ICCC 2014, Ljubljana, Slovenia.

Goldman, S. R., Graesser, A. C., & van den Broek, P. (1999). Narrative comprehension, causality, and coherence: Essays in honor of Tom Trabasso. Taylor & Francis. <https://books.google.es/books?id=iqj_6_ZD3YYC>.

Graesser, A. C., Singer, M., & Trabasso, T. (1994). Constructing inferences during narrative text comprehension. Psychological Review, 101, 371–395.

Herman, D. (2000). Narratology as a cognitive science. Image and Narrative.

Herman, D. (2002). Story logic: Problems and possibilities of narrative. <https://books.google.es/books/about/Story_Logic.html?id=6jAy0Yml51gC&pgis=1>.

Herman, D. (2013). The living handbook of narratology. Hamburg University. <http://www.lhn.uni-hamburg.de/article/cognitive-narratology-revised-version-uploaded-22-september-2013>.

Kintsch, W. (1991). The role of knowledge in discourse comprehension: A construction–integration model. Advances in Psychology, 79(C), 107–153. http://dx.doi.org/10.1016/S0166-4115(08)61551-4.


Langston, M. C., Trabasso, T., & Magliano, J. P. (1999). Understanding language understanding (pp. 181–226). Cambridge, MA, USA: MIT Press. <http://dl.acm.org/citation.cfm?id=305135.305144>.

Leon, C., & Gervas, P. (2014). Creativity in story generation from the ground up: Non-deterministic simulation driven by narrative. In 5th international conference on computational creativity, ICCC 2014, Ljubljana, Slovenia.

Mueller, E. T. (2002). Story understanding. Encyclopedia of Cognitive Science, 238–246.

Niehaus, J., & Young, R. M. (2014). Cognitive models of discourse comprehension for narrative generation. Literary and Linguistic Computing, 29(4), 561–582. http://dx.doi.org/10.1093/llc/fqu056. <http://llc.oxfordjournals.org/content/29/4/561.abstract>.

Perez y Perez, R. (1999). MEXICA: A computer model of creativity in writing. Ph.D. thesis, The University of Sussex.

Propp, V. (1928). Morphology of the folk tale. Leningrad: Akademija.

Reiter, E., & Dale, R. (2000). Building natural language generation systems. Cambridge University Press.

Riedl, M. (2004). Narrative planning: Balancing plot and character. Ph.D. thesis, Department of Computer Science, North Carolina State University.

Riedl, M., & Young, M. (2010). Narrative planning: Balancing plot and character. Journal of Artificial Intelligence Research, 39, 217–268.

Ryan, M.-L. (2010). Narratology and cognitive science: A problematic relation. Style, 44(4), 469–496. http://dx.doi.org/10.1007/s13398-014-0173-7.2.

Schank, R., & Abelson, R. (1977). Scripts, plans, goals and understanding: An inquiry into human knowledge structures. Hillsdale, NJ: L. Erlbaum.

Sharples, M. (1999). How we write. Routledge.

Snaider, J., McCall, R., & Franklin, S. (2011). The LIDA framework as a general tool for AGI. Artificial General Intelligence, Lecture Notes in Computer Science, 6830, 133–142. http://dx.doi.org/10.1007/978-3-642-22887-2_14. <http://www.springerlink.com/content/g4563306506388k1/>.

Squire, L. R. (2004). Memory systems of the brain: A brief history and current perspective. Neurobiology of Learning and Memory, 82(3), 171–177. http://dx.doi.org/10.1016/j.nlm.2004.06.005.

Szilas, N. (2015). Towards narrative-based knowledge representation in cognitive systems. In Proceedings of the 6th workshop on computational models of narrative (CMN'15) (pp. 133–141). <http://narrative.csail.mit.edu/cmn15/schedule_cmn15.html>.

Trabasso, T., & Sperry, L. (1985). Causal relatedness and importance of story events. Journal of Memory and Language, 24, 595–611.

Tulving, E. (1972). Episodic and semantic memory. http://dx.doi.org/10.1017/S0140525X00047257. <http://web.media.mit.edu/~jorkin/generals/papers/Tulving_memory.pdf>.

Tulving, E. (1983). Elements of episodic memory. Oxford, UK; New York: Oxford University Press.

Zwaan, R. A., Langston, M. C., & Graesser, A. C. (1995). The construction of situation models in narrative comprehension: An event-indexing model. Psychological Science, 6(5), 292–297. http://dx.doi.org/10.1111/j.1467-9280.1995.tb00513.x.