Dictionary of Cognitive Psychology

Adaptation

In Piaget's Theory of Development, there are two cognitive processes that are crucial for progressing from stage to stage: assimilation and accommodation. These two concepts are described below.

Assimilation

This refers to the way in which a child transforms new information so that it makes sense within his or her existing knowledge base. That is, a child tries to understand new knowledge in terms of existing knowledge. For example, a baby who is given a new object may grasp or suck on that object in the same way that he or she grasped or sucked other objects.

Accommodation

This happens when a child changes his or her cognitive structure in an attempt to understand new information. For example, the child learns to grasp a new object in a different way, or learns that the new object should not be sucked. In that way, the child has adapted his or her way of thinking to a new experience.

Taken together, assimilation and accommodation make up adaptation, which refers to the child's ability to adapt to his or her environment.

References:

1. Siegler, R. (1991). Children's thinking. Englewood Cliffs, NJ: Prentice-Hall.

2. Vasta, R., Haith, M. M., & Miller, S. A. (1995). Child psychology: The modern science. New York, NY: Wiley.

Alzheimer's Disease

Alzheimer's Disease (AD), first described by Alois Alzheimer in 1907, is a relentlessly progressive disease characterized by cognitive decline, behavioural disturbances, and changes in personality. Current estimates of the prevalence of AD in Canada suggest that 5.1% of all Canadians 65 and over meet the criteria for the clinical diagnosis of AD, which translates into approximately 161,000 cases. AD prevalence is slightly higher in women than in men; this difference may be due to the longer life expectancy of women, although other factors have not been ruled out. The prevalence of dementia is strongly associated with age, affecting 1% of the Canadian population aged 65 to 74, 6.9% of individuals 75-84, and 26% of individuals 85 years and older (Canadian Study of Health and Aging, 1994).

The diagnostic criteria for dementia of the Alzheimer's Type (DAT) are as follows:

(A) The development of multiple cognitive deficits manifested by both:

1. Memory impairment (impaired ability to learn new information or to recall previously learned information)

2. One or more of the following cognitive disturbances:

aphasia (language disturbance)

apraxia (impaired ability to carry out motor activities despite intact motor function)

agnosia (failure to recognize or identify objects despite intact sensory function)

disturbances in executive functioning (i.e., planning, organizing, sequencing, abstracting)

(B) The cognitive deficits in Criteria A1 and A2 each cause significant impairment in social and occupational functioning and represent a significant decline from a previous level of functioning.

(C) The course is characterized by gradual onset and continuing cognitive decline

(D) The cognitive deficits in Criteria A1 and A2 are not due to any of the following:

1. other central nervous system conditions that cause progressive deficits in memory and cognition (e.g., cerebrovascular disease, Parkinson's Disease, Huntington's Disease, subdural hematoma, normal pressure hydrocephalus, brain tumor).

2. systemic conditions that are known to cause a dementia (e.g., hypothyroidism, vitamin B12 or folic acid deficiency, hypercalcemia, neurosyphilis, HIV infection)

3. substance-induced conditions

(E) The deficits do not occur exclusively during the course of a delirium

(F) The disturbance is not better accounted for by another Axis I disorder (e.g., Major Depressive Disorder, Schizophrenia)

The diagnosis of AD is based on exclusionary criteria (i.e., the absence of an identifiable cause) with diagnosis confirmed at autopsy. Treatment strategies to date have been largely ineffective, with experimental treatments mainly directed toward overcoming the cholinergic deficit.

References:

1. American Psychiatric Association (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.

2. Canadian study of health and aging: Study methods and prevalence of dementia. (1994). Canadian Medical Association Journal, 150(6).

3. Whitehouse, P.J. (1993). Dementia. Philadelphia: F.A. Davis.

Analogy

In cognitive psychology, analogy is considered an important method of problem solving. The problem solver attempts to use his or her knowledge of one problem to solve another problem about which she or he has very little or no information. Barsalou (1992) provides the following example of problem solving by analogy:

"...someone who has worked at the complex for a while could simply explain to you that the layout is analogous to a starfish. On hearing this analogy you might transfer knowledge about starfish to the office complex. Thus the knowledge that a starfish has a circular body, with five legs extending from it radially and symetrically would lead to the belief that the office complex contains a center circular body, with five tapered buildings extending from it in a radially symmetric pattern." (p.110)

Obviously people do not use all of their knowledge about one problem to solve another problem. In the context of his starfish example, Barsalou points out that we would not begin to think that the office complex is alive, or that it lives underwater.

One problem facing cognitive psychologists is to determine how people decide upon the extent to which an analogy applies. Determining how this may be done is more difficult than it may seem. Consider that, given enough time, people can find analogies between any two phenomena. We might want to say that, like the starfish, the office complex is alive--its heating ducts are like blood vessels, its doors are like mouths eating the people who enter the office complex every day. As a cognitive process, analogy seems limitless. In a science that strives for regularity and lawfulness, the limitlessness of analogical thinking poses a serious problem.

References:

1. Barsalou, L. (1992). Cognitive psychology: An overview for cognitive scientists. Hillsdale, NJ: Lawrence Erlbaum Associates.

Apparent Motion

This is a perceptual phenomenon that occurs when we perceive motion in two or more static images that are presented in succession with appropriate spatial and temporal displacements. The ability to perceive this phenomenon is mediated by the visuospatial pathway of the visual association regions of the brain.

We see examples of this phenomenon almost every day when we view television or movies.

This is an example of a cognitively impenetrable perception. That is, even though we know that the images are not moving, we still perceive motion.

References:

1. Marr, D. (1982). Vision. San Francisco: Freeman, pp. 159-182.

2. Zeki, S. (1992). The visual image in mind & brain. Scientific American, 241(3), 150-162.

Articulatory Loop

The articulatory loop (AL) is one of two passive slave systems within Baddeley's (1986) tripartite model of working memory. The AL, responsible for storing speech-based information, comprises two components. The first component is a phonological memory store which can hold traces of acoustic or speech-based material. Material in this short-term store lasts about two seconds unless it is maintained through the use of the second subcomponent, articulatory subvocal rehearsal. Prevention of articulatory rehearsal results in very rapid forgetting. Try this experiment with a friend. Present your friend with three consonants (e.g., C-X-Q) and ask them to recall the consonants after a 10 second delay. During the 10 second interval, prevent your friend from rehearsing the consonants by having them count 'backwards by threes' starting at 100. You will find that your friend's recall is significantly impaired! See Murdock (1961) and Baddeley (1986) for a complete review.

References:

1. Baddeley, A. (1986). Working memory. Oxford: Clarendon Press.

2. Murdock, B.B. Jr. (1961). The retention of individual items. Journal of Experimental Psychology, 62, 618-625.

See Also:

Working Memory | Visuospatial Sketchpad | Central Executive

Artificial Intelligence

Artificial intelligence is concerned with the attempt to develop complex computer programs that will be capable of performing difficult cognitive tasks. Some of those who work in artificial intelligence are relatively unconcerned as to whether the programs they devise mimic human cognitive functioning, while others have the explicit goal of simulating human cognition on the computer.

The artificial intelligence approach has been applied to several different areas within cognitive psychology, including perception, memory, imagery, thinking, and problem solving.

There are a number of advantages of the artificial intelligence approach to cognition. Computer programming requires that every process be specified in detail, unlike cognitive psychology which often relies on vague descriptions. AI also tends to be highly theoretical, which leads to general theoretical orientations having wide applicability. The main disadvantage of AI is that there is a lot of controversy about the ultimate similarity between human cognitive functioning and computer functioning.

Some of the major differences between brains and computers were spelled out in the following terms by Churchland (1989, p.100):

"The brain seems to be a computer with a radically different style.

For example, the brain changes as it learns, it appears to store and process

information in the same places...Most obviously, the brain is a parallel

machine, in which many interactions occur at the same time in many different

channels."

This contrasts with the functioning of most computers, which involves serial processing and relatively few interactions.

References:

1. Churchland, P.S. (1989). From Descartes to neural networks. Scientific American, July, 100.

2. Eysenck, M.W. (Ed.). (1990). The Blackwell Dictionary of Cognitive Psychology. Cambridge, MA: Basil Blackwell.

See Also:

Cognitive Science | Cognitive Psychology

Associative Memory

At its simplest, an associative memory is a system which stores mappings of specific input representations to specific output representations. That is to say, a system that "associates" two patterns such that when one is encountered subsequently, the other can be reliably recalled. Kohonen draws an analogy between associative memory and an adaptive filter function [2]. The filter can be viewed as taking an ordered set of input signals, and transforming them into another set of signals---the output of the filter. It is the notion of adaptation, allowing its internal structure to be altered by the transmitted signals, which introduces the concept of memory to the system.

A further refinement in terminology is possible with regard to the associative memory concept, and is ubiquitous in connectionist (neural network) literature in particular. A memory that reproduces its input pattern as output is referred to as autoassociative (i.e. associating patterns with themselves). One that produces output patterns dissimilar to its inputs is termed heteroassociative (i.e. associating patterns with other patterns).

Most associative memory implementations are realized as connectionist networks. Hopfield's collective computation network [1] serves as an excellent example of an autoassociative memory, whereas Rosenblatt's perceptron [3] is often utilized as a heteroassociator. There are many practical problems in implementing effective associative memories, however, most notably their inefficiency: they tend to fill up and become unreliable rather quickly. This is a long-running open problem for both connectionism and adaptive filter theory---one that Kohonen refers to as the "problem of infinite state memory" [2].
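
As an informal illustration of autoassociation, the following sketch implements a toy Hopfield-style network in the spirit of [1]. It is only a sketch: the use of NumPy, the function names train and recall, and the toy patterns are assumptions made for the example rather than details taken from the references below.

```python
# Toy Hopfield-style autoassociative memory (illustrative sketch, assumes NumPy).
# Patterns are +1/-1 vectors; weights are built with the Hebbian outer-product rule.
import numpy as np

def train(patterns):
    """Build a symmetric weight matrix from a list of +1/-1 pattern vectors."""
    n = len(patterns[0])
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)              # no self-connections
    return w / len(patterns)

def recall(w, probe, steps=10):
    """Repeatedly update the probe until it settles on a stored pattern."""
    state = np.array(probe, dtype=float)
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1           # break ties consistently
    return state

stored = [np.array([1, -1, 1, -1, 1, -1]),
          np.array([1, 1, 1, -1, -1, -1])]
w = train(stored)
noisy = np.array([1, -1, 1, -1, 1, 1])  # corrupted copy of the first pattern
print(recall(w, noisy))                 # recovers [ 1. -1.  1. -1.  1. -1.]
```

Probing the trained weights with a corrupted pattern reproduces the complete stored pattern, which is the "associating a pattern with itself" behaviour described above; the tendency to fill up and become unreliable appears as soon as too many patterns are stored.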

References:

1. J.J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79:2554-2558, 1982.

2. T. Kohonen. Self-Organization and Associative Memory. Springer Series in Information Sciences, Vol. 8. Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1984.

3. F. Rosenblatt. Principles of Neurodynamics. Spartan, New York, 1962.

See Also

Connectionism | Content Addressable Memory

Attention

"Attention" is a term commonly used in education, psychiatry and psychology. The definition is often vague. Attention can be defined as an internal cognitive process by which one actively selects environmental information (ie. sensation) or actively processes information from internal sources (ie. visceral cues or other thought processes). In more general terms, attention can be defined as an ability to focus and maintain interest in a given task or idea, including managing distractions.

William James, a 19th century psychologist, explains attention as follows:

"Everyone knows what attention is. It is the taking possession by the

mind in clear and vivid form, of one out of what seem several simultaneously

possible objects or trains of thought...It implies withdrawl from some things

in order to deal effectively with others, and is a condition which has a real

opposite in the confused, dazed, scatterbrained state." (1890, p. 403)

Attention is important to psychologists because it is often considered a core cognitive process, a basis on which to study other cognitive processes, most importantly learning. DeGangi and Porges (1990) point out that only "when a person is actively engaged in voluntary attention, functional purposeful activity and learning can occur" (p. 6). Poor attention is often a key symptom of behaviour disorders such as hyperactivity and learning disorders.

References:

1. DeGangi, G., & Porges, S. (1990). Neuroscience foundations of human performance. Rockville, MD: American Occupational Therapy Association.

2. James, W. (1890). Principles of psychology. New York: Holt.

See Also:

Attention Getting | Attention Holding | Sustained Attention

Attention Getting

Attention getting is more than just the orienting reflex; it is the "initial orientation or alerting to a stimulus." Though this may be considered an automatic act, in fact it requires complex active thought processing. Attention getting is reliant on the qualitative nature of the stimulus: the stimulus must be strong enough to elicit a response.

DeGangi and Porges (1990) explain that the types of stimuli that are attention getting vary according to the past experiences of the individual, what they already know, their reactivity to sensory stimuli, and what they have determined to be important to them. A hungry person may be more apt to pay attention to the smell of food than to the sounds surrounding them in a traffic jam!

Attention getting is important to psychologists, particularly developmental psychologists, because of its role in learning. A child's chosen attention getting stimuli can guide his or her learning abilities. "A child who learns better through the auditory channel will orient more readily to a song about body parts than a picture of a body."

References:

1. DeGangi, G., & Porges, S. (1990). Neuroscience foundations of human performance. Rockville, MD: American Occupational Therapy Association.

See Also:

Attention Holding | Attention Releasing | Sustained Attention

Attention Holding

Attention holding is the "maintenance of attention when a stimulus is intricate or novel." Stimuli that hold our attention must be both novel and complex in order to encourage information processing. Attention holding is measured by how long one engages in a cognitive activity involving that stimulus.

Attention holding is important because of its role in learning. If an activity or stimulus is moderately complex, the person will expend energy in information processing; in other words, the person will expend energy in learning. Unfortunately, this can be complicated by poor motivation. Low motivation may present a challenge, as the psychologist (or other professional) must determine whether the decreased motivation is due to sensory processing problems, cognitive impairment, or other learning-related problems (among which poor attention holding may be identified).

References:

1. DeGangi, G., & Porges, S. (1990). Neuroscience foundations of human performance. Rockville, MD: American Occupational Therapy Association.

See Also:

Attention Getting | Attention Releasing | Sustained Attention

Attention Releasing

Attention releasing is the final stage in DeGangi and Porges' (1990) process of sustained attention. Attention releasing can simply be defined as the "releasing or turning off of attention from a stimulus." Attention releasing can occur for a variety of reasons. A person can fatigue physically or mentally, requiring the release of attention. Arousal level can decrease, so that a different type or strength of stimulus is required to maintain an alert and active state.

Attention releasing provides a person with a method to reach closure on a given activity, task, or event thereby allowing that person to switch attention to something new. As with attention getting and holding, attention releasing (the ability to shift focus) plays an important role in the learning process.

References:

1. DeGangi, G., & Porges, S. (1990). Neuroscience foundations of human performance. Rockville, MD: American Occupational Therapy Association.

See Also:

Attention Holding | Attention Getting | Sustained Attention

Behavioural Indeterminacy

This is the claim that, in principle, psychology is restricted to establishing weak equivalence. Weak equivalence is equivalence with respect to input/output behaviour. On this view, behavioural data cannot establish equivalence at the level of functional architecture: behavioural studies are indeterminate with respect to strong equivalence.

This issue is of importance to cognitive psychology because, if true, it implies that cognitive psychology cannot generate insight into cognition without importing knowledge based on non-behavioural observations from other disciplines.

References:

1. Pylyshyn, Z. W. (1989). Computing in cognitive science. In M. I. Posner (Ed.), Foundations of cognitive science. Cambridge, MA: MIT Press.

See Also:

Functional Architecture | Strong Equivalence | Weak Equivalence

Biological Naturalism

Promoted by John Searle, Biological Naturalism states that consciousness is a higher level function of the brain's physical capabilities. The neurophysiological processes in the brain cause mental phenomena, which are also a feature of the brain. However, such features as consciousness are not reducible to neurophysiological systems. Not all brains produce this higher level functioning, and there are many questions still open in Biological Naturalism, as Searle himself points out: How does neurophysiology account for the range of mental phenomena? How does consciousness come about? How advanced does a neurophysiological system have to be to produce consciousness?

References:

1. Searle, J. (1994). The Rediscovery of the Mind. Cambridge, MA: MIT Press.

Bottom-Up Processing

The cognitive system is organized hierarchically. The most basic perceptual systems are located at the bottom of the hierarchy, and the most complex cognitive systems (e.g., memory, problem solving) are located at the top of the hierarchy.

Information can flow both from the bottom of the system to the top of the system and from the top of the system to the bottom of the system. When information flows from the bottom of the system to the top of the system, this is called "bottom-up" processing. Lower level systems categorize and describe incoming perceptual information and pass this descriptive information on to higher levels for more complex processing.

See Also:

Top-Down Processing

Broca's Area

Named for Paul Broca, who first described it in 1861, Broca's area is the section of the brain involved in speech production, specifically in assessing the syntax of words while listening and in comprehending structural complexity. People suffering from neurophysiological damage to this area (a condition called Broca's aphasia, or nonfluent aphasia) are unable to understand and produce grammatically complex sentences. Speech consists almost entirely of content words.

Auditory and speech information is transported from the auditory area to Wernicke's area for evaluation of significance of content words, then to Broca's area for analysis of syntax. In speech production, content words are selected by neural systems in Wernicke's area, grammatical refinements are added by neural systems in Broca's area, and then the information is sent to the motor cortex, which sets up the muscle movements for speaking.

References:

1. Gray, P. (1994). Psychology. New York, NY: Worth Publishers.

See Also:

Wernicke's Area

Cascade Processing

Under the assumption that a complex task can be broken down into distinct stages of information processing, and that these stages can be sequentially ordered, the complex task can be performed by completing each distinct stage.

Unlike discrete processing, with cascade models the later stages of information processing can begin operating before the completion of earlier information processing stages. Connectionist models of information processing operate in a cascade manner, and this is important for the way in which these models can learn relationships between stimuli and responses.

Depending on the complexity of the information being processed, it may be transmitted between some processing stages in a cascade manner, but in other stages it may be processed in a discrete manner.

References:

1. Eysenck, M.W. (Ed.). (1990). The Blackwell Dictionary of Cognitive Psychology. Cambridge, MA: Basil Blackwell.

See Also:

Discrete Processing

Central Executive

The central executive, the most important yet least well understood component of Baddeley's (1986) working memory model, is postulated to be responsible for the selection, initiation, and termination of processing routines (e.g., encoding, storing, retrieving). Baddeley (1986, 1990) equates the central executive with the supervisory attentional system (SAS) described by Norman and Shallice (1980) and by Shallice (1982).

According to Shallice (1982), the supervisory attentional system is a limited capacity system and is used for a variety of purposes, including:

tasks involving planning or decision making

troubleshooting in situations in which the automatic processes appear to be running into difficulty

novel situations

dangerous or technically difficult situations

situations where strong habitual responses or temptations are involved

Extensive damage to the frontal lobes may result in impairments in central executive functioning. Baddeley (1986) coined the term dysexecutive syndrome (DES) to describe dysfunctions of the central executive. The classic frontal syndrome is characterized by

disturbed attention, increased distractibility, a difficulty in grasping the whole of a complicated state of affairs ... well able to work along old routines ... (but) ... cannot learn to master new types of task, in new situations ... [the patient is] at a loss. (Rylander, 1939, p. 20)

In other words, patients suffering from frontal lobe syndrome lack flexibility and the ability to control their processing resources, functions attributed to the central executive.

References:

1. Baddeley, A.D. (1990). Human memory: Theory and practice. Oxford: Oxford University Press.

2. Baddeley, A.D. (1986). Working memory. Oxford: Clarendon Press.

3. Norman, D.A., & Shallice, T. (1980). Attention to action: Willed and automatic control of behavior. University of California San Diego CHIP Report 99.

4. Shallice, T. (1982). Specific impairments of planning. Philosophical Transactions of the Royal Society of London B, 298, 199-209.

5. Rylander, G. (1939). Personality changes after operations on the frontal lobes. Acta Psychiatrica Neurologica, Supplement No. 30.

See Also:

Articulatory Loop | Visuospatial Sketchpad | Working Memory

Cognitive Development (In Children)

Generally, this term refers to the changes that occur in a person's cognitive structures, abilities, and processes. The most widely known theory of childhood cognitive development was proposed by Jean Piaget in 1969. He proposed that cognitive development consists of the development of logical competence, and that the development of this competence proceeds through four major stages:

1. sensori-motor

2. preoperational

3. concrete operational

4. formal operational

He also argued that a child's cognitive performance depended more on the stage of development he or she was in than on the specific task being performed.

More recent studies have cast some doubt on Piaget's theory of homogeneous performance within a given stage. Instead, it is now believed that performance varies greatly within each stage and depends more on the acquisition and development of language, perception, decision rules, and real-world knowledge for any individual child.

Cognitive Mapping

Cognitive mapping is a general term that applies to a series of methods for measuring mental representations. These techniques attempt to describe mental images that subjects use to encode knowledge and information. Most researchers treat cognitive maps as a tool that can usefully summarise and communicate information rather than as a literal description of mental images.

References:

1. Huff, A.S. (1990). Mapping Strategic Thought. Chichester: John Wiley & Sons.

Cognitive Penetrability

An approach to testing strong equivalence. The cognitive penetrability approach seeks to establish whether phenomena are equivalent at the level of functional architecture by investigating whether the phenomena are independent of beliefs and goals, that is, whether they are primitive. If manipulation of beliefs and goals systematically alters the empirical phenomenon, then the phenomenon does not describe functional architecture and is cognitively penetrable.

The cognitive penetrability approach was used in the imagery debate in cognitive science in the 1980s.

References:

1. Pylyshyn, Z. W. (1989). Computing in cognitive science. In M. I. Posner (Ed.), Foundations of cognitive science. Cambridge, MA: MIT Press.

See Also:

Strong Equivalence | Weak Equivalence

Cognitive Psychology

Cognitive psychology is concerned with information processing, and includes a variety of processes such as attention, perception, learning, and memory. It is also concerned with the structures and representations involved in cognition. The greatest difference between the approach adopted by cognitive psychologists and that of the behaviorists is that cognitive psychologists are interested in identifying in detail what happens between stimulus and response.

Some of the ingredients of the information processing approach to cognition were spelled out by Lachman, Lachman, and Butterfield (1979). In essence, it is assumed that the mind can be regarded as a general purpose, symbol processing system, and that these symbols are transformed into other symbols as a result of being acted on by different processes. The mind has structural and resource limitations, and so should be thought of as a limited capacity processor.

A key issue in the field is the extent to which human and computer information processing systems resemble one another. The consensual view is probably that there are indeed striking similarities between human minds and computers, but also substantial differences. In recent years, explicitly cognitive approaches have been adopted in social and developmental psychology, as well as in occupational and clinical psychology.

References:

1. Eysenck, M.W. (Ed.). (1990). Blackwell Dictionary of Cognitive Psychology. Cambridge, MA: Basil Blackwell.

2. Lachman, R., Lachman, J.L., & Butterfield, E.C. (1979). Cognitive psychology and information processing. Hillsdale, NJ: Lawrence Erlbaum Associates.

Cognitive Science

Several students have supplied definitions for this term:

Definition 1

"the study of intelligence and intelligent systems, with particular reference to intelligent behaviour as computation" (Simon & Kaplan, 1989)

Simon, H.A., & Kaplan, C.A. (1989). Foundations of cognitive science. In M.I. Posner (Ed.), Foundations of Cognitive Science. Cambridge, MA: MIT Press.

Contributed by J. Andrews, November 23, 1995

Definition 2

Cognitive science refers to the interdisciplinary study of the acquisition and use of knowledge. It includes as contributing disciplines: artificial intelligence, psychology, linguistics, philosophy, anthropology, neuroscience, and education. The cognitive science movement is far reaching and diverse, containing within it several viewpoints.

Cognitive science grew out of three developments: the invention of computers and the attempts to design programs that could do the kinds of tasks that humans do; the development of information processing psychology where the goal was to specify the internal processing involved in perception, language, memory, and thought; and the development of the theory of generative grammar and related offshoots in linguistics. Cognitive science was a synthesis concerned with the kinds of knowledge that underlie human cognition, the details of human cognitive processing, and the computational modeling of those processes.

There are five major topic areas in cognitive science: knowledge representation, language, learning, thinking, and perception.

Eysenck, M.W. (Ed.). (1990). The Blackwell Dictionary of Cognitive Psychology. Cambridge, MA: Basil Blackwell.

See Also:

Cognitive Psychology | Artificial Intelligence

Contributed by: L.A. Keple, November 5, 1995

Definition 3

Generally stated, this is the study of intelligence and intelligent systems.

It is a relatively new science that combines knowledge gained from a number of disciplines. These include: computer science, neuroscience, cognitive psychology, philosophy, and linguistics.

As a result of the collaborative effort between these disciplines, there have been, and will continue to be, huge advancements in our understanding of human cognition.

See Also:

Neuroscience

Contributed by M. Kincade

Connectionism

Connectionism is an alternate computational paradigm to that provided by the von Neumann architecture. Originally taking its inspiration from the biological neuron and neurological organization, it emphasizes collections of simple processing elements in place of the monolithic processors seen more commonly within computing. These simple processing elements are typically capable of only rudimentary calculations (such as summation), but possess a high degree of weighted inter-connectivity with one another and generally operate in parallel [2].

A particular organization of inter-connected processing elements (a network) is paired with a mathematical basis by which the connection weights are adjusted (or simply calculated directly). This allows a network either to learn a task by iterating on training examples (induction learning), or to provide a system in which solutions to particular problems can be computed. Arguably the most widely used example of the former is the multi-layer perceptron trained via error back-propagation (see [5], for example); the latter is typified by networks such as the Hopfield and Tank model for combinatorial optimization [3].

To the casual reader, "connectionism", "parallel distributed processing" (PDP) and "neural networks" may be entirely synonymous. The term "neural network" is somewhat misleading to begin with as, aside from the original inspiration coming from biology, there is nothing particularly "neural" about these models, and any perceived biological relevance is often debatable. There is also merit in making a philosophical distinction between PDP and connectionism. For example, over time, PDP researchers have been disposed to seek biological relevance for their models, have tended to emphasize learning-oriented tasks, and have followed a largely empirical approach. The field of neural networks has become richer than is encompassed by the traditional view of PDP.

Connectionism distinguishes itself by also viewing the network model as a computational architecture. This encompasses a wider range of network structures for which biological relevance is not an issue or for which a learning process per se is not utilized. Work in such areas includes a wealth of recent research that has sought to establish the formal relationship between the computational power of connectionist networks and abstract machines (for example [1], [4]), and even harkens back to the aforementioned Hopfield and Tank model, which computes solutions to problems by minimizing energy within a pre-wired system of weights [3].

In this respect, connectionism subsumes PDP. That is to say that PDP researchers are connectionists, however not all connectionists consider themselves to be PDP researchers. Although debatable, this point is one that this author, among others, feels is an important one.
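
To make the idea of a simple processing element concrete, here is a minimal sketch of a single threshold unit trained with the classic perceptron rule on the logical AND function. The function names, learning rate, and toy data are illustrative assumptions, not details drawn from the references below.

```python
# A single connectionist processing element (illustrative sketch, standard library only):
# it sums weighted inputs, applies a threshold, and learns by the perceptron rule.
def perceptron_train(examples, epochs=20, lr=0.1):
    """examples: list of (inputs, target) pairs with targets 0 or 1."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # Adjust each connection weight in proportion to the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Induction learning: the unit learns the logical AND function from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x, _ in data])
# expected output: [0, 0, 0, 1]
```

The knowledge the unit acquires lives entirely in its connection weights, which is the sense in which connectionist computation differs from the monolithic, program-driven style of the von Neumann architecture.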

References:

1. C.L. Giles, B.G. Horne, T. Lin. Learning a class of large finite state machines with a recurrent neural network. Neural Networks, 8(9):1359-1365, 1995.

2. J. Hertz, A. Krogh and R.G. Palmer. Introduction to the theory of neural computation. Addison-Wesley, Redwood City, 1991.

3. J.J. Hopfield and D.W. Tank. `Neural' computation of decisions in optimization problems. Biological Cybernetics, 52:141-152, 1985.

4. S.C. Kremer. On the computational power of Elman-style recurrent networks. IEEE Transactions on Neural Networks, 6(4):1000-1004, 1995.

5. D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning internal representations by error propagation. In D.E. Rumelhart and J.L. McClelland, editors, Parallel Distributed Processing, volume 1. MIT Press, Cambridge, 1986.

See Also

Associative Memory | Content Addressable Memory | Induction Learning | Learning Rule | Machine Learning | Parallel Distributed Processing Models

Consciousness

Consciousness refers to awareness of our own mental processes (or of the products of such processes). This awareness can be made manifest by introspective reports, in which an individual provides information about his or her mental experience.

There has been a considerable amount of controversy over the centuries concerning the value to psychology of assessing the contents of consciousness by means of introspective evidence. Aristotle claimed that the only way to study thinking was by introspection. Others, such as Galton (1883), argued that consciousness "appears to be a helpless spectator of but a minute fraction of automatic brain work." Behaviorists tend to agree with Galton that psychologists should not concern themselves with consciousness and introspection.

There are certain cognitivists who would disagree with these definitions. Marvin Minsky (1985) maintains that human consciousness can never represent what is occurring at the present moment, but only a little of the recent past. This is partly because agencies have a limited capacity to represent what happened recently, and partly because it takes time for agencies to communicate with one another. Consciousness is difficult to describe because each time we attempt to examine temporary memories, we distort the very record we are trying to interpret.

References:

1. Eysenck, M.W. (Ed.). (1990). Blackwell Dictionary of Cognitive Psychology. Cambridge, MA: Basil Blackwell.

2. Galton, F. (1883). Inquiries into human faculty and its development. London: Macmillan.

3. Minsky, M. (1985). The society of mind. New York, NY: Simon & Schuster.

See Also:

Mandelbrot Set

Content Addressable Memory

In a symbolic system, information is stored in an external mechanism; in the case of a computer, it is stored in files on disk. Because the information has been encoded in some form of file system, one must know the indexing system of the files in order to retrieve it. In other words, data can only be accessed by certain attributes. In a connectionist system, the data are stored in the activation pattern of the units. Hence, if a processing unit receives excitatory input from one of its connections, each of its other connections will either be excited or inhibited. If these connections represent the attributes of the data, then the data may be recalled by any one of those attributes, not just those that are part of an indexing system. Because these connections represent the content of the data, this type of memory is called content addressable memory. This type of memory has the advantage of allowing greater flexibility of recall and is more robust: the distributed memory is able to work its way around errors by reconstructing information that may have been lesioned from the system.
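
The access pattern itself can be illustrated without any network machinery. The sketch below is a deliberately non-connectionist toy: items are stored as attribute patterns and retrieved by whichever attributes happen to be available, rather than by a file name or index key. The record contents and the function name are invented for the example.

```python
# Illustrative sketch of retrieval by content rather than by index (hypothetical data).
records = [
    {"name": "robin",   "colour": "red",   "habitat": "garden",    "can_fly": True},
    {"name": "penguin", "colour": "black", "habitat": "antarctic", "can_fly": False},
]

def recall_by_content(probe):
    """Return the stored record sharing the most attributes with the partial probe."""
    def overlap(record):
        return sum(1 for key, value in probe.items() if record.get(key) == value)
    return max(records, key=overlap)

# Any subset of attributes can serve as the retrieval cue.
print(recall_by_content({"habitat": "antarctic"})["name"])             # penguin
print(recall_by_content({"colour": "red", "can_fly": True})["name"])   # robin
```

A connectionist content addressable memory achieves the same effect through activation spreading over weighted connections, with the added robustness to damage described above.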

References:

1. Bechtel, W., & Abrahamsen, A. (1991). Connectionism and the mind: An introduction to parallel processing in networks. Cambridge, MA: Blackwell.

See Also:

Functional Architecture | Graceful Degradation | Parallel Distributed Processing Models | Spontaneous Generalisation | Symbolic Architecture

Crystallized Intelligence

Crystallized intelligence can be defined as "the extent to which a person has absorbed the content of culture" (Belsky, 1990, p. 125). It is the store of knowledge or information that a given society has accumulated over time.

Crystallized intelligence is measured by most of the verbal subtests of the Wechsler Adult Intelligence Scale (WAIS).

Crystallized intelligence is important to psychologists as it relates to the study of aging. There is ongoing intense debate among psychologists as to whether or not intelligence declines with aging. Horn (1970) hypothesized that because crystallized intelligence is based on learning and experience, it remains relatively stable over time. He claims it may even increase "as the rate at which we acquire or learn new information in the course of living balances out or exceeds the rate at which we forget." (as cited in Belsky, 1990, p. 125) On the other side of the debate, Belsky (1990) claims crystallized intelligence in fact declines with age. Why? Because, "at a certain time of life the cumulative effect of losses - of job, of health, of relationships - cause disengagement from the culture, and so forgetting finally exceeds the rate at which knowledge is acquired." (p. 125)

References:

1. Belsky, J. K. (1990). The psychology of aging: Theory, research, and interventions. Pacific Grove, CA: Brooks/Cole.

2. Horn, J. (1970). Organization of data on life-span development of human abilities. In R. Goulet and P.B. Baltes (Eds.), Life-span developmental psychology: Research and theory. New York: Academic Press.

See Also:

Fluid Intelligence | WAIS

Cued Recall

This is a component of a memory task in which the subject is asked to recall items that were presented to them on an initial training, or initial presentation, list.

However, it is slightly different from the free recall task because the subject is given a hint, or a cue, about the items on the original list. For example, an experimenter may say: "Tell me all the words from the list that were animals."

See Also:

Free Recall | Intrusions | Perseverations

Deductive (Logical) Inference

Inferences are made when a person (or machine) goes beyond available evidence to form a conclusion. With a deductive inference, the conclusion follows necessarily from the stated premises. In other words, if the premises are true, then the conclusion must also be true. Studies of human efficiency in deductive inference involve conditional reasoning problems, which follow the "if A, then B" format.

The task of making deductions consists of three stages. First, a person must understand the meaning of the premises. Second, they must be able to formulate a valid conclusion. Third, they should evaluate that conclusion to test its validity. Although deductive inference is easy to test or model, the results of this type of inference never increase the semantic information above what is already stated in the premises.
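
The claim that a valid deduction cannot have a false conclusion when its premises are true can be checked mechanically for simple conditional arguments. The following sketch is an illustrative brute-force validity test over truth assignments; the function names and the two example arguments are assumptions made for the example.

```python
# Illustrative validity check for "if A, then B" style arguments: an argument is
# deductively valid when no truth assignment makes all premises true and the conclusion false.
from itertools import product

def implies(a, b):
    return (not a) or b

def is_valid(premises, conclusion):
    """premises and conclusion are functions of the truth values (a, b)."""
    for a, b in product([True, False], repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False                     # found a counterexample
    return True

# Modus ponens: from "if A then B" and "A", conclude "B" -> valid.
print(is_valid([lambda a, b: implies(a, b), lambda a, b: a], lambda a, b: b))   # True
# Affirming the consequent: from "if A then B" and "B", conclude "A" -> invalid.
print(is_valid([lambda a, b: implies(a, b), lambda a, b: b], lambda a, b: a))   # False
```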

References:

1. Eysenck, M.W. (Ed.). (1990). The Blackwell dictionary of cognitive psychology. Cambridge, MA: Basil Blackwell.

2. Johnson-Laird, P. N. (1993). Human and machine thinking. Hillsdale, NJ: Lawrence Erlbaum Associates.

See Also:

Inductive Inference

Dementia

Dementia is a clinical state characterized by loss of function in multiple cognitive domains. The most commonly used criteria for the diagnosis of dementia are those of the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, American Psychiatric Association). Diagnostic features include:

memory impairment and at least one of the following: aphasia, apraxia, agnosia, disturbances in executive functioning.

In addition, the cognitive impairments must be severe enough to cause impairment in social and occupational functioning.

Importantly, the decline must represent a decline from a previously higher level of functioning.

Finally, the diagnosis of dementia should NOT be made if the cognitive deficits occur exclusively during the course of a delirium.

There are many different types of dementia (approximately 70 to 80). Some of the major disorders causing dementia are:

1. Degenerative diseases (e.g., Alzheimer's Disease, Pick's Disease)

2. Vascular Dementia (e.g., Multi-infarct Dementia)

3. Anoxic Dementia (e.g., Cardiac Arrest)

4. Traumatic Dementia (e.g., Dementia pugilistica [boxer's dementia])

5. Infectious Dementia (e.g., Creutzfeldt-Jakob Disease)

6. Toxic Dementia (e.g., Alcoholic Dementia)

7.9% of all Canadians 65 years and older meet the criteria for the clinical diagnosis of dementia (Canadian Study of Health and Aging, 1994). Alzheimer's Disease is the major cause of dementia, accounting for 64% of all dementias in Canada for persons 65 and older and 75% of all dementias for persons 85 and older.

References:

1. American Psychiatric Association (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.

2. Canadian study of health and aging: Study methods and prevalence of dementia. (1994). Canadian Medical Association Journal, 150(6).

See Also:

Alzheimer's Disease

Discrete Processing

A model using discrete processing requires that information is passed from one stage to another only after the processing in the first stage is complete. Therefore, the processing time required in a discrete model is additive and equal to the sum of the time taken at each level of processing.

The advantage of this type of model is that it provides a convenient method of understanding the effects of different variables on the performance of a given task.

References:

1. Eysenck, M.W. (Ed.). (1990). The Blackwell Dictionary of Cognitive Psychology. Cambridge, MA: Basil Blackwell.

See Also:

Cascade Processing

The Disjunction Problem

Any theory of the content of a representation must be able to explain how a representation can misrepresent --how it can represent an object as being something it is not, or as having properties it does not have-- basically how its content can be false of the object represented.

The difficulty is that we need to explain --in a principled, non-circular way-- how the representation can correctly represent some things which cause its activation, yet misrepresent other things which cause its activation. For instance, we'd like to be able to say that my kangaroo representation represents kangaroos. If so, then if a wallaby causes the activation of that representation, then the wallaby is misrepresented; the representation's content "that's a kangaroo" is false of the wallaby.

Unfortunately, for Fodor (1987, 1990) this doesn't work. The problem is that if the wallaby can also cause the activation of my kangaroo representation, then we seem to have no principled reason for saying that the content of the representation is simply "that's a kangaroo" rather than the disjunctive content "that's either a kangaroo or a wallaby". If this is so, then when a wallaby activates my kangaroo representation, the representation doesn't represent the wallaby as something it is not. The representation has the (disjunctive) content "that's either a kangaroo or it's a wallaby", which, of course, is true of the wallaby.

This content might better be described as "unspecific", rather than "disjunctive". That is, perhaps the content is something like an unspecific description which applies correctly to all the things which can activate it, such as "that's a large animal with a long tail that gets about by hopping on its hind legs". So to say that some things which activate the representation are correctly represented and others are misrepresented doesn't work. Even if I've only ever seen kangaroos, and have never met a wallaby, the wallaby can be correctly represented by this representation, because the wallaby is also a large animal with a long tail that gets about by hopping on its hind legs.

This is especially a problem for theories which explain content in terms of covariance: some sort of reliable, lawlike connection between tokenings of the representation and the occurrence of certain types of thing in the world. Such theories have to be able to justify describing the representation's content "conservatively", as Cummins (1989) calls it, rather than "liberally": as "that's a kangaroo" rather than "that's a large animal with a long tail that gets about by hopping on its hind legs". Cummins summarises various attempts to do this, arguing that covariance theories don't explain content in a way that allows representations to misrepresent.

Fodor (1990) claims that any theory which purports to account for the content of a representation must solve the disjunction problem. Such an account must be able to explain misrepresentation, by showing exactly what a representation's content is, and how a representation can be caused to be activated by something to which that content does not apply.

References:

1. Cummins, R. (1989). Meaning and Mental Representation. Cambridge, MA: MIT Press. A Bradford Book.

2. Fodor, J. (1987). "Meaning and the World Order". In Psychosemantics (pp. 97-133). Cambridge, MA: MIT Press. A Bradford Book.

3. Fodor, J. (1990). "A Theory of Content I: The Problem". In A Theory of Content and Other Essays (pp. 51-88). Cambridge, MA: MIT Press. A Bradford Book.

See Also:

Semantics | Misrepresentation | Representation

Elaborative Rehearsal

Elaborative rehearsal is a type of rehearsal proposed by Craik and Lockhart (1972) in their Levels of Processing model of memory. In contrast to maintenance rehearsal, which involves simple rote repetition, elaborative rehearsal involves deep semantic processing of a to-be-remembered item, resulting in the production of durable memories.

For example, if you were presented with a list of digits for later recall (4920975), grouping the digits together to form a phone number transforms the stimuli from a meaningless string of digits to something that has meaning.
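
The contrast with rote maintenance can be shown with a trivial sketch; the digit string is the one from the example above, and the phone-number grouping is purely hypothetical.

```python
# Maintenance rehearsal keeps the raw string alive; elaborative rehearsal recodes it
# into a meaningful structure (here, a hypothetical phone-number-style chunking).
digits = "4920975"
maintenance = list(digits)                   # ['4', '9', '2', ...]: rote, meaning-free units
elaborative = f"{digits[:3]}-{digits[3:]}"   # '492-0975': one meaningful chunk
print(maintenance, elaborative)
```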

References:

1. Craik, F.I.M., & Lockhart, R.S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671-684.

See Also:

Levels of Processing | Maintenance Rehearsal

Enactment

Weick (1988) describes the term enactment as representing the notion that when people act they bring structures and events into existence and set them in action. The process of enactment involves two steps. First, preconceptions are used to set aside portions of the field of experience for further attention; that is, perception is focused on predetermined stimuli. Second, people act within the context of these portions of experience, guided by preconceptions, in such a way as to reinforce those preconceptions. Hence, attention to certain stimuli will guide subsequent action so that those stimuli are confirmed as important. The result of the process of enactment is the enacted environment (Weick, 1988). This enacted environment comprises "real" objects, but the significance, meaning and content of these objects will vary. These objects are not significant unless they are acted upon and incorporated into events, situations and explanations. In this way the enacted environment is a direct result of the preconceptions held by the social actor. An enacted environment is internalised by social actors as the way in which actions have led to certain consequences; it is therefore analogous to the concept of schema and is the source of expectations for future action (Weick, 1988). An enacted environment is "a map of if-then assertions in which actions are related to outcomes"; these assertions in turn serve as expectations for future action and focus perception in such a way that the preconceived relationships will be supported.

The importance of the notion of enactment is that it provides a direct link between individual cognitive processes and environments. By showing how preconceptions can shape the nature of the environment this concept allows one to argue the importance of schema in the sensemaking process. Schema guide both perception and inference (Fiske & Taylor, 1991) and so will 'enact' environment by assigning significance, meaning and content to objects perceived in the environment.

References:

1. Fiske, S.T., & Taylor, S.E. (1991). Social cognition (2nd ed.). New York: McGraw-Hill.

2. Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 24(4).

Contributed by Julian Andrews

Encoding

Encoding refers to the processes by which items are placed into memory.

See Also:

Working Memory

Encoding Specificity

The encoding specificity principle of memory (Tulving & Thomson, 1973) provides a general theoretical framework for understanding how contextual information affects memory. Specifically, the principle states that memory is improved when information available at encoding is also available at retrieval. For example, the encoding specificity principle would predict that recall would be better if subjects were tested in the same room in which they had studied than if they studied in one room and were tested in a different room (see S.M. Smith, Glenberg, & Bjork, 1978).

References:

1. Smith, S.M., Glenberg, A.M., & Bjork, R.A. (1978). Environmental context and human memory. Memory and Cognition, 6, 342-353.

2. Tulving, E., & Thomson, D.M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80, 352-373.

See Also:

Encoding | Retrieval

Equilibration

According to Piaget, development is driven by the process of equilibration. Equilibration encompasses assimilation (i.e., people transform incoming information so that it fits within their existing thinking) and accommodation (i.e., people adapt their thinking to incoming information). Piaget suggested that equilibration takes place in three phases.

First, children are satisfied with their mode of thought and therefore are in a state of equilibrium.

Then, they become aware of the shortcomings in their existing thinking and are dissatisfied (i.e., are in a state of disequilibration and experience cognitive conflict).

Last, they adopt a more sophisticated mode of thought that eliminates the shortcomings of the old one (i.e., reach a more stable equilibrium).

See Also:

Adaptation | Piaget's Stage Theory of Development

Error Analysis

One of the key goals of cognitive science is to develop theories that are strongly equivalent with respect to to-be-explained systems. This requires that evidence be collected to defend the claim that the model and the to-be-explained system are carrying out the same procedures to compute a function.

One kind of information that could be used to examine this claim is called error analysis. In an error analysis, one could (for two different systems) rank order problems in terms of their difficulty, as revealed by their likelihood to produce mistakes. This is an example of relative complexity evidence. A more detailed approach would be to classify the nature of the errors that each system made. In either case, if the two systems were strongly equivalent, then we would expect them to produce the same rank orderings of difficulty, and to also produce the same qualitative patterns of errors.

References:

1. Pylyshyn, Z.W. (1984). Computation and cognition. Cambridge, MA: MIT Press.

See Also:

Intermediate State Evidence | Protocol Analysis | Relative Complexity Evidence | Strong Equivalence

Extension

The extension of the term 'cat' is the class of cats.

What a term means has two components: i) the referent of the term--this is 'class' talk, and is the component of meaning to which 'extension' applies; and ii) the sense of the term, i.e., all of the psychological associations that one has with that term--this is 'concept' talk. This second sense is referred to as the 'intension' of the term.

Examples of the two components follow. The referent of the term 'cat' is all the cats; the sense of the term is related to your experience of cats, their history, their attributes, etc. A classic example is 'the morning star' and 'the evening star', both of which refer to the same thing, the planet Venus, but the sense of 'morning star' and 'evening star' is not the same. You cannot always substitute one of these terms for the other in a statement and retain the same truth value.

Other words sometimes used to pick out the distinctions between 'extension' and 'intension' are 'denotation' and 'connotation', respectively. Note the following definition by Cohen and Nagel:

A term [an element of a proposition] may be viewed in two ways, either as a class of objects (which may have only one member), or as a set of attributes or characteristics which determine the objects. The first phase or aspect is called the denotation or extension of the term, while the second is called the connotation or intension. The extension of the term 'philosopher' is 'Socrates', 'Plato', 'Thales', and the like; its intension is 'lover of wisdom', 'intelligent', and so on. (31)

The distinctions in the meaning of a term are important to clarify. Without such distinctions, no discussion of meaning in general can begin. If we wish to construct models and theories of human language and thought--and here talk of meaning necessarily enters--we need to make precise those issues and problems we specifically want to address.

Cohen, M. R. and Nagel, E. (1993). An Introduction to Logic. Indianapolis, Indiana: Hackett Publishing Company.

See Also:

Intension

Fluid Intelligence

Fluid intelligence is tied to biology. It is defined as our "on-the-spot reasoning ability, a skill not basically dependent on our experience" (Belsky, 1990, p. 125). Belsky (1990) indicates this type of intelligence is active when the central nervous system (CNS) is at its physiological peak.

Fluid intelligence is measured by the performance subtasks on the Wechsler Adult Intelligence Scale (WAIS).

Fluid intelligence is important to psychologists as it relates to the study of aging. There is ongoing intense debate among psychologists as to whether or not intelligence declines with aging. Belsky (1990) claims fluid intelligence "reaches a peak in early adulthood and then regularly declines." (p. 125) This is because of the physiological changes that accompany aging. "The development of CNS structures is exceeded by the rate of CNS breakdown." (Horn, 1970 as quoted in Belsky, 1990, p. 125)

References:

1. Belsky, J. K. (1990). The psychology of aging: Theory, research, and interventions. Pacific Grove, CA: Brooks/Cole.

2. Horn, J. (1970). Organization of data on life-span development of human abilities. In R. Goulet and P.B. Baltes (Eds.), Life-span developmental psychology: Research and theory. New York: Academic Press.

See Also:

Crystallized Intelligence | WAIS

The Formality Condition

The semantic properties of a representation are the properties it has due to its relationship with the world: properties such as being true, being a representation of something, or saying something about some object. On the other hand, the properties that the representation has in itself are its formal properties. Fodor (1980) defines a representation's formal properties negatively, by specifying what they are not: "Formal properties are the ones that can be specified without reference to such semantic properties as, for example, truth, reference, and meaning" (p. 227). Fodor stresses that formal properties are not syntactic properties. A representation can have formal properties, and a process can operate on those formal properties, without that representation having a syntax (p. 227); rotating an image on a screen, for instance, is an operation performed on the image's formal properties, even though the image does not have a syntax.

The point for a computational theory of mind, which takes mental processes to be formal operations on representations (and thus, for Fodor, takes the mind to be a "kind of computer"), is that such processes have access only to a representation's formal properties. Computational processes do not have any access to semantic properties; that is, to a representation's relationships with the world.

Thus the processes that operate on representations cannot operate on the basis of what a representation is a representation of, or whether it represents that thing correctly or not, but only on the character of the representation itself, its "shape" as it were. In this way the Formality Condition incurs what Putnam (1975) calls methodological solipsism.

"If mental processes are formal, then they have access only to the formal properties of such representations of the environment as the senses provide. Hence, they have no access to the semantic properties of such representations, including the property of being true, of having referents, or, indeed, the property of being representations of the environment." (Fodor, 1980, p. 231, Fodor's emphasis)

The solution to this methodological solipsism is to pair a computational psychology with what Fodor calls a naturalistic psychology: a theory of the relations between representations and the world, which fixes the semantic interpretations of representations' formal properties (p. 233). That is, a representation's formal properties must somehow mirror its semantic properties, so that operations on formal properties can at least be interpreted as saying something about some part of the world (whether or not that interpretation is correct, true, appropriate, etc.).
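To make the contrast concrete, here is a minimal sketch (the token names and the toy 'interpretation' table are hypothetical, not Fodor's): the operation below consults only a token's shape, never the mapping that relates the token to the world.

# Hypothetical sketch: formal vs. semantic properties of a representation.
# Semantic properties live in a mapping from tokens to things in the world;
# a purely formal process never consults this mapping.

interpretation = {
    "MORNING-STAR": "Venus",
    "EVENING-STAR": "Venus",
}

def formal_operation(token: str) -> str:
    # Operates only on the token's "shape" (its characters), with no access
    # to what, if anything, the token refers to.
    return token[::-1]

# The operation applies equally well to a token with a referent and to one
# with none at all.
print(formal_operation("MORNING-STAR"))   # RATS-GNINROM
print(formal_operation("PHLOGISTON"))     # NOTSIGOLHP (no referent required)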

References:

68. Fodor, J. (1980). Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology. In Representations (pp. 225-253). Cambridge, Massachusetts: MIT Press. A Bradford Book.

69. Putnam, H. (1975). "The Meaning of 'Meaning'". In K. Gunderson (Ed.), Minnesota Studies in the Philosophy of Science (pp. 131-193). Minneapolis: University of Minnesota Press.

See Also:

Semantics | Representation

Free Recall

Free recall is a basic paradigm used to study human memory. In a free recall task, a subject is presented a list of to-be-remembered items, one at a time. For example, an experimenter might read a list of 20 words aloud, presenting a new word to the subject every 4 seconds. At the end of the presentation of the list, the subject is asked to recall the items (e.g., by writing down as many items from the list as possible). It is called a free recall task because the subject is free to recall the items in any order that he or she desires.

The free recall task is of interest to cognitive science because it provided some of the basic information used to decompose the mental state term "memory" into simpler subfunctions ("primary memory", "secondary memory"). This is because the results of a free recall task were typically plotted as a serial position curve. This curve exhibited a recency effect and a primacy effect. The behavior of these two effects provided support to the hypothesis that the free recall task called upon both a short-term and a long-term memory.
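As a concrete illustration, the short sketch below (with invented words and recall data, not taken from any actual study) tabulates the proportion of subjects recalling the item at each study position, which is exactly what a serial position curve plots.

# Hypothetical sketch: computing a serial position curve from free recall data.
study_list = ["cat", "door", "lamp", "river", "chair", "stone", "grape", "cloud"]

# Each set holds one subject's freely ordered recalls (invented data).
recalls = [
    {"cat", "door", "grape", "cloud"},
    {"cat", "river", "cloud"},
    {"door", "stone", "grape", "cloud"},
]

def serial_position_curve(study_list, recalls):
    # Proportion of subjects who recalled the item at each study position.
    n = len(recalls)
    return [sum(item in r for r in recalls) / n for item in study_list]

for position, p in enumerate(serial_position_curve(study_list, recalls), start=1):
    print(f"position {position}: {p:.2f}")
# In real data, early positions (primacy) and late positions (recency)
# typically show the highest recall.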

See Also:

Primacy Effect | Recency Effect | Serial Position Curve | Short Term Memory

Functional Analysis

Functional analysis is a methodology that is used to explain the workings of a complex system. The basic idea is that the system is viewed as computing a function (or, more generally, as solving an information processing problem). Functional analysis assumes that such processing can be explained by decomposing this complex function into a set of simpler functions that are computed by an organized system of subprocessors. The hope is that when this type of decomposition is performed, the subfunctions that are defined will be simpler than the original function, and as a result will be easier to explain.

A very detailed treatment of functional analysis is provided by Cummins (1983). He proposes a three-stage methodology that defines functional analysis. In the first stage, the to-be-explained function is defined. In the second stage, analysis is performed: the to-be-explained function is decomposed into an organized set of simpler functions. This analysis can proceed recursively by decomposing some (or all) of the subfunctions into sub-subfunctions. In the third stage, analysis is stopped by subsuming the bottom level of functions. This means that the operation of each of these functions is explained by appealing to natural laws (e.g., mechanical or biological principles). If functional analysis is applied to an information processing system, then the level of subsumed functions defines the functional architecture for that information processor.

Functional analysis is important to cognitive science because it offers a natural methodology for explaining how information processing is being carried out. For instance, any "black box diagram" offered as a model or theory by a cognitive psychologist represents the result of carrying out the analytic stage of functional analysis. Any proposal about what constitutes the cognitive architecture can be viewed as a hypothesis about the nature of cognitive functions at the level at which these functions are subsumed.
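As a toy illustration only (not an example from Cummins), the sketch below decomposes a to-be-explained function, multiplication, into an organized set of simpler functions, and stops at a primitive whose operation would be subsumed under the machine's own arithmetic.

# Hypothetical sketch of functional decomposition.

def increment(x):
    # Treated as a primitive: not decomposed further, but explained by
    # appeal to the underlying hardware's arithmetic.
    return x + 1

def add(a, b):
    # Addition analysed as an organized repetition of the increment primitive.
    for _ in range(b):
        a = increment(a)
    return a

def multiply(a, b):
    # The top-level, to-be-explained function, analysed as repeated addition.
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

print(multiply(6, 7))  # 42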

References:

70. Cummins, R. (1983). The nature of psychological explanation. Cambridge, MA: MIT Press.

See Also:

Functional Architecture | Primitive | Ryle's Regress

Functional Architecture

The functional architecture can be viewed as the set of basic information processing capabilities available to an information processing system.

"Specifying the functional architecture of a system is like providing a manual that defines some programming language. Indeed, defining a programming language is equivalent to specifying the functional architecture of a virtual machine" (Pylyshyn, 1984, p. 92).

In other words, if it is assumed that cognition is the result of the brain's "running of a program", then the functional architecture is the language in which that program has been written.

The functional architecture is of interest to cognitive science because it offers an escape from Ryle's Regress (a.k.a. the homunculus problem). The functional architecture consists of a set of primitive operations or functions. This means that these basic functions cannot be explained by being further decomposed into less complex ("smaller") subfunctions. Instead, they must be explained by appealing to implementational properties (e.g., for human cognition, properties of the human brain). As a result, the functional architecture represents the point at which the decomposition of mental state terms into other mental state terms via functional analysis can stop. By specifying the functional architecture, one converts the black box descriptions that cognitivists create into explanations.
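The programming-language analogy can be made concrete with a small sketch (the instruction names and the subtraction example are hypothetical): the dictionary of primitives below plays the role of the "manual" defining a tiny virtual machine, and any higher-level function must ultimately be a program written in those primitives.

# Hypothetical sketch: a functional architecture as the instruction set of a
# toy stack machine.

PRIMITIVES = {
    "PUSH": lambda stack, arg: stack.append(arg),
    "ADD":  lambda stack, arg: stack.append(stack.pop() + stack.pop()),
    "NEG":  lambda stack, arg: stack.append(-stack.pop()),
}

def run(program):
    # Execute a program expressed entirely in the primitive operations.
    stack = []
    for op, arg in program:
        PRIMITIVES[op](stack, arg)
    return stack[-1]

def subtract(a, b):
    # Subtraction is not primitive; it is a program written in the
    # architecture's "language".
    return run([("PUSH", a), ("PUSH", b), ("NEG", None), ("ADD", None)])

print(subtract(9, 4))  # 5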

References:

71. Pylyshyn, Z.W. (1984). Computation and cognition. Cambridge, MA: MIT Press.

See Also:

Functional Analysis | Primitive | Ryle's Regress

Generalization

Klahr & Wallace (1982) felt that Piaget's theory of adaptation was not enough to explain cognitive development. They therefore developed a new theory, and posited that the mechanism behind development was generalization.

Klahr and Wallace divided generalization into three more specific categories: the time line, regularity detection, and redundancy elimination (Siegler, 1991). These three categories are described below.

The Time Line

The time line contains the data on which generalizations are based. In Klahr and Wallace's theory, whenever a system encounters a situation, it records the actions taken in response to that situation, the outcomes of those actions, and what new situations arose as a result. This recording of events ensures that the system keeps all the information about an event stored so that it can be referred back to in the future.

Regularity Detection

This process uses the contents of the time line to draw generalizations about experience. The system notes situations that are similar and notes where variations do not change the outcomes of situations.

Redundancy Elimination

This process improves efficiency by identifying processing steps that are unnecessary. In this way, it reaches a generalization that a less-complex sequence can achieve the same goal (Siegler, 1991).

Klahr and Wallace have developed a self-modifying computer simulation that models findings about children's thinking and can demonstrate these generalization processes.
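A highly simplified sketch of the three components follows (it is not Klahr and Wallace's actual simulation; the episode format and the rattle example are hypothetical).

# Hypothetical sketch: a time line of episodes, regularity detection over it,
# and redundancy elimination of steps that never change the outcome.

from collections import defaultdict

time_line = []  # complete record of (situation, actions, outcome) episodes

def record(situation, actions, outcome):
    time_line.append((situation, tuple(actions), outcome))

def detect_regularities():
    # Group the action sequences in the time line by the outcome they produced.
    by_outcome = defaultdict(list)
    for situation, actions, outcome in time_line:
        by_outcome[outcome].append(actions)
    return by_outcome

def eliminate_redundancy(regularities):
    # Keep, for each outcome, the shortest action sequence that achieved it.
    return {outcome: min(seqs, key=len) for outcome, seqs in regularities.items()}

record("rattle in reach", ["look", "reach", "grasp", "shake"], "rattling sound")
record("rattle in reach", ["reach", "grasp", "shake"], "rattling sound")
print(eliminate_redundancy(detect_regularities()))
# {'rattling sound': ('reach', 'grasp', 'shake')} -- "look" proved unnecessary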

References:

72. Klahr, D. (1982). Nonmonotone assessment of monotone development: An information processing analysis. In S. Strauss (Ed.), U-shaped behavioral growth. New York: Academic Press.

73. Siegler, R. (1991). Children's thinking. Englewood Cliffs, NJ: Prentice-Hall.

74. Vasta, R., Haith, M. M., & Miller, S. A. (1995). Child psychology: The modern science. New York, NY: Wiley.

See Also:

Adaptation | Equilibration

Graceful Degradation

In a symbolic system, removing part of the system results in a clear degradation of performance. Removing a symbol token results in the loss of the information stored in that token, and the loss of an operating procedure destroys the system's ability to perform the missing process. The fall in performance is sudden and clearly defined. In a connectionist system, performance does not fall sharply with either damage to the system or erroneous inputs. Instead, performance declines gradually, depending on the nature of the loss and the architecture of the system. This property means that connectionist models still function relatively error-free when the system has damage to its connections or units, or when the input stimulus is incomplete.
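The contrast can be sketched as follows (a toy illustration, not drawn from Bechtel and Abrahamsen): a value stored redundantly across many small weights is only blurred when a fraction of them is lost, whereas deleting the single symbolic entry loses the information outright.

# Hypothetical sketch: graceful vs. abrupt degradation.
import random

target = 10.0
weights = [target / 100] * 100       # distributed storage: 100 small contributions
symbolic_store = {"answer": target}  # symbolic storage: one discrete entry

def recall_distributed(weights, damage):
    # Randomly knock out a proportion of the connections, then read out.
    surviving = [w for w in weights if random.random() >= damage]
    return sum(surviving)

random.seed(0)
print(recall_distributed(weights, damage=0.2))  # roughly 8: degraded but usable
del symbolic_store["answer"]
print(symbolic_store.get("answer"))             # None: the information is simply gone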

References:

75. Bechtel, W., & Abrahamsen, A. (1991). Connectionism and the mind: An introduction to parallel processing in networks. Cambridge, MA: Blackwell.

See Also:

Content Addressable Memory | Functional Architecture | Parallel Distributed Processing Models | Spontaneous Generalisation | Symbolic Architecture

Hebbian Learning Rule

The Hebbian Learning Rule is a learning rule that specifies how much the weight of the connection between two units should be increased or decreased in proportion to the product of their activations. The rule builds on Hebb's (1949) learning rule, which states that the connections between two neurons might be strengthened if the neurons fire simultaneously. The Hebbian Rule works well as long as all the input patterns are orthogonal or uncorrelated. The requirement of orthogonality places serious limitations on the Hebbian Learning Rule. A more powerful learning rule is the delta rule, which utilizes the discrepancy between the desired and actual output of each output unit to change the weights feeding into it.
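In its simplest form the rule can be written as delta-w = eta * a_pre * a_post, where eta is a learning rate and the a terms are the two units' activations; a minimal sketch (with hypothetical activation values) follows.

# Minimal sketch of the Hebbian weight update: the weight change is
# proportional to the product of the two units' activations.

def hebbian_update(weight, pre_activation, post_activation, eta=0.1):
    return weight + eta * pre_activation * post_activation

w = 0.0
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:  # hypothetical trials
    w = hebbian_update(w, pre, post)
print(round(w, 2))  # 0.2 -- only the trials on which both units were active
                    # strengthened the connection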

References:

76. Bechtel, W., & Abrahamsen, A. (1993). Connectionism and the mind: An introduction to parallel processing in networks. Oxford, UK: Blackwell.

77. Hebb, D.O. (1949). The organization of behavior. New York: Wiley.

78. Rumelhart, D.E., & McClelland, J. L.(1986). Parallel distributed processing: Explorations in the microstructure of cognition, vol. 1: Foundations. Cambridge, MA: MIT Press.

See Also:

Learning Rule

Humor

There are many reasons why people find something humorous, which are reflected in the large number of theories on the subject. Humor has been related to aggression, incongruity, and surprise. The cognitive psychologist's interest in the subject is usually related to the notion that humor stems from a resolution of incongruity.

For example, consider this joke by W.C. Fields: "Do you believe in clubs for children?" "Only when kindness fails." Schultz (1974) offered a three-step theory of processing. In the first stage, the listener forms the initial (ultimately incorrect) interpretation of the ambiguous element (clubs = social groups). In the second step, the incongruous element is processed ("only when kindness fails"). In the final stage, the hidden meaning of the ambiguous element is perceived (clubs = sticks). The incongruity-resolution theory explains the fact that a joke previously encountered will seem less funny on subsequent exposure.

Similarly, Freud (1905, in Minsky 1985) suggested that humorous stories are a way of fooling our internal censors. A joke's power comes from a description that fits two different frames at once. The first meaning must be transparent and innocent, while the second meaning is disguised and reprehensible.

Although most cognitive psychologists have not extended their theorizing to humor, it does have an important cognitive aspect. In particular, cognitive theory helps provide an explanation of why verbal jokes are found amusing by looking at the comprehension processes involved.

References:

79. Kristal, L. (Ed.). (1981). ABC of psychology. London: Multimedia Publications.

80. Minsky, M. (1985). The society of mind. New York, NY: Simon & Schuster.

81. Schultz, T.R. (1974). Order and processing in humor appreciation. Canadian Journal of Psychology, 28, 409-420.

Imagery Debate

The imagery debate centres around the problem of what can be viewed as the primitives of cognition. Primitives serve as the foundation of the algorithmic level of the computational hierarchy. Presumably, it is these primitives which are implemented in the physical substrate of the brain.

The central question related to the imagery debate then is: Do images form the basis of all our higher cognition? If not, what does? Could propositions serve that function? Or both images and propositions? Or something altogether different?

References:

Kosslyn, S. M., Pinker, S., Smith, G., & Shwartz, S. P. (1979). On the demystification of mental imagery. The Behavioral and Brain Sciences, 2, 535-581.

Pylyshyn, Z. W. (1981). The imagery debate: Analogue media versus tacit knowledge. Psychological Review, 88, 16-45.

Anderson, J. R. (1978). Arguments concerning representations for mental imagery. Psychological Review, 85, 249-277.

Incidental Learning Paradigm

The incidental learning paradigm is an experimental paradigm used to investigate learning without intent. Using this paradigm, several groups of subjects are presented with the same list of items (e.g., 20 words), but each group is instructed to process the items in a different way; that is, each group performs a different activity or orienting task with the list. For example, groups might be asked to:

1. count the number of letters in each word (shallow processing)

2. name a rhyming word for each item (again shallow processing, but deeper than #1)

3. form an image of each word and rate the vividness of each image (deep processing).

Importantly, subjects are not told that there will be a subsequent test of memory. At the end of the list presentation, subjects are unexpectedly asked to recall as many of the words as possible. Processing information at a deeper level results in superior recall of that information (Eysenck, 1974).
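The structure of the design can be summarized in a small sketch; the word list and recall figures below are invented purely for illustration and are not Eysenck's data.

# Hypothetical sketch of an incidental learning design: same list, different
# orienting tasks, and an unannounced recall test at the end.

word_list = ["piano", "garden", "candle", "pillow", "marble"]

orienting_tasks = {
    "letter counting": "count the letters in each word",           # shallow
    "rhyme generation": "name a word that rhymes with each item",  # shallow, but deeper
    "imagery rating": "form an image of each word and rate it",    # deep
}

# Invented surprise-recall scores (words recalled per subject, by group).
recalled = {
    "letter counting": [1, 2, 1],
    "rhyme generation": [2, 3, 2],
    "imagery rating": [4, 4, 3],
}

for task, scores in recalled.items():
    mean = sum(scores) / len(scores)
    print(f"{task}: mean recall {mean:.1f} of {len(word_list)}")
# Deeper (imagery) processing is expected to produce the best recall.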

References:

82. Eysenck, M.W. (1974). Age differences in incidental learning. Developmental Psychology, 10, 936-941.

See Also:

Levels of Processing

Induction Learning

Inductive learning is essentially learning by example. The process itself ideally implies some method for drawing conclusions about previously unseen examples once learning is complete. More formally, one might state: Given a set of training examples, develop a hypothesis that is as consistent as possible with the provided data [1]. It is worthy of note that this is an imperfect technique. As Chalmers points out, "an inductive inference with true premises [can] lead to false conclusions" [2]. The example set may be an incomplete representation of the true population, or correct but inappropriate rules may be derived which apply only to the example set.

A simple demonstration of this type of learning is to consider the following set of bit-strings (each digit can only take on the value 0 or 1), each noted as either a positive or negative example of some concept. The task is to infer from this data (or "induce") a rule to account for the given classification:

- 1000101

+ 1110100

+ 0101

+ 1111

+ 10010

+ 1100110

- 100

+ 111111

- 00010

- 1

- 1101

+ 101101

+ 1010011

- 11111

- 001011

A rule one could induce from this data is that strings with an even number of 1's are "+", those with an odd number of 1's are "-". Note that this rule would indeed allow us to classify previously unseen strings (e.g., 1001 is "+").
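A minimal sketch of this induction step is given below: a candidate rule is proposed, retained only if it is consistent with every training example, and then applied to an unseen string. (The code simply hard-codes the even-parity hypothesis rather than searching a hypothesis space.)

# Minimal sketch: check a candidate rule against the training data, then
# use it to classify a previously unseen string.

training = [
    ("1000101", "-"), ("1110100", "+"), ("0101", "+"), ("1111", "+"),
    ("10010", "+"), ("1100110", "+"), ("100", "-"), ("111111", "+"),
    ("00010", "-"), ("1", "-"), ("1101", "-"), ("101101", "+"),
    ("1010011", "+"), ("11111", "-"), ("001011", "-"),
]

def candidate_rule(bitstring):
    # Hypothesis: an even number of 1s means "+", an odd number means "-".
    return "+" if bitstring.count("1") % 2 == 0 else "-"

# The hypothesis is retained only if it is consistent with all the examples.
assert all(candidate_rule(s) == label for s, label in training)

print(candidate_rule("1001"))  # "+" -- a previously unseen string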

Techniques for modeling the inductive learning process include: Quinlan's decision trees (results from information theory are used to partition data based on maximizing "information content" of a given sub-classification) [3], connectionism (most neural network models rely on training techniques that seek to infer a relationship from examples) and decision list techniques [4], among others.

References:

83. Adapted from lectures in a graduate course in representation & reasoning given by Dr. Peter van Beek, Department of Computing Science, University of Alberta.

84. Chalmers, A.F. (1976). What is this thing called science? Australia: University of Queensland Press.

85. Quinlan, J.R. (1993). C4.5: Programs for machine learning. San Mateo, CA: Morgan Kaufmann.

86. Rivest, R.L. (1987). Learning decision lists. Machine Learning, 2(3), 229-246.

See Also

Connectionism | Inductive Inference | Learning Rule | Machine Learning

Inductive (Pragmatic) Inference

Inferences are made when a person (or machine) goes beyond available evidence to form a conclusion. An inductive inference is one which is likely to be true because of the state of the world. Unlike deductive inferences, inductive inferences do yield conclusions that increase the semantic information over and above that found in the initial premises.

However, in the case of inductive inferences, we cannot be sure that our conclusion is a logical result of the premises, but we may be able to assign a likelihood to each conclusion.

Similar to deductive inference, induction can be broken down into three stages. The first stage is to understand the observation or stated information. The second is to form a hypothesis that attempts to describe this information in relation to the person's general knowledge. The resulting conclusion goes beyond the initial information by incorporating one's general knowledge in the result. The third step is to evaluate the validity of the conclusion that was reached.

References:

87. Eysenck, M.W. (Ed.). (1990). The Blackwell dictionary of cognitive psychology. Cambridge, MA: Basil Blackwell.

88. Johnson-Laird, P. N. (1993). Human and machine thinking. Hillsdale, NJ : Lawrence Erlbaum Associates.

See Also:

Deductive Inference

Intension

What a term means has two components: i) the referent of the term--this is 'class' talk, and is the component of meaning to which 'extension' applies; and ii) the sense of the term, i.e., all of the psychological associations that one has with that term--this is 'concept' talk. This second component is referred to as the 'intension' of the term.

Examples of the two components follow. The referent of the term 'cat' is all the cats; the sense of the term is related to your experience of cats, their history, their attributes, etc. A classic example is 'the morning star' and 'the evening star', both of which refer to the same thing, the planet Venus, although the sense of 'morning star' and the sense of 'evening star' are not the same. Substituting one of these terms for the other in a statement does not always preserve the statement's truth value.

Other words sometimes used to pick out the distinctions between 'extension' and 'intension' are 'denotation' and 'connotation', respectively. Note the following definition by Cohen and Nagel:

A term [an element of a proposition] may be viewed in two ways, either as a class of objects (which may have only one member), or as a set of attributes or characteristics which determine the objects. The first phase or aspect is called the denotation or extension of the term, while the second is called the connotation or intension. The extension of the term 'philosopher' is 'Socrates', 'Plato', 'Thales', and the like; its intension is 'lover of wisdom', 'intelligent', and so on. (31)

The distinctions in the meaning of a term are important to clarify. Without such distinctions, no discussion of meaning in general can begin. If we wish to construct models and theories of human language and thought--and here talk of meaning necessarily enters--we need to make precise those issues and problems we specifically want to address.

Cohen, M. R. and Nagel, E. (1993). An Introduction to Logic. Indianapolis, Indiana: Hackett Publishing Company.

See Also:

Extension

Intentionality

Intentionality refers to "aboutness." Beings having intentionality have propositional attitudes: they have beliefs, knowledge, hopes, dreams, desires, etc., about things. Whenever we come across "that" in an utterance or piece of writing, we know that we are dealing with something intentional. (Notice the intentionality of the preceding statement.) If we hear someone say "ouch," "oops," "hey," etc., these expressions do not reveal what sets humans apart from the rest of the animals. Intentionality does; it is considered by most to be a singularly human feature.

This issue is important to the extent that any theory of consciousness, or of mind, must explain how intentionality is possible.

'Intentional' is not to be confused with 'intensional', spelled with an 's'; the latter refers to the meaning of a term (along with 'extensional'). Intentional, intensional, and extensional can be paired loosely in the following way: intentional/propositional, intensional/conceptual, and extensional/perceptual.

See Also:

Intension | Extension

The Intentional Stance

The intentional stance refers to treating a system as if it has intentions, irrespective of whether it actually does. By treating a system as if it were a rational agent, one is able to predict the system's behaviour. First, one ascribes to the system the beliefs it ought to have given its abilities, history and context. Then one attributes to the system the desires it ought to have given its survival needs and its means of fulfilling them. One can then predict the system's behaviour as the actions a rational agent would undertake to further its goals, given those beliefs and desires. Dennett gives three main reasons for taking the intentional stance. First, it fits well with our understanding of the processes of natural selection and evolution in complex environments. Second, it has been shown to be an accurate method of predicting behaviour. Third, it is consistent with our folk psychology of behaviour.

References:

89. Dennett, D.C. (1987). The intentional stance. Cambridge, MA: MIT Press.

See Also:

Intermediate State Evidence

One of the key goals of cognitive science is to develop theories that are strongly equivalent with respect to to-be-explained systems. This requires that evidence be collected to defend the claim that the model and the to-be-explained system are carrying out the same procedures to compute a function.

One type of evidence that can be used to support this claim is intermediate state evidence. This involves observations of the intermediate steps, and/or the intermediate states of knowledge, that the two systems pass through as they move from being given a problem to providing an answer.

For example, if one was using a Turing machine as a model, then an immediate source of intermediate state evidence would be what the machine does to its tape with each processing step.
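The Turing-machine case can be sketched very simply (the toy machine below is hypothetical and does nothing more than overwrite 0s with 1s): logging the tape after every step yields exactly the kind of intermediate state record described above.

# Hypothetical sketch: a toy tape machine whose intermediate tape states are
# recorded after every processing step.

def run_machine(tape):
    tape = list(tape)
    head, trace = 0, []
    while head < len(tape) and tape[head] != "_":  # "_" marks a blank cell
        if tape[head] == "0":
            tape[head] = "1"
        head += 1
        trace.append("".join(tape))  # intermediate state evidence
    return trace

for state in run_machine("0100_"):
    print(state)
# 1100_
# 1100_
# 1110_
# 1111_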

In studying human subjects, intermediate state evidence is not directly available. However, one method that might provide some evidence about these intermediate states is protocol analysis.

References:

90. Pylyshyn, Z.W. (1984). Computation and cognition. Cambridge, MA: MIT Press.

See Also:

Protocol Analysis | Strong Equivalence

Intrusion Errors

In the recall portion of a memory task, intrusion errors occur when the subject includes items that were not on the original list.

See Also:

Cued Recall | Free Recall

Learning Rule

Learning rules, for a connectionist system, are algorithms or equations that govern changes in the weights of the connections in a network. One of the simplest learning procedures for two-layer networks is the Hebbian Learning Rule, which is based on a rule initially proposed by Hebb in 1949. Hebb's rule states that the simultaneous excitation of two neurons results in a strengthening of the connections between them. More powerful learning rules incorporate an error reduction or error correction procedure (e.g., the delta rule, the generalized delta rule, back propagation). Learning rules incorporating an error reduction procedure use the discrepancy between the desired output pattern and the actual output pattern to change (improve) the network's weights during training. The learning rule is typically applied repeatedly to the same set of training inputs across a large number of epochs, or training loops, with error gradually reduced across epochs as the weights are fine-tuned.
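For contrast with the Hebbian rule, here is a minimal sketch of an error-correcting (delta rule) update for a single output unit, trained on a hypothetical pattern set over many epochs.

# Minimal sketch of the delta rule: the weight change depends on the
# discrepancy between the desired and the actual output.

def delta_update(weights, inputs, desired, eta=0.1):
    actual = sum(w * x for w, x in zip(weights, inputs))
    error = desired - actual
    return [w + eta * error * x for w, x in zip(weights, inputs)]

# Hypothetical training set for a two-input unit (target equals the first input).
patterns = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0), ([1.0, 1.0], 1.0)]

weights = [0.0, 0.0]
for epoch in range(50):  # error shrinks gradually across epochs
    for inputs, desired in patterns:
        weights = delta_update(weights, inputs, desired)
print([round(w, 2) for w in weights])  # close to [1.0, 0.0]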

References:

91. Bechtel, W., & Abrahamsen, A. (1993). Connectionism and the mind: An introduction to parallel processing in networks. Oxford, UK: Blackwell.

92. Hebb, D.O. (1949). The organization of behavior. New York: Wiley.

93. Rumelhart, D.E., & McClelland, J. L.(1986). Parallel distributed processing: Explorations in the microstructure of cognition, vol. 1: Foundations. Cambridge, MA: MIT Press.

See Also:

Hebbian Learning Rule | Parallel Distributed Processing Models

Levels of Processing

Levels of Processing is an influential theory of memory proposed by Craik and Lockhart (1972), which rejected the idea of the dual store model of memory. That popular model postulated that the characteristics of a memory are determined by its "location" (i.e., a fragile memory trace in the short-term store [STS] and a more durable memory trace in the long-term store [LTS]). Instead, Craik and Lockhart proposed that information could be processed in a number of different ways, and that the durability or strength of the memory trace is a direct function of the depth of processing involved. Moreover, depth of processing was postulated to fall on a shallow-to-deep continuum.

Shallow processing (e.g., processing words based on their phonemic and orthographic components) leads to a fragile memory trace that is susceptible to rapid forgetting. On the other hand, deep processing (e.g., semantic or meaning-based processing) results in a more durable memory trace.

A typical paradigm employed to investigate the Levels of Processing theory is the incidental learning paradigm. Results reveal superior recall for items processed deeply compared to items processed at a more shallow level (Eysenck, 1974; Hyde & Jenkins, 1969).

Craik and Lockhart also distinguished between two kinds of rehearsal: maintenance rehearsal and elaborative rehearsal. Of the two, elaborative rehearsal is the more effective in producing a durable memory trace.

References:

94. Craik, F.I.M., & Lockhart, R.S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behaviour, 11, 671-684.

95. Eysenck, M.W. (1974). Age differences in incidental learning. Developmental Psychology, 10, 936-941.

96. Hyde, T.S., & Jenkins, J.J. (1969). Differential effects of incidental tasks on the organization of recall of a list of highly associated words. Journal of Experimental Psychology, 82, 472-481.

See Also:

Elaborative Rehearsal | Incidental Learning Paradigm | Maintenance Rehearsal

Linguistic Determination

Linguistic determination is the argument that language directly affects the way that people think about and see the world. Linguistic determination is also known as the Whorfian hypothesis or the Sapir-Whorf hypothesis (Sapir, 1968; Whorf, 1956). Whorf provides the example of the Eskimo words for snow. The Eskimo people are inhabitants of the Arctic. Whereas the English language has only one word for snow, the Eskimo language has many words for snow. Whorf argues that this language for snow allows the Eskimo people to "see" snow differently than speakers of other languages who do not have as many words for snow. That is, Eskimo people see subtle differences in snow that other people do not.

Researchers have studied colour perception across different linguistic groups to find support for the Whorfian hypothesis (Berlin & Kay, 1969; Heider, 1972; Heider & Oliver, 1973; Miller & Johnson-Laird, 1976; Rosch, 1974). The evidence indicates that people of all cultures perceive colour in the same way. The tentative conclusion is that language does not determine the way that people think. It is possible that language, while not determining the way that people think, may influence the way that people think. Exactly how language might influence thought is as yet unclear.

Long-Term Potentiation

Long-term potentiation "occurs following the activation of a synapse by high-frequency stimulation of the presynaptic neuron" (Pinel, 1993, p. 515).

Long-Term Potentiation (LTP) was originally discovered in Aplysia. Recently, however, LTP has also been found to occur in the mammalian nervous system, specifically the hippocampus. This is an extremely important finding as it suggests that LTP could be the cellular basis of the neural implementation of learning and memory, especially when combined with t