INTERNAL REPRESENTATIONS: AN APPROACH TO AI METHODOLOGY
Slide 1
INTERNAL REPRESENTATIONS: An approach to AI methodology.
Course Seminar by Sumesh M. K. (05408008)
Course: CS621, offered by Prof. Pushpak Bhattacharyya, IITB.
Slide 2
Abstract: The theory of internal representations, which is at the heart of cognitive science, has assumed centre stage in contemporary discussions of mind and artificial intelligence. I discuss two problems concerning internal representations as they arise in the artificial intelligence framework. The first problem, the 'meaning barrier', is that of forming the correct structure of representation, which is regarded as 'the central task facing the artificial-intelligence community'. The different models of concept-based perception are suggested as attempts at solving this problem. A machine with a developed concept-based perception can rightly be taken as a 'thinking machine'. The second problem, the 'experience barrier' as I would call it, is that of having perceptual experience. To address this problem, an alternative account of the notion of internal representations, 'derivations', and a 'derivational framework' of mind are offered. I translate this approach into the framework of artificial intelligence to explore the possibility of an 'experiencing machine': a machine that not only 'thinks' but thinks like human beings. I conclude with some remarks on certain independent evidence in favour of the derivational approach outlined.
Slide 3
The Notion 'Artificial Intelligence'
Intelligence
Mind
The problem in the study of mind: lack of data?
Theory of mind, the black-box analogy, transcendental arguments
Intuitive suggestions about mind from all intellectual disciplines
Cases: arts, painting, music, mathematics, philosophy, etc.
The movie The Matrix
Slide 4 (no text content)
Slide 5
René Magritte on The Human Condition:
"I placed in front of a window, seen from a room, a painting representing exactly that part of the landscape which was hidden from view by the painting. Therefore, the tree represented in the painting hid from view the tree situated behind it, outside the room. It existed for the spectator, as it were, simultaneously in his mind, as both inside the room in the painting, and outside in the real landscape. Which is how we see the world: we see it as being outside ourselves even though it is only a mental representation of it that we experience inside ourselves."
Slide 6
# Kant's model of mind
Perception (passive) & conception (active)
Chomsky's Universal Grammar (UG)
Genie's case
Hubel and Wiesel
Neuroscience
# What about AI? What model of mind is presupposed?
Slide 7
Where do we place Turing's question?
# The problem of mental representations
High-level perception (where concepts begin to play their role):
- What is the structure of representations?
- How are they formed?
- How are they influenced by context? (PNAS)
- How can perceptions radically reshape themselves when necessary?
- Concepts, meaning, understanding?
# Modularity vs. non-modularity
The possibility of a representation module (a single "correct" representation in all situations, no flexibility)
Slide 8 (no text content)
Slide 9
# AI and the problem of internal representations
What is the correct structure for representations?
- Predicate calculus? Frames? Scripts? Semantic networks? …
Our focus: short-term active representations, as these are the direct product of perception.
The problem of relevance: to determine which parts of the data are relevant to a given representation, a complex filtering process is required.
The problem of organization: how are these data put into the correct form for the representation?
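The two sub-problems above, relevance and organization, can be made concrete with a small sketch. This is a toy illustration of my own, not code from the seminar; the scene data, the `fruit-candidate` frame, and all key names are invented for the example.

```python
# Toy sketch of the two sub-problems of building a short-term representation:
# relevance (filter the raw data) and organization (cast it into a frame).
# All names and data here are invented for illustration.

RAW_SCENE = {                       # raw perceptual data, mostly irrelevant
    "sky_color": "grey",
    "object_shape": "round",
    "object_color": "red",
    "object_size_cm": 8,
    "wind_speed": 3,
}

RELEVANT_TO_FRUIT = {"object_shape", "object_color", "object_size_cm"}

def filter_relevant(data, relevant_keys):
    """The relevance problem: keep only the data relevant to this representation."""
    return {k: v for k, v in data.items() if k in relevant_keys}

def organize_as_frame(data):
    """The organization problem: put the filtered data into named frame slots."""
    return {
        "frame": "fruit-candidate",
        "slots": {
            "shape": data.get("object_shape"),
            "color": data.get("object_color"),
            "size_cm": data.get("object_size_cm"),
        },
    }

frame = organize_as_frame(filter_relevant(RAW_SCENE, RELEVANT_TO_FRUIT))
print(frame)
```

The hard part, of course, is that a real perceiver does not get `RELEVANT_TO_FRUIT` handed to it; deciding what is relevant is exactly what the slide calls a complex filtering process.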
Slide 10
# The traditional approach to AI
- Select a preferred type of high-level representational structure.
- Select the data assumed to be relevant to the task (a human programmer hand-codes a representation).
The problem of representation is ignored.
E.g., face detection.
But cf. machine learning and speech processing (which stop short of modeling at the conceptual level).
Slide 11
# The "meaning barrier"
- The formation of appropriate representations lies at the heart of the high-level cognitive abilities. The problem of high-level perception forms the central task facing the AI community: the task of understanding how to draw meaning out of the world.
- On one side of the barrier are models of low-level perception, not yet complex enough to be 'meaningful'.
- On the other side, high-level cognitive modeling has started with conceptual representations (predicate logic, nodes in a semantic network, etc.) in which meaning is already built in.
And there is a gap between the two.
Slide 12
# Traditional AI is characterized by an objectivist view of perception, representation, etc.
The Physical Symbol System Hypothesis (PSSH; Newell & Simon '76), upon which most of the traditional AI enterprise has been built, posits that thinking occurs through the manipulation of symbolic representations, which are composed of atomic symbolic primitives. Such representations are rigid, fixed, binary entities.
# Recently, however, connectionist models have appeared whose distributed representations are highly context-dependent (Rumelhart & McClelland '86). Here there are no representational primitives in internal processing, as each representation is a vector in a multidimensional space whose position can adjust flexibly to changes in context.
- Recurrent connections (Elman), classifier-system models (Holland), etc.
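The contrast between rigid symbolic tokens and context-dependent vectors can be sketched in a few lines. This is a toy illustration, not any actual connectionist model: the vectors, the `blend` parameter, and the "bank" example are invented, and the linear blend merely stands in for how a real network's internal state shifts with context.

```python
# Toy contrast between the two views of representation described above.
# Symbolic view: a concept is a fixed atomic token, identical in every context.
# Connectionist view: a concept is a vector whose position shifts with context.
# All vectors and the blend rule are invented for illustration.

def symbolic_rep(concept, context):
    # The context is ignored: the token is rigid and fixed.
    return concept

def distributed_rep(base_vector, context_vector, blend=0.3):
    # The representation is pulled part of the way toward the current context.
    return [(1 - blend) * b + blend * c
            for b, c in zip(base_vector, context_vector)]

bank_base = [1.0, 0.0, 0.0]     # "bank" in isolation
money_ctx = [0.0, 1.0, 0.0]     # financial context
river_ctx = [0.0, 0.0, 1.0]     # river context

# The symbolic token is the same in both contexts...
print(symbolic_rep("BANK", money_ctx) == symbolic_rep("BANK", river_ctx))  # True
# ...but the distributed representation differs with context.
print(distributed_rep(bank_base, money_ctx) ==
      distributed_rep(bank_base, river_ctx))                               # False
```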
Slide 13 (no text content)
Slide 14
# Case study: the problem of representation is overlooked
BACON: a program as a model of scientific discovery.
SME, the Structure-Mapping Engine: a computational model of analogy-making; e.g., the analogy of the solar system.
# To deal with the greater flexibility of human perception and representation, integration of task-oriented processes with high-level perception is necessary.
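The spirit of structure mapping can be shown with a toy sketch, assuming the standard solar-system/atom example; this is not the actual SME algorithm, and the relation names are invented. The point it illustrates is that the analogy is carried by shared relations between objects, not by shared attributes of the objects themselves.

```python
# Toy sketch in the spirit of structure mapping (NOT the real SME algorithm).
# Relations are (relation, arg1, arg2) triples; relation names are invented.

solar_system = {("attracts", "sun", "planet"),
                ("more_massive", "sun", "planet"),
                ("revolves_around", "planet", "sun")}

atom = {("attracts", "nucleus", "electron"),
        ("more_massive", "nucleus", "electron"),
        ("revolves_around", "electron", "nucleus")}

def map_objects(base, target):
    """Pair base objects with target objects that fill the same
    argument slot of an identically named relation."""
    mapping = {}
    for rel, a1, a2 in base:
        for rel2, b1, b2 in target:
            if rel == rel2:
                mapping[a1] = b1
                mapping[a2] = b2
    return mapping

print(map_objects(solar_system, atom))
```

Because every shared relation places the sun and the nucleus in the same slot, the sketch maps sun to nucleus and planet to electron; the real SME additionally scores competing mappings for structural consistency and systematicity.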
Slide 15
Micro-domains?
E.g., Chapman's "Sonja" program (1991)
The Hofstadter group's Copycat/Tabletop architecture
Shrager (1990)
Further models:
Connectionist networks, classifier systems, ...
Slide 16
Summary of problem 1
Concepts and perception together make cognition possible, but researchers in AI often try to bypass perception while modeling cognition. A system cannot have cognition unless it has the processes that build up appropriate perceptual representations.
Integrating perceptual processes into a cognitive model leads to flexible internal representations. Recently, many models that embody this idea have been suggested. These models take AI closer to the workings of the mind.
Slide 17
Problem 2
The experiential aspect of a mental state:
Two elements: objective and subjective
The subjective (slide 23)
A problem for RTM (the Representational Theory of Mind):
Intuitions: the inverted spectrum
: the brain in a vat
Slides 18-21 (no text content)
Slide 22
The concept of representation: derivation
Key features: arbitrariness (no emphasis on content), systematicity, consumption, fluidity.
Internal representations are internal states whose functional role is to bear certain specifiable contents. Further, we want to respect the associated idea that much of the experiential element may be grounded not in the rigid activity of inner vehicles, but in complex interactions involving neural, bodily and environmental factors.
E.g., Christopher Longuet-Higgins' robot
The H2O model
Slide 23
Self-reference
Feedback loop
Gödel's theorem
The second problem is how to account for the experiential/phenomenal properties of mental representations.
E.g., my conscious tsunami experience, its "bluish splashy way for me", has two parts: one represents the approaching giant waves (primary, qualitative content); the second derivationally represents the experience, the awareness of it (secondary, subjective content). The idea is that these elementary components integrate to form a unitary conscious state.
Slide 24 (no text content)
Slide 25
The argument for the derivationality of phenomenality (P) is:
1) A mental state M of subject S has P when, and only when, S is aware of M;
2) Awareness of X requires a mental representation of X; therefore,
3) M has P when, and only when, S has a mental state M*, such that M* derivationally represents M.
The mechanism that makes this kind of mental representation possible, I conjecture, is a kind of 'loop': an interaction between M and M*. Here M reaches up to M* and influences it, while being influenced by M*. Even a single mental representation works as a network. In this sense the mental representation does not stand for other (external) objects. It operates by means of the direct causal properties of M and M*, and employs feedback and feedforward loops.
Any theoretical framework that takes mental representations to be arbitrary, systematic, consumable, fluid, self-referential information-bearing structures can be called derivational.
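The conjectured M/M* loop can be given a crude numerical sketch, entirely my own construction: two scalar states stand in for M and M*, each update step implements their mutual influence (feedback and feedforward), and iteration settles them into a single joint value, a stand-in for the unitary conscious state.

```python
# Toy numerical sketch of the conjectured M / M* loop (my construction,
# not part of the argument above). M carries the primary content; M*
# derivationally represents M; each is updated under the influence of
# the other until the pair settles into one joint state.

def derivational_loop(m, m_star, coupling=0.5, steps=50):
    """Iterate the mutual influence of M and M* until they settle together."""
    for _ in range(steps):
        m_next      = (1 - coupling) * m + coupling * m_star   # M influenced by M*
        m_star_next = (1 - coupling) * m_star + coupling * m   # M* tracking M
        m, m_star = m_next, m_star_next
    return m, m_star

m, m_star = derivational_loop(m=1.0, m_star=0.0)
print(m, m_star)   # the two states converge to a common value
```

The convergence of the two coupled states is only a metaphor for the integration claimed in the argument, but it makes the feedback/feedforward structure of the conjecture explicit.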
Slide 26 (no text content)
Slide 27
I now wish to go back to the example, to see whether the new tools are of any use in addressing it. My conscious tsunami experience, its "bluish splashy way for me", as we have seen, has two components. The first element represents the approaching giant waves (primary, qualitative, yet objective content) and is taken as M in the argument for derivationality. The second element derivationally represents the experience, the awareness of it (secondary, subjective content); it is taken as M* in the argument and in turn influences M in the manner of a loop.
The idea is that these derivations from M to M* and vice versa form a unitary conscious state.
I wish to add some empirical evidence in support of this approach. The recent model of a single neuron as a network suggests that representations are neither IDEAS as Hume conceived them nor the static communicative entities many representationalists argued for.
Slide 28
Conclusions:
Recognizing the centrality of perceptual processes makes AI more in tune with the findings of cognitive science. It yields flexible, human-perception-like representations and actions. Within a domain and within a level, these AI representations work as internal derivations. With added features such as self-reference and feedback loops, they may work exactly like mental derivations.
Slide 29
Core references (besides CS621 notes & references, and the homepages and websites of theorists and teams working in cognitive science):
Chomsky, N. (2000) New Horizons in the Study of Language and Mind. Cambridge: Cambridge University Press.
Chomsky, N. (2005) "Three Factors in Language Design". Linguistic Inquiry 36 (Winter 2005), pp. 1-22.
Darwin, Charles (1859) The Origin of Species.
Fodor, J. A. (1983) The Modularity of Mind. Cambridge, MA: Bradford Books, MIT Press.
Fodor, J. A. (1998) Concepts: Where Cognitive Science Went Wrong. Oxford: Oxford University Press.
Fodor, J. A. (2000) The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology. Cambridge, MA: Bradford Books, MIT Press.
Haugeland, John (ed.) (1997) Mind Design II: Philosophy, Psychology, Artificial Intelligence, revised and enlarged edition. Cambridge, MA: Bradford Books, MIT Press.
Slide 30
Hofstadter, D. (1979) Gödel, Escher, Bach: An Eternal Golden Braid. London: The Harvester Press.
Hofstadter, D. (1995) Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. Penguin.
Kant, Immanuel (1787) The Critique of Pure Reason.
Nagel, Thomas (1974) "What Is It Like to Be a Bat?" Philosophical Review.
Chua, Hannah Faye, Boland, Julie E., and Nisbett, Richard E. (2005) "Cultural variation in eye movements during scene perception". PNAS 102(35), 30 August 2005, p. 12633.
Turing, Alan M. (1950) "Computing Machinery and Intelligence". Mind 59(236), pp. 433-460.