The Neural Proposition (slides, posted 26 Feb 2023)

The Neural Proposition: Structures for Cognitive Systems

Michael S. P. Miller

Overview

• The Development of Thought

• Assimilation and Accommodation

• Affirmations and Negations

• Neural Propositions

• Inference

• PAM-P2

• Mind Servers

The Development of Thought

• The Process of Equilibration

• The Construction of Structures

Terminology

Equilibration = maintaining stability in a system

(Equilibrium + Calibration)

Maintaining Stability

Equilibration is about how to maintain stability in a system when disturbances inevitably occur: viz.

• what mechanisms maintain the stability,

• which structures are modified, and

• how to modify the structures.

Maintaining Stability

“Cognitive equilibriums are closer to those stationary but dynamic states, mentioned by Prigogine, with [environmental] exchanges capable of ‘building and maintaining a functional and structural order in an open system,’ and they resemble above all the static biological equilibriums (homeostasis) or dynamic equilibriums (‘homeorhesis’).”

[Piaget, 1977]

Maintaining Stability

[Diagram: equilibrium states ranging from near equilibrium to far from equilibrium, spanning dynamic disequilibrium and static disequilibrium]

The Development of Thought

“The central idea is that knowledge proceeds neither solely from the experience of objects nor from an innate programming performed in the subject, but from successive constructions, the result of constant development of new structures.” – Jean Piaget

Assimilation & Accommodation

Additional Terminology

Element = a system or environmental entity

Scheme = a proposition about elements

Assimilation = adding elements to a scheme

Accommodation = mutating a cloned scheme
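These four definitions can be sketched in code. This is a minimal illustrative sketch: the `Scheme` class and its method names are assumptions for exposition, not structures from the slides.

```python
import copy

class Scheme:
    """A proposition about elements (illustrative sketch)."""
    def __init__(self, elements):
        self.elements = list(elements)

    def assimilate(self, element):
        # Assimilation: add a new element to the existing scheme.
        self.elements.append(element)
        return self

    def accommodate(self, element, mutate):
        # Accommodation: clone the scheme, mutate the clone, and let the
        # clone absorb the element; the original scheme is retained.
        clone = copy.deepcopy(self)
        mutate(clone)
        clone.elements.append(element)
        return clone
```

Note that `accommodate` returns a new scheme rather than altering the original, which mirrors the later point in the deck that when B is mutated into B2, the original B is retained as B1.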

Elements: interior (B2, C) and exterior (B'')

Assimilation: [sequence of figure-only slides]

Accommodation: [sequence of figure-only slides]

A Little More Terminology

Interaction Cycle = a functional dependency established among external and/or internal schemes, subsystems, or the entire system (i.e., the totality)

Interaction cycles are also called “schemes of assimilation”.

Interaction Cycles

              Physical (Causal)   Logical

Random        Type I A            Type I B

Deliberate    Type II A           Type II B

Experimental  Type III A          Type III B

Interaction Cycles

“Like the organisms, cognitive systems are actually both open in the sense that they undergo exchanges with the milieu and closed insofar as they undergo ‘cycles’. Let us call A, B, C, etc., the parts forming such a cycle and A’, B’, C’, etc., the elements of the milieu required to feed the system.”

[Piaget, 1977]

Elements of an Interaction Cycle

Assimilating Elements

A Stable Cycle (Equilibrium)

New exterior element B'' replaces B'

A Disturbed Cycle (Disequilibrium)

If System Rejects B'' → End of Cycle

If System Accommodates B''

The Cycle is Stabilized (Equilibrium)

Affirmations & Negations

Now B → B1 + B2

• B is mutated into B2 in order to accommodate B''. However, the original B (now called B1) is also retained.

• Therefore

B1 = B · non-B2 and

B2 = B · non-B1

• The negations must be made explicit.

Euler & Venn Diagrams

Affirmations and Negations

A = A1 + A2 (Implicit Negations)

A1 = A · non-A2 (Implicit Negations)

A2 = A · non-A1 (Implicit Negations)

A = A1 + non-A1 + A2 + non-A2 (Implicit Negations)

What if “non” was an entity?

• Typically non-A1 and non-A2 are functions (operations) on A1 and A2.

• What if we explicitly made non-A1 and non-A2 entities (instead of functions)?

• Why? So that the negations can also become candidates for assimilation (in a computational system).

A = A1 + non-A1 + A2 + non-A2 (Explicit Negations)

A1 ≠ A · non-A2 (Explicit Negations)

A2 ≠ A · non-A1 (Explicit Negations)

Affirmations and Negations

Moving negation from function to entity
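The move from function to entity can be sketched directly. This is an illustrative assumption about representation, not PAM-P2's actual code: `Entity` and `Negation` are hypothetical class names.

```python
class Entity:
    """A system or environmental element (illustrative sketch)."""
    def __init__(self, name):
        self.name = name

class Negation(Entity):
    """'non-A1' reified as a first-class entity rather than computed by a
    negation function, so a scheme can assimilate it like any element."""
    def __init__(self, subject):
        super().__init__("non-" + subject.name)
        self.subject = subject

a1 = Entity("A1")
non_a1 = Negation(a1)   # an ordinary entity, now a candidate for assimilation
```

Because `non_a1` is an object rather than the result of applying an operation, it can participate in assimilation just like `a1` itself.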

More Terminology

Thesis = affirmation

Antithesis = negation

Monad = a single thesis or antithesis

Dyad = a thesis + antithesis pair


Neural Propositions

Monads

Dyads

Schemes

Even More Terminology

Reification = hypostatization = entity making

Moneme = a scheme reified by a monad

Dyneme = a scheme reified by a dyad

Monemes & Dynemes

Neural Proposition

m1 m2 m3

R({m1, m2, m3})

Still More Terminology

Differentiation = Mutation = Accommodation

Integration = Recombination = Crossover

Differentiation Integration

Differentiation where A → A1 + A2
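The two operations can be sketched as list transformations. The halving rule and one-point crossover below are illustrative assumptions, not the deck's actual algorithms.

```python
def differentiate(scheme):
    """Differentiation (mutation/accommodation): split scheme A into
    sub-schemes A1 and A2 (illustrative midpoint split)."""
    mid = len(scheme) // 2
    return scheme[:mid], scheme[mid:]

def integrate(s1, s2):
    """Integration (recombination/crossover): build a new scheme from
    parts of two existing schemes (illustrative one-point crossover)."""
    return s1[:len(s1) // 2] + s2[len(s2) // 2:]
```

The naming follows the slide's equivalences: differentiation behaves like mutation, integration like genetic crossover.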

Activation Dimensions

Fact Activation Flow

Activation flows laterally from afferent terms to efferent terms, and upwards from terms to reifiers.

Goal Activation Flow

Activation flows laterally from efferent terms to afferent terms, and downwards from reifiers to terms.
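The two flow directions can be sketched as follows. The class name, the ordered-term representation, and the copy-forward propagation rule are illustrative assumptions, not PAM-P2's implementation.

```python
class NeuralProposition:
    """Sketch: a reifier R over ordered terms m1..mn."""
    def __init__(self, terms):
        self.terms = list(terms)            # afferent-to-efferent order
        self.term_act = [0.0] * len(terms)
        self.reifier_act = 0.0

    def fact_flow(self, stimulus):
        # Fact mode: activation enters at the afferent end, flows laterally
        # toward the efferent end, then upward to the reifier.
        self.term_act[0] = stimulus
        for i in range(1, len(self.terms)):
            self.term_act[i] = self.term_act[i - 1]
        self.reifier_act = max(self.term_act)

    def goal_flow(self, goal):
        # Goal mode: activation enters at the reifier, flows downward to the
        # terms, then laterally from the efferent end back toward the
        # afferent end.
        self.reifier_act = goal
        self.term_act[-1] = goal
        for i in range(len(self.terms) - 2, -1, -1):
            self.term_act[i] = self.term_act[i + 1]
```

The point of the sketch is only the directionality: facts push activation afferent-to-efferent and terms-to-reifier, while goals push it reifier-to-terms and efferent-to-afferent.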

Merge Types

Inference

A Sample Taxonomy

Sample Analogy

Sample Deduction

Sample Perception

Sample Induction

Sample Prediction

Sample Attempts

PAM-P2

PAM-P2 is a cognitive architecture influenced by

• Database Semantics (Hausser)

• Dynamic Interlaced Hierarchies (Michalski)

• Behavioral Schemas (Drescher)

• Ontology Formation (Indurkhya, Pickett & Oates)

• Temporal Activation (Miller)

• Microtheories (Lenat)

• Cognitive System Patterns (Miller)

• Equilibration (Piaget)

PAM-P2 – Database Semantics

Database semantics views natural language communication as occurring between embodied cognitive agents having: (1) effectors and sensors for both verbal and nonverbal action and recognition, (2) a database, and (3) a communication procedure with two modes, speaker and hearer, to encode or decode the database content into signs of language (Hausser, 2010). The model that PAM-P2 constructs is a database of neural propositions. With the appropriate effectors and sensors, this model can be communicated to other cognitive agents.
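A minimal sketch of the two communication modes over a shared database format follows. The `Agent` class, `speak`/`hear` names, and the word-tuple encoding are assumptions for illustration, not Hausser's or PAM-P2's actual interfaces.

```python
class Agent:
    """Sketch of a database-semantics agent: a database of propositions
    plus speaker and hearer modes."""
    def __init__(self):
        self.database = []              # model content: tuples of words

    def speak(self, proposition):
        # Speaker mode: encode database content into signs of language.
        self.database.append(tuple(proposition))
        return " ".join(proposition)

    def hear(self, utterance):
        # Hearer mode: decode signs of language back into database content.
        proposition = tuple(utterance.split())
        self.database.append(proposition)
        return proposition

alice, bob = Agent(), Agent()
bob.hear(alice.speak(("dog", "barks")))   # model content is communicated
```

After the exchange both agents hold the same proposition, which is the sense in which a model of neural propositions "can be communicated to other cognitive agents."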

PAM-P2 – Dynamic Interlaced Hierarchies

Michalski (1993) considers the possibility of connecting (interlacing) several disassociated shallow concept hierarchies with “traces”. He further considers the effects of dynamically manipulating the trace links connecting the hierarchies, finding that many simple forms of inductive, analogical, and deductive inference occur as a result. PAM-P2 performs similar multi-strategy inference by synthesizing new neural propositions and carefully integrating or differentiating them in specific ways with existing neural propositions in the model. Traces correspond to Entities in PAM-P2.

PAM-P2 – Dynamic Interlaced Hierarchies

Reproduced from Michalski (1993)

PAM-P2 – Behavioral Schemas

The schema system of Drescher (1991) exhibited Jean Piaget’s stages of infant development and introduced the notion of behavioral scheme reifiers, which enabled the schema system to learn the causes of failed behaviors through impediment ascription (or “failure prediction,” as Hammond (1989) calls it). PAM-P2 uses impediment ascription to form “problems” that “solutions” then correct during regulation. The neural propositions also utilize Drescher’s notion of a scheme reifier.

PAM-P2 – Ontology Formation

Indurkhya (1992) has explored synthesizing an ontology from sensory datasets. Pickett and Oates have built the Cruncher algorithm (2005) which forms ontologies directly from instance data sets and the UberCruncher (2010) which constructs ontologies via analogy discovery. The inductors of the PAM-P2 system are intended to leverage such components and algorithms to incrementally construct the scheme memory.

PAM-P2 - Microtheories

A micro-theory in CYC [32] is a package of propositions about some aspect of the world. It may reflect a fantasized state of the world in a daydream as in Mueller (1990), an ontology synthesized from sensory datasets as in Indurkhya (1992), or a reasoning context for an assumption-based truth maintenance system (ATMS) as in Morris and Nado (1986) or de Kleer (1986). PAM-P2 uses a “viewpoint”, which is a state for reasoning and simulation (though it does not implement a full ATMS as in the Knowledge Engineering Environment or the Automated Reasoning Tool). The viewpoints enable PAM-P2 to perform daydreaming, a form of state-space search. The system also utilizes a case-based HTN problem solver which performs plan-space search.

PAM-P2 – Cognitive System Patterns

Miller (2012) has reviewed over twenty distinct cognitive systems and has identified several patterns these systems exhibit, including observation, coordination, reminding, reaction, deliberation, motivation, simulation, regulation, and compensation. These patterns form the edifice of the PAM-P2 architecture.

PAM-P2 – Inference (Coordination)

PAM-P2 – Memory

PAM-P2 - Neural Propositions

The many influences of PAM-P2 contribute to the idea of the neural proposition—a computational structure highly interlaced with others of its kind, clustered into viewpoints, collectively comprising the heterarchical content of a database, forming a model that supports the representational challenges of cognitive system patterns.

Mind Servers

Mind Types

Individual & Unity Minds – Can measure development

Hive Mind – Can measure evolution

                              Controls            Controls
                              Single Device       Multiple Devices

Single Mind w/ Private KB     Individual Mind     Unity Mind

Multiple Minds w/ Shared KB   Dissociative Mind   Hive Mind

The Development of Thought

“The central idea is that knowledge proceeds neither solely from the experience of objects nor from an innate programming performed in the subject, but from successive constructions, the result of constant development of new structures.” – Jean Piaget

Questions

E-mail: [email protected]

Website: piagetmodeler.tumblr.com