The Portable Chris Langan

More than 300,000 words of Langan content.

Table of Contents

Papers
- Self-Reference and Computational Complexity
- Cheating the Millennium

Essays
- A Very Brief History of Time
- The Theory of Theories
- An Interdisciplinary Approach to Reality
- On Absolute Truth and Knowledge
- Introduction to the CTMU
- Physics and Metaphysics

Megaboard Discussions
- CTMU Q and A
- CTMU and God
- A Dialogic Response to an Atheist
- Discussion with CTMU Conference Members
- Discussion on the Ultranet List
- Discussion with a Mega Society East Member
- Discussion at the Utne Cafe
- Discussion between Christopher Michael Langan and Russell Fred Vaughan, Ph.D.

Colloquy Discussions
- God, the Universe, and Theories of Everything
- On Nagarjuna and the Heart Sutra
- Altered States and Psi Phenomena (Part One)
- Altered States and Psi Phenomena (Part Two)
- On Society and Socialization

ISCID Discussions
- Cosmogony, Holography and Causality
- Critical Questions
- CTMU and the Axiom of Choice
- Evolutionary Intelligence
- Karl D. Stephen: Tegmark's Parallel Universes: A Challenge to Intelligent Design?
- On Progress, Readdressing Reality Theory, and Information in the Holographic Universe
- Organisms Using GA vs. Organisms Being Built by GAs
- T-Duality Universe
- Virtues of Scientists

Miscellaneous
- Brains As Models of Intelligence
- On the Paradoxical Connection Between Money and Brains
- On the Differences between People, Birds, and Bees
- The Resolution of Economic Paradox: Toward a Theory of Economic Relativity
- Letter from Chris Langan
- Noesis Discussion #1
- Noesis Discussion #2
- In Clarification of the CTMU and its Applications in Noesis
- Chris Langan to Rick Rosner
- Reply to Chris Langan on Isomorphisms, Models and Objectivity
- Superscholar Interview
- A Prologue to Buffoonery
- A Day in the Life of JoJo Einstein, Street Clown (Parts 1 and 2)

Response to CTMU Critiques
- Another Crank Comes to Visit: The Cognitive-Theoretic Model of the Universe
- Abandon All Hope, Ye Who Enter This Thread
- Wikipedia Debate

Self-Reference and Computational Complexity

© 2001 by Christopher Michael Langan

[This paper contains background information on the P=?NP problem. Being the first part of a much longer work-in-progress, it is for introductory purposes only. If anyone finds any errors (and there are sure to be a few, given the time crunch under which this was written), kindly let me know. The bibliography can wait for a future installment (meanwhile, thanks to Stephen Cook, A. K. Dewdney and Robert Rogers).]

Introduction: The Problematic Role of Self-Reference in Mathematics

Self-reference is an ever-popular topic in recreational mathematics, where it is usually characterized as a pathological rarity that must be treated with great care in order to avoid vicious paradoxes like "this statement is false". To some extent, this reflects the history of mathematics; scarcely had Bertrand Russell presented his eponymous paradox in set theory, namely "the set of all sets that do not include themselves as elements", when he proposed to outlaw it with a stupefyingly complex theory that boiled down to a blanket prohibition on self-inclusion and its semantic equivalent, self-reference. Since then, mathematicians have regarded themselves as having two choices with regard to self-reference: follow Russell's prescription and avoid it, or use it (with great care) for the express purpose of extracting paradoxical inconsistencies from, and thereby disproving, certain especially problematic conjectures.

Mathematical reasoning equates to the direct or indirect derivation of theorems from primitive axioms. Ordinarily, a mathematical conjecture is proven correct by direct (deductive or inductive) derivation within an axiomatic system, or by showing that its negation leads to a contradiction or an absurdity within the system; similarly, a conjecture is proven incorrect by either directly deriving its negation, finding a counterexample, or showing that it leads to a contradiction or an absurdity, again within a single axiomatic system. Where each of these maneuvers is confined to a formal axiomatic system, and where the symbols, formulae and rules of inference of a formal system are taken to refer only to each other, they unavoidably involve self-reference. Indeed, any algebraically closed language treated apart from external meaning is self-referential by definition, and so is any process relying on the properties of such a language. To the extent that mathematics consists of such languages, mathematical reasoning is such a process. It follows that mathematics as a whole is implicitly self-referential in nature.

But certain limits are maintained nonetheless. For example, in order to avoid introducing a forbidden subjective component into what is supposed to be an objective enterprise (the quest for "absolute" mathematical truth), at no point is a mathematician ever supposed to be caught referring to himself or his own thought processes as part of a mathematical proof. Even when deliberately employing self-reference in proving a conjecture, the mathematician must punctiliously externalize and thereby objectivize it, hygienically confining it to a formula or abstract machine model. By confining self-reference to the object level of discourse, the mathematician hopes to ensure the noncircularity of a higher level of discourse from which the predicates "true" and "false" can be downwardly assigned with no troublesome rebound, a level hopefully inhabited by the mathematician himself. By avoiding self-reference on the "business" level of discourse, he hopes to avoid having his own reasoning falsified "from above" on the basis of what would otherwise be its circularity. In short, the trapper does not propose to be caught in his trap.

Unfortunately, this is a philosophically dubious position. It begins to unravel as we approach the foundations of mathematics, where the object is to trace the roots and account for the very origins of mathematical reasoning. At this stage of explanation, it becomes impossible to avoid the realization that in order to be consistent, mathematics must possess a kind of algebraic closure, and to this extent must be globally self-referential. Concisely, closure equals self-containment with respect to a relation or predicate, and this equates to self-reference. E.g., the self-consistency of a system ultimately equates to the closure of that system with respect to consistency, and this describes a scenario in which every part of the system refers consistently to other parts of the system (and only thereto). At every internal point (mathematical datum) of the system "mathematics", the following circularity applies: "mathematics refers consistently to mathematics". So mathematics is distributively self-referential, and if this makes it globally vulnerable to some kind of implacable "meta-mathematical" paradox, all we can do in response is learn to live with the danger. Fortunately, it turns out that we can reason our way out of such doubts, but only by admitting that self-reference is the name of the game.

When a language self-refers, it stratifies or separates into levels, with reference flowing directionally from metalanguage to language. E.g., each of the following statements "this statement is about itself", "the subject of this statement is this statement", and "this formula x is a description of x" is actually a statement and the object of a statement, with statement and object occupying the metalanguage and object levels respectively. The operative rule in such cases is that reference never flows upward from object to statement, but only downward (from metalanguage to object) or laterally (from object to object, by virtue of the expression of these objects within a higher-level metalanguage mediating their mutual influence). This stratification is very important from a proof-theoretic standpoint, as the following example shows.

Theorem: "This statement is false" is false.

Proof: If the statement in quotes is indeed false, then it is true. On the other hand, if it is true, then it is false. This is a contradiction. Since the quoted statement generates a contradiction, it is logically inconsistent and therefore false. (Q.E.D.)

But wait! Unfortunately, if the quoted statement is false, then it is true (as stated in the proof). This would seem to contradict not only the overall statement including the quoted statement, i.e. the "theorem", but the proof as well, unless we have a rule saying that the statement in quotes can refer to neither the overall statement of which it is part, nor to the proof of the overall statement. In that case, it can invalidate only itself, which is exactly what it is taken to be doing, and can do so only within a metalanguage capable of expressing the reflexive self-invalidating relationship. It should be noted that technically, "this statement is false" is invalid on purely formal grounds; it is in fact a forbidden instance of self-reference. But since it is analogous to any statement that implies its own negation in an axiomatic context - and such statements are routinely dealt with in mathematics without immediate concern for their "bad syntax" - its clarity makes it valuable for illustrative purposes.

In the above example, self-reference is confined to a formula that pronounces itself false. Because this formula refers negatively to its own veracity, it is a self-contained paradox attempting to double as its own falsifying metalanguage and thus possessing a whole new level of "falsehood". But aside from true or false, what else could a formula say about itself? Could it pronounce itself, say, unprovable? Let's try it: "This formula is unprovable". If the given formula is in fact unprovable, then it is true (and therefore a theorem). But sadly, we cannot recognize it as such without a proof. On the other hand, suppose it is provable. Then it is false (because its provability contradicts what it states of itself) and yet true (because provable)! It seems that we still have the makings of a paradox: a statement that is "provably unprovable" and therefore absurd.

But what if we now introduce a distinction between levels of proof, calling one level the basic or "language" level and the other (higher) level the "metalanguage" level? Then we would have either a statement that can be metalinguistically proven to be linguistically unprovable, and thus recognizable as a theorem conveying valuable information about the limitations of the basic language, or a statement that cannot be metalinguistically proven to be linguistically unprovable, which, though uninformative, is at least not a paradox. Presto: self-reference without the possibility of paradox! In the year 1931, an Austrian mathematical logician named Kurt Gödel actually performed this magic trick for the entertainment and edification of the mathematical world.
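In standard modern notation (not Langan's own), the construction just sketched can be compressed as follows, with Prov_N the provability predicate of the object-level system N and the corner quotes denoting the Gödel number of G; the reasoning about G is carried out in the metalanguage:

\[
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_N(\ulcorner G \urcorner)
\]
\[
N \vdash G \;\Rightarrow\; N \vdash \mathrm{Prov}_N(\ulcorner G \urcorner) \ \text{and}\ N \vdash \neg\,\mathrm{Prov}_N(\ulcorner G \urcorner) \;\Rightarrow\; N \ \text{is inconsistent.}
\]

So if N is consistent, G is unprovable in N; and since G asserts exactly its own unprovability, the metalanguage concludes that G is true. This is the informative, non-paradoxical outcome described above.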

Undecidability: the Power of Paradox

As part of his formalist program for 20th century mathematics, the eminent German mathematician David Hilbert had wanted to ascertain that mathematics is "complete" when formalized in a system like the predicate calculus, i.e., that all true theorems (and no false ones) can be proven. But after helpfully setting out to show that every true formula expressed in the predicate calculus is provable, Kurt Gödel sadly arrived at a different conclusion. As it turns out, not even arithmetic is complete in this sense. In any consistent formal system containing arithmetic, there are true but unprovable statements. Although their truth can be directly apprehended, they cannot be formally derived from a finite set of axioms in a stepwise fashion.

First, Gödel posited axioms and rules of inference for deriving new true formulas from true formulas already in hand. These were just the axioms and rules of the first-order predicate calculus, comprising the logic of proof throughout the mathematical world. To these he adjoined the Peano postulates of standard arithmetic, a set of axioms and rules for the natural numbers. These consist of 6 basic axioms incorporating the equality symbol "=", 3 additional axioms to define the meaning of that symbol, and a rule of induction corresponding to the meta-level rule of inference adjoined to the six basic axioms of the predicate calculus. The predicate calculus and the Peano postulates, together comprising 15 axioms and 2 rules of inference, define a powerful and comprehensive system for the expression and proof of arithmetical truth.
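For reference, here is one standard first-order presentation of the Peano postulates (the grouping differs slightly from the tally given above, and the equality axioms supplied by the underlying logic are omitted):

\[
\begin{aligned}
&\forall x\,\neg(Sx = 0) && \text{(0 is not a successor)}\\
&\forall x\,\forall y\,(Sx = Sy \rightarrow x = y) && \text{(successor is injective)}\\
&\forall x\,(x + 0 = x),\quad \forall x\,\forall y\,(x + Sy = S(x + y)) && \text{(addition)}\\
&\forall x\,(x \cdot 0 = 0),\quad \forall x\,\forall y\,(x \cdot Sy = x \cdot y + x) && \text{(multiplication)}\\
&\bigl[A(0) \wedge \forall x\,(A(x) \rightarrow A(Sx))\bigr] \rightarrow \forall x\,A(x) && \text{(induction schema)}
\end{aligned}
\]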

After defining his arithmetical system, Gödel's next step was to enable the formulation of self-referential statements within it. Because arithmetic refers to numbers, this meant assigning numbers to arithmetical formulae. He accomplished this by (1) assigning a natural number or "code number" from {1, 2, ..., 15} to each of the 15 distinct logical and arithmetical symbols employed in the system; (2) numbering the symbols in each formula from left to right with the consecutive prime "placeholder" numbers 2, 3, 5, 7, ...; (3) raising each prime placeholder number to a power equal to the code number of the symbol in that place; and (4) multiplying all of the resulting (large but computable) numbers together to get the Gödel number of the formula (note that this procedure is reversible; given any Gödel number, one may compute the expression it encodes by factoring it into a unique set of prime factors). In addition, Gödel numbered proofs and partial derivations, i.e. deductive sequences of r consecutive formulae, with the products of r numbers 2^m · 3^n · 5^p · ..., where the bases are the first r primes and the exponents m, n, p, ... are the Gödel numbers of the first, second, third, ... formulae in the sequence.
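A toy sketch of the numbering scheme just described, in Python: each symbol's code becomes the exponent of a consecutive prime, and factoring the product recovers the formula. The symbol-to-code table here is hypothetical (Gödel's own assignment was fixed by his 15-symbol alphabet).

# Toy Godel numbering along the lines described above: assign each symbol a
# code number, raise the k-th prime to the code of the k-th symbol, and
# multiply. Factoring the result recovers the original formula, so the
# encoding is reversible. The symbol table below is illustrative only.

from sympy import prime, factorint   # prime(k) = k-th prime; factorint(n) factors n

SYMBOL_CODES = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5, "+": 6}  # hypothetical codes

def godel_number(formula: str) -> int:
    n = 1
    for position, symbol in enumerate(formula, start=1):
        n *= prime(position) ** SYMBOL_CODES[symbol]
    return n

def decode(n: int) -> str:
    code_to_symbol = {v: k for k, v in SYMBOL_CODES.items()}
    factors = factorint(n)                      # {prime: exponent}
    return "".join(code_to_symbol[factors[p]] for p in sorted(factors))

g = godel_number("S0=S0")
print(g)             # a large but computable integer
print(decode(g))     # "S0=S0" -- the encoding is reversible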

In this way, every arithmetical predicate expressible by a formula or sequence of formulae is Gödel-numbered, including the predicates "is a proof of (some formula)" and "(this formula) cannot be proven" [equivalent formulations with universal and existential quantifiers: "for all numbers x, x is not the Gödel number of a proof of (this formula)"; "there does not exist a number that is the Gödel number of a proof of (this formula)"]. It is important to realize that these predicates actually correspond to real numeric relationships "isomorphically hybridized" with logical relationships. The biological flavor of this terminology is not accidental, for the isomorphism guarantees that numeric relationships are heritable as their logical isomorphs are derived. When one logical formula is derived by substitution from another, they become numerically related in such a way that a distinctive numeric predicate of the Gödel number of the ancestral formula is effectively inherited by the Gödel number of the descendant formula. This maintains a basis for consistency, ensuring that the negation of the numeric predicate of a formula cannot be derived from the numeric predicate of the formula itself.

Let us be a bit more specific. Gödel's idea was to express the logical syntax of arithmetic, which is ordinarily formulated in terms of logical and arithmetical symbols, in terms of pure numeric relationships. To do this, the various logical relationships within and among syntactic formulae must be mirrored by numeric relationships among the (Gödel-numeric) images of these formulae under Gödel's logic-to-number mapping. The key property that allows this mirroring to occur is called representability. An n-ary numeric relation R(x1, ..., xn) is representable in the first-order Peano arithmetic N iff there is in N a formula A(a1, ..., an) with n free variables such that for all natural numbers k1, ..., kn, the following conditions hold:

1. If R(k1, ..., kn) holds, then |--N A(k1, ..., kn), i.e. A(k1, ..., kn) is provable in N

2. If R(k1, ..., kn) does not hold, then |--N ~A(k1, ..., kn)

In this case, we say that A(a1, ..., an) represents the numeric relation R. E.g., let R be the "less than" relation among natural numbers. Then R is representable in N because there is a formula x ...

CAMU in Camo

Before we explore the conspansive SCSPL model in more detail, it is worthwhile to note that the CTMU can be regarded as a generalization of the major computation-theoretic current in physics, the CAMU. Originally called the Computation-Theoretic Model of the Universe, the CTMU was initially defined on a hierarchical nesting of universal computers, the Nested Simulation Tableau or NeST, which tentatively described spacetime as stratified virtual reality in order to resolve a decision-theoretic paradox put forth by Los Alamos physicist William Newcomb (see Noesis 44, etc.). Newcomb's paradox is essentially a paradox of reverse causality with strong implications for the existence of free will, and thus has deep ramifications regarding the nature of time in self-configuring or self-creating systems of the kind that MAP shows it must be. Concisely, it permits reality to freely create itself from within by using its own structure, without benefit of any outside agency residing in any external domain.

Although the CTMU subjects NeST to metalogical constraints not discussed in connection with Newcomb's Paradox, NeST-style computational stratification is essential to the structure of conspansive spacetime. The CTMU thus absorbs the greatest strengths of the CAMU - those attending quantized distributed computation - without absorbing its a priori constraints on scale or sacrificing the invaluable legacy of Relativity. That is, because the extended CTMU definition of spacetime incorporates a self-referential, self-distributed, self-scaling universal automaton, the tensors of GR and its many-dimensional offshoots can exist within its computational matrix.

An important detail must be noted regarding the distinction between the CAMU and CTMU. By its nature, the CTMU replaces ordinary mechanical computation with what might better be called protocomputation. Whereas computation is a process defined with respect to a specific machine model, e.g. a Turing machine, protocomputation is logically "pre-mechanical". That is, before computation can occur, there must (in principle) be a physically realizable machine to host it. But in discussing the origins of the physical universe, the prior existence of a physical machine cannot be assumed. Instead, we must consider a process capable of giving rise to physical reality itself...a process capable of not only implementing a computational syntax, but of serving as its own computational syntax by self-filtration from a realm of syntactic potential. When the word "computation" appears in the CTMU, it is usually to protocomputation that reference is being made.

It is at this point that the theory of languages becomes indispensable. In the theory of computation, a "language" is anything fed to and processed by a computer; thus, if we imagine that reality is in certain respects like a computer simulation, it is a language. But where no computer exists (because there is not yet a universe in which it can exist), there is no "hardware" to process the language, or for that matter the metalanguage simulating the creation of hardware and language themselves. So with respect to the origin of the universe, language and hardware must somehow emerge as one; instead of engaging in a chicken-or-egg regress involving their recursive relationship, we must consider a self-contained, dual-aspect entity functioning simultaneously as both. By definition, this entity is a Self-Configuring Self-Processing Language or SCSPL. Whereas ordinary computation involves a language, protocomputation involves SCSPL.

Protocomputation has a projective character consistent with the SCSPL paradigm. Just as all possible formations in a language - the set of all possible strings - can be generated from a single distributed syntax, and all grammatical transformations of a given string can be generated from a single copy thereof, all predicates involving a common syntactic component are generated from the integral component itself. Rather than saying that the common component is distributed over many values of some differential predicate - e.g., that some distributed feature of programming is distributed over many processors - we can say (to some extent equivalently) that many values of the differential predicate - e.g. spatial location - are internally or endomorphically projected within the common component, with respect to which they are "in superposition". After all, difference or multiplicity is a logical relation, and logical relations possess logical coherence or unity; where the relation has logical priority over the reland, unity has priority over multiplicity. So instead of putting multiplicity before unity and pluralism ahead of monism, CTMU protocomputation, under the mandate of a third CTMU principle called Multiplex Unity or MU, puts the horse sensibly ahead of the cart.

To return to one of the central themes of this article, SCSPL and protocomputation are metaphysical concepts. Physics is unnecessary to explain them, but they are necessary to explain physics. So again, what we are describing here is a metaphysical extension of the language of physics. Without such an extension linking the physical universe to the ontological substrate from which it springs - explaining what physical reality is, where it came from, and how and why it exists - the explanatory regress of physical science would ultimately lead to the inexplicable and thus to the meaningless.

Spacetime Requantization and the Cosmological Constant

The CTMU, and to a lesser extent GR itself, posits certain limitations on exterior measurement. GR utilizes (so-called) intrinsic spacetime curvature in order to avoid the necessity of explaining an external metaphysical domain from which spacetime can be measured, while MAP simply states, in a more sophisticated way consistent with infocognitive spacetime structure as prescribed by M=R and MU, that this is a matter of logical necessity (see Noesis/ECE 139, pp. 3-10). Concisely, if there were such an exterior domain, then it would be an autologous extrapolation of the Human Cognitive Syntax (HCS) that should properly be included in the spacetime to be measured. [As previously explained, the HCS, a synopsis of the most general theoretical language available to the human mind (cognition), is a supertautological formulation of reality as recognized by the HCS. Where CTMU spacetime consists of HCS infocognition distributed over itself in a way isomorphic to NeST - i.e., of a stratified NeST computer whose levels have infocognitive HCS structure - the HCS spans the laws of mind and nature. If something cannot be mapped to HCS categories by acts of cognition, perception or reference, then it is HCS-unrecognizable and excluded from HCS reality due to nonhomomorphism; conversely, if it can be mapped to the HCS in a physically-relevant way, then it is real and must be explained by reality theory.]

Accordingly, the universe as a whole must be treated as a static domain whose self and contents cannot expand, but only seem to expand because they are undergoing internal rescaling as a function of SCSPL grammar. The universe is not actually expanding in any absolute, externally-measurable sense; rather, its contents are shrinking relative to it, and to maintain local geometric and dynamical consistency, it appears to expand relative to them. Already introduced as conspansion (contraction qua expansion), this process reduces physical change to a form of "grammatical substitution" in which the geometrodynamic state of a spatial relation is differentially expressed within an ambient cognitive image of its previous state. By running this scenario backwards and regressing through time, we eventually arrive at the source of geometrodynamic and quantum-theoretic reality: a primeval conspansive domain consisting of pure physical potential embodied in the self-distributed "infocognitive syntax" of the physical universe - i.e., the laws of physics, which in turn reside in the more general HCS.

Conspansion consists of two complementary processes, requantization and inner expansion. Requantization downsizes the content of Planck's constant by applying a quantized scaling factor to successive layers of space corresponding to levels of distributed parallel computation. This inverse scaling factor 1/R is just the reciprocal of the cosmological scaling factor R, the ratio of the current apparent size d_n(U) of the expanding universe to its original (Higgs condensation) size d_0(U) = 1. Meanwhile, inner expansion outwardly distributes the images of past events at the speed of light within progressively-requantized layers. As layers are rescaled, the rate of inner expansion, and the speed and wavelength of light, change with respect to d_0(U) so that relationships among basic physical processes do not change - i.e., so as to effect nomological covariance. The thrust is to relativize space and time measurements so that spatial relations have different diameters and rates of diametric change from different spacetime vantages. This merely continues a long tradition in physics; just as Galileo relativized motion and Einstein relativized distances and durations to explain gravity, this is a relativization for conspansive antigravity (see Appendix B).
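A purely illustrative numeric sketch of the bookkeeping just described (the layer sizes are made up, and holding c_n * R_n constant is just one simple way to realize the covariance condition, consistent with the relation c_n/c_0 = R_0/R_n given in Appendix B):

# Illustrative numbers only: for a few successive layers, track the cosmological
# scaling factor R_n = d_n(U)/d_0(U), the inverse requantization factor 1/R_n,
# and a correspondingly rescaled inner-expansion rate c_n chosen so that
# c_n * R_n stays constant (a toy stand-in for "nomological covariance").

C0 = 1.0                                # inner-expansion rate at the original scale d_0(U) = 1
apparent_sizes = [1.0, 2.0, 4.0, 8.0]   # hypothetical d_n(U) for successive layers

for n, d_n in enumerate(apparent_sizes):
    R_n = d_n / apparent_sizes[0]       # cosmological scaling factor
    requant = 1.0 / R_n                 # inverse scaling factor applied to layer n
    c_n = C0 * requant                  # rescaled rate of inner expansion
    print(f"layer {n}: R={R_n:.1f}  1/R={requant:.3f}  c_n={c_n:.3f}  c_n*R_n={c_n * R_n:.1f}")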

Conspansion is not just a physical operation, but a logical one as well. Because physical objects unambiguously maintain their identities and physical properties as spacetime evolves, spacetime must directly obey the rules of 2VL (2-valued logic distinguishing what is true from what is false). Spacetime evolution can thus be straightforwardly depicted by Venn diagrams in which the truth attribute, a high-order metapredicate of any physical predicate, corresponds to topological inclusion in a spatial domain corresponding to specific physical attributes. I.e., to be true, an effect must be not only logically but topologically contained by the cause; to inherit properties determined by an antecedent event, objects involved in consequent events must appear within its logical and spatiotemporal image. In short, logic equals spacetime topology.

This 2VL rule, which governs the relationship between the Space-Time-Object and Logico-Mathematical subsyntaxes of the HCS, follows from the dual relationship between set theory and semantics, whereby predicating membership in a set corresponds to attributing a property defined on or defining the set. The property is a qualitative space topologically containing that to which it is logically attributed. Since the laws of nature could not apply if the sets that contain their arguments and the properties that serve as their parameters were not mutually present at the place and time of application, and since QM blurs the point of application into a region of distributive spacetime potential, events governed by natural laws must occur within a region of spacetime over which their parameters are distributed.
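A minimal set-theoretic sketch (hypothetical names and regions) of the containment rule just stated: a property is modeled as the set of spacetime points over which it is distributed, and an effect inherits the property only if its own region lies inside that set.

# Hypothetical illustration of "attribution as containment": the cause's
# attribute is a set of spacetime points, and a consequent event inherits the
# attribute only if its region is contained in that set.

cause_region  = {(x, t) for x in range(0, 10) for t in range(0, 10)}   # attribute's domain
effect_region = {(x, t) for x in range(2, 5)  for t in range(3, 6)}    # consequent event

def inherits(effect: set, attribute_domain: set) -> bool:
    """Set-theoretic containment stands in for logical/topological attribution."""
    return effect <= attribute_domain

print(inherits(effect_region, cause_region))   # True: the effect lies inside the cause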

Conspansive domains interpenetrate against the background of past events at the inner expansion rate c, defined as the maximum ratio of distance to duration by the current scaling, and recollapse through quantum interaction. Conspansion thus defines a kind of "absolute time" metering and safeguarding causality. Interpenetration of conspansive domains, which involves a special logical operation called unisection (distributed intersection) combining aspects of the set-theoretic operations union and intersection, creates an infocognitive relation of sufficiently high order to effect quantum collapse. Output is selectively determined by ESP interference and reinforcement within and among metrical layers.

Because self-configurative spacetime grammar is conspansive by necessity, the universe is necessarily subject to a requantizative accelerative force that causes its apparent expansion. The force in question, which Einstein symbolized by the cosmological constant lambda, is all but inexplicable in any nonconspansive model; that no such model can cogently explain it is why he later relented and described lambda as the greatest blunder of his career. By contrast, the CTMU requires it as a necessary mechanism of SCSPL grammar. Thus, recent experimental evidence - in particular, recently-acquired data on high-redshift Type Ia supernovae that seem to imply the existence of such a force - may be regarded as powerful (if still tentative) empirical confirmation of the CTMU.

Metrical Layering

In a conspansive universe, the spacetime metric undergoes constant rescaling. Whereas Einstein required a generalization of Cartesian space embodying higher-order geometric properties like spacetime curvature, conspansion requires a yet higher order of generalization in which even relativistic properties, e.g. spacetime curvature inhering in the gravitational field, can be progressively rescaled. Where physical fields of force control or program dynamical geometry, and programming is logically stratified as in NeST, fields become layered stacks of parallel distributive programming that decompose into field strata (conspansive layers) related by an intrinsic requantization function inhering in, and logically inherited from, the most primitive and connective layer of the stack. This "storage process" by which infocognitive spacetime records its logical history is called metrical layering (note that since storage is effected by inner-expansive domains which are internally atemporal, this is to some extent a misnomer reflecting weaknesses in standard models of computation).

The metrical layering concept does not involve complicated reasoning. It suffices to note that "distributed" (as in "event images are outwardly distributed in layers of parallel computation by inner expansion") effectively means "of 0 intrinsic diameter with respect to the distributed attribute". If an attribute corresponding to a logical relation of any order is distributed over a mathematical or physical domain, then interior points of the domain are undifferentiated with respect to it, and it need not be transmitted among them. Where space and time exist only with respect to logical distinctions among attributes, metrical differentiation can occur within inner-expansive domains (IEDs) only upon the introduction of consequent attributes relative to which position is redefined in an overlying metrical layer, and what we usually call the metric is a function of the total relationship among all layers.

The spacetime metric thus amounts to a Venn-diagrammatic conspansive history in which every conspansive domain (lightcone cross section, Venn sphere) has virtual 0 diameter with respect to distributed attributes, despite apparent nonzero diameter with respect to metrical relations among subsequent events. What appears to be nonlocal transmission of information can thus seem to occur. Nevertheless, the CTMU is a localistic theory in every sense of the word; information is never exchanged faster than conspansion, i.e. faster than light (the CTMU's unique explanation of quantum nonlocality within a localistic model is what entitles it to call itself a consistent extension of relativity theory, to which the locality principle is fundamental).

Metrical layering lets neo-Cartesian spacetime interface with predicate logic in such a way that in addition to the set of localistic spacetime intervals riding atop the stack (and subject to relativistic variation in space and time measurements), there exists an underlying predicate logic of spatiotemporal contents obeying a different kind of metric. Spacetime thus becomes a logical construct reflecting the logical evolution of that which it models, thereby extending the Lorentz-Minkowski-Einstein generalization of Cartesian space. Graphically, the CTMU places a logical, stratified computational construction on spacetime, implants a conspansive requantization function in its deepest, most distributive layer of logic (or highest, most parallel level of computation), and rotates the spacetime diagram depicting the dynamical history of the universe by 90° along the space axes. Thus, one perceives the model's evolution as a conspansive overlay of physically-parametrized Venn diagrams directly through the time (SCSPL grammar) axis rather than through an extraneous z axis artificially separating theorist from diagram. The cognition of the modeler - his or her perceptual internalization of the model - is thereby identified with cosmic time, and infocognitive closure occurs as the model absorbs the modeler in the act of absorbing the model.

To make things even simpler: the CTMU equates reality to logic, logic to mind, and (by transitivity of equality) reality to mind. Then it makes a big Venn diagram out of all three, assigns appropriate logical and mathematical functions to the diagram, and deduces implications in light of empirical data. A little reflection reveals that it would be hard to imagine a simpler or more logical theory of reality.

The CTMU and Quantum Theory

The microscopic implications of conspansion are in remarkable accord with basic physical criteria. In a self-distributed (perfectly self-similar) universe, every event should mirror the event that creates the universe itself. In terms of an implosive inversion of the standard (Big Bang) model, this means that every event should to some extent mirror the primal event consisting of a condensation of Higgs energy distributing elementary particles and their quantum attributes, including mass and relative velocity, throughout the universe. To borrow from evolutionary biology, spacetime ontogeny recapitulates cosmic phylogeny; every part of the universe should repeat the formative process of the universe itself.

Thus, just as the initial collapse of the quantum wavefunction (QWF) of the causally self-contained universe is internal to the universe, the requantizative occurrence of each subsequent event is topologically internal to that event, and the cause spatially contains the effect. The implications regarding quantum nonlocality are clear. No longer must information propagate at superluminal velocity between spin-correlated particles; instead, the information required for (e.g.) spin conservation is distributed over their joint ancestral IED - the virtual 0-diameter spatiotemporal image of the event that spawned both particles as a correlated ensemble. The internal parallelism of this domain - the fact that neither distance nor duration can bind within it - short-circuits spatiotemporal transmission on a logical level. A kind of logical superconductor, the domain offers no resistance across the gap between correlated particles; in fact, the gap does not exist! Computations on the domain's distributive logical relations are as perfectly self-distributed as the relations themselves.

Equivalently, any valid logical description of spacetime has a property called hology, whereby the logical structure of the NeST universal automaton - that is, logic in its entirety - distributes over spacetime at all scales along with the automaton itself. Notice the etymological resemblance of hology to holography, a term used by physicist David Bohm to describe his own primitive nonlocal interpretation of QM. The difference: while Bohm's Pilot Wave Theory was unclear on the exact nature of the "implicate order" forced by quantum nonlocality on the universe - an implicate order inevitably associated with conspansion - the CTMU answers this question in a way that satisfies Bell's theorem with no messy dichotomy between classical and quantum reality. Indeed, the CTMU is a true localistic theory in which nothing outruns the conspansive mechanism of light propagation.

The implications of conspansion for quantum physics as a whole, including wavefunction collapse and entanglement, are similarly obvious. No less gratifying is the fact that the nondeterministic computations posited in abstract computer science are largely indistinguishable from what occurs in QWF collapse, where just one possibility out of many is inexplicably realized (while the CTMU offers an explanation called the Extended Superposition Principle or ESP, standard physics contains no comparable principle). In conspansive spacetime, time itself becomes a process of wave-particle dualization mirroring the expansive and collapsative stages of SCSPL grammar, embodying the recursive syntactic relationship of space, time and object.

There is no alternative to conspansion as an explanation of quantum nonlocality. Any nonconspansive, classically-oriented explanation would require that one of the following three principles be broken: the principle of realism, which holds that patterns among phenomena exist independently of particular observations; the principle of induction, whereby such patterns are imputed to orderly causes; and the principle of locality, which says that nothing travels faster than light. The CTMU, on the other hand, preserves these principles by distributing generalized observation over reality in the form of generalized cognition; making classical causation a stable function of distributed SCSPL grammar; and ensuring by its structure that no law of physics requires faster-than-light communication. So if basic tenets of science are to be upheld, Bell's theorem must be taken to imply the CTMU.

As previously described, if the conspanding universe were projected in an internal plane, its evolution would look like ripples (infocognitive events) spreading outward on the surface of a pond, with new ripples starting in the intersects of their immediate ancestors. Just as in the pond, old ripples continue to spread outward in ever-deeper layers, carrying their virtual 0 diameters along with them. This is why we can collapse the past history of a cosmic particle by observing it in the present, and why, as surely as Newcomb's demon, we can determine the past through regressive metric layers corresponding to a rising sequence of NeST strata leading to the stratum corresponding to the particle's last determinant event. The deeper and farther back in time we regress, the higher and more comprehensive the level of NeST that we reach, until finally, like John Wheeler himself, we achieve observer participation in the highest, most parallelized level of NeST... the level corresponding to the very birth of reality.

Appendix A

Analysis is based on the concept of the derivative, an "instantaneous (rate of) change". Because an "instant" is durationless (of 0 extent) while a "change" is not, this is an oxymoron. Cauchy and Weierstrass tried to resolve this paradox with the concept of "limits"; they failed. This led to the discovery of nonstandard analysis by Abraham Robinson. The CTMU incorporates a conspansive extension of nonstandard analysis in which infinitesimal elements of the hyperreal numbers of NSA are interpreted as having internal structure, i.e. as having nonzero internal extent. Because they are defined as being indistinguishable from 0 in the real numbers R, i.e. the real subset of the hyperreals H, this permits us to speak of an "instantaneous rate of change"; while the "instant" in question is of 0 external extent in R, it is of nonzero internal extent in H. Thus, in taking the derivative of (e.g.) x^2, both sides of the equation

Δy/Δx = 2x + Δx

(where Δ = "delta" = a generic increment) are nonzero, simultaneous and in balance. That is, we can take Δx to 0 in R and drop it on the right with no loss of precision while avoiding a division by 0 on the left. More generally, the generic equation

lim(Δx→0 in R) Δy/Δx = lim(Δx→0 in R) [f(x + Δx) - f(x)]/Δx

no longer involves a forbidden "division by 0"; the division takes place in H, while the zeroing-out of Δx takes place in R. H and R, respectively "inside" and "outside" the limit and thus associated with the limit and the approach thereto, are model-theoretically identified with the two phases of the conspansion process L-sim and L-out, as conventionally related by wave-particle duality. This leads to the CTMU "Sum Over Futures" (SOF) interpretation of quantum mechanics, incorporating an Extended Superposition Principle (ESP) under the guidance of the CTMU Telic Principle, which asserts that the universe is intrinsically self-configuring.
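This is not the CTMU's conspansive extension of nonstandard analysis, but the standard dual-number trick gives a concrete feel for computing with an increment that is nonzero "internally" yet vanishes when projected back to the reals: with eps^2 = 0, f(x + eps) = f(x) + f'(x)·eps, so the derivative of x^2 is read off as 2x without ever dividing by zero.

# Standard dual-number differentiation (an analogy, not the CTMU construction):
# the increment eps satisfies eps**2 = 0, so it is nonzero "internally" but
# contributes nothing once squared; the coefficient of eps is the derivative.

class Dual:
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)
    __rmul__ = __mul__

def f(x):
    return x * x          # f(x) = x**2

y = f(Dual(3.0, 1.0))     # seed the infinitesimal part with 1
print(y.real, y.eps)      # 9.0 6.0  ->  f(3) = 9, f'(3) = 2*3 = 6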

In this new CTMU extension of nonstandard analysis, the universe can have an undefined ("virtually 0") external extent while internally being a "conspansively differentiable manifold". This, of course, describes a true intrinsic geometry incorporating intrinsic time as well as intrinsic space; so much for relativity theory. In providing a unified foundation for mathematics, the CTMU incorporates complementary extensions of logic, set theory and algebra. Because physics is a blend of perception (observation and experiment) and mathematics, providing mathematics with a unified foundation (by interpreting it in a unified physical reality) also provides physics with a unified foundation (by interpreting it in a unified mathematical reality). Thus, by conspansive duality, math and physics are recursively united in a dual-aspect reality wherein they fill mutually foundational roles.

[If you want to know more about how the CTMU is derived using logic and set theory, check out these on-line papers:

- On Absolute Truth

- Introduction to the CTMU

- CTMU: A New Kind of Reality Theory (pdf)

I'm currently working on additional papers.]

Appendix B

Because the value of R can only be theoretically approximated, using R or even R^-1 to describe requantization makes it appear that we are simply using one theory to justify another. But the R-to-R^-1 inversion comes with an addition of logical structure, and it is this additional structure that enables us to define a high-level physical process, conspansion, that opposes gravity and explains accelerating redshift. Conspansive requantization is uniform at all scales and can be seen as a function of the entire universe or of individual quanta; every part of the universe is grammatically substituted, or injectively mapped, into an image of its former self - an image endowed with computational functionability. To understand this, we must take a look at standard cosmology.

Standard cosmology views cosmic expansion in terms of a model called ERSU, the Expanding Rubber Sheet Universe. For present purposes, it is sufficient to consider a simplified 2-spheric ERSU whose objects and observers are confined to its expanding 2-dimensional surface. In ERSU, the sizes of material objects remain constant while space expands like an inflating balloon (if objects grew at the rate of space itself, expansion could not be detected). At the same time, spatial distances among comoving objects free of peculiar motions remain fixed with respect to any global comoving coordinate system; in this sense, the mutual rescaling of matter and space is symmetric. But either way, the space occupied by an object is considered to stretch without the object itself being stretched.

Aside from being paradoxical on its face, this violates the basic premise of the pure geometrodynamic view of physical reality, which ultimately implies that matter is space in motion relative to itself. If we nevertheless adhere to ERSU and the Standard Model, the expansion rate (prior to gravitational opposition) is constant when expressed in terms of material dimensions, i.e., with respect to the original scale of the universe relative to which objects remain constant in size. For example, if ERSU expansion were to be viewed as an outward layering process in which the top layer is now, the factor of linear expansion relating successive layers would be the quotient of their circumferences. Because object size is static, so is the cosmic time scale when expressed in terms of basic physical processes; at any stage of cosmic evolution, time is scaled exactly as it was in the beginning.

The idea behind the CTMU is to use advanced logic, algebra and computation theory to give spacetime a stratified computational or cognitive structure that lets ERSU be inverted and ERSU paradoxes resolved. To glimpse how this is done, just look at the ERSU balloon from the inside instead of the outside. Now imagine that its size remains constant as thin, transparent layers of parallel distributed computation grow inward, and that as objects are carried towards the center by each newly-created layer, they are proportionately resized. Instead of the universe expanding relative to objects whose sizes remain constant, the size of the universe remains constant and objects do the shrinking - along with any time scale expressed in terms of basic physical processes defined on those objects. Now imagine that as objects and time scales remain in their shrunken state, layers become infinitesimally thin and recede outward, with newer levels of space becoming denser relative to older ones and older levels becoming stretched relative to newer ones. In the older layers, light - which propagates in the form of a distributed parallel computation - retroactively slows down as it is forced to travel through more densely-quantized overlying layers.

To let ourselves keep easy track of the distinction, we will give the ERSU and inverted-ERSU models opposite spellings. I.e., inverted-ERSU will become USRE. This turns out to be meaningful as well as convenient, for there happens to be an apt descriptive phrase for which USRE is acronymic: the Universe as a Self-Representational Entity. This phrase is consistent with the idea that the universe is a self-creative, internally-iterated computational endomorphism. It is important to be clear on the relationship between space and time in USRE. The laws of physics are generally expressed as differential equations describing physical processes in terms of other physical processes incorporating material dimensions. When time appears in such an equation, its units are understood to correspond to basic physical processes defined on the sizes of physical objects. Thus, any rescaling of objects must be accompanied by an appropriate rescaling of time if the laws of physics are to be preserved. Where the material contents of spacetime behave in perfect accord with the medium they occupy, they contract as spacetime is requantized, and in order for the laws of physics to remain constant, time must contract apace.

E.g., if at any point it takes n time units for light to cross the diameter of a proton, it must take the same number of units at any later juncture. If the proton contracts in the interval, the time scale must contract accordingly, and the speed and wavelength of newly-emitted light must diminish relative to former values to maintain the proper distribution of frequencies. But meanwhile, light already in transit slows down due to the distributed stretching of its deeper layer of space, i.e., the superimposition of more densely-quantized layers. Since its wavelength is fixed with respect to its own comoving scale (and that of the universe as a whole), wavelength rises and frequency falls relative to newer, denser scales.

Complementary recalibration of space and time scales accounts for cosmic redshift in the USRE model. But on a deeper level, the explanation lies in the nature of space and time themselves. In ERSU, time acts externally on space, stretching and deforming it against an unspecified background and transferring its content from point to point by virtual osmosis. But in USRE, time and motion are implemented wholly within the spatial locales to which they apply. Thus, if cosmic redshift data indicate that expansion accelerates in ERSU, the inverse USRE formulation says that spacetime requantization accelerates with respect to the iteration of a constant fractional multiplier - and that meanwhile, inner expansion undergoes a complementary "deceleration" relative to the invariant size of the universe. In this way, the two phases of conspansion work together to preserve the laws of nature.

The crux: as ERSU expands and the cosmological scaling factor R rises, the USRE inverse scaling factor 1/R falls (this factor is expressed elsewhere in a time-independent form r). As ERSU swells and light waves get longer and lower in frequency, USRE quanta shrink with like results. In either model, the speed of light falls with respect to any global comoving coordinate system; c_n/c_0 = R_0/R_n = R_n^-1/R_0^-1 (the idea that c is an absolute constant in ERSU is oversimplistic; like material dimensions, the speed of light can be seen to change with respect to comoving space in cosmological time). But only in USRE does the whole process become a distributed logico-mathematical endomorphism effected in situ by the universe itself - a true local implementation of physical law rather than a merely localistic transfer of content based on a disjunction of space and logic. The point is to preserve valid ERSU relationships while changing their interpretations so as to resolve paradoxes of ERSU cosmology and physics.
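A toy numeric check with made-up values of R that the two readings agree on the observable ratios: whether one says space expands by R (ERSU) or quanta shrink by 1/R (USRE), the wavelength ratio of in-transit light and the stated relation c_n/c_0 = R_0/R_n come out the same.

# Toy check with made-up numbers: the ERSU reading (space expands by R) and the
# USRE reading (local quanta shrink by 1/R) give the same observable ratios.

R0, Rn = 1.0, 4.0                 # original and current cosmological scaling factors

# ERSU: wavelengths of light in transit are stretched with space.
ersu_wavelength_ratio = Rn / R0

# USRE: newly defined local scales shrink by 1/R, so a fixed in-transit
# wavelength looks longer relative to the new, denser scale by the same factor.
usre_scale = (1.0 / Rn) / (1.0 / R0)
usre_wavelength_ratio = 1.0 / usre_scale

c_ratio = R0 / Rn                 # c_n / c_0 in either description

print(ersu_wavelength_ratio, usre_wavelength_ratio, c_ratio)   # 4.0 4.0 0.25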

In Noesis/ECE 139, it was remarked that if the universe were projected on an internal plane, spacetime evolution would resemble spreading ripples on the surface of a pond, with new ripples starting in the intersects of old ones. Ripples represent events, or nomological (SCSPL-syntactic) combinations of material objects implicit as ensembles of distributed properties (quantum numbers). Now we see that outer (subsurface) ripples become internally dilated as distances shorten and time accelerates within new ripples generated on the surface.

CTMU monism says that the universe consists of one dual-aspect substance, infocognition, created by internal feedback within an even more basic (one-aspect) substance called telesis. That everything in the universe can manifest itself as either information or cognition (and on combined scales, as both) can easily be confirmed by the human experience of personal consciousness, in which the self exists as information to its own cognition - i.e., as an object or relation subject to its own temporal processing. If certain irrelevant constraints distinguishing a human brain from other kinds of object are dropped, information and cognition become identical to spatial relations and time.

In a composite object (like a brain) consisting of multiple parts, the dual aspects of infocognition become crowded together in spacetime. But in the quantum realm, this monic duality takes the form of an alternation basic to the evolution of spacetime itself. This alternation usually goes by the name of wave-particle duality, and refers to the inner-expansive and collapsative phases of the quantum wave function. Where ripples represent the expansive (or cognitive) phase, and their collapsation into new events determines the informational phase, the above reasoning can be expressed as follows: as the infocognitive universe evolves, the absolute rate of spatiotemporal cognition c_n at time n, as measured in absolute (conserved) units of spacetime, is inversely proportional to the absolute information density R_n/R_0 of typical physical systems...i.e., to the concentration of locally-processed physical information. As light slows down, more SCSPL-grammatical (generalized cognitive) steps are performed per unit of absolute distance traversed. So with respect to meaningful content, the universe remains steady in the process of self-creation.

CTMU Q and A

Q: Chris, I'm not a mathematician or physicist by any stretch, but I am a curious person and would like to know more about the CTMU (Cognitive-Theoretic Model of the Universe). I am particularly interested in the theological aspects. Can you please explain what the CTMU is all about in language that even I can understand?

A: Thanks for your interest, but the truth is the CTMU isn't all that difficult for even a layperson to understand. So sit back, relax, kick off your shoes and open your mind...

Scientific theories are mental constructs that have objective reality as their content. According to the scientific method, science puts objective content first, letting theories be determined by observation. But the phrase "a theory of reality" contains two key nouns, theory and reality, and science is really about both. Because all theories have certain necessary logical properties that are abstract and mathematical, and therefore independent of observation - it is these very properties that let us recognize and understand our world in conceptual terms - we could just as well start with these properties and see what they might tell us about objective reality. Just as scientific observation makes demands on theories, the logic of theories makes demands on scientific observation, and these demands tell us in a general way what we may observe about the universe.

In other words, a comprehensive theory of reality is not just about observation, but about theories and their logical requirements. Since theories are mental constructs, and mental means "of the mind", this can be rephrased as follows: mind and reality are linked in mutual dependence at the most basic level of understanding. This linkage of mind and reality is what a TOE (Theory of Everything) is really about. The CTMU is such a theory; instead of being a mathematical description of specific observations (like all established scientific theories), it is a "metatheory" about the general relationship between theories and observations - i.e., about science or knowledge itself. Thus, it can credibly lay claim to the title of TOE.

Mind and reality - the abstract and the concrete, the subjective and the objective, the internal and the external - are linked together in a certain way, and this linkage is the real substance of "reality theory". Just as scientific observation determines theories, the logical requirements of theories to some extent determine scientific observation. Since reality always has the ability to surprise us, the task of scientific observation can never be completed with absolute certainty, and this means that a comprehensive theory of reality cannot be based on scientific observation alone. Instead, it must be based on the process of making scientific observations in general, and this process is based on the relationship of mind and reality. So the CTMU is essentially a theory of the relationship between mind and reality.

In explaining this relationship, the CTMU shows that reality possesses a complex property akin to self-awareness. That is, just as the mind is real, reality is in some respects like a mind. But when we attempt to answer the obvious question "whose mind?", the answer turns out to be a mathematical and scientific definition of God. This implies that we all exist in what can be called "the Mind of God", and that our individual minds are parts of God's Mind. They are not as powerful as God's Mind, for they are only parts thereof; yet, they are directly connected to the greatest source of knowledge and power that exists. This connection of our minds to the Mind of God, which is like the connection of parts to a whole, is what we sometimes call the soul or spirit, and it is the most crucial and essential part of being human.

Thus, the attempt to formulate a comprehensive theory of reality, the CTMU, finally leads to spiritual understanding, producing a basis for the unification of science and theology. The traditional Cartesian divider between body and mind, science and spirituality, is penetrated by logical reasoning of a higher order than ordinary scientific reasoning, but no less scientific than any other kind of mathematical truth. Accordingly, it serves as the long-awaited gateway between science and humanism, a bridge of reason over what has long seemed an impassable gulf.

Q: Hey Chris, what's your take on the theory of Max Tegmark, the physicist at the Institute for Advanced Study at Princeton? He has a paper on the web which postulates that universes exist physically for all conceivable mathematical structures. Is it as "wacky" as he postulates?

A: Since Max claims to be on the fast track to a TOE of his own, I just thought I'd offer a few remarks about his approach, and point out a few of the ways in which it differs from that of the CTMU.

Many of us are familiar with the Anthropic Principle of cosmology (the AP) and Everett's Many Worlds (MW) interpretation of quantum theory. These ideas have something in common: each is an attempt to make a philosophical problem disappear by what amounts to Wittgensteinian semantic adjustment, i.e., by a convenient redefinition of certain key ingredients. Specifically, MW attempts to circumvent the quantum measurement problem - the decoherence of the quantum wave function - by redefining every quantum event as a divergence of universes, shifting the question "what happens to the unrealized possible results of a measurement when one possibility is exclusively actualized?" to "why can we not perceive the actualizations of these other possible results?", while the AP shifts the question "why does the universe exist?" to "why is this particular universe perceived to exist?" Both MW and the AP thus shift attention away from objective reality by focusing on the subjective perception of objective reality, thereby invoking the distinction between subjectivity and objectivity (what usually goes unstated is that mainstream physical and mathematical science have traditionally recognized only the objective side of this distinction, sweeping the other side under the rug whenever possible).

Perhaps intuiting the MW-AP connection, Max Tegmark (formerly at the Institute for Advanced Study at Princeton) has effectively combined these two ideas and tried to shift the focus back into the objective domain. First, noting that MW is usually considered to involve only those universes that share our laws of physics (but which differ in the initial and subsequent conditions to which those laws are applied), Tegmark extends MW to include other universes with other sets of physical laws, noting that since these sets of laws are mathematical in nature, they must correspond to mathematical structures...abstract structures that we can investigate right here in this universe. And that, he says, may explain why this universe is perceived to exist: the conditions for the evolution of perceptual entities may simply be the mean of a distribution generated by distinct physical nomologies corresponding to these mathematical structures. In other words, the conditions for the existence of "self-aware substructures" (perceptual life forms) may simply be the most likely conditions within the distribution of all possible universes. And since the latter distribution corresponds to the set of mathematical structures in this universe, the hypothesis can be tested right here by mathematical physicists.

Of course, Tegmark's attempt at a TOE leaves unanswered a number of deep philosophical questions. First, what good does it do to "anthropically" explain this universe in terms of an MW metauniverse unless one can explain where the metauniverse came from? What is supposed to prevent an informationally barren infinite regress of universes within metauniverses within meta-metauniverses..., and so on? Second, what good is such a theory unless it contains the means to resolve outstanding paradoxes bedeviling physics and cosmology - paradoxes like quantum nonlocality, ex nihilo cosmology, the arrow of time, and so forth? Third, what is the true relationship between mathematics and physics, that one can simply identify sets of physical laws with mathematical structures? It's fine to say that physics comes from mathematics, but then where does mathematics come from? Fourth, where are the mathematical tools for dealing with the apparently ultra-complex problem of computing the probability distribution of universes from the set of all mathematical structures, including those yet to be discovered? Fifth, what is the real relationship between subjective and objective reality, on which distinction both Many Worlds and the Anthropic Principle are ultimately based? (Et cetera.)

Since one could go on for pages, it seems a little premature to be calling Tegmark's theory a TOE (or even a reasonable TOE precursor). And although I'm not saying that his theory contains nothing of value, I'm a bit puzzled by the absence of any mention of certain obvious mathematical ingredients. For example, topos theory deals with topoi, or so-called "mathematical universes" consisting of mathematical categories (mapping algebras) equipped not only with the objects and morphisms possessed by categories in general, but also with special logics permitting the assignment of truth values to various superficially nonalgebraic (e.g. "physical") expressions involving the objects. Why would any "TOE" purporting to equate physical universes to mathematical structures omit at least cursory mention of an existing theory that seems to be tailor-made for just such a hypothesis? This in itself suggests a certain amount of oversight. Tegmark may have a few good ideas knocking around upstairs, but on the basis of what his theory omits, one can't avoid the impression that he's merely skirting the boundary of a real TOE.
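[A quick gloss, since topos logic is only name-checked above; this is standard category theory, not part of the original discussion. The "special logic" of a topos E comes from its subobject classifier, an object Omega equipped with a truth arrow such that the subobjects of any object X correspond exactly to "characteristic" morphisms from X into Omega. In LaTeX notation:

\mathrm{Sub}(X) \cong \mathrm{Hom}_{\mathcal{E}}(X, \Omega), \qquad \top \colon 1 \to \Omega

In the topos of sets, Omega is just the two-element set {true, false}, recovering ordinary two-valued logic; in a general topos the internal logic may be intuitionistic rather than Boolean. This is the sense in which truth values can be assigned to expressions about a topos's objects.]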

In contrast, the CTMU deals directly with the outstanding paradoxes and fundamental interrelationship of mathematics and physics. Unlike other TOEs, the CTMU does not purport to be a "complete" theory; there are too many physical details and undecidable mathematical theorems to be accounted for (enough to occupy whole future generations of mathematicians and scientists), and merely stating a hypothetical relationship among families of subatomic particles is only a small part of the explanatory task before us. Instead, the CTMU is merely designed to be consistent and comprehensive at a high level of generality, a level above that at which most other TOEs are prematurely aimed.

The good news is that a new model of physical spacetime, and thus a whole new context for addressing the usual round of quantum cosmological problems, has emerged from the CTMU's direct attack on deeper philosophical issues.

Q: Einstein says that gravity is a result of "mass-energy" causing a curvature in the four-dimensional spacetime continuum. At the Planck scale (about 10^(-33) centimeters), is space still continuous, or is it discontinuous? I have read books saying spacetime may have holes or breaks in continuity. Are these holes related in any way to "gravitons", or to reverse-time causality? (Question from Russell Rierson)

A: A mathematical space is continuous if it has a metric that withstands infinitesimal subdivision. To understand what this means, one must know what a "metric" is. Simplistically, a metric is just a general "distance" relationship defined on a space as follows: if a and b are two points in a space, and c is an arbitrary third point, then the distance between a and b is always less than or equal to the sum of the distances between a and c, and b and c. That is, where d(x,y) is the distance between two points x and y,

d(a,b) ≤ d(a,c) + d(c,b).
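[To see the condition concretely, here is a minimal sketch in Python; it is an editorial illustration, not part of the original answer, and the choice of plane points, Euclidean metric, and rounding tolerance is arbitrary. It simply spot-checks the triangle inequality for the ordinary distance function on the plane:

import math
import random

def d(x, y):
    # Euclidean distance between two points in the plane
    return math.hypot(x[0] - y[0], x[1] - y[1])

# Spot-check the triangle inequality d(a,b) <= d(a,c) + d(c,b)
# on randomly sampled triples of points.
for _ in range(1000):
    a, b, c = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(3)]
    assert d(a, b) <= d(a, c) + d(c, b) + 1e-12  # tiny tolerance for rounding error
print("Triangle inequality held for all sampled triples.")

Any function d satisfying this condition, together with the other standard metric axioms (symmetry, non-negativity, and d(x,y) = 0 exactly when x = y), qualifies as a metric, whether or not it is the familiar Euclidean one.]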

RFV:
> I agree that we can't know the world without data.
> The idea that this mental structure shapes the phenomenal world has been known since Kant;
> the CTMU simply pushes Kant's ideas to their logical conclusion, dealing directly with the relationship between mind and reality.
> Photons have a general abstract existence as well as a specific physical one. This is what happens when you embed physical reality in a space of abstractions in order to theorize about it;...
> please provide a little remedial instruction :)

CML: If we form a theory of some aspect of reality, it may or may not be correct (i.e., it will be "more or less exact"). But even if not, we can be sure that the correct theory will conform to our mental structures, since otherwise we won't be able to create a conceptual isomorphism with it (and that's what a theory is). A "syntax" is to be interpreted as any set of structural and functional constraints applied to any system. A syntax takes the form of general information and is implemented by generalized cognition.

RFV:
>> Since there's no way out of this requirement, ...
> However, although Hoffman's research sounds familiar, I'd appreciate a web source (if available).
> ... you're talking about a generalized form of mentation (not just the mentation of a human brain). Since a photon and a brain both transform information, albeit on different levels of complexity, they are both "mental" in nature.
> There's no scientific basis on which you can do that; it amounts to anthropocentrism. What we seek in any scientific context is a distributed formulation of reality that applies inside the brain as well as without it, ...
> The idea that this mental structure shapes the phenomenal world has been known since Kant;
>>> Photons have a general abstract existence as well as a specific physical one.
>>> real objects necessarily conform to more or less exact configurations of an abstract distributed syntax.
> A "syntax" is to be interpreted as any set of structural and functional constraints applied to any system.
> A photon is a configuration of the syntax of reality; as already explained, that's assured. It's also assured that a photon is at once abstract and real. Again, the combined abstract/real nature of a photon is not provisional,
> Hey, I'm 3,000 miles away! But I appreciate the invite...
>>> you're talking about a generalized form of mentation (not just the mentation of a human brain). Since a photon and a brain both transform information, albeit on different levels of complexity, they are both "mental" in nature.
> (due to the CTMU conspansive model of spacetime, the quotation marks are necessary). Atomic states and position coordinates are informational.
>> What we seek in any scientific context is a distributed formulation of reality that applies inside the brain as well as without it, no stable perceptual invariants.

To have certainty at arbitrary levels of specificity, we'd need to know all the axioms of the overall system...i.e., a full description of the global systemic tautology allowing perception. Obviously, any nontautological subsystem is "open" to undecidable data and therefore uncertain. In the case of reality at large, this uncertainty - which, by the way, is associated with quantum uncertainty - owes to the fact that increasingly specific axioms are even now being created by sentient agents. That's why reality, AKA SCSPL, is said to be "self-configuring". That's what intrinsic cosmogony is ultimately all about.

Sorry if I lost you up there, but explaining this is hard.

RFV:
>> That's why we can be so sure of the 2VL component of reality syntax. Theories generally aren't so ironclad. The discernibility and stable identity of objects within the universe can be shown to imply that the universe is informationally closed in the following sense: it is possible to distinguish that which it contains (reality, perceptions) from that which it does not contain (unreality).
>>>>> the CTMU simply pushes Kant's ideas to their logical conclusion, dealing directly with the relationship between mind and reality.
> But a more general (and therefore more scientific,
> If you're right, then Wheeler's title for his thesis ("Observer Participation thesis") would seem to indicate his agreement with me about what constitutes an "observer", namely anything able to participate in an irreversible change.
> So the categorical imperative relating the subjective and objective aspects of reality everywhere coincides with experience;
>>>> real objects necessarily conform to more or less exact configurations of an abstract distributed syntax. That part of a theory that's wrong is empty, but the part that's right has real content. The theory tautologically corresponds to just that part of reality for which it's true. In fact, logicians already know that insofar as they can often either extend or qualify a troubled theory so as to adjust its referential quantifier (think Löwenheim-Skolem, Duhem-Quine, etc.) and thereby restore its validity (think classical physics as the low-velocity "limit" of relativity physics), the idea of an "absolutely false" theory is not what it seems. The "abstractions" being employed by the two types of computer are always expressible in terms of the penultimate abstraction, logic. Nothing I've said so far is incompatible with this fact.