genetic specification of recurrent neural networks: initial thoughts

World Congress on Computational Intelligence 2006, Vancouver
Bill Howell, Natural Resources Canada, Ottawa

TRANSCRIPT

  • I. Introduction: Genes and ANNs

    Historically:
    - Biological inspiration of artificial neural networks, right from the beginning
    - Ongoing mutual influence

    Outline:
    I. Introduction: Genes and ANNs
    II. Inspiration for DNA-ANNs
    III. What might we hope to achieve with DNA-ANNs?
    IV. Recommendations and Star-Gazing
    V. Conclusions

  • I. Introduction: Genes and ANNs. Computational Neuro-Genetic Modelling
    - N. Kasabov, L. Benuskova, S. Wysoski, IJCNN04 & 05: modeling gene networks for spiking neurons
    - R. Storjohann & G. Marcus, IJCNN05: Neurogene, integrated simulation of gene regulation, neural activity and neurodevelopment
    - J.P. Thivierge & G. Marcus, WCCI06: genetics, growth, & environment

  • I. Introduction: Genes and ANNs. ANN Challenges
    - Better, much faster learning algorithms
    - Initial specification and evolution of complex architectures
    - Plasticity versus memory
    - Robustness versus optimality

    Pre-loading: data -> functions -> knowledge -> behaviours
    - responses of: viruses, bacteria, microbes, plants
    - instinct of: animals, man

  • I. Introduction: Genes and ANNs. Recurrent Neural Nets (RNNs)
    - Due to their recurrent connections, RNNs are a more powerful and general form of ANN
    - Problems for which we typically use RNNs are very challenging: modeling, control, approximate dynamic programming
    - Interpretations of final structure & weights are even more challenging than for most other ANNs
    - Chaotic NNs...?
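The extra power of recurrent connections can be seen in a minimal sketch (plain Python, purely illustrative, not from the talk): the hidden state carries information forward across time steps, so each output depends on the whole input history rather than only on the current input.

```python
import math

def rnn_step(x, h, w_in, w_rec, w_out):
    """One step of a single-unit recurrent network.

    The new hidden state depends on the current input *and* the
    previous hidden state -- this recurrence is what lets an RNN
    model temporal dynamics.
    """
    h_new = math.tanh(w_in * x + w_rec * h)
    y = w_out * h_new
    return y, h_new

# Feed a short sequence through the network; an initial impulse
# decays gradually through the recurrent weight instead of
# vanishing as soon as the input goes to zero.
h = 0.0
outputs = []
for x in [1.0, 0.0, 0.0, 0.0]:
    y, h = rnn_step(x, h, w_in=1.0, w_rec=0.5, w_out=1.0)
    outputs.append(y)

print(outputs)  # a decaying, strictly positive trace of the impulse
```

A feed-forward network with the same weights would output zero for every step after the first; the recurrent term is the difference.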

  • II. Inspiration for DNA-ANNs
    - Genetic: non-protein-coding DNA and RNA
    - Brain models
    - Artificial Neural Network trends

  • II. Inspiration for DNA-ANNs: non-protein-coding RNA (npcRNA)
    Mattick, John S. (UQueensland), "The hidden genetic program of complex organisms", Scientific American, Oct04, pp60-67. See http://imbuq.edu.au/groups/mattick
    [Figure: the traditional concept of genes versus Mattick's view. A DNA gene (exons and introns) is transcribed into a primary RNA transcript and spliced; the assembled exonic RNA is processed into mRNA and translated into protein, while the intronic RNA, microRNAs and other noncoding RNAs, traditionally assumed to be degraded and recycled, carry out gene regulation and other functions.]
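The transcription-and-splicing pathway on this slide can be sketched as a toy data transformation (Python, purely illustrative; the sequences are invented): a "gene" of alternating exons and introns is split into exonic mRNA and intronic noncoding RNA, mirroring Mattick's point that the intronic output is a product in its own right, not waste.

```python
def splice(gene):
    """Split a toy gene into exonic and intronic parts.

    `gene` is a list of (kind, sequence) segments, where kind is
    "exon" or "intron".  Traditionally only the assembled exons
    (the mRNA) were considered functional; Mattick's argument is
    that the intronic RNA also carries regulatory signals.
    """
    mrna = "".join(seq for kind, seq in gene if kind == "exon")
    noncoding = [seq for kind, seq in gene if kind == "intron"]
    return mrna, noncoding

gene = [("exon", "AUG"), ("intron", "GUAAGU"),
        ("exon", "GCU"), ("intron", "UACUAAC"),
        ("exon", "UAA")]

mrna, noncoding = splice(gene)
print(mrna)       # assembled exonic RNA -> translation to protein
print(noncoding)  # intronic RNA -> regulatory "other functions"
```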

  • II. Inspiration for DNA-ANNs Mattick: Cambrian Complexity Explosion

    Mattick, John S. (UQueensland), "The hidden genetic program of complex organisms", Scientific American, Oct04, pp60-67. See http://imbuq.edu.au/groups/mattick
    [Figure: complexity versus time (millions of years ago, 4,000 to present) for eubacteria, archaea, single-celled eukaryotes, and the multicellular world of animals, plants and fungi, with the origin of a new regulatory system marking the transition from the unicellular to the multicellular world.]

  • II. Inspiration for DNA-ANNs Finite automaton from DNA mechanisms

    Shapiro & Benenson, "Bringing DNA computers to life", Scientific American, May06, pp45-51
    [Figure: a diagnostic DNA automaton. Software strands 1 and 2, a protector strand, and the FokI enzyme form a diagnostic molecule; disease-associated mRNAs from genes 1-4 drive the active "yes-yes" software molecule, which releases an otherwise inactive drug.]
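The "active yes-yes software molecule" amounts to an ordinary two-state finite automaton, which can be sketched in a few lines (Python; the marker names and the drug-release framing are invented for illustration, not taken from the paper):

```python
def diagnose(markers, observed):
    """Toy version of a yes/no diagnostic automaton.

    The automaton starts in the "yes" state and examines one
    disease-associated mRNA marker per step.  It remains in "yes"
    only if every expected marker is observed; a single miss drives
    it to "no", and the (hypothetical) drug stays inactive.
    """
    state = "yes"
    for m in markers:
        state = "yes" if (state == "yes" and m in observed) else "no"
    return state

markers = ["gene1_mRNA", "gene2_mRNA", "gene3_mRNA", "gene4_mRNA"]
print(diagnose(markers, set(markers)))    # all markers present -> "yes"
print(diagnose(markers, {"gene1_mRNA"}))  # partial evidence -> "no"
```

The interest for DNA-ANNs is that the same conditional logic is implemented chemically, by strand displacement and the FokI cut, rather than in silicon.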

  • II. Inspiration for DNA-ANNs: March of the Penguins
    S. Pinker, The Language Instinct: How the Mind Creates Language, New York: William Morrow & Company, 1994; Perennial Classics edition, 2000.
    JUST instinct?

  • II. Inspiration for DNA-RNNs: Models of the Brain
    - Sensory systems
    - Motor
    - Memory
    - Cognition, planning
    - Behaviours

  • II. Inspiration for DNA-ANNs: Trends with ANNs
    - Local, incremental learning approaches: neural gas models, evolving connectionist systems
    - Multi-phase ANN architectures: extreme learning machines, echo state networks
    - Ensemble solutions and hierarchies, networks
    - Signal processing & information theoretics
    - Recurrent Neural Networks (RNNs)
    - Evolution of ANNs
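Echo state networks, one of the multi-phase architectures mentioned on this slide, keep a fixed random recurrent "reservoir" and train only a linear readout. A minimal sketch (Python with NumPy; the memory task and all parameter values are my own choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Echo state network sketch: the input and recurrent weights are
# random and fixed; only the linear readout is trained, which is
# what makes the architecture fast to fit.
n_res = 50
w_in = rng.uniform(-0.5, 0.5, n_res)
w_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Scale spectral radius below 1 so past inputs fade (echo state property).
w_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(w_res)))

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect states."""
    h = np.zeros(n_res)
    states = []
    for x in u:
        h = np.tanh(w_in * x + w_res @ h)
        states.append(h.copy())
    return np.array(states)

# Toy memory task: read the *previous* input back out of the
# current reservoir state, using a least-squares linear readout.
u = rng.uniform(-1, 1, 200)
target = np.concatenate(([0.0], u[:-1]))
states = run_reservoir(u)
w_out, *_ = np.linalg.lstsq(states, target, rcond=None)

mse = float(np.mean((states @ w_out - target) ** 2))
print(mse)
```

Because the recurrent weights are never trained, the usual difficulty of gradient-based RNN training is sidestepped entirely, which is why these models appear under "multi-phase architectures".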

  • III. What might we hope to achieve with DNA-ANNs?
    - Starting with the right answer!
    - Higher levels of abstraction
    - Rapid and effective: learning (generalisations), evolution (restructuring for strategies)
    - Resource utilisation: reuse of "modules"
    - Control & ADP: faster, more reliable, more robust

  • III. What might we hope to achieve with DNA-ANNs? Starting with the right answer
    - Trivial solution: give me the answer and I'll solve the problem ultra-fast!
    - Measures of problem similarity, perhaps at higher levels of abstraction, especially when the data appear dissimilar (reminiscent of the generality of signal processing)

  • III. What might we hope to achieve with DNA-ANNs? Higher levels of abstraction
    - Problems: decompose & modularise. For example, ANNs can regenerate learned images from noisy data. Can a similar feat be accomplished for problem decomposition/modularisation at abstract levels, to help evolve ensembles of ANNs?
    - Ockham's razor (the simplest model that explains the data) may NOT always be a good approach with complex systems!?
    - Meaning/logic as emergent properties

  • III. What might we hope to achieve with DNA-ANNs? Rapidity, Resources
    - Rapid, effective, safe: training -> learning -> evolving; fit -> generalize -> strategize

    Resource utilisation: re-utilize "functional and connecting modules"; functional overloading; multiple simultaneous hypotheses

  • III. What might we hope to achieve with DNA-ANNs? Non-linear dynamical systems: Modeling and Control
    Perhaps the biggest payback for DNA-ANNs would be their application to the special, but important, case of RNNs.

  • IV. Recommendations and Star-Gazing

    Question: Are current algorithms for learning and evolving adequate for more complex hierarchies and ensembles of ANNs, and for more abstract capabilities?
    To some extent, yes? I suspect that we are also looking for additional formulations, and that to some extent their initial development may depend on having a set of powerful, predictable and robust "modules" as a starting point. Two examples of what this might connect to:
    - Local and global brain models: elegant, powerful ways of building systems
    - Classical AI and symbolic logic: an extreme example of new learning formulations for higher-level-abstraction ANNs

  • IV. Recommendations and Star-Gazing: Artificial Neural Networks (ANNs)
    Existing CI capabilities are a basis:
    1. Start with a "small-world universal function approximation" collection of ANN and RNN modules (custom built or selected from a variety of problem solutions)
    2. Develop "generic interfaces" between combinations of two or more modules, or modules of modules
    3. Develop "problem formulation/classification" capabilities (rules, evolutionary strategies, etc.)
    4. ANN phase changes (crystalline -> gaseous)
    5. Develop learning/evolving strategies that can do points 1 to 4 above
    - Chaos: perhaps scramble through state-space, but DON'T get locked in to pre-existing structures
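The "generic interfaces" between modules recommended here can be made concrete by giving every module the same call signature, so that modules and modules-of-modules compose uniformly. A toy Python sketch (the interface choice is mine, not the talk's):

```python
def compose(*modules):
    """A 'generic interface': treat any module as a callable from
    vector to vector, so modules (and modules of modules) can be
    chained without special cases."""
    def pipeline(x):
        for m in modules:
            x = m(x)
        return x
    return pipeline

# Two toy "modules" that share the vector-in / vector-out interface.
scale = lambda v: [2 * x for x in v]
clip = lambda v: [max(-1.0, min(1.0, x)) for x in v]

net = compose(scale, clip)   # a module built from modules
big = compose(net, scale)    # nests uniformly: a module of modules
print(net([0.3, 0.9]))  # -> [0.6, 1.0]
```

Real ANN modules would carry weights and shapes, but the point survives: a shared interface is what lets a small-world collection of modules be recombined freely.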

  • IV. Recommendations and Star-Gazing: Recurrent Neural Networks
    My feeling is that, because of their great power and the difficulty of rapidly training them, RNNs offer a challenge whereby DNA-RNNs may show tangible, qualitative benefits beyond merely speeding up training and providing good generalisation.
    Question: Will the "genetic specification" of DNA-RNNs beat hand-crafted libraries (the likely starting point)?
    Play with & observe DNA-RNNs.
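One concrete reading of "genetic specification" is to decode a flat genome vector into the weight matrices of a small RNN, so that evolutionary operators act on genomes and the network is merely the decoded phenotype. A Python sketch; the genome layout is an arbitrary choice for illustration, not taken from the talk:

```python
import random

def decode(genome, n_in, n_hid):
    """Decode a flat genome into RNN weight matrices.

    Layout (an arbitrary illustrative choice):
    [input weights | recurrent weights | output weights]
    """
    a = n_in * n_hid
    b = a + n_hid * n_hid
    w_in = [genome[i * n_hid:(i + 1) * n_hid] for i in range(n_in)]
    w_rec = [genome[a + i * n_hid:a + (i + 1) * n_hid] for i in range(n_hid)]
    w_out = genome[b:b + n_hid]
    return w_in, w_rec, w_out

def mutate(genome, rate=0.1, scale=0.2):
    """Point mutations act on the genome; the phenotype (the RNN)
    is simply re-decoded afterwards."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

n_in, n_hid = 2, 3
genome_len = n_in * n_hid + n_hid * n_hid + n_hid
random.seed(1)
genome = [random.uniform(-1, 1) for _ in range(genome_len)]
w_in, w_rec, w_out = decode(genome, n_in, n_hid)
child = mutate(genome)  # a candidate for the next generation
```

A real DNA-RNN scheme would presumably use a far more indirect, developmental encoding; this direct encoding only illustrates the genome/phenotype separation.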

    WCCI06 Vancouver, 27Jul06, Bill Howell

    Conclusions
    Biological computational/processing capabilities have always been the holy grail of advanced computing. As we advance, that brings us to an awareness of the next level of concepts. This process may go on for a long time....

    Right now, the genetics revolution is suggestive of DNA-RNNs.

    While we have long been aware of instinctive behaviour, perhaps we don't appreciate it well enough, as we tend to believe that parents teach their children much of what they need to know. But the examples of insects, fish and reptiles which hatch from eggs, often without the benefit of adult supervision, clearly show that even complex behaviours can be largely inherited.

    Dr. Ian ?name? of Environment Canada sees no evidence of "multi-generational re-emergence" of behaviours in polar bears. Semi-starved polar bears will wade through rivers of running char (fish) without thinking of catching them for food. This may be influenced by their normal behaviour, whereby they fast when not on sea ice.

    The trivial solutions to any problem are: a) I don't care; and b) give me the answer and I'll find the solution to the problem ultra-fast! A good first guess can make a lot of difference. But what if you start with thousands of reasonable first guesses? What if you can recognize similarities to other problems, or hybrids between problems, and are able to take advantage of that to formulate and evolve possible solutions?

    Small-world universal function approximation for given problem domains: a compact set of tools that can quickly and easily be structured and applied to solve problem classes.

    False confidence in good fits: prediction versus identification (physical models).

    Challenge: tools must enable higher-order learning/evolution/abstraction, and NOT imprison and limit the system (Marcus & genetics for the brain).

    Ockham's razor: this is mentioned in my paper, but I have a word of caution. Taking the simplest model for the data and knowledge at hand seems somewhat dangerous when dealing with the brain. I guess the trick is that it may help to start simple, but with the expectation that the end result will likely be far more complex, powerful, and more beautiful than first believed.

    Meaning and logic as emergent properties: ensembles of semi-standardized ANNs, in somewhat well-known configurations, may intrinsically express meaning and models of complex systems better than the detailed, complex expressions that they embody. That may arise because we will get to know how these models behave and interact, and by knowing what to expect in a general sense, we might better interpret the overall system.
