CoTeSys — Cognition for Technical Systems

Martin Buss∗, Michael Beetz#, Dirk Wollherr∗

∗ Institute of Automatic Control Engineering (LSR), Faculty of Electrical Engineering and Information Technology
# Intelligent Autonomous Systems, Department of Informatics

Technische Universität München, D-80290 München, Germany, www.cotesys.org

(E-mail: [email protected], [email protected], [email protected])

Abstract

The COTESYS cluster of excellence¹ investigates cognition for technical systems such as vehicles, robots, and factories. Cognitive technical systems (CTS) are information processing systems equipped with artificial sensors and actuators, integrated and embedded into physical systems, and acting in a physical world. They differ from other technical systems in that they perform cognitive control and have cognitive capabilities. Cognitive control orchestrates reflexive and habitual behavior in accord with long-term intentions. Cognitive capabilities such as perception, reasoning, learning, and planning turn technical systems into systems that “know what they are doing”. The cognitive capabilities will result in systems of higher reliability, flexibility, adaptivity, and better performance. They will be easier to interact and cooperate with.

1 Motivation and Basic Approach

People deal easily with everyday situations, uncertainties, and changes — abilities that technical systems currently lack. Unlike artificial systems, humans develop and learn how to extract and incorporate new information from the environment. Animals have survived in our complex world by developing brains and adequate information processing strategies. Brains cannot compete with computers on tasks requiring raw computational power [1]². However, they are extremely well suited to deal with ill-structured problems that involve a high degree of unpredictability, uncertainty, and fuzziness. They can easily cope with an abundance of complex sensory stimuli that have to be transformed into appropriate sequences of motor actions [2, 3]³.

¹CoTeSys is funded by the German Research Council DFG as a research cluster of excellence within the “excellence initiative” from 2006-2011. CoTeSys partner institutions are: Technische Universität München (TUM), Ludwig-Maximilians-Universität (LMU), Universität der Bundeswehr München (UBM), Deutsches Zentrum für Luft- und Raumfahrt (DLR), and the Max Planck Institute of Neurobiology (MPI), all in München.

²References on this subject are too numerous to cover the state of the art in this paper. The references given are only samples and make no claim to completeness.

Since brains of humans and non-human primates have successfully developed superior information processing mechanisms, COTESYS studies and analyzes cognition in (not necessarily human) natural systems and transfers the respective insights into the design and implementation of cognitive control systems for technical systems.

However, these cognitive abilities are essential skills for “reasonable” collaboration. The robot must be capable of understanding human actions and intentions, and of quickly conceiving a plan to support the human. Collaboration between human and robot is considered reasonable if a given task is accomplished faster or in a less strenuous way when cooperating rather than acting alone. One example where efficient collaboration becomes vital are welfare scenarios, in which robots are to take over tasks without further guidance in order to disburden service personnel and allow more human-human interaction.

To this end, cognitive scientists investigate the neurobiological and neurocognitive foundations of cognition in humans and animals and develop computational models of cognitive capabilities that explain their empirical findings. These computational models will then be studied by the COTESYS engineers and computer scientists with respect to their applicability to artificial cognitive systems and empirically evaluated in the context of the COTESYS demonstrators, including humanoid robots, autonomous vehicles, and cognitive factories.

COTESYS structures interdisciplinary research on cognition in three closely intertwined research threads, which perform fundamental research and empirically study and implement cognitive models in the context of the demonstration testbeds, see Figure 1:

³The Cognitive Systems Project, www.foresight.gov.uk, aims to provide a vision for the future development of cognitive systems through an exploration of recent advances in neuroscience and computer science. The Society for Neuroscience's home page, http://apu.sfn.org, contains much useful information on the latest developments in the subject. It hosts Brain Facts, an accessible primer on the brain and nervous system.

Figure 1: COTESYS research strategy: Three research disciplines (cognitive and life sciences, information processing and mathematical sciences, and engineering sciences) work synergetically together to explore cognition for technical systems. Research is structured into three groups of research areas: cognitive foundations, cognitive mechanisms, and demonstration scenarios. Cognitive mechanisms to be realized include perception, reasoning and learning, action selection and planning, and joint human/robot action.

Systemic Neuroscience, Cognitive Science, and Neurocognitive Psychology — Develop computational models of cognitive control, perception, and motor action based on experimental studies at the behavioral and brain level.

Information processing technology — Investigate and develop algorithms and software systems to realize cognitive capabilities. Particularly relevant are modern methods from control and information theory and from artificial intelligence, including learning, perception, and symbolic reasoning.

Engineering technologies — The areas of mechatronics, sensing technology, sensor fusion, smart sensor networks, control rules, controllability, stability, model/knowledge representation, and reasoning are important to implement robust cognitive abilities in technical systems with guaranteed performance constraints.

In recent years, these disciplines studying cognitive systems have cross-fertilized each other in various ways.⁴

Researchers studying human sensorimotor control have found convincing empirical evidence for the use of Bayesian estimation and cost-function-based control mechanisms in natural movement control [4]. Bayesian networks and the associated reasoning and learning mechanisms have inspired research in cognitive psychology, in particular on the formation of causal theories in young children. Functional MRI images of rat brains have shown neural activation patterns of place cells similar to multi-modal probability distributions in robot localization using Bayesian filters [5].

⁴Indeed, Mitchell has pointed out in a recent presidential address at the National Conference on Artificial Intelligence that the next revolution is expected to be caused by the synergetic cooperation of the computing and the cognitive sciences.
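To make the reference to Bayesian filters concrete, the following is a minimal sketch of a discrete Bayes filter for one-dimensional robot localization. It is illustrative only: the corridor map, motion model, and sensor probabilities are invented for the example.

```python
import numpy as np

# Hypothetical corridor map: 1 marks a door, 0 a wall segment.
corridor = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
belief = np.ones(len(corridor)) / len(corridor)   # uniform prior over poses

def predict(belief, p_move=0.8):
    """Motion update: the robot intends one step right (the corridor wraps
    around for simplicity) and succeeds with probability p_move."""
    return p_move * np.roll(belief, 1) + (1 - p_move) * belief

def update(belief, z, p_hit=0.9):
    """Sensor update: weight each pose by the likelihood of reading z
    ('door' = 1, 'wall' = 0) there, then renormalize."""
    likelihood = np.where(corridor == z, p_hit, 1 - p_hit)
    posterior = likelihood * belief
    return posterior / posterior.sum()

# Drive along the corridor and sense: door, wall, wall, door.
for z in [1, 0, 0, 1]:
    belief = update(predict(belief), z)

print(np.round(belief, 3))  # mass concentrates on poses matching the door pattern
```

Multi-modal beliefs of exactly this shape are what the place-cell activation patterns mentioned above resemble.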

The conclusions that COTESYS draws from these examples are that (1) successful computational mechanisms in artificial cognitive systems tend to have counterparts with similar functionality in natural cognitive systems; and (2) new consolidated findings about the structure and functional organization of perception and motion control in natural cognitive systems show us much better ways of organizing and specifying computational tasks in artificial cognitive systems.

However, cognition for technical systems is not the mere rational reconstruction of natural cognitive systems. Natural cognitive systems are impressively well adapted to the computational infrastructure and the perception and action capabilities of the systems they control. Technical cognitive systems have computational means and perception and action capabilities with very different characteristics.

Learning and motor control for reaching and grasping provide a good case in point [6]. While motor control in natural systems takes up to 100 ms to receive motion feedback, high-end industrial manipulators execute feedback loops at 1000 Hz with a delay of 0.5 ms. In contrast to robot arms, control signals for muscles are noisy, and muscles take substantial amounts of time to produce the required force. On the other hand, antagonistic muscle groups support the achievement of equilibrium states. Thus, where natural systems require predictive models of motion because of the large delay of feedback signals, robot arms can perform the same kind of motions better by using fast feedback loops without resorting to prediction [7, 8]. Given these differences, we cannot expect that all information processing mechanisms optimized for the perceptual apparatus, the brain, and the limbs of humans or non-human primates will apply, without modification, to the control of CTSs.
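The effect of feedback delay can be illustrated with a toy simulation. The plant, gain, and delays below are made up, but they show why a controller that is perfectly stable with millisecond feedback fails once feedback arrives 100 ms late, so that prediction becomes necessary.

```python
from collections import deque

def simulate(delay_s, gain=20.0, dt=0.001, t_end=2.0):
    """Proportional control of the toy plant dx/dt = u, where the
    controller only sees the state delay_s seconds in the past."""
    n_delay = max(1, int(round(delay_s / dt)))
    x = 1.0                                      # initial error
    buf = deque([x] * n_delay, maxlen=n_delay)   # delayed measurements
    for _ in range(int(t_end / dt)):
        u = -gain * buf[0]                       # act on stale feedback
        x += u * dt                              # Euler step of the plant
        buf.append(x)
    return x

print(f"  1 ms feedback delay: final error {simulate(0.001):+.4f}")
print(f"100 ms feedback delay: final error {simulate(0.100):+.3e}")
# The fast loop regulates the error to ~0; with 100 ms delay the very
# same controller oscillates and diverges.
```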

2 The Cognitive Perception/Action Loop

COTESYS investigates cognition for technical systems in terms of the cognition-based perception-action closed loop. Figure 2 depicts the system architecture of a cognitive system with multi-sensor perception of the environment, cognition (learning, knowledge, action planning), and action in the environment by actuators. All research within COTESYS is dedicated to real-time performance of this control loop in the real world. On the higher cognitive level, the crucial components comprise environment models, learning, and knowledge management, all in real time and tightly connected to physical action. The mid- and long-term research goals in COTESYS are to significantly increase the functional sophistication for robust and rich performance of the perception-action loop.

[Figure 2 block diagram: sensors feed perception of the environment (production process, human); perception feeds learning & reasoning and knowledge & models, which feed planning & cognitive control; the actuators' actions close the loop through the environment.]

Figure 2: The cognitive system architecture: The perception-action closed loop.
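Purely as a schematic illustration of Figure 2 (not of any actual CoTeSys interface), the loop can be written as a control skeleton in which each stub stands in for a whole research area:

```python
class CognitiveSystem:
    """Schematic perception-action loop; every method is a placeholder."""

    def __init__(self, sensors, actuators):
        self.sensors = sensors        # multi-sensor perception
        self.actuators = actuators    # physical action
        self.models = {}              # knowledge & models

    def perceive(self):
        """Fuse raw sensor readings into task-relevant state estimates."""
        return {name: read() for name, read in self.sensors.items()}

    def learn(self, percept):
        """Update knowledge and models from the latest experience."""
        self.models.setdefault("history", []).append(percept)

    def plan(self, percept):
        """Planning & cognitive control: choose the next action."""
        return "act" if percept else "idle"

    def step(self):
        percept = self.perceive()     # environment -> perception
        self.learn(percept)           # perception  -> learning/knowledge
        command = self.plan(percept)  # knowledge   -> planning/control
        for actuate in self.actuators:
            actuate(command)          # control -> actuators -> environment

# Toy run with stub sensors and actuators:
system = CognitiveSystem({"laser": lambda: 0.42}, [lambda c: print("cmd:", c)])
system.step()   # prints: cmd: act
```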

The mapping of the technical system operation onto the perception-action cycle depicted in Figure 2 might suggest that we functionally decompose cognition into modules where one module performs motor action, another one reasoning, and so on. In order to achieve the needed synergies, the coupling of the different cognitive capabilities must be much more intense and interconnected, as depicted in Figure 3. For example, the system can learn to plan and plan to learn: it can learn to plan more reliably and efficiently, and also plan in order to acquire informative experiences to learn from. Or, perception is integrated into action to perform tasks that require hand-eye coordination. Further, perception often requires action to obtain information that cannot be gathered passively. This COTESYS view on the tight coupling of the individual cognitive capabilities is important because it implies the requirement of close cooperation between COTESYS's different research areas.

[Figure 3 diagram: perception, learning/reasoning, knowledge/models, and planning/action, all mutually interconnected.]

Figure 3: The cognitive system architecture: The interplay of the cognitive capabilities.

COTESYS investigates the perception-action loop within a highly interdisciplinary research endeavor, starting with discipline-specific views of the loop components in order to obtain a common understanding of key concepts, such as perception, (motor) action, knowledge and models, learning, reasoning, and planning.

Perception is the acquisition of information about the environment and the body of an actor. In cognitive science models, part of the information received by the receptors is processed at higher levels in order to produce task-relevant information. This is done by recognizing, classifying, and locating objects, observing relevant events, recognizing the essence of scenes and intentional activities, retrieving context information, and recognizing and assessing situations [9, 10]. In control theory, perception strongly correlates with the concept of observation — the identification of system states that are needed to generate the right control signals. Artificial intelligence, a subfield of computer science, is primarily concerned with perception and action; perception is often framed as a probabilistic estimation problem, and the estimated states are often transformed into symbolic representations that enable the systems to communicate and reason about what they perceive [11].

(Motor) Action is the process of generating behavior to change the world and to achieve some objectives of the acting entity. To produce action, primate brains use a quasi-hierarchy ranging from elementary motor elements at lower cortical levels to complex “action” sequences and plans at higher levels [12, 13]. Natural cognitive systems use internal forward models to predict the consequences of motor signals, both to account for delays in the computation process and to filter out uninformative incoming sensory information [14]. This cognitive science view can be contrasted with control theory, where behavior is specified in terms of control rules. Control rules for feedback control are derived from accurate mathematical dynamical system models. The design of control rules aims at control systems that are controllable, stable, and robust and can thereby provably satisfy given performance requirements. Action theories in artificial intelligence typically abstract from many dynamical aspects of actions and behavior in order to handle more complex tasks [15]. Powerful computational models have been developed to rationally select the best actions (based on decision-theoretic criteria), to learn skills and action selection strategies from experience, and to perform action-aware control [16, 17].
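As a sketch of the forward-model idea (with an invented linear plant, not a model from the literature), the controller below predicts the sensory consequence of its own motor command and attends only to the unpredicted part of the feedback:

```python
import numpy as np

a, b, dt = -2.0, 1.0, 0.01   # assumed (normally learned) plant x' = a*x + b*u

def forward_model(x, u):
    """Predict the next state from the current state and motor command."""
    return x + (a * x + b * u) * dt

rng = np.random.default_rng(0)
x_true = x_pred = 0.0
for k in range(100):
    u = 1.0                                  # motor command
    x_pred = forward_model(x_pred, u)        # expected sensory consequence
    x_true += (a * x_true + b * u) * dt      # actual plant evolution
    z = x_true + rng.normal(0.0, 0.02)       # noisy sensory feedback
    innovation = z - x_pred                  # only the unpredicted part
    # Small innovations are self-generated effects and can be filtered out;
    # a large innovation would signal an external disturbance worth attention.

print(f"prediction error after {100 * dt:.1f} s: {abs(x_true - x_pred):.2e}")
```

With an accurate model, the innovation carries only noise and disturbances, which is exactly what makes the prediction useful for filtering.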

Knowledge (Models) in cognitive science is conceived to consist of both declarative and procedural knowledge [18, 19]. Declarative knowledge is recognizing and understanding factual information known about objects, ideas, and events in the environment. It also contains the interrelationships between objects, events, and entities in the environment. Procedural knowledge is information regarding how to execute a sequence of operations. In cognitive science, various models have been proposed as part of computational models of motor control and learning to explain human and primate behavior in empirical studies. Most prominent are the forward and backward models of actions for the prediction of the actions' effects and sensory consequences and for the optimization of skills [20–22]. Graphical models have been proposed to explain the acquisition of causal knowledge in young children [23]. In control systems, various mathematical models, such as differential equations or automata that capture the evolution of dynamical systems, are used. Research in artificial intelligence has produced powerful representations for joint probability distributions and symbolic knowledge representation mechanisms. It has developed the mechanisms to endow CTSs with encyclopedic and common sense knowledge.

Learning is the process of acquiring information and, respectively, the reorganization of information that results in new knowledge [24]. In the cognitive science view, the learned knowledge can relate to skills, attitudes, and values and can be acquired through study, experience, or being taught. Learning causes a change of behavior that is persistent and measurable; it is a process that depends on experience and leads to long-term changes in behavior. In control theory, adaptive control investigates control algorithms in which one or more of the parameters vary in real time, to allow the controller to remain effective under varying process conditions. Another key learning mechanism is the identification of parameters in mathematical models. In artificial intelligence, a large variety of information processing methods for learning have been developed [25]. These mechanisms include classification learners, such as decision tree learners or support vector machines; function approximators, such as artificial neural networks; sequence learning algorithms; and reinforcement learners that determine optimal action selection strategies for uncertain situations. The learning algorithms are complemented by more general approaches such as data mining and integrated learning systems (see the research programmes of the DARPA IPTO office, http://www.darpa.mil/ipto/).
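As one concrete instance of the "identification of parameters in mathematical models" mentioned above, here is a minimal recursive least-squares estimator identifying an invented first-order plant online; the true coefficients and noise level are made up for the example.

```python
import numpy as np

# Unknown plant to be identified online: x[k+1] = a*x[k] + b*u[k] + noise
a_true, b_true = 0.9, 0.5

theta = np.zeros(2)          # current estimate of [a, b]
P = np.eye(2) * 100.0        # estimate covariance (large = very uncertain)

rng = np.random.default_rng(1)
x = 0.0
for k in range(200):
    u = rng.uniform(-1.0, 1.0)                 # persistently exciting input
    x_next = a_true * x + b_true * u + rng.normal(0.0, 0.01)

    phi = np.array([x, u])                     # regressor vector
    K = P @ phi / (1.0 + phi @ P @ phi)        # RLS gain
    theta += K * (x_next - phi @ theta)        # correct with prediction error
    P -= np.outer(K, phi) @ P                  # shrink covariance

    x = x_next

print(f"estimated a={theta[0]:.3f}, b={theta[1]:.3f} (true: 0.9, 0.5)")
```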

Reasoning is a cognitive process by which an individual or system may infer a conclusion from an assortment of evidence, or from statements of principles [26]. In the cognitive sciences, reasoning processes are typically studied in the context of complex problem solving tasks, such as solving student problems, using protocol analysis methods (“think aloud”) [27]. In the engineering sciences, specific reasoning mechanisms for prediction tasks, such as Bayesian filtering, are employed and studied [28]. Other reasoning tasks are solved in the system design phase by the system engineers, where control rules are proven to be stable. The resulting systems have no need for execution-time reasoning, because of their guaranteed behavior envelope. Artificial intelligence has developed a variety of reasoning mechanisms, including causal, temporal, spatial, and teleological reasoning, which enable CTSs to solve dynamically changing, interfering, and more complex tasks.

Planning is the process of generating (possibly partial) representations of future behavior, prior to the use of such plans, to constrain or control current behavior. It comprises reasoning about the future in order to generate, revise, or optimize the intended course of action. In the artificial intelligence view, plans are considered to be control programs that can be executed, reasoned about, and manipulated [29].
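To make this view of plans as executable, inspectable objects concrete, the following toy forward-search planner (in the STRIPS spirit; the actions and facts are invented, not CoTeSys representations) returns a plan as a plain list that could itself be reasoned about or revised:

```python
from collections import deque

# Toy STRIPS-style actions: (name, preconditions, add-effects, delete-effects)
ACTIONS = [
    ("pick(part)",  {"at_station", "hand_empty"}, {"holding"}, {"hand_empty"}),
    ("place(part)", {"holding"}, {"assembled", "hand_empty"}, {"holding"}),
    ("goto(station)", set(), {"at_station"}, set()),
]

def plan(initial, goal):
    """Breadth-first search for an action sequence achieving the goal."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                       # all goal facts hold
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:                    # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"hand_empty"}, {"assembled"}))
# -> ['goto(station)', 'pick(part)', 'place(part)']
```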

3 The Integrated System Approach to CTSs

The demonstrators are of key importance for the COTESYS cluster. Demonstrators and demonstration scenarios are designed to challenge fundamental as well as applied research in the individual areas. They define the milestones for the integration of cognition into technical systems.

The COTESYS researchers integrate the developed computational mechanisms into complete control systems and embed them within the demonstrators. The research areas specify the kinds of experiments they intend to perform in the context of the demonstrators. They also specify metrics to evaluate the progress. Thus, the demonstrators become cross-area research drivers that compel researchers to collaborate and produce software components that successfully function in integrated cognitive systems. The demonstrators also transfer basic research efforts into applied ones, thereby promoting cooperation with industry.

The focus on demonstrators and integrated system research is also important as a research paradigm [30, 31]. The cognitive capabilities of CTSs enable them to reason about the use of their information processing mechanisms: they can check results, debug them, and apply better suited mechanisms if default methods fail. Therefore, their information processing mechanisms do not need to be hard-coded completely. They should still be correct and complete, but through dynamic adaptation rather than static coding. This is important because in all but the simplest cases completeness and correctness come at the cost of those problems becoming unsolvable, or computationally intractable at best. For example, computing a scene description from a given camera image is an ill-structured problem, checking the validity of statements in a given logical theory is undecidable, and computing a plan for achieving a set of goals is intractable for all but the most trivial action representations [32].

We will explain the interaction between demonstrator research and the other research areas using the cognitive factory as an example. The same kinds of interactions between demonstration scenarios and the other research areas will be realized in the cognitive vehicle and the cognitive humanoid robot demonstration scenarios.

The Cognitive Factory – as an Example for the Interaction between the Demonstrators and the other Research Areas. The steadily increasing demand for mass customization, decreasing product life cycles, and global competition require production systems with an unprecedented level of flexibility, reliability, and efficiency. Equipping production systems with cognitive capabilities is essential to satisfy these requirements, which must be addressed to strengthen high-end production in developed economies.

Figure 4: The cognitive machine shop demonstration scenario.

COTESYS will investigate a real-world production scenario as its primary demonstration target for cognitive technologies in factory automation. An example production chain includes an industrial robot, autonomously cooperating robots, fixtures, and conveyors to handle and process the parts. In addition, it contains an assembly station where human workers and robotic manipulators jointly perform complex and dynamically changing tasks of assembling the parts.

The demonstrator challenges the cognitive capabilities of technical systems in important ways by posing two key research questions:

1. How do performance, flexibility, reliability, and self-adaptability of flexible manufacturing systems further improve if augmented by cognitive capabilities?

2. For which types of production techniques (mass production, rapid prototyping, individualized production) does cognitive control have the highest potential impact, and how can this impact be achieved?

Early experiments suggest that the most promising production technologies for the application of cognitive technologies are rapid prototyping and individualized production. In these production contexts, cognitive technologies allow for the automatic and flexible interleaving of multiple and heterogeneous production processes that can be performed simultaneously. State-of-the-art production plans are replaced by plans that are very similar to those controlling autonomous mobile robots [33, 34]: they specify percept-driven, concurrent, and context-specific behavior, including failure monitoring and in particular recovery, instead of the more constrained specification of production without such functions.
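The flavor of such percept-driven plans with failure monitoring and recovery can be suggested by a schematic fragment; the step names, success probability, and simple retry policy are invented for illustration:

```python
from dataclasses import dataclass
import random

@dataclass
class Step:
    """One production step; execute() only succeeds probabilistically."""
    name: str
    effects: set

    def execute(self, world):
        if random.random() < 0.8:        # 80% nominal success rate (made up)
            world |= self.effects        # effects become true in the world

def run_plan(steps, world, max_retries=2):
    """Percept-driven execution: check each step's intended effect in the
    perceived world state, retry on failure, and escalate to replanning."""
    for step in steps:
        if step.effects <= world:        # effect already holds: skip step
            continue
        for _ in range(1 + max_retries):
            step.execute(world)
            if step.effects <= world:    # monitoring: did the effect occur?
                break                    # success, continue with next step
        else:
            raise RuntimeError(f"{step.name} failed; replanning required")
    return world

random.seed(4)
plan = [Step("mount_fixture", {"fixture_mounted"}),
        Step("drill_holes", {"holes_drilled"}),
        Step("insert_bolts", {"bolts_inserted"})]
print(run_plan(plan, set()))
```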

We also expect that cognitive technology will enable the realization of a new generation of machine shops that consist of very general machines that reconfigure themselves according to the needs of production tasks. The range of reconfiguration mechanisms includes the autonomous reconfiguration of part feeders, the rearrangement of part feeders and local storage units, and the automatic use of different end effectors by robot arms. The reconfigurability will enable machine shops to be much more general and flexible. However, to achieve this flexibility and generality, the machines must control themselves using comprehensive perceptual feedback, they have to calibrate and teach themselves, and they have to reason about whether it is more appropriate to learn an efficient and tailored production routine or to use a more general and inefficient routine that does not require resources for the learning step. To support the reconfigurability, the machines learn capability models of themselves and configuration-specific performance models for production steps.

Another aspect where cognitive technologies improve the performance of existing automation technology is robustness. Current production control systems assume that the evolution of the production process is only caused by the machines of the manufacturing system and that there are very limited ways in which a production process can fail. If we cannot make these assumptions, for example because human workers act simultaneously in a shared environment, then pallets can be removed from or added to the stock by people, the order of pallets on the conveyor belt can be changed, and work pieces can be modified by other agents. To still work reliably under such circumstances, the production system is required to estimate the complete state of the environment and the production processes instead of just recognizing specific predefined triggering events, such as a pallet arriving at a particular manufacturing station. Another, equally challenging consequence is that production plans have to be written such that they specify the production process for a large range of situations and such that the plans can be automatically revised in order to deal with previously unanticipated situations. Again, this capability is realized by transferring successful ideas from the plan-based control of robotic agents into the domain of automatic factory control [35].

A third aspect in which our concept of the cognitive factory goes well beyond that of flexible and intelligent manufacturing is that the cognitive factory is also equipped with an autonomous/cognitive (at a later stage mobile) robotic assistant (see Figure 5). This robot is equipped with two industrial-strength manipulators, color stereo vision, and laser range sensors that can be positioned with the robot's effectors. The tasks of this factory assistant include

• removing and rearranging pallets on the conveyor belt in order to resolve deadlocks between concurrent production processes,

• assisting the assembly robot with manipulation tasks that the assembly robot cannot do by itself (such as turning a large work piece), and

• serving as a mobile sensing platform that can be used to support state estimation and as a mobile inspection device.

In the near future it is planned to deploy a larger number of mobile robots with cognitive functions in the factory, also able to cooperate closely with human workers.

Another cognitive aspect of this demonstrator is that it uses sensor networks in order to be aware of the operations in individual machines, robots, and transportation mechanisms. Using sophisticated data processing capabilities and integrated data mining and learning mechanisms, the machines learn to predict the quality of the outcome based on properties of the work piece and their parameterization. They form situation-specific action models and use them to optimize production chain processing.

Another station in the cognitive factory mounts parts onto the car body. The weight of the parts and the complexity of the step require joint human-robot action. Heavy parts and tools will be handled by industrial robots, and mobile platforms will provide parts on the fly, such that human workers will be relieved from repetitive and strenuous operations and can focus on tasks that require high-level reasoning. The robot learns informative predictive models of the workers' actions by observing them. The predictive models are then used for synchronizing the joint actions. To adapt to their co-workers, cognitive mechanisms will enable the machines to explain their behavior, for example why they have performed two production steps in a certain order. The machines are equipped with plan management mechanisms that allow them to automatically transform abstract advice into modifications of their own control programs.

Figure 5: B21 robot assistant in the simulated cognitive factory.

Unlike other projects engineering the factory of the future, such as “Intelligent Manufacturing Systems” (www.ims.org), where innovative strategies for improving the entire manufacturing process are investigated mostly on the basis of existing production technologies, COTESYS focuses on the aspect of human-machine collaboration. It is believed that innovative joint manipulation, together with a deeper understanding of goals by the machine, will revolutionize the production process. Apart from making traditional assembly tasks more efficient and less monotonous, the production line gains a higher degree of flexibility and becomes capable of adapting to the individual needs of a specific part. This extends the production range from customized mass products to highly efficient factory-driven prototype production.

4 Research Areas

Research on neurobiological and neurocognitive foundations of cognition — Basic research investigates the neurobiological and neurocognitive foundations of cognition in technical systems by empirically studying cognitive capabilities of humans and animals at the behavioral and brain level. Researchers will investigate, in human subjects, the cognitive control of multi-sensory perception-action couplings in dynamic, rapidly changing environments, following an integrative approach that combines behavioral and cognitive-neuroscience methodologies.

The research task is to establish experimentally how these control functions are performed in the brain, in order to provide (1) neurocognitive “models” of how these functions may be implemented in technical systems and (2) guidelines for the effective design of man-machine interfaces considering human factors. One of the key results for the research areas studying cognitive mechanisms will be a comprehensive model of cognitive control combining mathematical and neural-network models with models of symbolic, production-systems-type information processing. In contrast to existing models, which are limited to static, uni-modal (visual) environments and simple motor actions, the COTESYS model will cover cognitive control in dynamic, rapidly changing environments with multi-modal event spaces.

Research on perceptual mechanisms designs, implements, and empirically analyzes perceptual mechanisms for cognition in technical systems. It integrates, embeds, and specializes the mechanisms for their application in the demonstration scenarios. The challenge for the area is to develop fast, robust, and versatile perception systems that allow the COTESYS demonstrators to operate in unconstrained real-world environments, and to endow cognitive technical systems with perception systems that acquire, maintain, and deliver task-relevant information through multiple sensory modes rather than vast sensor data streams. Besides lower-level perceptual tasks, the COTESYS perception modules will be capable of recognizing, classifying, and locating a large number of objects, of conceiving and assessing situations, contexts, and intentions, and of interpreting intentional activities based on perceptual information. Perceptual mechanisms at this performance level must themselves be cognitive: they have to filter out irrelevant data and focus attention based on an understanding of the context and the tasks they are to execute. The perceptual capabilities investigated are not limited to the core perceptual capabilities. They also include post-processing reasoning, such as the acquisition of environment models and diagnostic reasoning mechanisms that enable CTSs to automatically adapt to new environments and to debug and repair themselves.

Research on Knowledge and Learning — The ultimate goal of the COTESYS cluster is the realization of technical systems that know what they are doing, can assess how well they are doing, and improve themselves based on this knowledge. To this end, research on knowledge and learning will design and develop a computational model for knowledge processing and learning, especially designed to be implemented on computing platforms that are embedded into sensor-equipped technical systems acting in physical environments. This model, implemented as a knowledge processing and learning infrastructure, will enable technical systems to learn new skills and activities from potentially very little experience, in order to optimize and adapt their operations, to explain their activities and accept advice in joint human-robot action, to learn meta-knowledge of their own capabilities and behavior, and to respond to new situations in a robust way.

The research topics that define the COTESYS approach to knowledge and learning in CTS include the following. Firstly, the development of a probabilistic framework as a means for combining first-order representations with probability; this framework provides a common foundation for integrating perception, learning, reasoning, and action while accommodating uncertainty. Secondly, a model of “Action Meta-Knowledge” is developed, which considers actions as information processing units that automatically learn and maintain various models of themselves, along with the behavior they generate; these models are used for behavior tuning, skill learning, failure recovery, self-explanation, and diagnosis. Thirdly, a comprehensive repertoire of sequence learning methods is developed, partly based on theories of optimal learning algorithms. Finally, an embedded integrated learning architecture is developed, employing multiple and diverse learning mechanisms capable of generalizing from very little experience.

Research on action selection and planning addresses the action production aspects of cognition in technical systems. These aspects include the realization of motion and manipulation skills, the context-specific selection of appropriate actions, commitment to courses of activity based on foresight, and specific action capabilities enabling competent joint human-robot action.

To generate high-performance and safe action, planning and control for locomotion, manipulation, and full-body motion are integrated. The planning and control system should be capable of working with minimal, non-technical, and qualitative descriptions of tasks. High performance and safe operation will enable close cooperation with humans. Another focus is to enable cognitive robots to accomplish complex tasks in changing and partly unknown environments: to manage several tasks simultaneously, to resolve conflicts between interfering tasks, and to act appropriately in unexpected and novel situations. They even have to reconsider their course of action in the light of new information. Hence, the long-term vision is to develop action control methods and a design methodology to be embedded into self-organizing cognitive architectures.

Research on human factors studies cognitive vehicles, robots, and factories from a human factors and cognitive-psychological point of view.

©Prof. Ulbrich, TUM ©Prof. Hirzinger, DLR ©Prof. Beetz, TUM

Figure 6: Demonstrator platforms used in the planned scenarios for cognitive humanoid robots. At the left are two humanoid robots (Johnnie and Lola) to be used for walking and full-body motion research. Next is the upper body Justin, used for investigating highly dexterous manipulation capabilities; its hand serving a coffee set is shown next to the right. On the right is a mobile robot with industrial-strength arms that serves as the initial platform for the AssistiveKitchen scenario.

Particular emphasis is placed on the interpretation of the environment and the communication with humans, enabling human-machine collaboration in unstructured environments. The state of the art in all aspects of human-machine communication will be advanced in order to equip cognitive systems with highly sophisticated communication capabilities. To achieve these goals, neurobiology and technology are to inspire each other and thereby develop the following aspects of cognitive technical systems: advanced input/output technology, such as speech, gesture, motion, and gaze recognition, is created to construct intuitive user interfaces and dialogue techniques, as well as sophisticated methods to evaluate the multi-modal interaction of humans and systems. The highest and most complex level involves emotion, action, and intention recognition, with which cognitive systems become more human-like. To pursue these goals, novel computational user models of cognitive architectures and appropriate experimental evaluation methods are investigated.

Activities similar to those mentioned above are pursued in a number of national and international projects, such as SFB588 “Humanoid Robots” in Karlsruhe⁵ and the EU projects “Cogniron”⁶ and “RobotCub”⁷. In contrast to these initiatives, COTESYS spans its consortium over a vast variety of research disciplines, supplementing traditional disciplines like electrical and mechanical engineering and computer science with partners from medicine, psychology, sports sciences, and others. This broad interdisciplinary spectrum fosters the exchange of ideas and concepts across traditional scientific borders. This gives a unique opportunity to sketch an overall picture of cognition in natural and technical systems and to extract the best findings of both in order to merge them into a more concise system.

⁵www.sfb588.uni-karlsruhe.de
⁶www.cogniron.org
⁷www.RobotCub.org

5 Demonstrators and Scenarios

The COTESYS demonstrators provide the other areas with demonstration platforms and challenges in the form of demonstration scenarios. The research results from the other research areas will be integrated, specialized, embodied, and validated in three scenarios:

1. Cognitive mobile vehicles: aerial vehicles for exploration and mapping, terrestrial offroad vehicles, and collaborative rescue missions for autonomous aerial-terrestrial vehicle teams.

2. Cognitive humanoid robots: the two-legged humanoid robots JOHNNIE and LOLA are equipped with lightweight arms and multi-fingered hands from DLR. They constitute the main platforms, and their control systems are extended to perform full-body motion. The demonstration scenarios will feature complex everyday activity, complex full-body motion, and sophisticated manipulation of objects.

3. Cognitive factory: a production line for individualized manufacturing of car bodies is considered. Cognitive aspects include skill acquisition, process planning, self-adaptation, and self-modelling. The production line includes autonomous mobile robots with manipulators to achieve the necessary flexibility of machine usage.

©Prof. Wünsche, UBM ©Prof. Hirzinger, DLR

Figure 7: Two autonomous vehicles serving as demonstrators in the COTESYS cluster: MuCAR-3 and the DLR blimp.

The AssistiveKitchen with a cognitive robotic assistant. One of the demonstration scenarios for the humanoid robot demonstrators is the ASSISTIVEKITCHEN [36] with a robotic assistant, where the sensor-equipped kitchen is to observe the actions of the people in the kitchen, to provide assistance for their activities, and to monitor the safety of the people. In addition, an autonomous mobile robot with two manipulators is to acquire skills in performing complex kitchen activities, such as setting the table and cleaning up, through a combination of imitation- and experience-based learning. The scenario is set up in a sensor-equipped laboratory, which is shown in Figure 8. The sensor-equipped kitchen environment consists of RFID tag readers placed in the cupboards for sensing the identities of the objects placed there. The cupboards also have contact sensors that sense whether the cupboard is open or closed. A variety of wireless sensor nodes equipped with accelerometers and/or ball motion sensors are placed on objects or other items in the environment. Several small, non-intrusive laser range sensors track the motions of the people acting there.

Figure 8: Overview of the ASSISTIVEKITCHEN.

The sensor network will be made cognitive by distributing cognitive mechanisms through the network, thereby obtaining devices that can estimate hidden states, recognize local activities, abstract them, learn models of them, and store abstract information and knowledge locally. Using the distributed recognition, learning, and knowledge processing capabilities, the environment can adapt itself locally and avoid flooding the whole system with irrelevant data.
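A sketch of what such a node might look like follows; the thresholding "activity recognition" and the event format are, of course, drastically simplified inventions for illustration.

```python
class CognitiveSensorNode:
    """Sensor node that abstracts locally and reports only events,
    instead of streaming raw data to the rest of the network."""

    def __init__(self, name, threshold=0.5):
        self.name = name
        self.threshold = threshold       # invented 'activity' criterion
        self.local_log = []              # abstract knowledge kept on the node

    def process(self, raw_sample):
        """Estimate a hidden state locally; emit an event or nothing."""
        if raw_sample > self.threshold:          # trivial local recognition
            event = (self.name, "activity-detected")
            self.local_log.append(event)         # store abstraction locally
            return event                         # only this leaves the node
        return None                              # irrelevant data stays put

node = CognitiveSensorNode("cupboard-accelerometer")
events = [e for e in map(node.process, [0.1, 0.2, 0.9, 0.3, 0.7]) if e]
print(events)   # two abstract events instead of five raw samples
```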

6 Conclusions

The excellence cluster COTESYS unites a large number of researchers from a variety of different disciplines in order to understand cognitive mechanisms in humans and animals and to transfer those findings to technical systems. This composition of the research consortium, unique worldwide, fosters intensive interdisciplinary exchange and transfers ideas and concepts between traditional research disciplines. The goal of COTESYS is to build technical systems that perceive their environment, reflect upon it, and act accordingly. Such abilities are crucial for technical systems to act in human environments. For efficient collaboration, systems must adapt to the human and react to human actions. Reacting to non-deterministic systems like human beings requires highly robust and flexible cognitive architectures. These abilities are considered the key prerequisite for employing robots in human environments and thus for welfare systems.

References
[1] H. Moravec, “When will computer hardware match the human brain?,” Journal of Evolution and Technology, vol. 1, 1998.
[2] R. Sarpeshkar, “Brain power - borrowing from biology makes for low power computing,” IEEE Spectrum, vol. 43, no. 5, pp. 24–29, 2006.
[3] N. Shadbolt, “Brain power,” IEEE Intelligent Systems and Their Applications, vol. 18, no. 3, pp. 2–3, 2003.
[4] K. P. Körding and D. M. Wolpert, “Bayesian decision theory in sensorimotor control,” Trends in Cognitive Sciences, vol. 10, no. 7, pp. 319–326, 2006.
[5] W. E. Skaggs, B. L. McNaughton, and K. M. Gothard, “An information-theoretic approach to deciphering the hippocampal code,” in Advances in Neural Information Processing Systems 5 (NIPS), San Francisco, CA, USA, pp. 1030–1037, Morgan Kaufmann Publishers Inc., 1993.
[6] R. Shadmehr and S. P. Wise, The Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning. Cambridge, Mass.: Bradford Book, 2005.
[7] W. Barfield, C. Hendrix, O. Bjorneseth, K. Kaczmarek, and W. Lotens, “Comparison of human sensory capabilities with technical specifications of virtual environment equipment,” Presence, vol. 4, no. 4, pp. 329–356, 1995.
[8] F. K. B. Freyberger, M. Kuschel, R. L. Klatzky, B. Färber, and M. Buss, “Visual-haptic perception of compliance: Direct matching of visual and haptic information,” in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE), Ottawa, Canada, 2007.
[9] R. Morris, L. Tarassenko, and M. Kenward, Cognitive Systems: Information Processing Meets Brain. San Diego, California: Elsevier Academic Press, 2005.
[10] P. Auer, A. Billard, H. Bischof, I. Bloch, P. Boettcher, H. Bülthoff, H. Christensen, T. Cohn, P. Courtney, A. Crookell, J. Crowley, S. Dickinson, and J.-O. Eklundh, “A research roadmap of cognitive vision,” Tech. Rep. V5.0 23-8-05, The European Research Network for Cognitive Computer Vision Systems, 2005.
[11] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. Cambridge: MIT Press, 2005.
[12] M. I. Jordan and D. M. Wolpert, Computational Motor Control. Cambridge: MIT Press, 1999.
[13] M. A. Arbib, E. J. Conklin, and J. C. Hill, From Schema Theory to Language. Oxford University Press, 1987.
[14] D. M. Wolpert and M. Kawato, “Multiple paired forward and inverse models for motor control,” Neural Networks, vol. 11, no. 7/8, pp. 1317–1329, 1998.
[15] T. Dean and M. Wellmann, Planning and Control. San Mateo, CA: Morgan Kaufmann Publishers, 1991.
[16] R. Sutton and A. Barto, Reinforcement Learning: An Introduction. MIT Press, 1998.
[17] S. Russell and E. Wefald, Do the Right Thing: Studies in Limited Rationality. Cambridge, MA: MIT Press, 1991.
[18] D. Willingham, M. Nissen, and P. Bullemer, “On the development of procedural knowledge,” Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 15, no. 6, pp. 1047–1060, 1989.
[19] G. Dobbie and R. Topor, “On the declarative and procedural semantics of deductive object-oriented systems,” Journal of Intelligent Information Systems, vol. 4, no. 2, pp. 193–219, 1995.
[20] M. Kawato, “Internal models for motor control and trajectory planning,” Current Opinion in Neurobiology, vol. 9, no. 6, pp. 718–727, 1999.
[21] R. Miall and D. Wolpert, “Forward models for physiological motor control,” Neural Networks, vol. 9, no. 8, pp. 1265–1279, 1996.
[22] D. M. Wolpert, K. Doya, and M. Kawato, “A unifying computational framework for motor control and social interaction,” Philosophical Transactions of the Royal Society, vol. 358, no. 1431, pp. 593–602, 2003.
[23] D. M. Sobel, J. B. Tenenbaum, and A. Gopnik, “Children's causal inferences from indirect evidence: Backwards blocking and Bayesian reasoning in preschoolers,” Cognitive Science: A Multidisciplinary Journal, vol. 28, no. 3, pp. 303–333, 2004.
[24] T. Mitchell, “The discipline of machine learning,” Tech. Rep. CMU-ML-06-108, Carnegie Mellon University, 2006.
[25] M. Beetz, M. Buss, and D. Wollherr, “Cognitive technical systems - what is the role of artificial intelligence?,” in Proceedings of the 30th German Conference on Artificial Intelligence (KI-2007), 2007.
[26] J. Pearl, Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[27] A. Newell and H. A. Simon, Human Problem Solving. Upper Saddle River, New Jersey: Prentice Hall, 1972.
[28] S. Thrun, D. Fox, and W. Burgard, “Probabilistic methods for state estimation in robotics,” in Proceedings of the Workshop SOAVE'97, pp. 195–202, VDI-Verlag, 1997.
[29] M. Beetz, “A roadmap for research in robot planning,” tech. rep., PLANET-II Technical Coordination Unit on Robot Planning, 2003.
[30] S. Thrun, M. Beetz, M. Bennewitz, A. Cremers, F. Dellaert, D. Fox, D. Hähnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz, “Probabilistic algorithms and the interactive museum tour-guide robot Minerva,” International Journal of Robotics Research, 2000.
[31] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, K. Lau, C. Oakley, M. Palatucci, V. Pratt, P. Stang, S. Strohband, C. Dupont, L.-E. Jendrossek, C. Koelen, C. Markey, C. Rummel, J. van Niekerk, E. Jensen, P. Alessandrini, G. Bradski, B. Davies, S. Ettinger, A. Kaehler, A. Nefian, and P. Mahoney, “Stanley, the robot that won the DARPA Grand Challenge,” Journal of Field Robotics, 2006.
[32] M. Bertero, T. Poggio, and V. Torre, “Ill-posed problems in early vision,” Tech. Rep. AIM-924, Massachusetts Institute of Technology, 1987.
[33] M. Beetz, T. Arbuckle, M. Bennewitz, W. Burgard, A. Cremers, D. Fox, H. Grosskreutz, D. Hähnel, and D. Schulz, “Integrated plan-based control of autonomous service robots in human environments,” IEEE Intelligent Systems, vol. 16, no. 5, pp. 56–65, 2001.
[34] M. Beetz, “Structured reactive controllers,” Journal of Autonomous Agents and Multi-Agent Systems, vol. 4, pp. 25–55, March/June 2001.
[35] M. Beetz, “Plan representation for robotic agents,” in Proceedings of the Sixth International Conference on AI Planning and Scheduling, Menlo Park, CA, pp. 223–232, AAAI Press, 2002.
[36] M. Beetz, J. Bandouch, A. Kirsch, A. Maldonado, A. Müller, and R. B. Rusu, “The assistive kitchen - a demonstration scenario for cognitive technical systems,” in Proceedings of the 4th COE Workshop on Human Adaptive Mechatronics (HAM), 2007.