Dipartimento di Scienze Fisiche (DSF), Università di Napoli Federico II
Dipartimento di Scienze Fisiche (DSF)
Università di Napoli Federico II
Guglielmo Tamburrini
Key research personnel

Bruno Siciliano is professor of Control and Robotics, and Director of the PRISMA Lab at the University of Naples Federico II. President-Elect of the IEEE RAS.

Ernesto Burattini is professor of Computer Science at the University of Naples Federico II. His research interests include Knowledge-based Systems, Multimedia Intelligent Systems, Neuro-Symbolic Integration, and Intelligent Control for Robots.

Giuseppe Trautteur is professor of Computer Science at the University of Naples Federico II. His research interests concern cognitive science and consciousness studies.

Luigi Villani is associate professor of Control and Robotics at the University of Naples Federico II.
The PRISMA Lab

Research areas:
- redundant and cooperative manipulators
- impedance and force control
- visual tracking and servoing
- lightweight flexible arms
- space robots
- human-robot interaction
- service robotics

People: Bruno Siciliano, Luigi Villani, Vincenzo Lippiello, Agostino De Santis
PRISMA for ETHICBOTS

Robots are leaving factories to meet humans in their everyday domains: the optimality criteria change! Design for physical human-robot interaction (pHRI) must take into account new keywords:
- safety
- dependability

Cognitive science and ethics help to set the priorities: robots for anthropic domains must guarantee a different concept of safety and dependability with respect to industrial robots in structured environments, in order to avoid harm to humans.
Ethics and technical issues

Purely cognitive aspects (considering robots as living creatures) vs. more technical aspects. There are important ethical issues in the design of slave robots for physical interaction:
- What is the maximum acceptable level of autonomy for a robot interacting with humans?
- control architecture and unpredictable behaviours
- responsibility of the designer in minimizing the risks of robots interacting with humans:
  - heavy moving parts
  - sensory data reliability
  - unpredictable behaviours
Other aspects

Improvements in safety and dependability can be more or less visible (soft covering/passive compliance vs. active impedance/force control):
- passive-safety features help users perceive reliability;
- active control is not clearly visible: control is hidden (people must be informed about the presence of ABS and ESP in cars), but it adds dependability!
Adaptive hypermedia: user interface
- annotation: "the page has been visited"
- hidden links
- trash for hiding or restoring links
- button selecting more or fewer details
- button for the guided tour
Intelligent control for reactive systems
Philosophy of AI and Robotics
- Machines and the scientific explanation of adaptive behaviours
- Machine learning and the epistemological problem of induction
- Historical landmarks (Norbert Wiener, Alan Turing, etc.)
Machines in scientific explanation

What is the role, if any, of machines in providing responsibly supported explanations of adaptive and intelligent behaviours?
Scientific rationality as opposed to Turing-test approaches
Tamburrini, G., Datteri, E. (2005), Machine experiments and theoretical modelling: From Cybernetics to Neuro-robotics, Minds and Machines.
Methodology of bio-robotics (and bionics)
Machine Learning and Induction

Learning from examples. The inductive hypothesis (Hume's problem): any learning hypothesis found to approximate the target function well over a sufficiently large set of training examples will also approximate the target function well over unobserved examples. Is this inductive hypothesis justified?
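The inductive hypothesis above can be made concrete with a minimal sketch in plain Python (the target function and the threshold learner are hypothetical choices of ours, not from the original slides): a learner that fits a large sample of training examples also scores well on unseen examples, which is precisely what the inductive hypothesis asserts without proving.

```python
import random

random.seed(0)

def target(x):
    # The "unknown" target function the learner tries to approximate
    # (a hypothetical example for illustration).
    return x > 0.5

def fit_threshold(examples):
    # Learning from examples: pick the threshold that makes the fewest
    # errors on the training set.
    candidates = [i / 100 for i in range(101)]
    return min(candidates, key=lambda t: sum((x > t) != y for x, y in examples))

def accuracy(t, examples):
    return sum((x > t) == y for x, y in examples) / len(examples)

train = [(x, target(x)) for x in (random.random() for _ in range(200))]
test = [(x, target(x)) for x in (random.random() for _ in range(1000))]

t = fit_threshold(train)
print(f"train accuracy: {accuracy(t, train):.2f}")
print(f"test accuracy:  {accuracy(t, test):.2f}")
```

The accuracy on the training examples carries over to the unseen test examples here, but nothing in the code guarantees that carry-over: the carry-over is the inductive hypothesis itself, and Hume's point is that it cannot be justified from the observed examples without circularity.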
Historical Landmarks

Norbert Wiener: reconciling freedom and responsibility
Freedom

N. Wiener, God & Golem: "As long as automata can be made, whether in the metal or merely in principle, the study of their making and their theory is a legitimate phase of human curiosity, and human intelligence is stultified when man sets fixed bounds to his curiosity."
N. Wiener, "Some moral and technical consequences of automation", Science, May 1960: "It is quite in the cards that learning machines will be used to program the pushing of the button in a new push-button war... the programming of such a learning machine would have to be based on some sort of war game... Here, however, if the rules for victory in a war game do not correspond to what we actually wish, such a machine may produce a policy which would win a nominal victory on points at the cost of every interest we have at heart."
Dissenting from AI pioneer Arthur Samuel, Wiener envisaged disastrous consequences of automatic and learning machines operating faster than human agents.