DELIVERABLE D1.3 - A
Name of the Deliverable: SRS System Specification
Contract number : 247772
Project acronym : SRS
Project title : Multi-Role Shadow Robotic System for Independent Living
Deliverable number : D1.3 - A
Nature : R – Report
Dissemination level : PU – PUBLIC
Delivery date : 04-Feb-2011
Author(s) : Renxi Qiu, Alexandre Noyvirt, Marcus Mast, Gernot Kronreif, Dayou
Li, Georg Arbeiter and Nayden Chivarov
Partners contributed : ALL
Contact : Dr Renxi Qiu, MEC, Cardiff School of Engineering, Cardiff University,
Queen’s Buildings, Newport Road, Cardiff CF24 3AA, United Kingdom
Tel: +44(0)29 20875915; Fax: +44(0)29 20874880; Email: [email protected]
SRS
Multi-Role Shadow Robotic System for Independent Living
Small or medium scale focused research project (STREP)
The SRS project was funded by the European Commission under the 7th Framework Programme (FP7) – Challenge 7: Independent living, inclusion and Governance. Coordinator: Cardiff University
Revision History
Version Author Date Change
V1 Renxi Qiu 29 Dec 2010 Initial draft
V2 Renxi Qiu 06 Jan 2011 Revised function list
V3 Alexandre Noyvirt 15 Jan 2011 Formation update
V4 Anthony Soroka 01 Feb 2011 Quality assurance
Final Renxi Qiu 03 Feb 2011 UI Specification
Glossary
COB Care-O-bot® 3
DEC Decision
NAV_MAN Navigation & Manipulation
HPSU Human Presence Sensor Unit
PLA Platform
PER Perception
RO Remote Operator
ROI Remote Operator Intervention
SUP Support
ToF Time-of-Flight
UI User Interface
UI_LOC UI for Local User
UI_PRI UI for Private Remote Operator
UI_PRO UI for Professional Remote Operator
Executive Summary
Aim
The objectives of this specification of the SRS are to:
Provide a system overview of the SRS, including its definition, goals, objectives, context, and major capabilities.
Formally specify the associated:
o Assumptions, constraints and dependencies
o Functional requirements
o Robotic platform selection and SRS user interfaces
o Function specifications
o Non-functional requirements
System context
As became clear from the SRS user study, most daily living tasks require robots to physically alter the world; as a home carer, a service robot therefore needs manipulation capability. However, despite significant advances in the field of control and sensing, limited perception and decision-making capabilities still prevent robots from performing real-world manipulation tasks in unexpected situations.
Assumptions
The SRS does not cover basic robotic component development; it assumes that basic hardware features such as a mobile platform and gripper, and basic intelligence such as navigation and obstacle avoidance, are available on the selected platform. The project R&D will focus on user interaction and remote control to fill the gap between existing personal service robots and the targeted remotely controlled robotic carer.
Operational Scenarios and Function Requirements
Operational scenarios are detailed in three phases, namely pre-deployment, deployment and post-deployment. The normal usage phase is the post-deployment phase, when the robot is used in the home of the elderly person; it is the main focus of SRS demonstration and evaluation. General system use cases and human-machine interaction processes for the scenarios identified in D1.1a are analysed in detail. User requirements are transformed into functional requirements in sections 2.4 and 2.5; detailed interface requirements are presented in section 2.6.
Function specification
Based on the functional requirement breakdown, the function specifications are presented in terms of inputs, outputs and main operations. This is detailed in Section 3, where the system components and their interactions are also specified.
Major system capability and SRS solutions
The basic platform selected in the SRS project is Care-O-bot 3. It is a mobile robot assistant able to move safely among humans, to detect and grasp typical household objects, and to safely exchange them with humans. However, limited by the state of the art, Care-O-bot 3 (and other similar robots) still cannot fully cover the scenarios required by SRS (unexpected situations, unexpected objects, unreliable grasps). Furthermore, the user interface of Care-O-bot 3 is not designed for the targeted SRS user groups and does not achieve the desired usability and acceptance.
SRS will develop three different user interfaces to fulfil the needs of the various user groups:
A local user interface – a wearable device with a simple user interface tailored to elderly users, e.g. big buttons and an easy-to-read display, that allows the user to start a scenario, initiate a call to the remote operator, and terminate a scenario. Another part of the local user interface is a human presence sensor unit (HPSU), which will be either attached to the robot or stand-alone, subject to the deployment configuration, and will allow the remote operator and the SRS system to know the status of the local user, e.g. present or absent, and his/her location and movements. Moreover, the HPSU will be able to interpret a certain number of gestural commands from the local user, in addition to the commands issued on the local interface.
A remote operator interface for non-professional tele-operators, e.g. family members and caregivers – a portable device, e.g. a tablet, that allows most tele-operation functions to be executed and communication with the local user to be carried out. The excluded functions are those associated with manual grasping control.
A remote operator interface for professional tele-operators, e.g. a 24-hour tele-operation service – this interface will allow a professional operator to access all functions for remote operation of the robot. In particular, it enables assisted object detection and grasping.
The control programmes required by the SRS scenarios cover almost every aspect of robotics. The project R&D focuses only on control programmes that are not available on the selected platform and that can take advantage of the remote operator. The targeted control modules are therefore:
High-level action representation, translation and learning module: enables human operators to issue high-level control commands such as “go to kitchen” or “serve me a drink” and translates them into low-level control messages automatically. Based on the skill level, task planning and execution can be switched between different autonomous modes, and SRS will learn from the operations automatically to expand its skills (a minimal sketch follows this list).
Assisted object detection module: enables interactive object detection in a mixed-reality environment. This kind of perception guidance is important for unexpected and complicated situations. With user assistance, such as highlighting a region of interest or specifying an object category, it translates complicated scenes into identified targets, obstacles and supporting surfaces.
Assisted grasp module: enables interactive object fetching and carrying through user-assisted selection of grasp posture and grasp configuration.
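To make the intended behaviour of the high-level action module concrete, the following is a minimal sketch, not the SRS implementation: it assumes a hypothetical action library keyed by high-level commands and shows where translation would fall back to remote-operator intervention. All names and library contents are illustrative assumptions.

```python
# Hypothetical sketch of high-level command translation (names invented).

ACTION_LIBRARY = {
    # high-level command -> ordered low-level actions with parameters
    "go to kitchen": [("navigate", {"target": "kitchen"})],
    "serve me a drink": [
        ("navigate", {"target": "kitchen"}),
        ("detect_object", {"category": "bottle"}),
        ("grasp", {"object": "bottle"}),
        ("place_on_tray", {}),
        ("navigate", {"target": "local_user"}),
    ],
}

def translate(command: str):
    """Return the low-level action sequence for a high-level command."""
    if command not in ACTION_LIBRARY:
        # This is the point where SRS would instead switch autonomy mode
        # and request remote operator assistance, or apply learned skills.
        raise LookupError(f"no known decomposition for {command!r}")
    return ACTION_LIBRARY[command]

for action, params in translate("serve me a drink"):
    print(action, params)
```

Learning, in this picture, amounts to adding new entries to the action library from observed operator interventions.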
Apart from the desired controllability and observability, SRS will need to provide safety assurance to enable safe scenario operation. This will be implemented at three levels: the first level is based on emergency buttons; the second level is based on gesture inputs on the local user interface; the third level is based on the safe interaction capability of the selected platform for dynamic obstacle avoidance. The first two levels are based on the newly developed SRS interfaces; the third level is integrated with the selected platform in the SRS control programme.
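The sketch below illustrates the three safety levels as a priority-ordered check; the inputs and the obstacle-distance threshold are assumptions made for illustration, not values from the SRS design.

```python
# Illustrative three-level safety arbitration (inputs/thresholds assumed).

def safety_decision(emergency_button: bool, stop_gesture: bool,
                    min_obstacle_distance_m: float) -> str:
    """Return the required behaviour, checking levels in priority order."""
    if emergency_button:                  # level 1: emergency buttons
        return "EMERGENCY_STOP"
    if stop_gesture:                      # level 2: gesture on local UI/HPSU
        return "PAUSE_AND_CONFIRM"
    if min_obstacle_distance_m < 0.5:     # level 3: platform obstacle avoidance
        return "AVOID_OR_REPLAN"          # 0.5 m is a placeholder threshold
    return "CONTINUE"

print(safety_decision(False, True, 1.2))  # -> PAUSE_AND_CONFIRM
```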
Conclusion and Requirements Traceability Matrix
The SRS solution includes three new user interfaces, three new control modules and a selected robotic hardware platform. The focus is on scenario demonstration and on easy access to, as well as efficient control of, the platform. The solution will enable average users from the targeted groups to take advantage of service robots in real-life settings. As the implementation cannot cover every aspect of robotics, the priority of the functions is decided by the scenarios, the assumptions and the system context. The document concludes with a requirements traceability matrix linking the functions specified in this document with the requirements identified in the SRS user study.
Table of Contents
Section 1. Introduction
1.1 Specification Definition
1.2 Specification Objectives
1.3 Intended Audiences
1.4 Specification Overview
Section 2. General System Description and Function Requirements
2.1 System Context
2.2 Assumptions
2.3 Major System Dependency and Platform Requirement
2.3.1 System dependency
2.3.2 Platform requirement
2.3.3 Major system constraints
2.4 System Operational Scenarios and Human Machine Interaction Processes
2.4.1 Phases of SRS System Usage
2.4.2 General system use case in normal usage phase
2.4.3 Fetch and carry + video call scenario
2.4.4 Emergency scenario
2.4.5 Preparing food scenario
2.4.6 Fetching and carrying of difficult objects
2.4.7 Summary
2.5 Function Requirements
2.5.1 User Interface
2.5.2 Perception
2.5.3 Decision making
2.5.4 Other capabilities
2.5.5 Typical operations
2.5.6 Function modules
2.6 User Interface Requirement
2.6.1 Requirements UI Local User “UI_LOC”
2.6.2 Requirements UI Private Remote Operator “UI_PRI”
2.6.3 Requirements UI Professional Remote Operator “UI_PRO”
2.6.4 Selected Technology for UIs
2.6.5 Possible Layout
2.7 Platform Selection
2.7.1 Platform description
2.7.2 Function requirement toward platform selection and expected SRS enhancement
Section 3. Function Specifications
3.1 User Interface Function Specification
3.1.1 UI_F1 Authorisation and privacy function
3.1.2 UI_F2 Video communication among users
3.1.3 UI_F3 UI Robot control interface
3.2 Perception Function Specification
3.2.1 PER_F1 Recognition of known objects
3.2.2 PER_F2 Sensor fusion
3.2.3 PER_F3 Environment perception for navigation and manipulation
3.2.4 PER_F4 Learning new objects
3.2.5 PER_F5 Human Motion Analysis
3.3 Decision Making Function Specification
3.3.1 DEC_F1 Action plan generation, selection and optimisation
3.3.2 DEC_F2 Skill learning function
3.3.3 DEC_F3 Safety assurance function
3.4 Manipulation & Navigation Function Specification
3.4.1 NAV_MAN1 Navigation/Manipulation by given targets
3.4.2 NAV_MAN2 Navigation/Manipulation by direct motion commands
3.5 Other supporting Function Specification
3.5.1 SUP_F1 Emergency and localisation device
3.5.2 SUP_F2 Function for defining a generic new object
3.5.3 SUP_F3 Object library update and query
3.5.4 SUP_F4 Function for defining a new skill
3.5.5 SUP_F5 Skill library update and query
3.5.6 SUP_F6 Function for defining a new action
3.5.7 SUP_F7 Action library update and query
3.5.8 SUP_F8 Function for building environment information
3.5.9 SUP_F9 Environment information update and query
3.6 System Components
Section 4. Non-functional requirement
4.1 User Interaction Requirement
4.2 Software architecture requirement
Section 5. Conclusion and Requirements Traceability Matrix
Section 6. References
Appendix 1. SRS basic actions for manipulator
Appendix 2. Scenario breakdown into actions/functions
Table of Figures
Figure 1 - Breaking down of the scenarios into tasks and actions
Figure 2 - General system use case diagram
Figure 3 - Use case for Fetch and Carry + Video Call Scenario when video communication is used
Figure 4 - Use case for Emergency Scenario
Figure 5 - Use case for Preparing Food Scenario when video communication is used
Figure 6 - Use case for Fetching and Carrying of Difficult Objects
Figure 7 - Major SRS conditions
Figure 8 - Screen Layout for UI_PRO, “Mobile Platform”
Figure 9 - Screen Layout for UI_PRO, “Manipulator”
Figure 10 - Care-O-bot® 3
Figure 11 - Task Ontology
Figure 12 - SRS Components Diagram
Section 1. Introduction
This section introduces the system specification for the Multi-Role Shadow Robotic System for Independent Living (SRS) to its readers. It presents the system in terms of all its major facets, e.g. main use cases, assumptions, system conditions, operational scenarios, functions, main components, platform selection, and major system dependencies and constraints.
1.1 Specification Definition
This specification documents the system-level requirements for the SRS system.
1.2 Specification Objectives
The objectives of this specification of the SRS are to:
Provide a system overview of the SRS, including its definition, goals, objectives, context, and major capabilities.
Formally specify the associated:
o Assumptions, constraints and dependencies
o Robotic platform selection and SRS enhancement
o Function specifications
o Non-functional requirements
1.3 Intended Audiences
This document is public and may be used by anyone wishing to know more about sensor acquisition and data processing in a system used for ambient assisted living purposes. The main stakeholders identified at the time of writing this specification include:
Members of the SRS Project Team:
o Project managers and WP/Task leaders.
o Architects and designers, whose design must meet the requirements specified in this SRS.
o Robot platform provider, whose hardware components must support the requirements
specified in this SRS.
o Programmers, whose software components must implement the requirements specified in
this SRS.
o Testers, who must ensure that the requirements are testable and whose tests must validate
the requirements
o Usability and acceptance engineers, who must ensure that the user interfaces fulfil the usability and user acceptance requirements.
Users, who are any private individuals taking part in user tests in the SRS project:
o Elderly people, who can benefit from the SRS robotic solution to prolong independent living
o Private caregivers and professional tele-operators, who use SRS to control the service robot
Any other stakeholders
o Members of service robot community
o Members of care for elderly community
o Sponsors
1.4 Specification Overview
This specification aims to provide a thorough description of the SRS robotic system and to give good
traceability from user requirements to the final system features. It is organised into the following
sections:
Introduction, which summarises the SRS specification document to its readers.
System Overview, which provides a brief, high level description of the SRS including its definition,
design goals, objectives, assumption, context, and capabilities.
Functional specification, which specifies the main functions of the system.
Non-functional requirements
Components, which specifies the system’s main building blocks, their interfaces and data control
flows.
Requirements Traceability Matrix
And finally, appendices, which define ancillary information, including basic actions and the scenario breakdown into actions.
Section 2. General System Description and Function Requirements
2.1 System Context
The ageing of the European population is having an impact on the provision of adequate services and is
a cause for concern about spiralling costs of social and health care. The care-related issues can be
improved substantially by the deployment of ICT. The main goal of SRS is to promote the quality of life
of elderly people while ensuring their personal independence through service robots.
As a home carer, a service robot needs to have manipulation capability:
Personal service robots can be divided into two groups – those able and those unable to manipulate objects in the environment. Robots unable to manipulate are mainly focused on providing information. However, as became clear from the SRS user study, most daily living tasks require robots to physically alter the world.
Limited perception and decision-making capabilities prevent robot manipulation in unexpected situations:
While such manipulation robots have been implemented successfully in controlled environments (mainly in factory settings), the heterogeneous and unstructured domestic environment poses substantial technological challenges to open-ended home operations. Despite significant advances in the field of control and sensing, robots are kept from real-world unexpected manipulation tasks by limited perception and decision-making capabilities.
The SRS approach:
Facing this limitation, SRS advocates “borrowing” advanced intelligence from humans. The idea is to combine the operation capability of existing personal service robots with the perception and decision-making capabilities of a Remote Operator (RO). The approach is twofold: first, from the tele-operator’s point of view, the RO operates the robot, but the robot assists as much as possible with its own intelligence; second, from the robot’s point of view, the robot executes the task, but the RO provides perception and decision guidance when needed. Based on the user needs, the developed semi-autonomous technology will be tested and validated in real user settings for selected home care applications.
2.2 Assumptions
Under the system context, SRS assumes the following conditions on hardware, intelligence, communication, and user interaction to be available:
Hardware:
The SRS does not cover basic robotic component development. Hence, it assumes the following
hardware features are available on the selected platform:
The robot has sensors and cameras to sense the environment
The robot has one or more end effectors
The robot has a high-DOF limb to position the end effectors
The robot has a mobile platform
Intelligence:
Also, SRS does not develop robot intelligence from scratch. Hence, it assumes the following intelligence
features are already available on the selected platform:
The robot has basic navigation and manipulation planning capability for predefined applications
The robot should have basic obstacle avoidance capability
Communication:
The SRS assumes the following communication infrastructure to be available:
Persistent low-latency network connection between the local site and the remote operator
Availability of standard TCP/IP network infrastructure
Standard network security mechanisms in place and configurable, e.g. firewall rules can be modified to open the ports necessary for SRS system communication (a connectivity-check sketch follows this list)
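As a minimal illustration of these assumptions, the sketch below checks that a TCP connection to the robot can be established within a latency budget. The address is a placeholder, and the port is merely an example of one a firewall rule would need to allow; neither is specified by SRS.

```python
# Hedged sketch: verify the low-latency TCP connectivity assumption.
import socket
import time

def link_ok(host: str, port: int, budget_s: float = 0.2) -> bool:
    """True if a TCP connection to host:port succeeds within budget_s."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=budget_s):
            return (time.monotonic() - start) <= budget_s
    except OSError:
        return False

print(link_ok("192.0.2.10", 11311))  # placeholder robot address and port
```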
User and user interaction:
SRS assumes there are three types of users of the SRS system:
Local User – i.e. the person being supported by the SRS robot system
Private Remote Operator – i.e. a “private” supporting person who helps to solve any exceptional situation via remote control; such a remote operator can be a relative of the Local User, a private caregiver, etc.
Professional Remote Operator – i.e. a trained operator who is part of a 24-hour SRS Service Centre.
It is assumed that the local user uses a small, mobile user interface which is always easily reachable. The main interaction with the SRS robot takes place through this interface – the interface integrated into the robot (i.e. the existing COB interface for the SRS demonstrator) is “only” used for additional output information (e.g. robot status, information about the next activity – in order to avoid unforeseen movement of the robot platform or manipulator – error messages, etc.).
It is assumed that the private remote operator uses a small, mobile user interface which is always available. The main interaction with the SRS robot takes place through this interface. The private remote operator may also use a low-cost interface device at home to manipulate the robot. Communication with the SRS robot system uses standard communication technology, such as mobile internet or similar.
It is assumed that the professional remote operator uses a UI installed in a 24-hour Service Centre. Mobility is thus not required for such a setup – the focus for the UI_PRO is on functionality and high-fidelity input/output systems. Communication with the SRS robot system uses standard communication technology, such as high-speed internet.
2.3 Major System Dependency and Platform Requirement
2.3.1 System dependency
Based on the SRS hardware assumptions, the system depends on the selected service robot platform for:
Actuators (PLA1): Actuator providing suitable hardware mechanics for targeted manipulation tasks.
Mobility, Chassis and Power (PLA2): Mobility enabling navigation tasks. Chassis providing the hardware
framework of the robot. Power providing the power source to the robot components.
Payload (PLA3): Payload allowing the robot to transport objects within its load capacity.
Basic navigation and manipulation planning capability (PLA4): the platform performs the manipulator and navigation motion when an action plan is given.
The required action plan is generated by the SRS system. SRS will extend the existing platform capability to unexpected situations through user assistance.
Basic environment perception capability (PLA5): the platform should be able to sense and interpret a simple scene in a predefined situation.
SRS will introduce user-assisted environment interpretation and object detection to extend this ability.
Basic decision-making capability (PLA6): the platform should be able to perform predefined action sequences within a known environment.
SRS-assisted decision making will extend decision making to unexpected tasks in partially known environments.
Safe interaction feature (PLA7): the robot should have basic obstacle avoidance capability.
SRS will reinforce this safety feature with the help of the RO.
2.3.2 Platform requirement
From the dependencies and assumptions above, the platform integrated with the SRS system should satisfy the following criteria:
PLA1 – Actuators: the system should be able to handle general-sized household objects and door handles.
PLA2 – Mobility, chassis and power: the system must be able to manoeuvre in narrow spaces; elderly people usually live in small apartments full of furniture.
PLA3 – Payload: the system must be able to handle objects of a reasonably heavy weight.
PLA4 – Basic navigation and manipulation planning: the system should perform the manipulator and navigation motion when an action plan is given.
PLA5 – Basic environment perception: the platform should be able to sense and interpret a simple scene in a predefined situation.
PLA6 – Basic decision making: the platform should be able to perform predefined tasks within a known environment.
PLA7 – Safe interaction feature: the robot should have basic obstacle avoidance capability.
2.3.3 Major system constraints
As SRS does not develop new robotic components, the following constraints from the hardware platform may apply:
Constraint of PLA1: The object should be graspable with the gripper of the robot and accessible with the arm of the robot, e.g. not too small, slippery or fragile.
Constraint of PLA2: The environment should be accessible by the selected SRS platform.
Constraint of PLA3: The maximum weight of an object fetched by the robot should be below the maximum payload of the robot’s arm.
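As an illustration only, these constraints can be read as preconditions checked before a fetch task is attempted; the limit values below are placeholders, not Care-O-bot 3 specifications.

```python
# Assumed placeholder limits; real values depend on the selected platform.
GRIPPER_MAX_WIDTH_M = 0.12
ARM_MAX_PAYLOAD_KG = 2.0

def fetch_feasible(object_width_m: float, object_weight_kg: float,
                   location_accessible: bool) -> bool:
    """Check the PLA1-PLA3 constraints before attempting a fetch."""
    return (object_width_m <= GRIPPER_MAX_WIDTH_M        # PLA1: graspable
            and location_accessible                      # PLA2: accessible
            and object_weight_kg <= ARM_MAX_PAYLOAD_KG)  # PLA3: payload

print(fetch_feasible(0.08, 0.5, True))  # -> True
```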
2.4 System Operational Scenarios and Human Machine Interaction
Processes
2.4.1 Phases of SRS System Usage
To understand the SRS demonstration scenarios that will be described subsequently, it is necessary to distinguish three phases. Pre-deployment (1) is the phase before the robot is deployed to an individual home. This phase specifies the knowledge SRS comes equipped with when it leaves the factory. This knowledge is general and not adapted to the local context. The deployment phase (2) is the “first day” of the robot in the individual home where it is to be deployed. In the deployment phase, the robot is taught basic information such as important objects expected to be needed during normal usage and a map of the apartment. This information is required later for autonomous and semi-autonomous navigation and manipulation. Finally, the normal usage phase (3) is the phase after deployment when the robot is used in the home of the elderly person. This is the main phase of focus for SRS demonstration and evaluation.
Pre-Deployment Phase
Every SRS system comes equipped with certain knowledge about its application scenarios before it is
deployed into a specific local environment. In particular SRS is shipped with:
Library of action sequences: This library contains action sequences and relevant parameters for
all the application scenarios that SRS comes equipped with and can be extended with further
action sequences after deployment. If SRS were to be used for preparing food, it would know the basic procedure for this: e.g. first the fridge or cupboard needs to be opened to fetch a food package, the package might then have to be opened in some cases, the oven might need to be pre-heated next, etc.
Library of objects: This library is just a data structure, and all entries can be considered placeholders that will be filled with real object data (textures, shapes, names of specific objects) during the deployment and normal use phases. However, categories and object names for all of SRS’s expected application scenarios can be pre-filled. For example, if SRS were to be used for cooking and baking, the library would contain certain moveable objects (e.g. pot, jug), certain fixed facilities (e.g. tap to fetch water, kitchen sink for waste water, oven which has buttons for on/off and setting the temperature, stove with certain buttons), and certain surfaces for placing things (table, kitchen worktop). The objects have features: e.g. moveable objects like a cup have a standard position in the apartment and several additional positions where they were previously placed or fetched by the user (this is necessary, for example, for the tidy-up task where the robot brings back an object to a previous position).
3D object model library: for the “mixed reality 3D model” approach to object manipulation. This library contains the shapes of typical objects needed for the application scenarios (e.g. cup, bottle, etc.). It will be used for deforming objects over the live video as an approach to semi-autonomous grasping. The library could be joined with the library of object names and features. A data-structure sketch of these libraries follows.
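The sketch below shows one plausible shape for these libraries as plain data structures; the field names and values are illustrative assumptions, not the SRS schema.

```python
# Hypothetical data structures for the pre-deployment libraries.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ObjectEntry:
    name: str                        # assigned during deployment/normal use
    category: str                    # pre-filled, e.g. "glasses"
    standard_position: Optional[tuple] = None  # taught during deployment
    previous_positions: list = field(default_factory=list)
    model_3d: Optional[str] = None   # link into the 3D object model library

# Pre-deployment: categories exist, concrete entries are placeholders.
object_library: dict[str, list[ObjectEntry]] = {"glasses": [], "pots": []}

# Deployment / normal use: placeholders are filled with real object data.
object_library["glasses"].append(
    ObjectEntry(name="long glass", category="glasses",
                standard_position=(1.2, 0.4, 0.9),
                model_3d="models/glass_cylinder.stl"))
```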
Deployment Phase
On the first day of SRS being in a new apartment, an initial setup routine is run. The robot is taught the basics of its new environment. The user interface could be a wizard-style interface asking the user to help the robot acquire the requested information. The user interface for this phase will not be developed during the runtime of the SRS project, as the project focuses on the normal usage phase. SRS acquires the following information:
SRS deployment personnel supply a 2D map of the apartment, which will be shown in the user interface and used for navigation and information purposes during normal usage.
SRS builds a 3D map, by visiting every room in the apartment, for future autonomous and semi-
autonomous navigation (navigable area can be seen on this map and it can be overlaid in the UI
with the 2D map)
SRS learns to recognize all door handles so it can open doors during normal usage.
SRS learns important objects necessary to execute the application scenarios it ships with. E.g. in
the case of cooking it would learn the position and visual features of one or several pans, the
position of the fridge and the visual features of its handle, position and visual features of oven
(including buttons for turning it on and off or setting the temperature), etc.
SRS learns the positions of fixed objects (e.g. fridge) and the standard positions of movable objects (e.g. sauce pan) necessary for executing its main scenarios
The standard action sequences are adapted if necessary
Normal Usage Phase (Post-Deployment)
This phase is the everyday operation and interaction with the robot in its specific home environment.
See the following section for a description of the functionality provided during normal use.
2.4.2 General system use case in normal usage phase
The subsequent scenario descriptions represent goals for SRS development. The final demonstration of
the SRS system will take place in a standard home or simulated standard home (no smart home, no
special objects and changes like barcodes or RFID tags on objects). The starting point of the
demonstration will be after deployment, i.e. the deployment phase will be assumed to be finished. It is assumed that SRS has undergone the standard process for deployment and has been taught important objects for its application scenarios, a room plan for map-based navigation, etc.
From the user needs specified in the user requirements document D1.1, a number of scenarios have
been identified for SRS as:
First Priority Scenarios (Base Scenarios):
Fetch and Carry + Video Call Scenario
Second Priority Scenarios:
Emergency Scenario
Preparing Food Scenario
Fetching and Carrying of Difficult Objects Scenario
Each scenario can be broken down into a number of tasks, and each task into more fine-grained actions. At run-time, the decision-making algorithms of the robot will assemble tasks from the action library to fulfil any of the above scenarios. A graphical representation of this principle of breaking down the scenarios into tasks and actions is shown in Figure 1; a minimal code sketch follows the figure.
Figure 1 - Breaking down of the scenarios into tasks and actions
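A minimal sketch of this run-time assembly, with invented scenario, task and action names, might look as follows:

```python
# Illustrative scenario -> task -> action flattening (names are invented).

SCENARIO_TASKS = {
    "fetch_and_carry": ["reach_user", "locate_object", "deliver_object"],
}

TASK_ACTIONS = {
    "reach_user": ["navigate_to_user"],
    "locate_object": ["navigate_to_location", "detect_object"],
    "deliver_object": ["grasp", "place_on_tray", "navigate_to_user"],
}

def assemble(scenario: str) -> list[str]:
    """Flatten a scenario into the ordered action sequence to execute."""
    return [action
            for task in SCENARIO_TASKS[scenario]
            for action in TASK_ACTIONS[task]]

print(assemble("fetch_and_carry"))
```

In this picture, new scenarios amount to new entries in such tables, which is what makes extension after deployment straightforward.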
Moreover, SRS is capable of handling unexpected situations and is open to new operation scenarios. Simply by adding new actions and objects to the existing libraries and modifying the action sequences, new scenarios not considered at the design stage can be established. This process can be carried out manually or automatically through SRS self-learning. A full list of tasks and actions for the identified scenarios can be seen in Appendix 2.
A general system use case diagram of the SRS robotic system has been derived from the user needs and is presented in UML in Figure 2. It displays the three major actors, i.e. the Local User (LU), the Remote Operator (RO) and the robot, as well as the main possible use cases of the system.
The general use cases are high-level groups of tasks that describe the SRS system behaviour from an actor's point of view. A short description of the separate use cases follows:
• Robot use cases package – includes the use cases that involve the robot system
o Environment manipulation
o Information Exchange
o Task Execution Monitoring
o Semi-autonomous task execution tele-assistance
o Manual tele-operation
• LU-RO use cases package – includes the use cases that involve the local user and the remote
operator. RO can be either a non-professional, e.g. family member, or a professional tele-operator from
the 24h on-demand service.
o Help request
o Emergency attention request
o Voice communication
o Video communication
To derive the major SRS function requirements, use cases for each of the selected scenarios will be studied in the following sections. First, for each scenario, a short discussion is given together with a UML swim-lane activity diagram. It should be noted that in the diagrams the tasks done by the robot (in the swim lane of the actor Robot) are positioned across the RO and robot swim lanes, to reflect the intervention given by the RO in the adaptive semi-autonomous or manual mode of operation. The extent of this intervention is determined at run-time by the SRS system’s decision-making and user interface modules.
Figure 2 - General system use case diagram
2.4.3 Fetch and carry + video call scenario
This scenario involves both video/voice communication and physical manipulation of the local
environment.
After the local user initiates the scenario by selecting an option on his/her local user interface device, the robot comes from its default position to the user.
If the user has selected an option that allows the remote operator to observe the environment through the cameras on the robot, the robot must move to a suitable position that gives the remote operator a clear and unobstructed view of the local user and his/her immediate surroundings. In the case where the user has selected an option to execute the scenario without video communication and it is clear what needs to be fetched, the robot, to save time, will go directly to the location where the target object can be found.
During the video/audio session the Remote Operator, after he/she has clarified with the LU what has to be done, will select the object to be fetched by the robot and its location through the RO user interface. He/she will also be able, through the same interface, to provide assistance in the navigation and the grasping if necessary. Once the object is grasped and placed on the tray, it is carried by the robot and given to the local user. For safety reasons the robotic arm is not operated when the robot is in close proximity to the local user; the object is left on the tray until the LU picks it up. To execute such a sequence, the robot first needs to know what object needs to be fetched and where it is located. If the object is known to the robot, e.g. pre-defined and pre-learned, the selection process will include a search in the database of known objects. If the object is not known to the robot, the remote operator will need to define the object and generate a grasp configuration, or manually control the gripper. When the robot has brought the object close to the local user and the local user has taken the object from the tray, the video communication continues so the RO can establish whether the local user is satisfied with the fetched object and whether there is anything else that the local user wants done or brought. If there is nothing else, the robot returns to its home position to wait for further commands and recharge its batteries if necessary. The RO and the LU have the option to end the video communication at any point during the scenario and continue communicating only through voice via the local user interface device, without interrupting the execution of the rest of the scenario.
Additionally, if the local user so decides, he/she can select the object to be fetched through the local user interface and have the scenario executed entirely without video communication with the remote operator. In such a case the robot will move directly to the object and seek RO intervention if the object cannot be grasped and carried to the local user in autonomous mode.
Figure 3 - Use case for Fetch and Carry + Video Call Scenario when video communication is used
The storyboard below maps each scenario step to: the user requirements explicitly accomplished; the technological background (major SRS innovations were highlighted in red in the original table); and usability and user acceptance aspects (major SRS user study items were likewise highlighted).

Step 1
Scenario: Elisabeth Baker (84) lies at home in bed due to a cold. To check if everything is alright, her son Martin initiates a request for a remote session from his workplace. Elisabeth accepts the request on her portable communication device and a video communication is established. Martin asks if he could do anything for his mother.
User requirements: R02, R22, R23 to R32
Technological background: authorization of local user for remote session; video call functionality
Usability and acceptance: privacy of SRS users; safety of SRS system; user acceptance of private care giver and local user (elderly); usability of local UI for elderly

Step 2
Scenario: Elisabeth answers that she feels a bit thirsty. Martin therefore wants to fetch a bottle of water and a glass from the kitchen. He uses a room plan to specify that SRS should go to the kitchen. Having arrived in the kitchen, Martin drives SRS to the specific place where the bottle and glass are located.
User requirements: R03, R05, R06, R09, R12, R13
Technological background: navigation by room plan; SRS can handle unexpected situations, such as a rearranged room layout
Usability and acceptance: usability of UI for private care giver

Step 3
Scenario: Having arrived in the kitchen, the robot checks the environment and passes its perception to Martin. Martin drives SRS to the specific place where the bottle and glass are located.
User requirements: R02, R05, R08
Technological background: environment perception and scene interpretation

Step 4
Scenario: SRS indicates by a rectangle that it recognizes the bottle. This bottle has previously been taught to SRS. However, the glass is not indicated as recognized; it is a new glass that SRS has not been taught before.
User requirements: R19
Technological background: recognition of previously taught objects and user-friendly indication in the user interface; object detection in a complicated scene

Step 5
Scenario: Martin clicks on the bottle and SRS puts it on the tray.
User requirements: R09, R12, R13
Technological background: grasping of recognized objects

Step 6
Scenario: Because the glass is not recognized, Martin switches to user-assisted grasping mode. From a library of 3D object models, Martin selects from the category “glasses” a cylinder-shaped glass similar to the one to be grasped. He adjusts its shape (height, width, position) so that it matches what he sees on the video picture. He then clicks “GO” and SRS grasps the object and puts it next to the bottle on its tray.
User requirements: R09, R12, R13
Technological background: user-assisted 3D object model approach for teaching the robot grasping of objects

Step 7
Scenario: Having finished the grasping, SRS asks Martin if this object should be saved for future grasping. SRS suggests saving it in the category “glasses”. Martin confirms and assigns a name: “long glass”.
User requirements: R18
Technological background: teaching of new objects remotely

Step 8
Scenario: Martin directs SRS back to the bedroom of his mother. While SRS drives back, Elisabeth asks Martin what he is doing and why it takes so long. Martin speaks with his mother, telling her that SRS will be there soon.
User requirements: R03, R05, R06, R22
Technological background: simultaneous video calling and teleoperation
Usability and acceptance: user acceptance study, trust association between robot and private care giver; usability of local UI for elderly

Step 9
Scenario: Martin and his mother agree to end the conversation and speak again tomorrow. After ending the call, SRS autonomously drives back to its charging place.
User requirements: R22
Technological background: navigation by room plan

Step 10
Scenario: The next day, Martin calls again and again wants to get his mother a glass and bottle. Today, Martin can just click on the glass to grasp it.
User requirements: R02, R19, R09, R12, R13
Technological background: SRS learning capability

Step 11
Scenario: However, today grasping the bottle fails even though this is an object taught to SRS; there are many other objects in the scene. Martin uses the “reduce search space” approach and SRS successfully grasps the bottle.
User requirements: R09
Technological background: “reduce search space” approach to enhance the chance of successful object recognition
Usability and acceptance: usability of UI for private care giver
Scenario function requirements:
UI functions:
Authorization of local user for remote session
Video call functionality
UI support for navigation
UI support for perception
Teaching of new objects remotely
Simultaneous video calling and tele-operation
Perception and control:
Semi-autonomous navigation by room plan
Manual navigation
Recognition of previously taught objects
Grasping of recognized objects
User-assisted 3D object model approach for teaching the robot grasping of objects
Learn new objects
Demonstration of SRS handling the object taught
“Reduce search space” approach to enhance chance for successful object recognition
Usability:
Usability of UI for local elderly
Usability of UI for private care giver
User acceptance:
Privacy of SRS users
Safety of SRS system
User acceptance of the scenario, trust association between robot and private care giver
2.4.4 Emergency scenario
In this scenario the SRS robotic system is used to assess the level of seriousness of an emergency situation such as a fall. Once alerted by the Local User pressing the wearable alarm button, the robot comes close to the Local User, allows the RO to have a clear view of the situation, and attempts voice communication with the Local User in distress. This allows the RO to better understand the current situation and to act accordingly. For example, if the local user is able to recover quickly from the fall and can communicate clearly that there is no need for an ambulance to be sent, the RO may decide that sending emergency response services is not needed, or that they should be sent to check on the elderly person as a low-priority service. On the contrary, if the local user is not able to recover from the fall and/or does not respond to any attempts at communication, the response level of the case can be escalated and the ambulance sent as a high priority. Compared with a stand-alone “emergency button” service, the SRS system allows the RO to gain better insight into and understanding of the situation, because, unlike a smart home or “emergency button” support operator, the robot can move around to obtain a better, unobstructed view, open doors when necessary, and fetch objects for the local user.
If emergency response services, e.g. an ambulance service, are coming to the home of the older person, the robot moves close to the entrance door, in a safe position not blocking it in case of a malfunction, and, with the authorisation and if needed the assistance of the RO, it opens the door from inside to allow the emergency crew to enter the home of the older person when they arrive.
In this scenario, once the emergency button has been pressed, video observation via the cameras of the robot is considered very important and is therefore not an optional task.
If the condition of the elderly person requires regular checks, it is possible, provided that this has been agreed in advance, to schedule a periodic operation for the robot to come and allow the RO to check on the local user’s condition. For example, every hour the robot can make a short visit to the local user to allow the RO to assess the situation, e.g. whether the local user has fallen.
Figure 4 - Use case for Emergency Scenario
The storyboard below uses the same format as before: scenario step; user requirements explicitly accomplished; technological background (major SRS innovations were highlighted in red in the original table); and usability and user acceptance aspects.

Step 1
Scenario: Elisabeth Baker (84) watches TV. In the commercial break, she wants to go to the bathroom but falls on her way, unable to get up again.
User requirements: R14
Usability and acceptance: user acceptance study on the scenario

Step 2
Scenario: With a device she always carries attached to her belt, Elisabeth presses a button “emergency”.
User requirements: R14
Technological background: implementation of an emergency device
Usability and acceptance: usability of local interface for elderly

Step 3
Scenario: Right away, a call is placed to her son and daughter as well as to the 24-hour teleassistance centre.
User requirements: R16, R23 to R32
Technological background: multi-user system implementation

Step 4
Scenario: The device asks Elisabeth for her current position and she selects the room from a list.
User requirements: R15
Technological background: localization of the elderly person. Development of localization technology is not within the scope of SRS; localization will be achieved through manual position indication by the elderly user in the device’s UI, or alternatively in spoken form during the video call, or by the operator navigating the robot and searching.
Usability and acceptance: usability of local interface for elderly

Step 5
Scenario: SRS starts moving from its charging station to the room where Elisabeth fell.
User requirements: R03, R04, R05
Technological background: autonomous navigation; SRS can handle unexpected situations, such as a rearranged room layout

Step 6
Scenario: The 24-hour centre first accepts the call. Through SRS’s camera, Claudia, the tele-operator, can see Elisabeth on the floor and asks what happened. She uses manual navigation to drive the robot further to the place where Elisabeth lies and to point the robot’s camera more downwards.
User requirements: R02, R22, R06
Technological background: video calling; robot camera transmission; manual navigation and camera perspective control
Usability and acceptance: user acceptance study; privacy of SRS users

Step 7
Scenario: Then Martin, Elisabeth’s son, joins the remote session.
User requirements: R02, R22
Technological background: multicast video streaming to several users simultaneously

Step 8
Scenario: Because Elisabeth can no longer move her legs due to strong pain, the three decide to call an ambulance. Martin logs off to come over in person and Claudia from the 24-hour service keeps talking to Elisabeth.
User requirements: R02, R22
Usability and acceptance: user acceptance study; psychological support in case of emergency

Step 9
Scenario: The ambulance arrives before Martin and rings the door bell. As Elisabeth cannot move, the tele-operator navigates SRS to the door to open it.
User requirements: R03, R04, R05, R06
Technological background: semi-autonomous navigation by room plan

Step 10
Scenario: SRS fails to find a suitable grasping point. Claudia tries to use the user-assisted grasping mode (3D model approach) but it fails too. Therefore, she changes to professional manual mode and uses the force-feedback device to open the door. The ambulance personnel enter and help Elisabeth.
Technological background: user-assisted grasp and force-feedback based advanced manual manipulation
Scenario function requirements:
UI functions:
Multi-user system implementation
Video calling, Robot camera transmission, Manual navigation and camera perspective control
Multicast video streaming to several users simultaneously
Perception and control:
Autonomous navigation
Semi-autonomous navigation by room plan
User assisted grasp and Force-feedback based advanced manual manipulation
Other functions:
Implementation of an emergency device
Localization of elderly person: Development of localization technology is not within the scope of
SRS. Localization will be achieved through manual position indication by the elderly user in the
device’s UI, or alternatively, in spoken form during the video call, or by the operator navigating
the robot and searching
Usability:
Usability of UI for local elderly
Usability of UI for private care giver
User acceptance:
Privacy of SRS users
Safety of SRS system
User acceptance of the scenario
2.4.5 Preparing food scenario
In this scenario the robot has to retrieve food from the fridge, place it in the microwave oven for heating and, when it is ready, bring/serve it to the Local User. Although parts of this scenario resemble, to a certain extent, parts of the “Fetch and Carry + Video Call” scenario, the additional tasks, e.g. opening the fridge and operating the microwave oven, require additional actions which make the scenario quite different from the generic “Fetch and Carry” scenario.
Once a request for the Preparing Food scenario is received by the robot, or the scheduled time has come, it moves closer to the local user and establishes video communication between the RO and the LU. The video communication session is considered necessary to allow the LU to discuss the food preferences and the available options with the RO. It is also considered that in this way the LU’s experience of having a meal prepared will be as close to human contact as possible in the given circumstances. If the RO is unavailable, a second option is for the LU to select the food to be prepared from the multiple options displayed on the touch screen embedded in the carry tray of the robot. Once it is clear what the LU wants for a meal, the robot goes to the kitchen, opens the fridge and extracts the container with the selected food. After closing the fridge the robot goes to the microwave, opens it, places the container in the microwave oven and initiates the appropriate re-heating or cooking programme. Once the meal is ready the robot extracts the food container from the microwave oven and carries it to the table, where it serves the meal to the local user. After it has finished the scenario, and if there is no further command, the robot returns to its home position and recharges if necessary.
Due to the limitations of the grippers in removing packaging and transferring food between food distribution containers and cooking utensils, e.g. bakeware and ovenware, this scenario is at the moment considered to be executed only with a microwave oven. However, it is possible that in the future, by designing appropriate special food containers that can withstand the higher temperatures of a traditional cooker while remaining as economically viable and environmentally friendly as the current range of food distribution packages, the scenario could be extended to cover a wider range of traditional cooking methods, e.g. roasting, baking or frying, to offer a greater variety of food options to the LU.
Similarly to other scenarios, the video communication can be skipped if the local user selects so through the local user interface; it can be replaced with voice communication between the local user interface and the remote operator, or removed entirely by direct selection of the food choice through the local user interface. In such a case the robot goes directly to the kitchen to start preparing the food.
[Flowchart omitted: swimlanes for Local User, Remote Operator and Robot, covering the “start preparing food” command with meal selection, fetching the food container from the fridge, operating the microwave oven, serving the meal at the table, and returning the robot to its home position.]
Figure 5 - Use case for Preparing Food Scenario when video communication is used
Each scenario step below is annotated with the user requirements accomplished, the technological background (major SRS innovations) and the related usability and user acceptance work (major SRS user studies).

Because Elisabeth Baker (84) recently neglected to eat, stating she has no appetite, she and her son Martin have agreed that Martin would prepare lunch for her daily for a while and they would have a chat while he does it. Therefore, today during his lunch break at work, Martin calls his mother. Elisabeth accepts the call and they talk about how her day went. Martin asks her what she would like to eat and Elisabeth chooses pasta.
- Requirements: R02, R22, R23 to R32. Technology: authorization of local user for remote session; video call functionality. Usability/acceptance: user acceptance of the scenario.

During the conversation, Martin directs SRS to the kitchen. Through SRS he opens the microwave oven, then the fridge, fetches the pasta microwave meal package, puts it in the microwave, closes the fridge and the microwave oven, turns on the microwave by setting it to 5 min, fetches some water and puts it on the table, and after 5 min fetches the food and places it on the table.
- Requirements: R05, R06, R03. Technology: remote execution of a sequence of fetch-and-carry tasks. Usability/acceptance: usability of UI for private care giver.

At the end of the process, Martin receives a message from SRS notifying him that a similar action sequence as today’s has been carried out 2 times before. SRS displays the recognized sequence and asks if it should be saved for future autonomous execution:
1. Open microwave
2. …
16. Place object pasta microwave meal on living room table.
- Requirements: R18, R19. Technology: SRS self-learning mechanism of action sequences (learning from semi-autonomous operation).

Martin is also given the option to edit the action sequence before saving it. E.g. he can shorten it, delete elements, or define variable elements that SRS should ask for before executing the sequence. Martin thinks to himself: “This is nice, so next time I can fully focus on my conversation with Mum and I will simply wait for SRS to finish preparing the meal, only intervening in case SRS encounters a problem.”
- Requirements: R18, R19. Technology: user editing of action sequences.

Martin cuts the segment “Fetch object water bottle; bring to location living room table” because his mother often has some water sitting there already. Also, Martin makes the sequence object pasta microwave meal a variable object, so SRS will next time ask what kind of food to prepare.

Next day: Martin again calls his mother. However, today SRS prepares the meal autonomously and Martin and his mother chat about how her day has been.
- Requirements: R02, R22, R19. Technology: SRS learning capability.
Scenario function requirements:
UI functions:
Authorization of local user for remote session
Video call functionality
User editing of action sequences
Perception and control:
Remote execution of a sequence of fetch-and-carry tasks
SRS self-learning mechanism of action sequences (learning from semi-autonomous operation)
Demonstration of learned behaviour
Other functions:
Localization of elderly person: Development of localization technology is not within the scope of
SRS. Localization will be achieved through manual position indication by the elderly user in the
device’s UI, or alternatively, in spoken form during the video call, or by the operator navigating
the robot and searching
Usability:
Usability of UI for local elderly
Usability of UI for private care giver
User acceptance:
User acceptance of the scenario
2.4.6 Fetching and carrying of difficult objects
This scenario resembles the “Fetch and Carry + Video Call” scenario. However, it focuses on difficult-to-grasp-and-carry objects which require a much higher degree of RO intervention, e.g. from a professional RO. This is necessitated by the fact that current state-of-the-art robotic systems are still unable to handle intricate objects in cluttered environments in fully autonomous mode, e.g. grasping a book placed in very close proximity to other books on a shelf. Therefore, it is envisaged that once the SRS or any other robotic service for home care becomes commercialised, 24/7 specialised professional tele-operator centres will become available and will provide such high-precision manipulation services on demand. The service is expected to range from direct low-level manipulation of the robotic system, e.g. when family members cannot perform certain manipulations with the robot arm because, for example, they are unavailable, to general support on the optimal operation of the robotic system.
In this scenario, aimed at demonstrating the transfer of remote control on demand to a professional service, the robot will pick a book placed on a shelf together with other books. The close proximity of the books to each other makes it difficult for the family member to manipulate the robot precisely enough to grasp a single book. This is also a challenging task for the robot working in autonomous mode. Therefore, as will be demonstrated by the project, in such a case a professional tele-operator with a more sophisticated interface, e.g. a 3D virtual environment or motion input device, and more expertise in remote operation will manually or semi-autonomously control the arm and the gripper of the robot to shift the other books aside until it finds and grasps the one requested by the LU.
[Flowchart omitted: swimlanes for Local User, Remote Operator (family member), Remote Operator (professional tele-operator service) and Robot, covering the fetch command (specifying object and location), grasping the object in manual or semi-autonomous mode, placing it on the tray, delivering it to the LU, and returning the robot to its home position.]
Figure 6 - Use case for Fetching and Carrying of Difficult Objects
Each scenario step below is annotated with the user requirements accomplished, the technological background (major SRS innovations) and the related usability and user acceptance work (major SRS user studies).

Francesco Rossi (78) is mentally still quite fit. However, he does not feel safe climbing a ladder and has fallen before. He has an SRS system to help him with difficult objects. Since he has no cognitive deterioration, he usually handles SRS himself, only falling back on a tele-operator in case an interaction with SRS fails.
- Requirements: R01, R02, R10. Usability/acceptance: user acceptance of the scenario.

Francesco wants to find some information in an old book located on a high shelf. He uses his interaction device to navigate SRS by map to the shelf.
- Requirements: R05, R11. Technology: semi-autonomous navigation by room plan; system usage by the elderly person themselves. Usability/acceptance: usability of local UI for elderly.

Since Francesco knows that SRS has never before seen this object, he switches to the 3D object model approach to grasping. However, after several failed attempts, he gives up (the book is surrounded by other books, causing problems with the collision-free path planning for the arm).
- Requirements: R09, R11. Usability/acceptance: usability of local UI for elderly.

Recognizing the failed attempts, SRS suggests forwarding the interaction request to Gianni, his son. Francesco agrees. Gianni does not answer, however, so SRS suggests forwarding the interaction request to the 24-hour service. Francesco agrees.
- Requirements: R02, R33. Technology: call priority chain function. Usability/acceptance: user acceptance of the scenario.

Claudia from the 24-hour service answers the call and sees on her screen the steps that led SRS to suggest calling her (failed manipulation attempts). She greets Francesco and asks him to explain what he would like to do. Francesco explains and shows her the book.
- Requirements: R22. Technology: video call function. Usability/acceptance: usability of local UI for elderly.

Claudia uses the professional mode with user assisted grasp to hold the book, moving the other books aside.
- Requirements: R11. Technology: user assisted grasp and force-feedback based advanced manual manipulation.

Knowing that Francesco will later want to return the book on his own, Claudia teaches SRS the book by the “rotate-on-gripper” approach. Francesco says thank you and the two agree to end the remote session.
- Requirements: R18, R22. Technology: rotate-on-gripper approach for teaching objects.

Francesco searches the book and finds what he was looking for. He now uses the standard semi-autonomous grasping mode to return the book. He simply taps the object on his device (it is highlighted by a rectangle) and places it back on the shelf by tapping the desired place on the shelf.
- Requirements: R11, R13, R19. Technology: demonstration of SRS handling the taught object.
Scenario function requirements:
UI functions:
Call priority chain function
Video call functionality
UI support for navigation and manipulation
Perception and control:
Semi-autonomous navigation by room plan
User assisted grasp and Force-feedback based advanced manual manipulation
Rotate-on-gripper approach for teaching objects
Demonstration of SRS handling the object taught
Usability:
Usability of UI for local elderly
Usability of UI for private care giver
User acceptance:
User acceptance of the scenario
2.4.7 Summary
In summary, the analysis of the scenarios in this section shows that, in order to fulfil the scenarios and to satisfy the user needs, the SRS solution should be capable of:
Interacting with the local user (elderly people)
Interacting with private care givers (family members)
Interacting with professional remote operators
Smooth indoor navigation
Grasping known objects
Learning new objects
Manipulation for predefined tasks
Manipulation for undefined tasks
Based on SRS assumptions, the mechanical part of the solution will be enabled by the selected hardware
platform that satisfies the platform requirement. The project R&D will focus on user interface and
control programme development.
It is clear that designing one interface which satisfies the usability and functional requirements of all user groups is impossible. Therefore, SRS will develop three different user interfaces to fulfil the needs of the various user groups, i.e. the local user (elderly person), the private remote operator (family members and caregivers) and the professional remote operator. The three interfaces are:
A local user interface – a wearable device with a simple user interface tailored to elderly users, e.g. big buttons and an easy-to-read display, that allows the user to start a scenario, initiate a call to the remote operator, and terminate a scenario. Another part of the local user interface is a human presence sensor unit (HPSU), which will be either attached to the robot or stand-alone, subject to the deployment configuration, and will allow the remote operator and the SRS system to know the status of the local user, e.g. present or absent, and his/her location and movements. Moreover, the HPSU will be able to interpret a certain number of gestural commands from the local user in addition to the commands that he/she has issued on the local interface.
A remote operator interface for non-professional tele-operators, e.g. family members and caregivers – a portable device, e.g. a tablet, that allows most tele-operation functions to be executed and communication with the local user to be carried out. The excluded functions are those associated with manual grasping control.
A remote operator interface for professional tele-operators, e.g. a 24-hour tele-operation service – this interface will allow a professional operator to access all functions for remote operation of the robot. In particular, this interface enables assisted object detection and grasping.
The control programmes required by the SRS scenarios cover almost every aspect of robotics. The project R&D focuses only on control programmes which are not available on the selected platform and which can take advantage of the remote operator (this assumption is detailed in section 2.2). The targeted control modules are therefore the following (see the sketch after this list):
High level action representation, translation and learning module: enables human operators to issue high-level control commands such as “go to kitchen” or “serve me a drink” and translates them into low-level control messages automatically. Based on the skill level, task planning and execution can be switched between different autonomous modes, and SRS will learn from the operation automatically to expand its skills.
Assisted object detection module: enables interactive object detection in a mixed reality environment. This kind of perception guidance is important for unexpected and complicated situations. With user assistance, such as highlighting a region of interest or specifying an object category, it translates complicated scenes into identified targets, obstacles and supporting surfaces.
Assisted grasp module: enables interactive object fetching and carrying through user-assisted grasp posture and grasp configuration selection.
Apart from the desired controllability and observability, SRS will need to provide safety assurance to enable safe scenario operation. Safety will be implemented at three levels: the first level is based on emergency buttons; the second level is based on gesture inputs on the local user interface; the third level is based on the safe interaction capability of the selected platform for dynamic obstacle avoidance. The first two levels are based on newly developed SRS interfaces; the third level is integrated with the selected platform in the SRS control programme (see the sketch below).
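A possible reading of these three levels is a chain of independent checks, any of which can veto motion. The sketch below is purely illustrative; the function names and the safety margin are assumptions.

```python
# Sketch of the three-level safety chain (illustrative only; names and
# thresholds are assumptions, not the SRS safety implementation).

def emergency_button_pressed() -> bool:          # level 1: UI_LOC / UI_PRO
    return False

def stop_gesture_detected() -> bool:             # level 2: HPSU gesture input
    return False

def min_obstacle_distance_m() -> float:          # level 3: platform sensing
    return 1.2

def motion_allowed(safety_margin_m: float = 0.4) -> bool:
    """Motion proceeds only if every safety level agrees."""
    if emergency_button_pressed():
        return False
    if stop_gesture_detected():
        return False
    return min_obstacle_distance_m() > safety_margin_m

print("motion allowed:", motion_allowed())
```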
2.5 Function Requirements
Under the system context, perception, decision making and the User Interface (UI) are the focus of project development and are considered the bases of the SRS hardware, software, intelligence and communication design.
[Diagram omitted: the robot’s Perception and Decision making connected to the RO through the User Interface.]
Figure 7 - Major SRS conditions
Perception (PER) perceives the environment from the information provided by sensors. The process
includes map building, self-perception, internal system monitoring, scene interpretation, object
detection, fault detection and health monitoring. In SRS the perception process is enhanced by RO
intervention to handle unexpected and complicated situations.
Decision making (DEC) optimises the selection of actions, the routes between goals, the determination of the current location on a plan, and the provision of the next action. It can be further divided into Action planning, which generates and optimises action plans; Learning, which imitates the decision process of the RO; and Safety, which improves the safety of SRS interaction. Similarly to perception, the SRS decision-making mechanism benefits from RO intervention in undefined situations.
The User Interface (UI) is an instance of HRI and bridges perception and decision making between the robot and the RO. It provides motivation and intervention to the robot and sends feedback to the RO. Motivation allows an RO to provide the robot with a set of objectives; the robot may choose, depending upon its own motivations (such as safety), to undertake some or all of these objectives. Intervention improves autonomous robotic perception and decision making, and feedback allows the RO to evaluate the result of the intended task operation. In summary, the major SRS functions will be categorised and evaluated under the above three aspects. The capability of the SRS is detailed in the following sections.
2.5.1 User Interface
UI1 Authorisation and privacy
For privacy protection, the SRS authorisation process in the UI only allows authorised users to access the system. This capability will support the following user requirements (a sketch follows the list):
R23 Only authorized persons to have access to the remote control of the system
R24 Authentication procedure as a protection of the access to be included for both family caregivers
and professionals.
R25 The possibility of external, non-authorized intruders to be avoided by a robust security system
R26 Avoid the possibility of access to the system without the explicit consent of the elderly, including non-authorized access by authorized remote operators
R27 If remote operator changes within one session, the elderly user must be informed
R28 Unintentional, non-authorized disclosure of information related to the life of the users has to be prevented by restricting access to the information stored in the system.
R31 Unintentional, non-authorized disclosure of information related to the life of the users is to be prevented by including confidentiality agreements for authorized users.
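As a concrete (hypothetical) reading of R23 to R27, the sketch below gates a remote session on operator authorisation, on the local user’s consent, and on notifying the local user when the operator changes. The names and the session model are assumptions, not the SRS implementation.

```python
# Hypothetical authorisation sketch for UI1 (illustrative only).

from typing import Optional

AUTHORISED = {"martin": "private RO", "claudia": "professional RO"}

def start_remote_session(user: str, elderly_consents: bool,
                         previous_operator: Optional[str] = None) -> str:
    if user not in AUTHORISED:                    # R23/R25: reject intruders
        raise PermissionError("not an authorised operator")
    if not elderly_consents:                      # R26: explicit consent
        raise PermissionError("local user has not consented")
    if previous_operator is not None and previous_operator != user:
        print("NOTICE to local user: the operator has changed")   # R27
    return f"session opened for {user} ({AUTHORISED[user]})"

print(start_remote_session("claudia", elderly_consents=True))
```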
UI2 Communication support
SRS communication support includes video call functionality, manual camera control through the remote interface, and the exchange of information about system status. It supports simultaneous video calling and tele-operation, multicast video streaming to several users simultaneously, and a call priority chain function. This capability will support the following user requirements:
R02 A flexible system of communication and advice sending should be designed, because family
caregivers like the system but they do not want to be on-line 24 hours-a-day (related to psychological
burden).
R32 An “on/off” mode to be implemented in order to protect privacy in very personal moments. Access to the “on/off” mode could be adaptable according to the specific frailty of the elderly user.
R33 Verification of the plans of action by asking the elderly user before the system starts acting autonomously
R36 There should be a clear indication on the robot side if the robot is in autonomous mode or in
remote controlled operation
UI3 Support for navigation
The user interface should support SRS navigation; the capability can be divided into three categories:
UI3a provides targets for semi-autonomous navigation –
this is required by SRS semi-autonomous navigation such as NAV_MAN1.
UI3b provides motion commands for manual navigation –
this is required by SRS manual navigation such as NAV_MAN2.
UI3c visualises the feedback of navigation –
the visual feedback is required by most SRS navigation operations; it is also raised by the following user requirement:
R34 Communication of action outcomes during performance of the robot, in order to maximize the
awareness of the elderly user. Communication is as continuous as possible.
UI4 Support for manipulation
UI4a provides targets for semi-autonomous manipulation –
this is required by SRS semi-autonomous manipulation such as NAV_MAN1.
UI4b provides motion commands for manual manipulation –
this is required by SRS manual manipulation such as NAV_MAN2.
UI4c visualises the feedback of manipulation –
similar to UI3c, the visual feedback is required by most SRS manipulation operations; it is also raised by the following user requirement:
R34 Communication of action outcomes during performance of the robot, in order to maximize the
awareness of the elderly user. Communication is as continuous as possible.
UI5 Support for perception
This capability is required by SRS environment perception; it provides RO intervention in the perception process. The capability can be further divided into the following categories; their functionality will be detailed in the function specification.
UI5a display identified objects
UI5b define region of interest
UI5c define new objects
UI6 Support for decision making
This capability is required by SRS decision making; it provides RO intervention in the decision-making process. The capability can be further divided into the following categories; their functionality will be detailed in the function specification.
UI6a display identified/predicted action plan
UI6b define new action sequences and edit existing action sequences (see the sketch below)
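UI5 and UI6 interventions can be pictured as a small set of structured messages from the RO to the robot. The following sketch is a hypothetical illustration of such message types; the class and field names are assumptions.

```python
# Hypothetical RO intervention messages for UI5 (perception) and UI6
# (decision making); types and fields are illustrative only.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RegionOfInterest:        # UI5b: RO highlights part of the scene
    x: int
    y: int
    width: int
    height: int

@dataclass
class NewObjectDefinition:     # UI5c: RO labels an unknown object
    label: str
    roi: RegionOfInterest

@dataclass
class ActionSequenceEdit:      # UI6b: RO defines or edits an action sequence
    skill_name: str
    steps: List[Tuple[str, str]]   # (action, target) pairs

edit = ActionSequenceEdit("prepare pasta",
                          [("open", "microwave"), ("fetch", "pasta meal")])
print(edit)
```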
2.5.2 Perception
PER1 Recognition of known objects: recognises learned objects for manipulation
This capability is required by most SRS manipulation operations and is also raised by the user requirement:
R08 The system recognizes and identifies objects (shapes, colours, letters on food boxes, numbers
on microwave display).
PER2 Environment perception for navigation: perceives and interprets the environment for navigation tasks
PER3 Environment perception for object manipulation: perceives and interprets the environment for manipulation tasks
The capabilities PER2 and PER3 are required by SRS manipulation and navigation. They are also needed for the following user requirements:
R05 The system recognizes different environment of the house (kitchen, living room, bedroom,
bathroom).
R06 The system moves around recognizing and avoiding obstacles (a door, a chair left in the house
in the wrong place)
PER4 Learning new objects: learns new objects during operation
This capability is required for SRS to grasp unexpected objects. It is also needed for the following user requirement:
R08 The system recognizes and identifies objects (shapes, colours, letters on food boxes, numbers
on microwave display).
PER5 Human motion tracking and analysis: recognises and traces the user in the environment
This function can be used by the SRS robot to observe the location, pose and actions of the human in order to deduce information about the movements, gestures and intentions of the local user. It is also raised by the user requirement:
R04 The system recognizes the user’s position in the environment
2.5.3 Decision making
DEC1 Action plan generation, selection and optimisation
DEC1 generates the high-level action sequence, a plan for each of the actions, and the grasp configuration for grasp planning. It can be further divided into:
DEC1a skill (action sequence) generation, selection and optimisation
DEC1b navigation plan generation, selection and optimisation
DEC1c manipulation plan generation, selection and optimisation
DEC1d grasp configuration generation, selection and optimisation
DEC2 Skill learning
Skill learning expands the capability of the robot by imitation of the RO. It consists of two steps:
DEC2a skill recording
Skill recording should also satisfy the following privacy requirements raised by the SRS user study:
R18 The system stores information about activities and recalls it for subsequent activities in order to cope with complex housekeeping activities (i.e. while the remote operator is managing shopping through the system in the afternoon and putting everything in its place, the remote operator also stores information into the system; in this way, at dinner time the food menu is updated)
R29 Storage and management of personal information related to behaviours and preferences of the
users have to be done in safe, restricted databases
R30 Storage of personal information related to behaviours and preferences of the users will be
limited to that information relevant for the functionalities of the system. Non relevant information is not
processed if not necessary.
DEC2b skill generalisation
For unknown tasks, the recorded action sequence should be generalised to handle similar situations (see the sketch below).
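A minimal sketch of DEC2a/DEC2b, assuming a recorded skill is a list of (verb, object, location) triples and that generalisation simply turns a chosen object slot into a variable the system asks for at execution time (cf. the preparing-food scenario):

```python
# Sketch of skill recording (DEC2a) and generalisation (DEC2b).
# The representation and variable-slot mechanism are assumptions.

recorded = [
    ("open", "microwave", "kitchen"),
    ("fetch", "pasta microwave meal", "fridge"),
    ("place", "pasta microwave meal", "living room table"),
]

def generalise(sequence, variable_object):
    """Replace a concrete object with a variable so the skill can be
    reused, e.g. SRS asks which meal to prepare next time (cf. R19)."""
    return [(verb, "?MEAL" if obj == variable_object else obj, loc)
            for verb, obj, loc in sequence]

skill = generalise(recorded, "pasta microwave meal")
print(skill)
```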
DEC3 Provide high-level safety assurance
DEC3 provides safety assurance in conjunction with the native platform safety features. It is required by the following user requirements:
R12 The system brings objects to the user avoiding contact with potentially dangerous parts (i.e. bringing the object nearer using the platform)
R35 No robot movement should happen without initial confirmation by the user who is in direct
physical contact with the robot / the robot should avoid physical contact with the user.
2.5.4 Other capabilities
NAV_MAN1 Navigation/Manipulation by given targets: performs the motion when provided with a plan. The plan includes target and environment information.
NAV_MAN2 Navigation/Manipulation by direct motion commands: performs the navigating/manipulating motion based on direct motion commands.
These two functions bridge the SRS control programme with the selected hardware platform. They are linked directly to PLA4, basic navigation and manipulation planning.
SUP1 Implementation of an emergency and localisation device
SUP1 includes the emergency function (illustrated in the sketch below), which is needed by the following user requirement:
R16 The system alerts a remote operator when an emergency has occurred by sending a high priority call to the relative or to the service operator.
It also provides localisation information for the elderly person. However, the development of fully automated localisation technology is not within the scope of SRS; localisation will be achieved through manual position indication by the elderly user in the device’s UI, or alternatively in spoken form during the video call, or by the operator navigating the robot and searching.
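A minimal sketch of the resulting high-priority call chain (R16), in which contacts are tried in order until one answers; the contact list and the answer stub are hypothetical.

```python
# Hypothetical emergency call priority chain (R16): try relatives first,
# then the 24-hour tele-assistance service.

from typing import Iterable, Optional

def place_high_priority_call(contact: str) -> bool:
    """Stub: would ring the contact and report whether they answered."""
    print(f"calling {contact} ...")
    return contact == "24-hour service"   # assume only the service answers

def alert(contacts: Iterable[str]) -> Optional[str]:
    for contact in contacts:
        if place_high_priority_call(contact):
            return contact
    return None

answered = alert(["son Martin", "daughter", "24-hour service"])
print("answered by:", answered)
```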
SUP2 Object library: maintains and updates the object library for object detection
SUP3 Skill library: based on the task ontology, skills and their associated action sequences are stored in the library
SUP4 Action library: an updateable library storing the typical high-level actions involved in SRS operation and their corresponding low-level motor programs that the robot can execute in the domain
SUP5 Pre-knowledge about the environment: an updateable library providing 2D and 3D maps of the environment in a robot-readable format
2.5.5 Typical operations
The major system capabilities listed in the previous sections enable the following SRS operations:
Operation ID Description
O1 Smooth indoor navigation
O2 Grasp known objects
O3 Learn new object
O4 Manipulation for predefined tasks
O5 Manipulation for undefined tasks
For simplicity, in the following sections we may refer to the five typical operations by their IDs only. The pipelines of the operations are listed as follows (a sketch reading them as module sequences follows the tables):
Name: Smooth indoor navigation (O1)
Input: Navigation triggered; target specified via the user interface (UI3a) or derived by decision making (DEC1a)
Pipeline: PER2, DEC1b, NAV_MAN1, UI3c
Description: PER2 perceives the environment, DEC1b generates the action plan for navigation, the action plan is performed by NAV_MAN1, and finally the result is fed back to the RO via UI3c
Associated user requirements: R11 The system is able to bring objects in places difficult to reach for elderly people (i.e. reach a book on a high shelf)
Name: Grasp known objects (O2)
Input: Grasp triggered; target already learned and stored in the object library
Pipeline: PER1, PER3, DEC1d, NAV_MAN1, UI4c
Description: PER1 identifies the object, PER3 updates the 3D map for manipulation, DEC1d generates an optimal grasp configuration, the information is passed to NAV_MAN1 for execution, and the result is fed back to the RO via UI4c
Associated user requirements:
R07 The system should help elderly people with mobility issues, such as reaching or carrying heavy objects.
R09 The system is able to grasp objects (i.e. bottles, books).
Name: Manipulation for predefined tasks (O4)
Input: Task triggered
Pipeline: DEC1a, PER3, DEC1c, NAV_MAN1, UI4c
Description: First, DEC1a decides an action sequence; for each action, PER3 updates the 3D map for manipulation, DEC1c generates an action plan, the information is passed to NAV_MAN1 for execution, and the result is fed back to the RO via UI4c
Associated user requirements: R13 The system is able to manage objects with care (i.e. choosing an object on a shelf; opening/closing an oven door).
Name: Manipulation for undefined tasks (O5)
Input: Task triggered and DEC1a failed
Pipeline: UI6b, O4, DEC2a, DEC2b
Description: Define new action sequences via UI6b, perform the new action sequence as in O4, record the operation (DEC2a) and learn from the operation (DEC2b)
Associated user requirements:
R13 The system is able to manage objects with care (i.e. choosing an object on a shelf; opening/closing an oven door).
R14 The system should help with coping with unexpected, emergency situations such as falling.
Name: Learn new objects (O3)
Input: PER1 failed
Pipeline: UI5c, SUP2
Description: UI5c defines the simple-shaped object online; SUP2 stores it in the knowledge base
Associated user requirements:
R07 The system should help elderly people with mobility issues, such as reaching or carrying heavy objects.
R10b The system is able to handle objects with different shapes (i.e. bottles of water vs. books).
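Read this way, each operation is an ordered sequence of module calls, and O5 embeds O4 as a sub-pipeline. The following hypothetical dispatcher illustrates that reading; the print statements are stubs standing in for the real module calls.

```python
# Sketch of the O1..O5 pipelines as ordered module sequences.
# Module names follow the tables above; the stub calls are assumptions.

PIPELINES = {
    "O1": ["PER2", "DEC1b", "NAV_MAN1", "UI3c"],
    "O2": ["PER1", "PER3", "DEC1d", "NAV_MAN1", "UI4c"],
    "O3": ["UI5c", "SUP2"],
    "O4": ["DEC1a", "PER3", "DEC1c", "NAV_MAN1", "UI4c"],
    "O5": ["UI6b", "O4", "DEC2a", "DEC2b"],   # O5 embeds O4
}

def run(operation: str) -> None:
    for module in PIPELINES[operation]:
        if module in PIPELINES:          # nested operation, e.g. O4 inside O5
            run(module)
        else:
            print(f"[{operation}] calling {module}")  # stub for the real call

run("O5")
```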
2.5.6 Function modules
UI for Local User “UI_LOC”: this user interface focuses on usability for elderly users, privacy and safety assurance. It will provide a partial implementation of UI1, UI2 and UI6. It is aimed to be used in the post-deployment stage to control the robot. The UI_LOC includes a smartphone-based mobile user interface, which is always easily reachable, and a box which extends the view of the RO and issues simple commands based on local user motions. The main interaction with the SRS robot takes place through this interface – the interface integrated into the robot (i.e. the existing COB interface for the SRS demonstrator) is “only” used for additional output information.
Human Presence Sensor Unit: since the on-board cameras available on the robot allow only a limited view, sufficient for the navigation of the robot and object manipulation but not the 360° wide view needed for good orientation and for the RO’s awareness of the presence of humans in the environment, it is considered necessary that a Human Presence Sensor Unit (HPSU) be developed that will provide the following functionality:
A camera with multidirectional view, independent of the operation of the robot, enabling the remote operator to scan and observe the environment around the robot for the presence of humans and allowing safe and environment-aware remote control of the robot.
3D depth information that will be used by the human motion algorithms to identify the location of the local user, to track his/her movements and the position of his/her hands, and to alert the RO when his/her attention is required, e.g. a person approaching the robot.
Simple gestural commands, derived from the movement of the user’s hands, that will be sent to the high-level control module of the robot for enactment. Although the HPSU is not intended to replace the UI_LOC interface, this function will enhance the local user interface with elements of natural gestural communication (see the sketch below).
The unit will be implemented under PER5.
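As an illustration of how simple gestural commands might be derived from HPSU hand tracking, the sketch below classifies a raised, still hand as a stop command. The thresholds, coordinates and command names are assumptions.

```python
# Hypothetical gesture classification from HPSU 3D hand tracking.
# Coordinates, thresholds and command names are illustrative assumptions.

from typing import Optional

def classify_gesture(hand_y_m: float, head_y_m: float,
                     hand_speed_m_s: float) -> Optional[str]:
    """Map a tracked hand pose to a simple command for the high-level
    control module (None when no gesture is recognised)."""
    if hand_y_m > head_y_m and hand_speed_m_s < 0.1:
        return "STOP"        # raised, still hand: stop the robot
    if hand_speed_m_s > 0.8:
        return "ATTENTION"   # fast waving: alert the RO
    return None

print(classify_gesture(hand_y_m=1.7, head_y_m=1.5, hand_speed_m_s=0.05))
```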
UI for Private Remote Operator “UI_PRI”: this user interface seeks a balance between functionality and mobility; it aims to provide a full implementation of UI1, UI2, UI3 and UI6, and a partial implementation of UI4 and UI5. The UI is mainly targeted at fully autonomous SRS operation and less complicated semi-autonomous operation. It is assumed that the PRI is using a small and mobile user interface based on a multi-touch tablet which is always available. Interaction between the private care giver and the SRS robot takes place through this interface only. The operator can send high-level control commands to the robot through this GUI.
UI for Professional Remote Operator “UI_PRO”: this user interface focuses on functionality and high-fidelity input/output systems; it is aimed to be used in the deployment stage for teaching and learning, and provides a full implementation of UI1 to UI6.
High level action representation, translation and learning module, which enables human operators to issue high-level control commands such as “go to kitchen” or “serve me a drink” and translates them into low-level control messages automatically. It will be realised by a full implementation of DEC1, DEC2, SUP3 and SUP4.
Assisted object detection module, which enables interactive object detection in a mixed reality environment. It will be realised by a full implementation of PER1 to PER4 and SUP2 with UI5.
Assisted grasp module, which enables interactive object fetching and carrying through user-assisted grasp posture and grasp configuration selection. It will be realised by a full implementation of NAV_MAN2 with UI4.
Safety assurance will be enabled at three levels. The first level is based on emergency buttons; by exploring common interaction patterns of a service robot working in the domestic environment, it will be implemented in the UI_LOC and UI_PRO. The second level is based on human sensing and gesture inputs on the local user interface; by identifying the location of the local user and tracking his/her movements and upper-body position, the motion control programme will automatically prevent direct interaction between the human and the robot arm. It will be supported by PER5 and DEC1. The third level is based on the safe interaction capability of the selected platform for dynamic obstacle avoidance. In summary, the first two levels of safety are based on the newly developed SRS interfaces and control programme; the third level of safety is based on the selected platform.
2.6 User Interface Requirements
Based on the aforementioned sections, the following requirements for the different UIs can be outlined.
2.6.1 Requirements UI Local User “UI_LOC”
Basic Assumption(s):
It is assumed that the LOC is using a small and mobile user interface which is always easily reachable.
Main interaction with the SRS robot takes place with this interface – the interface integrated to the
robot (i.e. the existing COB interface for the SRS demonstrator) is “only” used for additional output
information (e.g. robot status, information about next activity – i.e. in order to avoid unforeseen
movement of robot platform or manipulator, error messages, etc).
General requirements:
UI_LOC1: Wireless communication with robot (via base station)
UI_LOC2: Small, lightweight and mobile (“portable”)
UI_LOC3: Intuitive interface – preferably without additional devices (i.e. touch-screen functionality)
UI_LOC4: Wireless communication with PRI or PRO (preferably video communication)
Requirements for main operation:
UI_LOC5: Accept/deny remote operation
Information in case of PRI or PRO desiring to log-in
Possibility to refuse or accept connection, or information about later log-in.
UI_LOC6: Selection and start of task
Simple and intuitive menu structure – e.g. by using cascaded menus and icons – for selection of
tasks
Tasks grouped according to task type (bring tasks, monitoring tasks, etc)
UI_LOC7: Selection of parameters for a specific robot task
Examples: which object to bring, where to bring, which meal to prepare (e.g. selection list), etc
UI_LOC8: Stop action (Emergency Stop functionality)
Not an “Emergency Stop” in the sense of the relevant standards, due to the wireless communication
Reliability as high as achievable (communication watchdog, etc)
UI_LOC9: “Localisation” of LOC position
In general, there are several possibilities for LOC localisation – e.g. identification via WLAN data. Realisation of such a localisation functionality is not in the focus of SRS. Thus, a more stable option will be used for the SRS prototype, i.e. the location of the target position for the selected task (this might be the LOC location, but can also be the location of a desired table, etc.) will be defined by the user with an appropriate UI function.
UI_LOC10: Selection of desired object for grasping tasks
If not specified as part of the task parameters, the robot sends a 2D image of the available objects and the LOC simply selects the desired object by pointing to it. Using different segmentation techniques – like the region growing method, edge detection, etc. – the contour of the selected object will be marked and the user should confirm (see the sketch below).
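A minimal region-growing sketch of this selection step, in pure Python/NumPy: the user taps a pixel, the region grows over neighbouring pixels of similar intensity, and the resulting mask would be marked for confirmation. This is illustrative only; the deployed UI may equally use edge detection or other techniques.

```python
# Minimal region-growing segmentation sketch for object selection.

import numpy as np
from collections import deque

def grow_region(image: np.ndarray, seed: tuple, tol: float = 10.0) -> np.ndarray:
    """Grow a region from the tapped pixel over similar intensities."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or mask[y, x]:
            continue
        if abs(float(image[y, x]) - seed_val) > tol:
            continue
        mask[y, x] = True
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask

# Toy image: a bright rectangular object on a dark background.
img = np.zeros((20, 20), dtype=np.uint8)
img[5:12, 6:14] = 200
selection = grow_region(img, seed=(8, 9))
print("selected pixels:", int(selection.sum()))  # user would now confirm
```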
UI_LOC11: Display of basic robot status messages
No details – only basic data like connection status, battery status, BASIC error message
UI_LOC12: Display of control status (autonomous mode, remote-operated mode)
The level of autonomy must be CLEARLY visible – i.e. the LOC must know without any doubt if the SRS robot is in remote control mode
UI_LOC13: Display of next robot activity (especially when in remote-operated mode)
Serves as a “warning” in order to know which kind of action the robot will perform next
2.6.2 Requirements UI Private Remote Operator “UI_PRI”
Basic Assumption(s):
It is assumed that the PRI is using a small and mobile user interface which is always available. Interaction with the SRS robot takes place through this interface only. Communication with the SRS robot system uses standard communication technology, such as mobile internet or similar.
General requirements (as for UI_LOC1 to UI_LOC4):
Wireless communication with robot (via base station)
Small, lightweight and mobile (“portable”)
Intuitive interface – preferably without additional devices (i.e. touch-screen functionality)
Wireless communication with LOC (preferably video communication)
Requirements for basic remote operation:
UI_PRI1: Login to remote operation
Standard case: the PRI will be contacted by the SRS system (or manually by the LOC) in case of an exceptional situation
Optional case: the PRI wants to log in to the SRS system. For a successful connection the LOC needs to accept the log-in; there can be situations (defined for each SRS installation separately) where such a log-in can also be done without such acceptance.
UI_PRI2: Switching back to fully autonomous mode (after remote intervention)
Switching back to autonomous mode is only possible if all pre-conditions for the next action are
fulfilled.
UI_PRI3: Selection and start of task
Similar functionality as for UI_LOC
UI_PRI4: Selection of parameters for a specific robot task
Similar functionality as for UI_LOC
UI_PRI5: Stop action (Emergency Stop functionality)
Similar functionality as for UI_LOC
UI_PRI6: Showing robot position in (2D) map
Position and orientation of the mobile platform in 2D map (needs scrolling/zooming/panning
functionality in order to show part of the map on a small display)
UI_PRI7: Showing environment around robot (camera image and/or 2D/3D model of environment)
Option 1: Map information (2D) around robot and overlay with sensor data (see UI_PRI8 below)
Option 2: Camera image in driving direction
Possible implementation: only Option 1 or switching between Option 1 and Option 2
Proposal for implementation: overlay with planned trajectory (maybe also calculated envelope)
and possibility to modify trajectory manually (moving of particular segments of the trajectory) –
see also UI_PRI10
UI_PRI8: Overlay of sensor data (e.g. obstacle detection) at displayed robot environment
See UI_PRI7 above
UI_PRI9: Selection of desired object for grasping tasks (in 2D image)
Similar functionality as for UI_LOC (UI_LOC10)
UI_PRI10: Remote guiding of the mobile platform to desired positions based on autonomous robot control
Selection of a pre-defined target or target position in the map + trajectory planning (“high level navigation”)
Start of movement (optional: with permanent confirmation) by means of UI_PRI
UI_PRI11: Remote guiding of the mobile platform to desired positions based on semi-autonomous robot control (sensor-based movement)
Moving of the mobile platform using a “graphical joystick” (for example)
Input of the PRI will be re-mapped based on sensor information (e.g. reduction of speed based on measured distance to obstacles) – see the sketch below
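The re-mapping can be illustrated as a scaling of the joystick command by the measured obstacle distance. The thresholds and maximum speed in the sketch below are assumptions.

```python
# Sketch of sensor-based re-mapping of joystick input (cf. UI_PRI11).
# Distance thresholds and maximum speed are illustrative assumptions.

def remap_speed(joystick: float, obstacle_dist_m: float,
                stop_dist_m: float = 0.3, slow_dist_m: float = 1.5,
                v_max_m_s: float = 1.0) -> float:
    """Scale the PRI's joystick command (-1..1) by obstacle distance:
    full speed beyond slow_dist_m, linear slow-down in between,
    and a full stop inside stop_dist_m."""
    if obstacle_dist_m <= stop_dist_m:
        return 0.0
    scale = min(1.0, (obstacle_dist_m - stop_dist_m) / (slow_dist_m - stop_dist_m))
    return joystick * v_max_m_s * scale

print(remap_speed(1.0, 2.0))   # free space: 1.0 m/s
print(remap_speed(1.0, 0.9))   # near obstacle: 0.5 m/s
print(remap_speed(1.0, 0.2))   # too close: 0.0 m/s
```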
UI_PRI12: Remote guiding of the manipulator to desired (grasp) positions based on autonomous robot control
Selection of target object on 2D image (see UI_PRI9) and trajectory planning
Start of movement (optional: with permanent confirmation) by means of UI_PRI
UI_PRI13: Control of robot camera (angle, zoom) for monitoring tasks
Proposal for implementation: graphical joystick (see above)
UI_PRI14: Display of basic robot status messages
Similar functionality as for UI_LOC (UI_LOC11)
2.6.3 Requirements UI Professional Remote Operator “UI_PRO”
Basic Assumption(s):
It is assumed that the PRO is using a UI which is installed in a 24hrs-Service Center. Mobility thus is not
required for such a setup – the focus for the UI_PRO is on functionality and high-fidelity input/output
systems. Communication with the SRS robot system uses standard communication technology, such as high-speed internet.
General requirements:
UI_PRO1: Wireless communication with robot (via base station)
UI_PRO2: Full 3D input (preferably with force-feedback) for remote control of manipulator; optional: 3D
output with data overlay (augmented 3D output)
UI_PRO3: Wireless communication with LOC (preferably video communication)
Requirements for advanced remote operation:
UI_PRO4: Login to remote operation
Similar functionality as for UI_PRI (see above)
UI_PRO5: Switching back to fully autonomous mode (after remote intervention)
Similar functionality as for UI_PRI (see above)
UI_PRO6: Selection and start of task
UI_PRO7: Selection of parameters for a specific robot task
UI_PRO8: Definition/modification of parameters for a specific robot task
Parameters must be uploaded to local base station after modification
UI_PRO9: Selection and start of sub-task (=action)
The task is structured into sub-tasks/actions. Each action has pre- and post-conditions; the execution of particular sub-tasks must consider these conditions.
UI_PRO10: Modification/definition of (new) robot tasks (i.e. assembly of a new task based on existing
actions, definition of action sequence and pre/post conditions, etc) and upload to SRS base station
UI_PRO11: Stop action (Emergency Stop functionality)
UI_PRO12: Showing robot position in (2D) map
Similar functionality as for UI_PRI (see above)
UI_PRO13: Showing environment around robot (camera image and/or 2D/3D model of environment)
Similar functionality as for UI_PRI (see above)
Display of robot and robot environment in different views as described for UI_PRI7 – due to the bigger possible size of the display, both views (3D model as well as camera view) can be active simultaneously
Option: full 3D display
Additional overlay of important status information – e.g. robot approach vector, forces, distance to object, etc
UI_PRO14: Overlay of sensor data (e.g. obstacle detection) at displayed robot environment
See above.
UI_PRO15: Correction of wrong localisation data for mobile platform
In case of wrong robot self-localisation, the PRO can manually move the robot to a definite position and reset the localisation information
UI_PRO16: Update of map (e.g. inserting/removing of static obstacles, definition of no-go areas,
definition of new target positions, etc)
If the environment at the LOC site changes (e.g. new or moved obstacles), the PRO can update and upload the map so that this information can be considered in subsequent trajectory planning procedures
UI_PRO17: Modification/overruling of planned trajectories for the mobile platform
Example: no valid trajectory because of the safety margin between obstacle and robot in “standard” trajectory planning – the PRO can overrule the safety margin and execute a (possibly risky) movement with reduced speed and permanent monitoring of sensor data
UI_PRO18: Selection of desired object for grasping tasks (either in 2D image and/or in 3D point cloud)
Similar functionality as for UI_PRI (see above)
If image segmentation in the 2D image is not working: manual segmentation
If automatic mapping of the 2D object selection to the 3D point cloud is not working: manual segmentation of the “search space” in the 3D point cloud
UI_PRO19: Modification/overruling of planned grasping configuration
UI_PRO20: Adding/modification of object database (e.g. grasp configurations, shape) – including
teaching of new objects (REMARK: concrete procedure TBD in order to design the connected UI
function(s) accordingly)
UI_PRO21: Initiation of 3D map update (for subsequent trajectory planning)
UI_PRO22: Remote guiding of the mobile platform to desired positions based on autonomous robot control
Similar functionality as for UI_PRI (see above)
UI_PRO23: Remote guiding of the mobile platform to desired positions based on semi-autonomous robot control (sensor-based movement)
Similar functionality as for UI_PRI (see above)
UI_PRO24: Remote operation of the mobile platform in fully manual mode (i.e. also without collision avoidance)
If the two aforementioned types of platform movement are not working (e.g. because of a possible collision or because of wrong/unstable sensor readings): fully manual mode (without obstacle avoidance)
Example: the robot pushes away a light obstacle which blocks the only possible route – a desired “collision”, overruling active collision avoidance
UI_PRO25: Remote guiding of the manipulator to desired (grasp) positions based on autonomous robot control
Similar functionality as for UI_PRI (see above)
UI_PRO26: Remote guiding of the manipulator to desired (grasp) positions based on semi-autonomous robot control (sensor-based movement)
Similar functionality as for UI_PRI (see above)
UI_PRO27: Remote operation of the manipulator in fully manual mode (i.e. also without collision avoidance) in different coordinate frames
Full remote control of the manipulator arm
Option: collision detection before execution of a movement (similar to teleoperation for space robotics) – advantage: safe movement; drawback: time consuming
Feedback via an updated 3D model of the environment and the robot (option: generation of force feedback based on the calculated distance to obstacles – see the sketch below) and/or camera information
Augmentation of 3D information (see UI_PRO13 and UI_PRO14)
Switching between coordinate frames: joint coordinates and Cartesian coordinates (preferably the TOOL coordinate system)
UI_PRO28: Control of robot camera (angle, zoom) for monitoring tasks
Similar functionality as for UI_PRI (see above)
UI_PRO29: Remotely open/close gripper
UI_PRO30: Display of detailed robot status messages (joint positions, limits, etc)
As image overlay (see above) and/or in a separate part of the screen
2.6.4 Selected Technology for UIs
Original deliverable D1.3 includes a detailed discussion of different interaction devices and their
suitability for SRS User Interfaces. The analysis includes the following devices:
Standard-PC with LCD display and standard input devices (keyboard, mouse)
3DOF force feedback input device (“Falcon”, or similar)
3D-mouse (e.g. SpaceMouse, 3dconnexion, or similar)
3D gesture controller (e.g. Microsoft ™ Kinect) + PC
Wireless 3D motion tracking handheld controller (e.g. Wii controller) + full HD monitor (or
similar)
Multi-touch tablet computer (e.g. Apple ™ iPad)
Smartphone with (multi-touch) touch-screen (e.g. Apple ™ iPhone 4)
The analysis shows the possibilities and drawbacks of the different devices. In conclusion, there is no device type which is NOT suitable for SRS. The selection of the technology/device type thus needs a strategic rather than a purely functionality-based decision. Feedback from the user needs assessment as well as the analysis of the SRS use scenario descriptions clearly shows the need for a relatively small and mobile “all in one” interaction device – especially for UI_LOC and UI_PRI. On the other hand, UI_PRO is based on a fixed workstation in the framework of a 24hrs-Service Center and thus does not need to be mobile; for this user interface the focus is on maximum functionality and RO support. Based on these requirements, the following devices have been selected for the three user interfaces of the SRS demonstrator:
1. UI_LOC: iPod touch or similar. The robot interface is not used for input – but will add additional output functionality (e.g. output of status messages)
2. UI_PRI: Smartphone (e.g. iPhone 4) – if the display area is not big enough (TBD after the design of the screens), an iPad might be used as an alternative
3. UI_PRO: Standard-PC with 20” LCD display (or bigger) – preferably with touch-screen functionality
Additional input device: force-feedback input device (Falcon, with switching between position/orientation) or full 3D input device (e.g. Omega system, Phantom, …)
Optional: 3D display
2.6.5 Possible Layout
The following drawings show a first proposal for a screen layout. For this example, UI_PRO has been selected. It should be mentioned here that these proposals only show a first idea. The final design needs to be discussed between the partners responsible for the concept (HDM, IMA) and the partners responsible for implementation (BAS, etc).
Figure 8 - Screen Layout for UI_PRO, “Mobile Platform”. Annotated elements:
Task bar (context sensitive): e.g. movement of mobile platform, start „graphical joystick“, etc.
Main window 1: 2D map with robot and planned trajectory (incl. envelope) + sensor data overlay
Toggle switch Manipulator/Mobile Base
Main window 2: video or snapshot from on-board camera with data overlay (e.g. trajectory)
Status bar: basic status information
Figure 9 - Screen Layout for UI_PRO, “Manipulator”. Annotated elements:
Task bar (context sensitive): e.g. switching to different coordinate systems, open/close gripper, etc.
Main window 1: 2D/3D map with robot + overlay with sensor data (e.g. 3D point cloud) and manipulator data (e.g. approach vector)
Toggle switch Manipulator/Mobile Base
Main window 2: video or snapshot from on-board camera with data overlay (e.g. selected = segmented object)
Status bar: basic status information
Manipulator status bar: manipulator status information (e.g. joint limits)
Insert of object library for selection
2.7 Platform Selection
2.7.1 Platform description
The SRS prototype is based on the Care-O-bot® 3 robot from Fraunhofer IPA.
Care-O-bot® 3 Design Concept
Butler design, not humanoid in order not to raise wrong expectations
Separation of working and serving side:
o Functional elements at back, specifically robot arm
o HRI at front using a tray and integrated touch screen
o Sensors can be flipped from one side to the other
Safe object transfer by avoiding contact of user and robot arm
Intuitive object transfer through tray
Care-O-bot® 3 hardware specification
Figure 10 - Care-O-bot® 3
Dimensions (L/W/H): 75/55/145 cm
Weight: 180 kg
Power supply: Gaia rechargeable Li-ion battery, 60 Ah, 48 V; internal: 48 V, 12 V, 5 V; separate power supplies to motors and controllers; all motors connected to an emergency-stop circuit
Omnidirectional platform: 8 motors (2 motors per wheel: 1 for the rotation axis, 1 for drive); Elmo controllers (CAN interface); 2 SICK S300 laser scanners; 1 Hokuyo URG-04LX laser scanner; speed approx. 1.5 m/s
Arm: Schunk LWA 3 (extended to 120 cm); CAN interface (1000 kbaud); payload 3 kg
Gripper: Schunk SDH with tactile sensors; CAN interfaces for tactile sensors and fingers
Torso: 1 Schunk PW 90 pan/tilt unit; 1 Schunk PW 70 pan/tilt unit; 1 Nanotec DB42M axis; Elmo controller (CAN interface)
Sensor head: 2 AVT Pike 145 C, 1394b, 1330×1038 (stereo circuit); MESA Swissranger 3000/4000
Tray: 1 Schunk PRL 100 axis; LCD display; touch screen
Processor architecture: 3 PCs (2 GHz Pentium M, 1 GB RAM, 40 GB HDD)
2.7.2 Function requirements toward platform selection and expected SRS enhancement
The Care-O-bot® 3 robot was selected for the SRS project at the proposal stage; the justification for the selection and the expected improvements within the project are listed in the
following table:
ID – Description – Function Requirement – Justification – SRS Enhancement/Strategy

PLA1 Actuators and mobility
Function requirement: The system should be able to handle household objects of common size and door handles. Met: Yes.
Justification: Care-O-bot® 3 is equipped with a highly flexible, commercial arm with seven degrees of freedom as well as a three-finger hand. This makes it capable of grasping and operating a large number of different everyday objects. Using tactile sensors in the fingers, Care-O-bot® 3 is able to adjust the grasping force.
SRS enhancement/strategy: The personal service robot field is changing and advancing rapidly; new hardware emerges monthly. The SRS project is not intended to compete over basic robotic component development. Its focus is to convert existing and future service robots into home carers for independent living. This is achieved through the SRS perception and decision-making mechanisms. Hence, the control software is robot-hardware independent: most control modules operate only on higher-level, hardware-independent abstractions. It can therefore be easily integrated with future service robots.

PLA2 Chassis and power
Function requirement: The system is able to manoeuvre in narrow spaces: elderly people usually live in small apartments full of furniture. Met: Yes.
Justification: Care-O-bot® 3 has an omnidirectional platform with four steered and driven wheels. This kinematic system enables the robot to move in any desired direction and therefore also to pass safely through narrow passages.

PLA3 Payload
Function requirement: The system is able to handle objects of reasonably heavy weight. Met: Yes.
Justification: Care-O-bot® 3 has a Schunk LWA 3 arm (extended to 120 cm); it can handle a payload of up to 3 kg.

PLA4 Basic navigation and manipulation planning
Function requirement: The system should perform manipulator and navigation motion when an action plan is given. Met: Yes.
Justification: The manipulators of Care-O-bot® 3 can be actuated by a simple control interface which allows movements in joint and Cartesian space. According to the parameters given in the configuration files, the robot is modelled by oriented bounding boxes, which are used for collision avoidance calculations.
SRS enhancement/strategy: Extend the capability to unexpected situations through user-assisted action planning and SRS self-learning.

PLA5 Basic environment perception
Function requirement: The platform should be able to sense and interpret simple scenes under predefined situations. Met: Yes.
Justification: A multiplicity of sensors enables Care-O-bot® 3 to detect the environment in which it is operating. These range from stereo colour cameras and laser scanners to a 3D range imaging camera. The sensors serve, for example, to detect and locate objects for manipulation as well as relevant obstacles in the robot's environment.
SRS enhancement/strategy: Extend the capability to partially known situations through user-assisted environment interpretation and object detection.

PLA6 Basic decision making
Function requirement: The platform should be able to perform predefined tasks within a known environment. Met: Yes.
Justification: Care-O-bot® 3 is also able to autonomously plan and follow an optimal, collision-free path to a given target under predefined situations.
SRS enhancement/strategy: Extend the capability to unexpected tasks in partially known environments through user-assisted decision making, and add the capability to handle simple-shaped unknown objects.

PLA7 Safe interaction feature
Function requirement: The robot should have basic obstacle avoidance capability. Met: Yes.
Justification: The primary interface between Care-O-bot® 3 and the user consists of a tray attached to the front of the robot, which carries objects for exchange between the human and the robot. The tray includes a touch screen and retracts automatically when not in use. The robot arm is only used to place objects on the tray or take them away from it. It is stopped immediately as soon as people are detected in the vicinity of the robot. Combined with safety sensors for navigation, this concept enables the safe operation of Care-O-bot® 3 in public places.
SRS enhancement/strategy: Reinforce the safety features through an advanced decision-making process with the help of the RO.
Section 3. Function Specifications
3.1 User Interface Function Specification
3.1.1 UI_F1 Authorisation and privacy function
Function UI_F1 Purpose
For privacy protection, the SRS authorisation process in the UI allows only authorised users to access the system.
Function UI_F1 Input
Credentials sent by the SRS user
Function UI_F1 Operations
Authentication confirms the user's identity via Kerberos V5, Secure Socket Layer/Transport Layer
Security (SSL/TLS), etc. It is targeted at private caregivers and professionals. The function shall (a minimal sketch of the token step follows this list):
Prevent unauthorised intruders through robust security
Inform the local user of any change of RO within a session
Protect information stored in the system
Display confidentiality agreements after authorised users log in to the system
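A minimal sketch of the token step is given below, in Python, assuming an HMAC-signed session token issued after the credential check succeeds; the credential verification itself (Kerberos, SSL/TLS client authentication) is outside the sketch, and names such as issue_token and verify_token are illustrative only:

    import hashlib
    import hmac
    import secrets
    import time

    SERVER_KEY = secrets.token_bytes(32)  # per-deployment secret, never sent to clients

    def issue_token(user_id: str, ttl_s: int = 3600) -> str:
        """Called only after the user's credentials have been verified."""
        expiry = str(int(time.time()) + ttl_s)
        payload = f"{user_id}|{expiry}"
        sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}|{sig}"

    def verify_token(token: str) -> bool:
        """Accept only untampered, unexpired tokens issued by this server."""
        payload, _, sig = token.rpartition("|")
        expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False                                        # tampered or foreign token
        return int(payload.rpartition("|")[2]) > time.time()    # reject expired sessions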
Function UI_F1 Outputs
Authorised users receive a token to access the SRS system.
3.1.2 UI_F2 Video communication among users
Function UI_F2 Purpose
Provide video call functionality among users; it should also be able to display the status of users
Function UI_F2 Input
Video captured by web camera
Function UI_F2 Operations
Skype-like video communication, capable of multicast video streaming to several users simultaneously;
the status of the participating users can be displayed in the programme.
Support an "on/off" mode to protect the privacy of the local user. Access to the "on/off" mode is adaptable
based on the specific frailty of the elderly user.
It should also provide a mechanism to confirm action execution.
Function UI_F2 Outputs
Status indication and video link between the local user and remote users
3.1.3 UI_F3 UI Robot control interface
Function UI_F3 Purpose
The UI robot control interface enables the RO to:
• See the robot position in a 2D map of the house.
• See a real view of the environment around the robot at its position (in all possible directions).
• See which objects in the environment are recognized by the robot.
• Select or deselect any of the recognized objects (one at a time).
• Select a possible action from a list, according to the currently selected object.
• Define unrecognized objects in the environment.
• Navigate the robot platform to any position in the living place by specifying the goal position.
• Manually navigate the robot platform to any position in the living place.
• Manually navigate the robot arm to any possible position in the real environment, according to
the position of the robot platform.
Function UI_F3 Input
• 2D video cameras and ToF camera
• Laser scanner data
Additionally, the following data from the robot will be used:
• 2D maps and navigation data
• All internal control states, such as joint positions/velocities/accelerations
• Knowledge database with all tasks/skills data
Function UI_F3 Operations
The main terms used in the specification below are:
• Model – a program abstraction of a concrete subset of the problem domain.
• View – a part of the user screen specially designed to handle logically related functions.
• Layer – a part of a view, correlated with a concrete model. Layers overlay each other to
generate the most expressive view of the environment. The user can switch them on and off to make the
view most usable (a minimal sketch of this composition follows the list of views below). Overlaying layers results in presenting the main Augmented Reality interface
to the RO.
Models
• 2D map – used to describe the scheme of the living place.
• 3D information model – the 3D data that results from Time-of-Flight scanning of the
environment.
• Object library – a predefined set of 3D object abstractions of real objects, used to
describe objects not yet known to the robot.
• Action library and Skill library – predefined sets of typical actions and action sequences.
Views
The main views are:
• 2D map view – used for semi-autonomous navigation of the robot in the living place.
• Environmental view – the main view of the user interface. It consists of:
o Video layer – used to display the video streaming from the robot camera. It is a vital
part of the AR user interface.
o 3D information layer – used to visualize the 3D data from the robot. This layer provides
depth information by coding different depths with different colours. It helps determine the
distance to objects and also 3D environment properties.
o Robot simulation layer – used for 3D visualization of the robot state.
o Control layer – used to display some control variables of the robot state, like position,
speed, acceleration, etc.
o Action layer – used to visualize a list of possible actions that the operator can select from,
according to the selected object.
o Object library layer – used for selection of predefined objects from the 3D object library
– object geometry, object grasp points, etc.
• Control view – used for visualization of all the variables of the robot state, like platform/joint
positions, speed, battery status, etc.
• Configuration view – used to predefine/update the robot databases, like the knowledge database,
configuration database, etc.
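A minimal sketch of the view/layer composition described above, assuming each layer draws into a shared canvas and can be toggled on or off by the RO; the class and method names are illustrative, not part of the SRS design:

    class Layer:
        """Base class: a drawable overlay correlated with one model."""
        name = "layer"
        def draw(self, canvas):
            raise NotImplementedError

    class VideoLayer(Layer):
        name = "video"
        def draw(self, canvas):
            canvas.append("video frame")                    # robot camera stream

    class PointCloudLayer(Layer):
        name = "3d_info"
        def draw(self, canvas):
            canvas.append("depth-coloured point cloud")     # ToF data overlay

    class View:
        """A screen region composed of toggleable, overlaid layers."""
        def __init__(self, layers):
            self.layers = layers
            self.enabled = {l.name for l in layers}         # all layers on by default

        def toggle(self, name):                             # RO switches a layer on/off
            self.enabled.symmetric_difference_update({name})

        def render(self):
            canvas = []
            for layer in self.layers:                       # later layers overlay earlier ones
                if layer.name in self.enabled:
                    layer.draw(canvas)
            return canvas

    env_view = View([VideoLayer(), PointCloudLayer()])
    env_view.toggle("3d_info")                              # hide the 3D overlay
    print(env_view.render())                                # -> ['video frame']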
Interaction scenarios
From a technical standpoint:
The user can select objects using the mouse and keyboard, and can specify object names and
parameters using the keyboard.
The user can set grasp points using the mouse.
The user can move the robot using the joystick.
The user can set/adjust joint parameters of the simulation using the mouse and keyboard, and can then
synchronize the robot with the simulation by sending the tested joint positions to the real robot.
3D information helps solve the problem of the limited field of view. The user can retrieve precise depth
information by pointing the mouse pointer at a specific location in the view. Relative depth
information is provided at a glance due to colour differences corresponding to different depths.
The RO can see frames surrounding the objects recognized by the robot. The RO can also retrieve
detailed model information using the mouse and/or keyboard.
The 3D model view helps see those parts of the environment that are hidden from the direct camera
view of the robot.
The user can specify a robot trajectory target point on the map using the mouse and keyboard.
The user can set Points of Interest (POIs) on the map.
The user can set various software configuration parameters using the mouse and keyboard.
The user can show/hide, move and resize different views using the mouse and/or keyboard.
The user can set various view parameters (show/hide grid, overlap some of the layers, etc.) using
the mouse and/or keyboard.
The user can set the program's mode (Manual, Autonomous or Semi-autonomous) using the mouse or
keyboard.
The user can always access the Emergency Stop button (located in the Environmental view) using
the mouse, keyboard or joystick.
The user can control all joints either individually or as a group (i.e. the arm) using the joystick or keyboard.
The user can manually manipulate the robot hand, including wrist and fingers.
From the user-scenario standpoint, the interactions with the UI are:
See the robot position in a 2D map of the living room.
The robot position is visualized on the 2D map view.
See a real view of the environment around the robot at its position (in all possible directions).
This is done by using the joystick to set the camera head movement and platform rotation parameters,
and the environmental view for feedback of the real movement.
See which objects in the environment are recognized by the robot.
All 3D data of the robot will be visualized in the 3D information layer on the environmental view.
Select or deselect any of the recognized objects.
This is done by mouse clicking in the 3D information layer on the environmental view. If the
coordinates correlate with any recognized object in the environment, then its selection state will
be changed.
Select possible action from a list, according to currently selected object.
After selection/de-selection of an object from the environment, the list with possible actions in the
action layer is refreshed. Then the user can select one of them by mouse click.
Define unrecognized objects in the environment.
This is done by dragging and dropping objects with the mouse from the object library layer to the 3D
information layer and setting the 3D object parameters with the mouse and keyboard.
Navigate the robot platform to any position in the living place by specifying the goal position.
This is done by a mouse click on the 2D map view; the correlated coordinates become the goal position.
Manually navigate the robot platform to any position in the living place.
This is done by using the joystick to set the platform movement parameters, and the environmental
view for feedback of the real movement.
Manually navigate the robot arm to any possible position in the real environment, according to
the position of the robot platform.
This is done by using the joystick to set the arm movement parameters, and the environmental view
for feedback of the real movement.
Function UI_F3 Outputs
Support for manipulation, navigation, perception and decision making
3.2 Perception Function Specification
3.2.1 PER_F1 Recognition of known objects
Function PER_F1 Purpose
Object detection tries to associate sensor input data with the object model library. If a learned object
can be detected, the pose of the object is calculated. Object detection may fail if the environment
conditions are inappropriate or if the object hasn’t been learned yet. The RO can improve object
detection by manual intervention. The RO specifies the object to be manipulated on the user interface.
For unknown objects, the generic object library can be used.
Function PER_F1 Input
Data from TOF and colour cameras, object models from the knowledge base and Intervention from RO.
Function PER_F1 Operations
Object detection is able to identify those objects already learned. Those reside in the knowledge base. If
detection fails, the RO specifies a region of interest on the 2D or 3D view of the user interface. Also, the
basic shape of the object has to be specified. Then, automatic fitting of the shape within the ROI can
take place and the object pose is known.
Object detection does not have to run in real time but must be fast enough for convenient operation;
up to 5 seconds per detection cycle is acceptable. Object detection will not run continuously but will be triggered
in certain situations (a minimal sketch of such a detection cycle follows).
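The sketch below assumes the matcher and shape-fitting routines are supplied by the perception stack; all names here (match, fit, request_roi_and_shape) are illustrative, not the actual SRS API:

    import time
    from dataclasses import dataclass

    @dataclass
    class Detection:
        object_id: str
        pose: tuple          # e.g. (x, y, z, roll, pitch, yaw)

    def detect_object(point_cloud, model_library, ro, match, fit, budget_s=5.0):
        """One detection cycle: try learned models first, then fall back to
        RO-assisted basic-shape fitting inside a region of interest."""
        deadline = time.monotonic() + budget_s
        for model_id, model in model_library.items():      # learned objects only
            pose = match(point_cloud, model)               # matcher supplied by caller
            if pose is not None:
                return Detection(model_id, pose)
            if time.monotonic() > deadline:
                break                                      # stay within the ~5 s budget
        # Detection failed: the RO marks a region of interest and names a basic
        # shape on the UI; that shape is then fitted inside the ROI.
        roi, shape_id = ro.request_roi_and_shape()
        return Detection(shape_id, fit(point_cloud, roi, shape_id))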
Function PER_F1 Outputs
Output is the ID and the pose of the detected object; it can be used by the manipulation component
3.2.2 PER_F2 Sensor fusion
Function PER_F2 Purpose
Data from both colour and TOF cameras are combined in order to generate coloured point clouds as
input for other components. The sensors are calibrated to each other and sensor fusion should generate
point clouds at sensor frame rate.
Function PER_F2 Input
Data from TOF and colour cameras
Function PER_F2 Operations
Various other components like navigation, manipulation or user interface need combined sensor data as
input. Therefore, sensor fusion is able to fuse data from TOF and colour cameras in real time.
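A minimal sketch of the fusion step, assuming the usual pinhole projection of calibrated TOF points into the colour image; the calibration values K and T stand in for the real camera parameters:

    def fuse(tof_points, color_image, K, T):
        """Attach a colour to every TOF point.
        tof_points:  iterable of (x, y, z) in the TOF frame
        color_image: H x W grid (list of rows) of RGB triples
        K:           colour-camera intrinsics (fx, fy, cx, cy)
        T:           extrinsics TOF -> colour camera, as (R, t), R a 3x3 nested list
        """
        fx, fy, cx, cy = K
        R, t = T
        cloud = []
        for x, y, z in tof_points:
            # Transform the point into the colour-camera frame (from calibration).
            xc = R[0][0]*x + R[0][1]*y + R[0][2]*z + t[0]
            yc = R[1][0]*x + R[1][1]*y + R[1][2]*z + t[1]
            zc = R[2][0]*x + R[2][1]*y + R[2][2]*z + t[2]
            if zc <= 0:
                continue                                   # behind the colour camera
            u, v = int(fx*xc/zc + cx), int(fy*yc/zc + cy)  # pinhole projection
            if 0 <= v < len(color_image) and 0 <= u < len(color_image[0]):
                cloud.append((x, y, z, color_image[v][u]))  # coloured point
        return cloud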
Function PER_F2 Outputs
Coloured point cloud, used by navigation, manipulation and the user interface.
3.2.3 PER_F3 Environment perception for navigation and manipulation
Function PER_F3 Purpose
Environment perception updates the static 2D and 3D maps in order to account for dynamic changes in
the environment. Also, identification of areas of interest like furniture takes place. The updated
environment maps are used for navigation, manipulation and visualization. Furthermore, the 3D map
can be segmented in order to find geometric structures like furniture or table-tops.
Function PER_F3 Input
Data from sensor fusion and from sensors and the static 2D and 3D map from SRS knowledgebase
Function PER_F3 Operations
Output is an updated 2D and 3D map of the environment. The representation can be a point cloud or a
geometric map. This is used by navigation, manipulation and user interface.
Function PER_F3 Outputs
Output is an updated 2D and 3D map of the environment. The representation can be a point cloud or a
geometric map. This is used by navigation, manipulation and user interface.
3.2.4 PER_F4 Learning new objects
Function PER_F4 Purpose
In order to learn new objects for manipulation, the corresponding 3D object models are learned. The
object is placed in the robot gripper and the model is learned autonomously. The models are stored in
the knowledge base and used during object detection and grasp planning.
Function PER_F4 Input
Input is data from TOF and colour cameras.
Function PER_F4 Operations
An object is placed in the gripper and rotated in front of the cameras. During rotation, 3D and colour
data is gathered. Then, dominant features are extracted from the sensor data and stored as a 3D model.
Function PER_F4 Outputs
Outputs are 3D feature-point models and the category of the objects.
3.2.5 PER_F5 Human Motion Analysis
Function PER_F5 Purpose
The Human Motion Analysis (HMA) specification has been derived from the robot's need to
observe the location, pose and actions of the human in order to deduce information about the
movements, gestures and intentions of the local user. Such information can be used for path planning,
safety and enhancing the human-computer interface.
Function PER_F5 Input
Input is data from TOF and colour cameras.
Function PER_F5 Operations
The HMA subsystem will receive information from a 3D TOF camera. It will filter out noise and perform
a runtime classification of the various elements in the scene, i.e. local user, other people, background
artefacts and separate body parts. The subsystem will compute statistical data such as height, body
orientation, torso orientation, centre of mass, chest direction, pelvis direction and user volume. As
output, the module will provide identification and tracking information for the movements of the user's
body parts and a body skeleton reconstruction.
From the extracted 3D coordinates of key body parts the HMA subsystem will also infer a number of
pre-programmed gestures and key commands that will be used later by the decision making module of
the robot. These will include waving commands to the robot to stop the currently executed task, to
come closer or to go away, repeat a task and so on.
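A very small sketch of one such gesture test, flagging a wave when the tracked wrist x-coordinate oscillates around its mean often enough within a window of recent frames; the window size and threshold are illustrative, not values from this specification:

    from statistics import mean

    def detect_wave(wrist_x, window=30, min_crossings=4):
        """Return True when the wrist track oscillates like a wave gesture."""
        recent = wrist_x[-window:]
        if len(recent) < window:
            return False                         # not enough frames yet
        centre = mean(recent)
        above = [p > centre for p in recent]
        crossings = sum(a != b for a, b in zip(above, above[1:]))
        return crossings >= min_crossings

    # A synthetic oscillating wrist track (5 frames left, 5 frames right, ...):
    track = [0.1 * ((i // 5) % 2) for i in range(60)]
    print(detect_wave(track))                    # -> True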
Function PER_F5 Outputs
The subsystem will output real-time information about the 3D coordinates of key points of the human
body, e.g. arms, head and legs.
3.3 Decision Making Function Specification
3.3.1 DEC_F1 Action plan generation, selection and optimisation
3.3.1a DEC_F1a Formal formulation for object and action representation
Function DEC_F1a Purpose
Construct a formal formulation for action representations toward corresponding objects. The formulation
will enable us to abstract the capabilities of a robot and its working environment. Such information can
be used for intent-directed planning, skill learning, etc.
Function DEC_F1a Input
Action ontology
Function DEC_F1a Operations
A list of special constants denotes certain aspects of the SRS domain, such as valid workspace locations,
manipulation instruments, and household objects:
• Workspaces: cupboard, oven, fridge, etc.
• Instruments: gripper, arm, mobile base, etc.
• Objects: apple juice box, water bottle, etc.
A set of high-level physical actions, represented as predicates, divides the set of object
manipulations into context-dependent operations. These actions come with conditions and effects that
change the state of the world. For example:
grasp(?o, ?l, ?h) – Grasp object ?o from the edge of ?l using gripper ?h
move(?l1, ?l2) – Move the robot from location ?l1 to location ?l2.
Each action corresponds to low-level motor programs that the robot can execute in the domain. All
of the above actions are parameterised with variables denoting objects, locations, and grippers.
A set of high-level properties (predicates and functions) models features of the world, robot, and domain
objects, corresponding to abstract versions of information available at the robot level, e.g. "inRange",
"inGripper" and "objOpen".
High-level properties are typically formed by combining information from multiple low-level sensors in
particular ways, and packaging that information into a logical form. Like actions, high-level properties
can be parameterised and instantiated by defined constants.
A set of high-level sensing actions that observe the state of the world. For example
inRange(?x) -- A predicate indicating that object ?x is in the range of the robot’s ordinary gripper.
sense-open(?x) -- Determine whether object ?x is open or not.
Similar to physical actions, each sensing action corresponds to low-level motor programs that the
robot can execute to perceive the environment.
Action sequences are formed by combining actions, taking the above constants and properties into
account, either by the RO or by the planning function DEC_F1b; the results are called skills or action
plans, respectively. In an action sequence, the parameters of the included actions are replaced with
constants and high-level properties to produce specific action instances. It is these action instances that
will ultimately be passed to the robot and converted into low-level motor programs for execution in the
real world. The formulation will not provide plan-level actions that specify 3D spatial coordinates;
details of the actual execution of these actions are left to the robot controller.
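A minimal sketch of this representation in Python, with illustrative domain constants and a grounding step that turns a parameterised action into a specific action instance; the predicate strings are examples, not the project's actual ontology:

    from dataclasses import dataclass

    WORKSPACES  = {"cupboard", "oven", "fridge"}            # illustrative constants
    INSTRUMENTS = {"gripper", "arm", "mobile_base"}
    OBJECTS     = {"apple_juice_box", "water_bottle"}

    @dataclass(frozen=True)
    class Action:
        """A parameterised high-level action with symbolic conditions and effects."""
        name: str
        params: tuple        # variable names, e.g. ("?o", "?l", "?h")
        conditions: frozenset
        effects: frozenset

    GRASP = Action("grasp", ("?o", "?l", "?h"),
                   conditions=frozenset({"inRange(?o)", "gripperEmpty(?h)"}),
                   effects=frozenset({"inGripper(?o, ?h)"}))

    def instantiate(action, binding):
        """Bind variables to constants, producing a ground action instance that
        can be passed down and converted into a low-level motor program."""
        def ground(s):
            for var, const in binding.items():
                s = s.replace(var, const)
            return s
        head = f"{action.name}({', '.join(ground(p) for p in action.params)})"
        return head, {ground(c) for c in action.conditions}, {ground(e) for e in action.effects}

    head, pre, post = instantiate(GRASP, {"?o": "water_bottle", "?l": "table", "?h": "gripper"})
    print(head)   # -> grasp(water_bottle, table, gripper)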
Function DEC_F1a Outputs
A list of typical high-level actions, and the mechanism for combining the actions into sequences
3.3.1b DEC_F1b Integrated decision making and switching between semi-autonomous mode and fully
autonomous mode
Function DEC_F1b Purpose
Generate action plan for a given task
Function DEC_F1b Input
A given task
Function DEC_F1b Operations
Case-based reasoning
Case-based reasoning is needed to decide which previous experience (skills) can be reused for a given
task, improving efficiency by directly applying that experience to complete the task.
The inputs of this function are the given task, in the form specified in the skill library, and workspace
descriptions, in the form of the 2D and 3D models specified in the environment model.
The function will operate in the way described below:
First, it compares the features of the given task with those defined for pre-defined task categories to find
out to which task category the given task belongs. It then follows the task ontology of that task category
to identify an identical task. The task ontology contains tasks that were performed previously and are
defined for each task category. After this step, it retrieves the skill associated with that task using the
association between tasks and skills.
The output of the function is the skill retrieved, if such an identical task is found. If such a task cannot
be found, the task is passed on to planning.
Planning
Planning is performed to produce an action plan when case-based reasoning fails.
Planning can basically be a logical forward or backward chaining process. However, logical backward
chaining is preferred to reduce the complexity caused by the fact that the same type of action can have
various versions each of which has a distinct effect.
The output of this function is an action plan in which the condition of the first action is the same as the
current state of the workspace where the SRS is located, the effect of the last action is the same as that
specified by the ultimate goal of the given task, and the effect of any other action is the same as the
condition of the next action.
Planning fails if such an action plan cannot be formed. The function will then send a signal to F21e to
trigger an operation mode switch from autonomous mode to semi-autonomous or manual mode.
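A minimal sketch of this two-stage decision step, assuming actions carry symbolic conditions and effects as in DEC_F1a; the depth bound and the names task.signature and task.goal are illustrative:

    def plan_for_task(task, skill_library, actions, current_state, max_depth=100):
        """Try case-based retrieval first, then backward chaining from the goal;
        return None to signal the switch to semi-autonomous / manual mode."""
        # 1. Case-based reasoning: reuse a skill learned for an identical task.
        skill = skill_library.get(task.signature)
        if skill is not None:
            return skill
        # 2. Backward chaining from the goal state.
        plan, needed = [], set(task.goal)
        for _ in range(max_depth):               # bound keeps the sketch terminating
            if needed <= current_state:
                return plan                      # every open condition already holds
            for action in actions:
                if set(action.effects) & needed:         # achieves an open goal
                    plan.insert(0, action)               # prepend: chaining backwards
                    needed = (needed - set(action.effects)) | set(action.conditions)
                    break
            else:
                break                            # no applicable action found
        return None                              # planning failed -> mode switch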
Intent/intention recognition
This function operates when an SRS works in semi-autonomous or manual mode. The aim is to recognise
an RO's intent/intention in order to trigger an operation mode switch from remote-controlled mode back
to autonomous mode.
Function DEC_F1b Outputs
High-level action plan and operation mode
3.3.1c DEC_F1c Grasp configuration generation and selection
Function DEC_F1c Purpose
Grasp configuration generation for a multi-fingered robotic hand
Function DEC_F1c Input
Detected object from PER_F1 and environment perception based on PER_F3
Function DEC_F1c Operations
Generate grasp configuration based on grasp plane and contact point for the identified object.
Function DEC_F1c Outputs
Optimal grasp configuration as the target for the following path planning.
3.3.2 DEC_F2 skill learning function
Function DEC_F2 Purpose
The aim of skill learning is to collect and use previous experience, in terms of skills, to efficiently complete
tasks that were completed previously by an SRS (supporting DEC_F1b).
Function DEC_F2 Input
Action plan as output of DEC_F1b
Function DEC_F2 Operations
In the pre-deployment phase, skill learning starts when the SRS is taught to complete tasks. The inputs
are the observed action sequences demonstrated to the SRS during the teaching. As teaching signals,
the conditions and effects (in terms of the state of the workspace and the object manipulated) and the
motor programs of every individual action in an action sequence are clearly stated. The function saves
all such actions in an Action Bank (AB). For a given task, the function keeps the indices of the actions, in
the order that they appear in the action sequence, as a skill to complete that specific task.
In the post-deployment phase, skill learning is invoked when the SRS is switched to remote-controlled mode.
For a given task, the function records the action sequence performed through the RO's manipulations, the
corresponding motor programs and the states of the objects and workspace. The function then calls
DEC_F3, which breaks the sequence down into individual actions and saves the actions in the AB,
associates the actions with motor programs and saves the motor programs in the AB, and saves the
state information in the AB and associates the actions with it. It then keeps the indices of the actions, in
the order that they appear in the action sequence, as a specific skill (a minimal sketch of this index-based
storage follows).
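The sketch below assumes actions are stored once in the bank and a skill is kept only as the ordered list of their indices; the class and method names are illustrative:

    class ActionBank:
        def __init__(self):
            self.actions = []    # action records: motor program, conditions, effects
            self.skills = {}     # task name -> ordered indices into self.actions

        def add_action(self, record):
            """Store an action once and return its index."""
            try:
                return self.actions.index(record)        # reuse a known action
            except ValueError:
                self.actions.append(record)
                return len(self.actions) - 1

        def record_skill(self, task_name, action_sequence):
            """Keep a demonstrated or RO-performed sequence as a skill."""
            self.skills[task_name] = [self.add_action(a) for a in action_sequence]

        def replay(self, task_name):
            return [self.actions[i] for i in self.skills[task_name]]

    bank = ActionBank()
    bank.record_skill("fetch_bottle",
                      ["move(base, table)", "grasp(bottle)", "move(base, user)"])
    print(bank.replay("fetch_bottle"))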
Skill generalisation follows skill learning in both phases. The aim is to create generalised forms of
skills for similar tasks, for the purpose of intent/intention recognition (DEC_F3).
A generalised skill is, statistically, the most commonly used skill employed to complete the same
or similar tasks. It is also a sequence of indices of actions, in the order of that skill, but the conditions,
effects and motor programs of the actions are removed. The individual actions of all skills that were used
to complete the same or similar tasks will be used to train action-sequence models that represent the
generalised skills.
Function DEC_F2 Outputs
The outputs of DEC_F2 are specific skills saved in a Skill library
3.3.3 DEC_F3 Safety assurance function
Function DEC_F3 Purpose
Providing additional safety assurance based on scenario operation.
Function DEC_F3 Input
Action plan as output of DEC_F1b
Function DEC_F3 Operations
Provides automatic collision avoidance and is intended to be used for safer teleoperation of a robot
base. Given a plan to follow and a costmap, the controller produces velocity commands to send to a
mobile base.
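A minimal sketch of such a controller step, scaling the commanded base velocity by the cost of the next cell on the plan and stopping before near-lethal cells; the thresholds and the costmap representation are illustrative:

    def safe_velocity(plan, costmap, pose, v_max=0.5, cost_stop=0.9):
        """Follow the plan, but slow down near obstacles and stop before them.
        plan: list of (x, y) waypoints; costmap: dict (x, y) -> cost in [0, 1]."""
        if not plan:
            return (0.0, 0.0)                     # nothing to follow
        gx, gy = plan[0]                          # next waypoint on the plan
        cost = costmap.get((gx, gy), 1.0)         # unknown cells treated as lethal
        if cost >= cost_stop:
            return (0.0, 0.0)                     # obstacle ahead: stop the base
        x, y = pose
        dx, dy = gx - x, gy - y
        norm = max((dx*dx + dy*dy) ** 0.5, 1e-9)
        speed = v_max * (1.0 - cost)              # lower speed in higher-cost areas
        return (speed * dx / norm, speed * dy / norm)   # (vx, vy) command

    print(safe_velocity([(2, 0)], {(2, 0): 0.2}, (0, 0)))   # -> (0.4, 0.0)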
Function DEC_F3 Outputs
Updated action plan
3.4 Manipulation & Navigation Function Specification
3.4.1 NAV_MAN1 Navigation/Manipulation by given targets
Function NAV_MAN1 Purpose
Control the selected robot through high-level action planning
Function NAV_MAN1 Input
High-level action plan received via the SRS UI, or output of DEC_F1b
Function NAV_MAN1 Operations
For each action, retrieve from the SRS action library (using SUP_F7) the low-level motor programs that
the robot can execute in the domain, and retrieve the relevant parameters from PER_F2 and PER_F3 (a
minimal sketch follows).
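In the sketch below, action_library.query stands in for the SUP_F7b lookup and perception.parameters_for for the PER_F2/PER_F3 parameter retrieval; both are illustrative names:

    def execute_plan(action_plan, action_library, perception):
        """Run a high-level action plan by dispatching low-level motor programs."""
        for action in action_plan:
            motor_program = action_library.query(action.action_id)  # SUP_F7b lookup
            params = perception.parameters_for(action)   # parameters from PER_F2/PER_F3
            motor_program(**params)                       # execute on the robot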
Function NAV_MAN1 Outputs
Expected task execution
3.4.2 NAV_MAN2 Navigation/Manipulation by direct motion commands
Function NAV_MAN2 Purpose
Provide direct manual control function for the selected robot
Function NAV_MAN2 Input
Motion control command received via SRS UI
Function NAV_MAN2 Operations
Check system constraints on command execution, then execute the command
Function NAV_MAN2 Outputs
Expected task execution
3.5 Other supporting Function Specification
3.5.1 SUP_F1 emergency and localisation device
Function SUP_F1 Purpose
Raise an emergency alert and provide location information when needed
Function SUP_F1 Input
Input from local user
Function SUP_F1 Operations
A mobile device held by the local user. The local user can press buttons on the device to provide
information about an emergency and their location.
Function SUP_F1 Outputs
Information about emergency or information about location.
3.5.2 SUP_F2 Function for defining a generic new object
Function SUP_F2 Purpose
In the pre-deployment phase an object model library of pre-defined basic shapes will be set up and
stored in the knowledge base. The object library may be updated during the deployment phase. Basic
geometric shapes have to be defined and 3D object models are generated for those shapes. The format
of the models has to be suitable for registration of objects in point clouds.
Function SUP_F2 Input
Basic geometric shapes like cylinders, cubes or balls, constructed by a human in order to fill the
object library. The shapes should correspond to the majority of objects used in manipulation.
Function SUP_F2 Operations
3D object models are generated for the defined basic shapes and stored in the knowledge base. The
model format has to be suitable for the registration of objects in point clouds.
Function SUP_F2 Outputs
Outputs are generic 3D models and categories of basic object shapes. These are used in the user interface
for identification and learning of new objects.
3.5.3 SUP_F3 Object library update and query
3.5.3a SUP_F3a Object library update
Function SUP_F3a Purpose
Update object library
Function SUP_F3a Input
Output from SUP_F2 for new generic object model;
Output from PER_F4 for specific object model;
Batch input from existing household object database;
Function SUP_F3a Operations
Update the object library based on the input
Function SUP_F3a Outputs
Successful or not
3.5.3b SUP_F3b Object library query
Function SUP_F3b Purpose
Retrieve object information for identification and learning
Function SUP_F3b Input
Object id or object category
Function SUP_F3b Operations
Select relevant objects from database
Function SUP_F3b Outputs
An object model identified in the database
3.5.4 SUP_F4 Function for defining a new skill
Function SUP_F4 Purpose
Define new skill
Function SUP_F4 Input
Input of the function includes: 1) pre-defined tasks, 2) tasks performed through RO manipulations, 3)
skills used in the form of action segments, and 4) properties of the objects manipulated within the tasks.
Function SUP_F4 Operations
This function builds up task categories and task ontology corresponding to the task categories.
An example of task ontology is given below:
Figure 11 - Task Ontology
At the pre-deployment phase, this function defines task categories and the corresponding task ontology.
The function will operate manually by using tasks defined according to the project scenarios, action
segments of the action sequences employed to complete the tasks and the properties of the objects
handled within the tasks. In the post-deployment phase, the function will further develop new task
categories when an SRS is working in remote-controlled mode. The function will break down an action
sequence that is used to complete a given task through the RO's manipulations into action segments and
will then compare the actions with those in the task categories defined in the first phase. If no match can
be found, the function will create a new task category; otherwise, it will enrich the task ontology of a
matching category with the task.
The outputs of the function are task categories and ontology.
Function SUP_F4 Outputs
Complete definition for a new skill
3.5.5 SUP_F5 Skill library update and query
3.5.5a SUP_F5a skill library update
Function SUP_F5a Purpose
Update an existing skill or insert a new skill
Function SUP_F5a Input
Indices of actions in the order of a particular skill, and its task categories and the corresponding task
ontology
Function SUP_F5a Operations
Insert and update record in the database.
Function SUP_F5a Outputs
Successful or not
3.5.5b SUP_F5b skill library query
Function SUP_F5b Purpose
Retrieve action sequences for a selected skill
Function SUP_F5b Input
Skill ID
Function SUP_F5b Operations
Retrieve corresponding action sequence
Function SUP_F5b Outputs
Action sequences for a selected skill
3.5.6 SUP_F6 Function for defining a new action
Function SUP_F6 Purpose
Associate a high-level action with a low-level motor programme.
Function SUP_F6 Input
A low-level motor program with parameters for objects, locations, and grippers, and a high-level physical
action, represented as a predicate, that divides the set of object manipulations into context-dependent
operations
Function SUP_F6 Operations
Associate high-level physical actions with low-level motor programs that the robot can execute in the
domain. All of the above actions are parameterised with variables denoting objects, locations, and
grippers.
Function SUP_F6 Outputs
Complete definition of an action and its corresponding motor programme, object and parameters
3.5.7 SUP_F7 Action library update and query
3.5.7a SUP_F7a action library update
Function SUP_F7a Purpose
Update existing or add new action in action library
Function SUP_F7a Input
Complete definition of an action and its corresponding motor programme, object and parameters
Function SUP_F7a Operations
Update in the database about the corresponding actions
Function SUP_F7a Outputs
Successful or not
3.5.7b SUP_F7b action library query
Function SUP_F7b Purpose
Retrieve motor program for desired high level action
Function SUP_F7b Input
Action ID and its parameters
Function SUP_F7b Operations
Retrieve corresponding low level motor programme.
Function SUP_F7b Outputs
Low-level motor programs that the robot can execute in the domain
3.5.8 SUP_F8 Function for building environment information
Function SUP_F8 Purpose
Generate 2D or 3D map for environment
Function SUP_F8 Input
Data from various sensors, manually added data, and already available plans serve as input
Function SUP_F8 Operations
The 2D map for navigation will be generated from laser scanner data, odometry and manually measured
data, or from already available maps such as building plans. The required sensor data is acquired by
driving through the environment. All the input data is transformed into a 2D navigation map that is
readable by the navigation component (a minimal sketch of such a map update follows). Input sensors
for the 3D map are the 3D TOF and colour cameras, complemented by manually measured data or
already available 3D CAD data. The static 3D map is used for visualization of the environment; for
manipulation, the output from PER_F3 is used.
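The sketch below folds one laser scan into a 2D occupancy grid with a log-odds update, assuming a known robot pose from odometry; the 5 cm resolution and the update constants are illustrative:

    import math

    RES = 0.05  # grid resolution in metres (assumed)

    def update_grid(grid, pose, scan, l_occ=0.85, l_free=-0.4):
        """grid: dict (i, j) -> log-odds; pose: (x, y, theta);
        scan: list of (bearing, range) pairs from the laser scanner."""
        x, y, theta = pose
        for bearing, r in scan:
            a = theta + bearing
            # Cells along the beam are observed free; the end cell is occupied.
            for k in range(int(r / RES)):
                d = k * RES
                cell = (int((x + d*math.cos(a)) / RES), int((y + d*math.sin(a)) / RES))
                grid[cell] = grid.get(cell, 0.0) + l_free
            end = (int((x + r*math.cos(a)) / RES), int((y + r*math.sin(a)) / RES))
            grid[end] = grid.get(end, 0.0) + l_occ

    grid = {}
    update_grid(grid, (0.0, 0.0, 0.0), [(0.0, 1.0)])   # one beam straight ahead
    print(grid[(20, 0)] > 0)                           # end cell now likely occupied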
Function SUP_F8 Outputs
2D and 3D map of the environment in a robot readable format. The output data is used by navigation,
manipulation and user interface.
3.5.9 SUP_F9 Environment information update and query
3.5.9a SUP_F9a environment information update
Function SUP_F9a Purpose
Update environment information with newly generated map
Function SUP_F9a Input
2D and 3D map of the environment in a robot readable format and its corresponding categorisation
information.
Function SUP_F9a Operations
Update old map with new map, or create new map for unknown region.
Function SUP_F9a Outputs
Successful or not
3.5.9b SUP_F9b environment information query
Function SUP_F9b Purpose
Retrieve information about a specific area
Function SUP_F9b Input
Map ID or map categorisation
Function SUP_F9b Operations
Retrieve the relevant map for the current operation
Function SUP_F9b Outputs
2D or 3D map in a robot readable format
3.6 System Components
The SRS components diagram is illustrated in the figure below. Colour coding indicates whether a
module is hardware-specific (green) or hardware-independent (yellow). Based on the areas of expertise
of the project partners, the individual modules have been assigned to partners; this is indicated in the
diagram by the small blue circles in the left-hand corner of each module. The arrows between the
components/groups of components specify the high-level conceptual interactions between the
components.
Figure 12 - SRS Components Diagram
The functions of the SRS components are elaborated in the table below.
Component ID – Component Name – Functions
C1 Environment Perception & Object Recognition: PER_F3 Environment perception for navigation and manipulation; PER_F4 Learning new objects; SUP_F2 Function for defining a generic new object; SUP_F8 Function for building environment information
C2 Human Motion Detection: PER_F5 Human Motion Analysis
C3 Robot Motion Interpretation: SUP_F6 Function for defining a new action
C4 Context Extraction: PER_F1 Recognition of known objects; PER_F2 Sensor fusion
C5 Decision Making: DEC_F1a Formal formulation for object and action representation; DEC_F1b Integrated decision making and switching between semi-autonomous mode and fully autonomous mode; DEC_F1c Grasp configuration generation and selection
C6 Safety: DEC_F3 Safety assurance function
C8 Learning: SUP_F4 Function for defining a new skill; DEC_F2 Skill learning function
C9 Remote User Interface: UI_F1 Authorisation and privacy function; UI_F2 Video communication among users; UI_F3 Robot control interface
C10 SRS Knowledgebase: SUP_F3 Object library update and query; SUP_F5 Skill library update and query; SUP_F7 Action library update and query; SUP_F9 Environment information update and query
C11 Planning & Control flow: NAV_MAN1 Navigation/Manipulation by given targets; NAV_MAN2 Navigation/Manipulation by direct motion commands
SRS Components Description
Section 4. Non-functional requirement
4.1 User Interaction Requirement
The SRS UI function requirements are listed in Section 3.1; in addition, the general UI interaction
requirements are as follows:
Well suited for high-level operation (GUI-based arrangement, button presses, entering text,
assigning objects to classes, pointing at map location, pointing at objects to be manipulated, etc.)
This kind of operation is required for teaching and semi-autonomous operation
Well suited for low-level operation without trajectory copying or low-latency interaction (like
smartphone tilting), e.g. assuming Kinect-type interaction only by command gestures
Well suited for low-level operation with trajectory copying and low latency interaction (like
smartphone tilting)
The device (or device combination) is always on (important for remote user notifications)
All interaction with the device (including in particular the main form of interaction of the device,
like trajectory copying in the case of “Kin”, “Six”, “Fal”, “3dm”) will probably work over Internet,
assuming a state-of-the-art home configuration: DSL/cable (16000 kbps downstream, 1000 kbps
upstream, ping times around 25 ms) + Wi-Fi 802.11n.
Interaction device is portable (important for private remote operator)
The device and all associated devices are affordable (ideally some users have it anyway, and will
not have to buy it)
The device combination does not require much additional space (an important user requirement raised
by the elderly)
Versatility: Works for remote operation tasks of many application scenarios (not only the
currently chosen scenarios like preparing food and night monitoring) and for control of many
service robots (because the SRS concept is independent of a specific robotic platform)
4.2 Software architecture requirement
Robotic Hardware Independence: SRS is not bound to one single platform; the software must be
as robot-hardware independent as possible. To achieve this target, some software modules in
the architecture need to function as device drivers and are thus tied to hardware. The remaining
modules should operate only on higher-level, hardware-independent abstractions.
Parallel Processing: Applications involved in SRS require a considerable amount of computational
resources for planning and control to sustain the local intelligence of the robot. At the same
time, the SRS software has to satisfy long-term learning and analysis requirements. Furthermore,
it should also satisfy the real-time constraints imposed by real-world applications. Generally, the
onboard computational resources of the robot cannot support all the required computation, so
separating the computational load across multiple sources, e.g. off-board machines, is required.
Modularity and Collaborative Development: The SRS project involves dozens of researchers and
developers from different organisations, disciplines and backgrounds. Since it aims to build a large
system contributing to a sizable code base, it is highly important to enforce modularity and
interoperability between the software components and to organise them in a systematic way,
allowing concurrent work on the system by all partners, in a fashion where components can
be developed and verified separately and integrated efficiently in the future.
Cross-Platform Communication: SRS communication requires the transfer of data and commands
between various hardware platforms, operating systems and applications. In order to achieve a
versatile concept of robot communications, it is very useful to build the SRS communications
on an efficient and reliable foundation. Furthermore, although most robotic resources
are available within the Linux environment, some sensors and development kits come with
binary Windows drivers only. Therefore, the SRS software system must be able to deal with multiple
operating systems, and cross-platform communication is required.
Integration with other Code Bases: SRS is intended to take advantage of the latest progress in
robotic development. It should be capable of re-using code available from other sources. For
example, identified suitable candidates are the navigation system and simulators from the
Player project, vision algorithms from OpenCV, and planning algorithms from OpenRAVE, among
many others. In each case, it should only be necessary to expose various configuration options and to
route data into and out of the respective software, with as little wrapping or patching as possible.
Section 5. Conclusion and Requirements Traceability Matrix
The SRS solution includes three new user interfaces, three new control modules and a selected robotic hardware platform. The development focus is on scenario demonstration and easy access as well as efficient control of the platform.
The basic hardware platform selected in the SRS project is Care-O-bot 3. It is a mobile robot assistant able to move safely among humans, to detect and grasp typical household objects, and to safely exchange them with humans. However, the user interface of Care-O-bot 3 cannot satisfy the SRS user requirements in terms of usability and acceptance. In addition, limited by the state of the art, Care-O-bot 3 (and other similar robots) cannot cover SRS scenarios such as unexpected situations. Therefore, the project R&D will focus on the gaps between the Care-O-bot as it is now and its use as a remotely controlled home carer. This can be summarised in the following two aspects:
Human-Robot Interaction of Remotely-Controlled Service Robots: The development in user interaction is to satisfy the usability and safety requirements of remotely controlled service robotic systems in a domestic environment. This work is critical for the overall acceptance of the system. The development of the three SRS user interfaces will enable the SRS solution to be accessed by average users in real-life settings.
Cognitive Capabilities of Remotely-Controlled Service Robots: The development in cognitive capability is to facilitate the perception and decision-making capabilities of remotely controlled semi-autonomous robotic systems. For tasks that cannot be performed by robots autonomously but can be executed remotely, the robot should support the remote operator as much as possible. This cognitive capability will be realised in the SRS control modules.
The completeness of the SRS specification can be checked against the requirements traceability table below. It associates the SRS capabilities and functions with the original requirements raised in the user requirement study.
Req. No Req. Description Related Scenarios Capability Function Implementation
R01 Frail elderly people selected as
potential users should have normal
cognitive capabilities or mild to
moderate cognitive impairment, but
neither moderately severe nor
severe cognitive impairment, in
order to be able to cooperate with the robot.
ALL N/A N/A
R02 A flexible system of communication
and advice sending should be
designed, because family caregivers
like the system but they do not want
to be on-line 24 hours-a-day (related
to psychological burden).
ALL UI2 Communication support
UI_F2 Video communication
among users
R03 The system is able to maneuver in
narrow spaces: elderly people usually live in small apartments full of furniture.
ALL PLA2 Chassis and power Based on selected platform
R04 The system recognizes the user
position in the environment (i.e. user
sat on the sofa in the living room,
user in bed).
ALL PER5 Human motion identification
and tracking
PER_F5 Human Motion Analysis
R05 The system recognizes different
environment of the house (kitchen,
living room, bedroom, bathroom).
ALL PER2 Environment perception for
navigation
PER3 Environment perception for
object manipulation
PER_F3 Environment perception
for navigation and manipulation
R06 The system moves around
recognizing and avoiding obstacles (a
door, a chair left in the house in the
wrong place)
ALL PER2 Environment perception for
navigation
PER_F3 Environment perception
for navigation and manipulation
R07 The system should help elderly
people with mobility issues such as
reaching or carrying out heavy
objects.
Mainly in S1 and S4, however, it
can also be useful in other
scenarios
O2 Grasp known objects
O3 Grasp unknown simple shaped
object
R08 The system recognizes and identifies objects (shapes, colours, letters on food boxes, numbers on microwave display).
ALL PER1 Recognition of known objects
PER4 Learning new objects
PER_F1 Recognition of known objects
PER_F4 Learning new objects
R09 The system is able to grasp objects
(i.e. bottle, books).
ALL O2 Grasp known objects
R10a The system is able to handle objects
with a weight of 3 kg.
Mainly in S4, however, it can also
be useful in other scenarios
PLA3 Payload Based on selected platform
R10b The system is able to handle objects
with different shapes (i.e. bottles of
water vs. books).
ALL O3 Grasp unknown simple shaped
object
R11 The system is able to bring objects in
difficult places to reach for elderly
people (i.e. reach a book on a high
shelf)
ALL O1 Smooth indoor navigation
R12 The system brings objects to the user
avoiding contact with potential
dangerous parts (i. e. bring the
object nearer using the platform)
ALL DEC3 Provide high level safety
assurance
PLA7 Basic safe Interaction feature
Combine DEC_F3 Safety
assurance function
with platform safety features
R13 The system is able to manage objects
with care (i.e. choosing an object on
a shelf; open/close oven door).
ALL O4 Manipulation for predefined
tasks
O5 Manipulation for undefined
tasks
R14 The system should help with coping
with unexpected, emergency
situations such as falling.
Mainly in S2, however, it can also
be useful in other scenarios
O5 Manipulation for undefined
tasks
And complete sequence for
scenario details in section 2.6.2
R15 The system monitors activities and
recognizes the user position, and the
time spent in the same position.
Not pursued, Reason detailed in
D1.1A
R16 The system alerts a remote operator
when an emergency has occurred by
sending a high priority call to the
relative or to the service operator.
Mainly in S2 SUP1 Implementation of an
emergency and localisation device
SUP_F1 emergency and
localisation device
R17 The system could support the weight
of a man in a getting up task.
Not pursued, Reason detailed in
D1.1A
R18 The system stores information about
activities and recalls it for next
activities in order to cope with
complex housekeeping activities (i.e.
while the remote operator through
the system is managing shopping in
the afternoon and putting everything
at its place, the remote operator also
stores information into the system;
in this way, at dinner time, the food menu is updated)
ALL DEC2a skill recording DEC_F2 skill learning function
R19 The system remembers the past
activities and manage autonomously
the information when needed and
uses them for next activities.
ALL DEC2b skill generalisation DEC_F2 skill learning function
R20 The system can be programmed in
order to remind the user to do things at determinate times
Not pursued, Reason detailed in
D1.1A
R21 The system could be programmed in
order to perform some operation
during different times of the day
Not pursued, Reason detailed in
D1.1A
R22 The system allows communication between the user and remote operator; providing the user with help in housekeeping and mobility could be an indirect way of enabling him/her to use more spare time for social contacts.
ALL ALL
R23 Only authorized persons to have
access to the remote control of the
system
ALL UI1 Authorisation and privacy
UI_F1 Authorisation and privacy
function
R24 Authentication procedure as a
protection of the access to be
included for both family caregivers
and professionals.
ALL UI1 Authorisation and privacy
UI_F1 Authorisation and privacy
function
R25 Possibility of external and non
authorized intruders to be avoided
by a robustness security system
ALL UI1 Authorisation and privacy
UI_F1 Authorisation and privacy
function
R26 Avoid possibility of access to the
system without explicit consent of
the elderly, including non authorized
access of authorized remote
operators
ALL UI1 Authorisation and privacy
UI_F1 Authorisation and privacy
function
R27 If remote operator changes within
one session, the elderly user must be
informed
ALL UI1 Authorisation and privacy
UI_F1 Authorisation and privacy
function
R28 Unintentional, not authorized
disclosure of information related to
the life of the users has to be
prevented by restricting access to
the information stored in the system.
ALL UI1 Authorisation and privacy
UI_F1 Authorisation and privacy
function
R29 Storage and management of personal information related to behaviors and preferences of the users have to be done in safe, restricted databases.
ALL DEC2a skill recording
DEC_F2 skill learning function
R30 Storage of personal information
related to behaviors and preferences
of the users will be limited to that
information relevant for the
functionalities of the system. Non-relevant information is not processed.
ALL DEC2a skill recording
DEC_F2 skill learning function
R31 Unintentional, not authorized
disclosure of information related to
the life of the users to be prevented
by including agreements of
confidentiality for authorized users.
ALL UI1 Authorisation and privacy UI_F1 Authorisation and privacy
function
R32 An “on/off” mode to be
implemented in order to protect
privacy in very personal moments.
The access to the “on/off” mode
could be adaptable attending to the
specific frailty of the elderly user.
ALL UI2 Communication support UI_F2 Video communication
among users
R33 Verification of the plans of action by asking the elderly user before the robot starts acting autonomously
ALL UI2 Communication support
UI6 Support for decision making
UI_F2 Video communication
among users
UI_F3 Robot control interface
R34 Communication of action outcomes
during performance of the robot, in
order to maximize the awareness of
the elderly user. Communication as
continuous as possible.
ALL UI3c display feedback of the
navigation
UI4c display feedback of the
manipulation
UI_F3 Robot control interface
R35 No robot movement should happen
without initial confirmation by the
user who is in direct physical contact
with the robot / The robot should
avoid physical contact with the user.
ALL; in case of emergency, the elderly user confirms the movement by pressing the emergency button.
DEC3 Provide high level safety
assurance
PLA7 Safe Interaction feature
Combine DEC_F3 Safety
assurance function
with platform safety features
R36 There should be a clear indication on
the robot side if the robot is in
autonomous mode or in remote
controlled operation
ALL UI2 Communication support UI_F2 Video communication
among users
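Taken together, R23, R24, R26 and R27 describe the behaviour expected of the UI_F1 authorisation and privacy function: remote control is granted only to authorised, authenticated operators, only with the explicit consent of the elderly user, and an operator change within a session is announced. The sketch below illustrates one possible composition of these checks; it is illustrative only, and every name in it (Role, Session, AccessPolicy, hand_over) is a hypothetical assumption, not part of the SRS design.

# Minimal sketch of the authorisation checks implied by R23-R28.
# All names are illustrative, not SRS APIs.
from dataclasses import dataclass, field
from enum import Enum, auto


class Role(Enum):
    LOCAL_USER = auto()        # elderly user (UI_LOC)
    PRIVATE_RO = auto()        # family caregiver (UI_PRI)
    PROFESSIONAL_RO = auto()   # professional operator (UI_PRO)


@dataclass
class Session:
    operator_id: str
    role: Role
    authenticated: bool = False   # authentication procedure of R24
    local_consent: bool = False   # explicit consent of the elderly user (R26)


@dataclass
class AccessPolicy:
    authorized_operators: set = field(default_factory=set)  # R23
    privacy_mode: bool = False    # stands in for the "on/off" mode of R32

    def may_take_control(self, session: Session) -> bool:
        # Remote control requires authorisation, authentication and
        # explicit local consent, and is blocked in privacy mode.
        return (session.operator_id in self.authorized_operators
                and session.authenticated
                and session.local_consent
                and not self.privacy_mode)


def hand_over(session: Session, new_operator_id: str, new_role: Role,
              notify_local_user) -> Session:
    # An operator change within one session must be announced to the
    # elderly user (R27); the new operator re-authenticates (R24).
    notify_local_user(f"remote operator changed to {new_operator_id}")
    return Session(operator_id=new_operator_id, role=new_role,
                   authenticated=False,
                   local_consent=session.local_consent)


policy = AccessPolicy(authorized_operators={"carer-01"})
session = Session("carer-01", Role.PRIVATE_RO,
                  authenticated=True, local_consent=True)
assert policy.may_take_control(session)
session = hand_over(session, "pro-07", Role.PROFESSIONAL_RO, print)
assert not policy.may_take_control(session)  # unauthorised and not yet re-authenticated

An actual implementation would additionally have to satisfy the storage and disclosure restrictions of R28 to R31.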
Appendix.1 SRS basic actions for manipulator
The basic actions for the manipulator are listed below with their pre-conditions, post-conditions, and the remote operator (RO) interventions required in each exception or error state.

Action: Plan trajectory
  Pre-condition: target position specified; updated 3D map available
  Post-condition: collision-free trajectory specified
  Exceptions / error states and required RO intervention:
    * Target position not specified: RO enters target position; previous step in the action sequence has to be repeated/corrected
    * 3D map not available or outdated: RO initiates a 3D map update
    * No trajectory found: remove obstacles; manual move

Action: Move to position
  Pre-condition: collision-free trajectory specified
  Post-condition: target position reached
  Exceptions / error states and required RO intervention:
    * No trajectory specified: previous step in the action sequence has to be repeated/corrected
    * Target not reached due to collision: remove obstacles; manual move; move the platform and repeat trajectory planning
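The table above reads naturally as a guarded-action model: each basic action carries pre-conditions that must hold before execution, post-conditions it establishes on success, and a mapping from error states to the RO interventions that can repair them. The sketch below shows that model under these assumptions; BasicAction and its fields are hypothetical names, not the SRS implementation.

# Sketch of the Appendix.1 action model: pre-conditions, post-conditions,
# and error states mapped to remote-operator (RO) interventions.
from dataclasses import dataclass, field


@dataclass
class BasicAction:
    name: str
    pre_conditions: list                 # facts that must hold beforehand
    post_conditions: list                # facts asserted on success
    interventions: dict = field(default_factory=dict)  # error state -> RO options

    def run(self, state: dict) -> str:
        for pre in self.pre_conditions:
            if not state.get(pre, False):
                # A failed pre-condition is an error state requiring
                # one of the listed RO interventions.
                options = self.interventions.get(pre, ["manual RO intervention"])
                return f"error state '{pre} missing'; RO options: {options}"
        for post in self.post_conditions:
            state[post] = True
        return "ok"


plan_trajectory = BasicAction(
    name="Plan trajectory",
    pre_conditions=["target position specified", "updated 3D map available"],
    post_conditions=["collision-free trajectory specified"],
    interventions={
        "target position specified": ["RO enters target position",
                                      "repeat/correct previous step"],
        "updated 3D map available": ["RO initiates 3D map update"],
    },
)

move_to_position = BasicAction(
    name="Move to position",
    pre_conditions=["collision-free trajectory specified"],
    post_conditions=["target position reached"],
    interventions={"collision-free trajectory specified":
                   ["repeat/correct previous step"]},
)

state = {"target position specified": True, "updated 3D map available": True}
for action in (plan_trajectory, move_to_position):
    print(action.name, "->", action.run(state))

Runtime failures that pass the pre-condition check (such as "no trajectory found" or "target not reached due to collision") would be mapped to RO options in the same way. Keeping the interventions as data lets the user interface list the available RO options whenever an action halts.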
Appendix.2 Scenario breakdown into actions/functions
Each entry below gives the action/function, the component that decision making communicates with, and the pre- and post-conditions of the action.

Scenario 1: Bring an object located on a table

Sub-task: Get Order
  * Get starting signal | Local/Remote User Interface | Pre: SRS in idle state | Post: name of task available
  * Get sub-task list from knowledge base | Knowledge base | Pre: name of task available | Post: list of sub-tasks available

Sub-task: Find Table
  * Plan trajectory for base | Navigation | Pre: target position specified; 2D map available | Post: trajectory found
  * Move base to scan position | Navigation | Pre: trajectory found | Post: position reached
  * Build 3D map | Environment Perception | Post: 3D map generated
  * Locate table | Environment Perception | Pre: 3D map available | Post: table extracted

Sub-task: Move base to table
  * Plan trajectory for base | Navigation | Pre: target position specified; 2D map available | Post: trajectory found
  * Move base to table | Navigation | Pre: target position specified and valid trajectory | Post: position reached

Sub-task: Find object
  * Locate objects | Object Detection | Pre: object is in knowledge base; robot is in “find-object” position | Post: at least one object recognized
  * Select object | Local/Remote User Interface (or automatic if previously specified) | Pre: at least one object recognized | Post: grasp position specified; pre-grasp position specified; grasp configuration specified

Sub-task: Grasp object
  * Update 3D map | Environment Perception | Post: 3D map updated
  * Plan trajectory for arm | Manipulation | Pre: pre-grasp position specified; updated 3D map available | Post: trajectory found
  * Move arm to pre-grasp position | Manipulation | Pre: trajectory found | Post: pre-grasp position reached
  * Move gripper to open position | Manipulation | Pre: pre-grasp position reached; grasp configuration available | Post: gripper open
  * Plan trajectory for arm | Manipulation | Pre: grasp position specified; updated 3D map available | Post: trajectory found
  * Move arm to grasp position | Manipulation | Pre: gripper is open; grasp position reachable; grasp position specified | Post: grasp position reached
  * Move gripper to close position | Manipulation | Pre: gripper is open; grasp position reached; grasp configuration available | Post: object grasped

Sub-task: Place object on tray
  * Move tray to up position | Manipulation | Post: tray is up
  * Plan trajectory for arm | Manipulation | Pre: tray position specified; updated 3D map available | Post: trajectory found
  * Move arm to tray position | Manipulation | Pre: tray position specified | Post: tray position reached
  * Move gripper to open position | Manipulation | Pre: tray position reached; tray is up | Post: gripper is open
  * Plan trajectory for arm | Manipulation | Pre: folded position specified; updated 3D map available | Post: trajectory found
  * Move arm to folded position | Manipulation | Pre: gripper is empty | Post: folded position reached
  * Move gripper to close position | Manipulation | Pre: gripper is empty | Post: gripper is closed

Sub-task: Move base to person *)
  * Plan trajectory for base | Navigation | Pre: target position specified; 2D map available | Post: trajectory found
  * Move base to person | Navigation | Pre: target position specified and valid trajectory | Post: position reached

Sub-task: Cleanup
  * Check table empty | Environment Perception | Post: table empty
  * Move tray to folded position | Manipulation | Post: tray is folded
  * Plan trajectory for base | Navigation | Pre: target position specified; 2D map available | Post: trajectory found
  * Move base to home position | Navigation | Pre: target position specified and valid trajectory | Post: position reached
  * Set to idle state | Pre: home position reached | Post: robot is in idle state

*) Assume the person is static (position specified in advance); the movement of the person can be detected and tracked by SRS.
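Note how the rows chain: each action's pre-conditions are established as post-conditions of earlier actions, so a sub-task can be executed by checking the world state before every step and halting, for RO intervention, at the first unmet condition. A minimal executor sketch along these lines follows; the function and the state dictionary are illustrative assumptions, not SRS code.

# Sketch of executing the "Find Table" sub-task from the breakdown above:
# each action fires only when its pre-conditions already hold in the state.
def execute(sequence, state):
    """Run actions in order; stop at the first unmet pre-condition,
    which is where a remote-operator intervention would be requested."""
    for action, pres, posts in sequence:
        missing = [p for p in pres if not state.get(p, False)]
        if missing:
            return f"halt before '{action}': missing {missing}"
        for p in posts:
            state[p] = True
    return "sub-task complete"


find_table = [
    ("Plan trajectory for base (Navigation)",
     ["target position specified", "2D map available"], ["trajectory found"]),
    ("Move base to scan position (Navigation)",
     ["trajectory found"], ["position reached"]),
    # the table's "3D map generated" and "3D map available" are
    # treated here as the same fact
    ("Build 3D map (Environment Perception)",
     [], ["3D map generated", "3D map available"]),
    ("Locate table (Environment Perception)",
     ["3D map available"], ["table extracted"]),
]

state = {"target position specified": True, "2D map available": True}
print(execute(find_table, state))   # prints: sub-task complete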
Scenario 2: Heating and serving dinner

Sub-task: Get Order
  * Select meal | Local/Remote User Interface | Pre: SRS in idle state | Post: meal selected
  * Get starting signal | Local/Remote User Interface | Pre: meal selected | Post: name of task available
  * Get sub-task list from knowledge base | Knowledge base | Pre: name of task available | Post: list of sub-tasks available

Sub-task: Find Fridge
  * Move base to scan position | Navigation | Pre: target position specified | Post: position reached
  * Build 3D map | Environment Perception | Post: 3D map generated
  * Locate fridge | Environment Perception | Pre: 3D map available | Post: fridge extracted

Sub-task: Move base to fridge
  * Move base to fridge | Navigation | Pre: target position specified | Post: position reached

Sub-task: Open fridge
  * Locate handle | Object Detection; Environment Perception | Pre: object is in knowledge base; robot is in “find-object” position | Post: object recognized
  * Move arm to pre-grasp position | Manipulation | Pre: pre-grasp position reachable (object position); pre-grasp position specified | Post: pre-grasp position reached
  * Move gripper to open position | Manipulation | Pre: pre-grasp position reached; grasp configuration available | Post: gripper open
  * Move arm to grasp position | Manipulation | Pre: gripper is open; grasp position reachable; grasp position specified | Post: grasp position reached
  * Move gripper to close position | Manipulation | Pre: gripper is open; grasp position reached; grasp configuration available | Post: object grasped
  * Move arm and base synchronously to open door | Manipulation; Navigation | Pre: door open trajectory available/possible | Post: door open position reached
  * Move gripper to open position | Manipulation | Pre: door open position reached | Post: gripper is open
  * Move arm to folded position | Manipulation | Pre: gripper is empty | Post: folded position reached
  * Move gripper to close position | Manipulation | Pre: gripper is empty | Post: gripper is closed

Sub-task: Move base to fridge
  * Move base to fridge | Navigation | Pre: target position specified | Post: position reached

Sub-task: Find object
  * Locate object | Object Detection | Pre: object is in knowledge base; robot is in “find-object” position | Post: object recognized

Sub-task: Grasp object
  * Update 3D map | Environment Perception | Post: 3D map updated
  * Move arm to pre-grasp position | Manipulation | Pre: pre-grasp position reachable (object position); pre-grasp position specified | Post: pre-grasp position reached
  * Move gripper to open position | Manipulation | Pre: pre-grasp position reached; grasp configuration available | Post: gripper open
  * Move arm to grasp position | Manipulation | Pre: gripper is open; grasp position reachable; grasp position specified | Post: grasp position reached
  * Move gripper to close position | Manipulation | Pre: gripper is open; grasp position reached; grasp configuration available | Post: object grasped
  * Move arm to transport position | Manipulation | Pre: transport position specified | Post: transport position reached

Sub-task: Place object on tray
  * See fetch and bring (Scenario 1)

Sub-task: Close fridge
  * Move arm and base synchronously to close door

Sub-task: Find Microwave
  * Move base to scan position | Navigation | Pre: target position specified | Post: position reached
  * Update 3D map | Environment Perception | Post: 3D map generated
  * Locate microwave | Environment Perception | Pre: 3D map available | Post: microwave extracted

Sub-task: Move base to microwave
  * Move base to microwave | Navigation | Pre: target position specified | Post: position reached

Sub-task: Open microwave
  * Locate button | Object Detection; Environment Perception | Pre: object is in knowledge base; robot is in “find-object” position | Post: object recognized
  * Move arm to pre-push position | Manipulation | Pre: pre-push position reachable (object position); pre-push position specified | Post: pre-push position reached
  * Move gripper to push position | Manipulation | Pre: pre-push position reached; push configuration available | Post: gripper is in push configuration
  * Move arm to push position | Manipulation | Pre: gripper is in push configuration; push position reachable; push position specified | Post: push position reached
  * Move arm to pre-push position | Manipulation | Pre: pre-push position reachable; pre-push position specified | Post: pre-push position reached
  * Move arm to folded position | Manipulation | Pre: gripper is empty | Post: folded position reached
  * Move gripper to close position | Manipulation | Pre: gripper is empty | Post: gripper is closed

Sub-task: Place object in microwave
  * Update 3D map | Environment Perception | Post: 3D map updated
  * Move arm to delivery position | Manipulation | Pre: delivery position reachable | Post: delivery position reached
  * Move gripper to open position | Manipulation | Pre: delivery position reached | Post: gripper is open
  * Move arm to folded position | Manipulation | Pre: gripper is empty | Post: folded position reached
  * Move gripper to close position | Manipulation | Pre: gripper is empty | Post: gripper is closed

Sub-task: Close Microwave
  * Move base to door open position | Navigation | Pre: door open position specified | Post: door open position reached
  * Move arm and base synchronously to close door | Manipulation; Navigation | Pre: door closed trajectory available/possible; position of door/handle stored | Post: door closed position reached
  * Locate button | Object Detection; Environment Perception | Pre: object is in knowledge base; robot is in “find-object” position | Post: button detected
  * Move arm to press button position | Manipulation | Pre: button position specified; button reachable | Post: button pressed
  * See above

Sub-task: Activate Microwave
  * Set cooking parameters | Local/Remote User Interface (optional; see above)
  * See above

Sub-task: Open Microwave
  * See above | Manipulation

Sub-task: Find Object
  * See above

Sub-task: Grasp Object
  * See above

Sub-task: Find table
  * See above (Scenario 1)

Sub-task: Move base to table
  * See above (Scenario 1)

Sub-task: Place object on table
  * See above (Scenario 1)

Legend:
  Locate: perception actions
  Move: movement actions
  Get/Set/Check/Enab