The benefits of using high-level goal information for robot navigation

M.T. Bruna
Identity number 0620272

Eindhoven, 22-6-2011

In partial fulfilment of the requirements for the degree of Master of Science in Human Technology Interaction

Supervisors:
R.H. Cuijpers, Industrial Engineering & Innovation Sciences
J.R.C. Ham, Industrial Engineering & Innovation Sciences
SUMMARY
A challenging aspect in the field of assistive robots is approaching users for social
interaction. The robot has to be able to do this efficiently and in an acceptable way.
To navigate towards a moving target, its future location has to be inferred.
To investigate the benefits of using high-level behavioral information for robot
navigation, two studies were conducted.
In the first study the effect of using high-level information about behavior on the
accuracy of movement prediction was investigated. A Hierarchical Hidden Markov Model that
uses high-level information and a Hidden Markov Model that does not use this information
were implemented and tested on their prediction accuracy for motion goals.
Both models were tested using movement data collected from two COPD patients.
The Hidden Markov Model performed reasonably well, correctly predicting 40 to 60% of the
movement goals, compared to 20% accuracy for a random model. While the
hierarchical model obtained the same accuracy in predicting higher-level behavior, this
did not result in improved motion goal prediction.
In the second study the effect of movement goal prediction on user evaluation was
investigated. Fourteen participants took part in the experiment. They had to interact with
Nao, a social robot, in three scenarios varying in urgency. The robot approached the
participants in three different ways, ranging from reactive to anticipatory.
Each participant went through all nine possible conditions. After each trial they rated the robot’s
behavior on several dimensions such as anthropomorphism, likeability and perceived
intelligence. The results show that while there is no effect of approaching behavior on
user ratings, there were significant differences in ratings between the different scenarios.
Scenarios in which the users had the most interaction with the robot were rated more
positively than scenarios in which there was less interaction.
1 INTRODUCTION
    1.1 BACKGROUND
    1.2 GOAL PREDICTION
    1.3 USER EXPERIENCE
    1.4 AIM OF THE STUDY
2 HIDDEN MARKOV MODEL
    2.1 INTRODUCTION
    2.2 HIDDEN MARKOV MODEL
    2.3 HIERARCHICAL HIDDEN MARKOV MODEL
    2.4 DATA
    2.5 TEST METHOD
    2.6 RESULTS
3 USER EXPERIENCE TEST
    3.1 PARTICIPANTS
    3.2 METHOD
    3.3 SETUP
    3.4 EXPERIMENTAL DESIGN
    3.5 TASK
    3.6 PROCEDURE
    3.7 DATA ANALYSIS
4 RESULTS
    4.1 GENERAL RESULTS
    4.2 ANIMACY
    4.3 ANTHROPOMORPHISM
    4.4 LIKEABILITY
    4.5 PERCEIVED INTELLIGENCE
    4.6 PERCEIVED SAFETY
    4.7 APPROPRIATENESS
5 DISCUSSION
    5.1 MOVEMENT PREDICTION
    5.2 USER EXPERIENCE TEST
6 REFERENCES
APPENDIX A GENERALIZED VITERBI ALGORITHM
APPENDIX B INFORMED CONSENT FORM
APPENDIX C INSTRUCTIONS
APPENDIX D QUESTIONNAIRE
1 Introduction
1.1 Background
In an ageing society there are more elderly people and fewer people to take care of them.
Especially people with diseases such as COPD (Chronic Obstructive Pulmonary Disease)
will need extra care, which will become more and more difficult to provide.
Therefore it is necessary to come up with alternatives for traditional care methods.
One of these alternatives is the use of smart homes, homes that use data from various
sensors to assist people in their daily lives. The KSERA project aims to integrate assistive
robots in smart homes. An important aspect of an assistive robot is its navigation
capability: it has to be able to navigate towards a target promptly. This target can either
be static (e.g. the couch) or dynamic (e.g. a human). In the case of a moving target, its
future location has to be inferred for efficient navigation. The problem is complicated
further by the fact that a domestic environment is dynamic; for example chairs can be
moved. We also want the robot to approach a person in a natural way. Inferring a
person’s movement goal could not only increase navigation efficiency; it might also result
in a more positive evaluation of the robot by the user.
1.2 Goal prediction
Several methods for motion goal prediction have been proposed, most of which use
clustering of movement patterns to predict the most likely movement goal for a certain
trajectory. Bennewitz, Cielniak, Burgard, & Thrun (2005) present an implementation of
clustering using the expectation maximization (EM) algorithm. Vasquez & Fraichard
(2004) use Complete Link clustering and Deterministic Annealing clustering. Their
methods outperform the EM method in inferring movement goals.
While these clustering models are quite successful in static environments, one of their
disadvantages is their sensitivity to changes in the environment. Bruce & Gordon (2003)
describe a method that learns possible destinations using clustering but then uses a path-
planning algorithm to predict motion. This model is more robust against disturbances in
walking behavior. Foka & Trahanias (2010) present a predictive navigation method using
a Partially Observable Markov Decision Process that uses human motion prediction for
collision avoidance.
Another important property of most clustering-based approaches is that they use only
low-level information (e.g. trajectory data) to recognize the goal of the movement.
Therefore these methods do not take advantage of the hierarchical nature of many
behaviors. A behavior, for example getting up in the morning, can be comprised of
several sub-behaviors such as making breakfast and taking a shower. These sub-behaviors
can in turn consist of several sub-behaviors, such as walking from one room to the other.
This subdivision continues until one arrives at the level of action primitives, which
could be considered the building blocks of more complex behaviors (Cuijpers et al.,
2006). Thus, low-level behaviors can be part of many higher-level behaviors. This helps
us to represent an infinite set of behaviors in a convenient and economical way. It is likely
that if we know what high level behavior a person is involved in, this can help us to
recognize the lower level behaviors. Indeed, Cuijpers, van Schie, Koppen, Erlhagen &
Bekkering (2006) show that information about higher level behaviors can improve
recognition of lower level behaviors.
Kanda, Glas, Shiomi & Hagita (2009) use motion clustering to determine walking
behavior in a shopping mall. They used this information to decide which people their
robot should approach.
Several methods for recognizing high-level behavior use implementations and extensions
of the Hidden Markov Model (HMM). HMMs model the world using discrete states and
changes in the world using transitions between these states. This makes them suitable for
describing temporal patterns, for example in the field of speech recognition (Rabiner
1989). Due to their simple structure, efficient inference is possible. Nguyen, Bui,
Venkatesh and West (2003) use an Abstract Hidden Markov Memory Model to recognize
high-level behaviors from video data, Duong, Bui, Phung and Venkatesh (2005) use a
Switching Hidden Semi-Markov Model to recognize activities and Nguyen, Phung,
Venkatesh & Bui (2005) use a Hierarchical Hidden Markov Model (HHMM) to infer
high-level activities from movement trajectories. Van Kasteren, Noulas, Englebienne &
Kröse (2008) use both an HMM and conditional random fields for activity detection from
sensor readings. HMMs and their adaptations can not only be used to infer activities
from observations; they can also be used for prediction. Gani, Sarwar and
Rahman (2009) use an HMM to predict wireless network traffic.
1.3 User experience
In Human-Robot Interaction there are several factors that influence the users’ opinion of
the robot and the interaction with it. For example, Bartneck, Kanda, Mubin and Al
Mahmud (2009) show that the appearance of the robot has a significant effect on its
perceived intelligence. To measure this they use a questionnaire for evaluating a robot on
several dimensions such as likeability and perceived intelligence developed by Bartneck,
Kulic, Croft and Zoghbi (2008). Dautenhahn et al. (2006) have researched the preferred
way for a robot to approach a seated human and found that people preferred an approach
from the left or right side over a frontal approach.
One approach to improve user evaluation is to make robots behave like humans. Althaus,
Ishiguro, Kanda, Miyashita and Christensen (2004) describe their experiment in which a
robot approaches a group of humans. They find that their robot behaves in a similar way
to the humans. These people rate the robot’s behavior as very natural, which increases its
perceived intelligence. Pacchierotti, Christensen, and Jensfelt (2005) show that a robot
that shows anticipatory behavior to avoid collision is rated positively.
Most research on approaching behavior by robots uses scenarios where the user initiates
the interaction. When the robot has the role of caretaker it also has to be able to initiate
the interaction. In these situations movement prediction is necessary. This is what we
explore in this study.
1.4 Aim of the study
To investigate the influence of predictive navigation on human-robot interaction we will
examine two aspects.
1. The benefits of using information about high-level behaviors on the prediction
accuracy.
2. The evaluation of a robot that uses predictive navigation to approach a user.
To address these two aspects this study aims to answer the following question:
Does using high-level information about a person’s behavior improve the robustness
and flexibility of a robot’s navigational skills in such a way that the robot’s behavior
appears more natural and socially intelligent to users?
From this question and previous studies we can make a number of hypotheses that
address some important elements of predictive navigation. First of all we compare a
prediction system that uses high level information about a person’s behavior to a system
that only uses movement patterns.
1. We expect that using high-level information about a user’s goals improves the
reliability of predicting a person’s destination.
To learn people’s walking behavior as it changes over time and to adapt to new users the
system has to be able to update its parameters. This should improve prediction accuracy.
2. The system can flexibly adjust over time to new circumstances including
different personalities.
Regardless of the success of the chosen prediction method, we are interested in whether
using a person’s predicted destination for robot navigation influences the user’s opinion
of the robot. It is likely that a robot that shows anticipatory behavior is preferred over a
robot that only shows reactive behavior.
3. We expect that people experience an interaction with a robot that anticipates their
movement as more positive than an interaction with one that does not.
An assistive robot has various contexts in which it has to interact with a user. Some are
very urgent such as a medical emergency. In this case it could be preferred that the robot
is near the user. In less urgent scenarios, such as the robot giving advice to the user, this
is probably less important.
4. The user evaluation of the robot’s approaching behavior depends on the urgency
of the situation.
To test hypotheses 1 & 2 we will build two prediction models, one that uses high-level
information about behaviors and one that does not. Both will be tested using data
obtained from COPD patients. To test hypotheses 3 & 4 we will perform an experimental
study in which participants have to interact with an assistive robot in different scenarios
as it approaches the user in different ways.
2 Hidden Markov Model
2.1 Introduction
2.1.1 Hidden Markov Model
In order to make predictions about the future location of a person we need to model their
behavior. One way to do this is using a Hidden Markov Model (HMM). In such an HMM
the world is represented by a finite number of states. Between these states transitions are
possible. The chance of going from one state to another is the transition probability. In an
HMM the next state is assumed to depend only on the current state; this is the
Markov assumption. This assumption allows us to construct a square transition matrix A.
The element aij represents the chance of going to state j given that the current state is i. A
special transition is the self-transition where the system stays in the same state or, more
formally, j=i. Figure 1 shows a simple example with two states (indicated by the blue
circles), including all transitions (indicated by the arrows) and transition probabilities aij.
Figure 1: Example of a two-state Markov Model with given transition probabilities.
The transition probabilities can be represented as a transition matrix. The transition
matrix for Figure 1 would be:

A = | 0.2  0.8 |
    | 0.9  0.1 |
If we apply this to our research topic we can use states to represent the location of a
person. For example one state could represent that the person is in the bathroom. A state
transition would then mean going from one room to another and the transition probability
is the probability of going to a certain room given the current location.
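As a sketch of this idea (the room names, probabilities, and code below are illustrative, not from the thesis), the rooms can be encoded as integer states and the next room sampled from the corresponding row of the transition matrix:

```python
import numpy as np

# Hypothetical rooms as states; the transition probabilities are invented.
rooms = ["bathroom", "kitchen", "living room"]
A = np.array([
    [0.6, 0.1, 0.3],  # transitions from the bathroom
    [0.1, 0.5, 0.4],  # transitions from the kitchen
    [0.2, 0.2, 0.6],  # transitions from the living room
])
assert np.allclose(A.sum(axis=1), 1.0)  # each row is a probability distribution

rng = np.random.default_rng(0)
state = 0  # start in the bathroom
trajectory = [state]
for _ in range(5):
    state = rng.choice(len(rooms), p=A[state])  # sample the next room
    trajectory.append(state)
print([rooms[s] for s in trajectory])
```

The diagonal entries are the self-transitions mentioned above: the chance of staying in the same room at the next time step.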
Usually states are not directly observable and have to be inferred from (noisy)
observations. The true state can therefore be thought of as hidden. If we assume that there
is a finite number of different observations we can define the observation matrix B,
where each element bik is the chance that k is observed when the system is in state i.
Formally, k represents an element from an abstract alphabet. Basically, this can be
anything, but in our case it represents observing a person to be present in room k.
In this study we used data of the whereabouts of COPD patients in their own homes.
These data were obtained by recording the room a person is in using motion sensors.
However, these sensors introduce some uncertainty: they can give a signal when nobody
is there but they could also fail to detect a person. Therefore we also have to model the
sensors. We want to know the probability of each possible sensor reading given that a
person is in a certain room.
In Figure 2 an example of an observation model is given. In this model there are two
states as before (blue circles), but now an observation is made using the sensors (yellow
circles). In this example the most likely situation is that the person is observed to be in
the room he or she is actually in (vertical solid arrows). However, there is also a small
chance that the sensors erroneously detect the person to be in the other room (diagonal
solid arrows).
Figure 2: Two-state Hidden Markov Model with two possible observations
As before, the observation probabilities can be captured in a matrix. For the system in
Figure 2 we obtain:

B = | 0.7  0.3 |
    | 0.1  0.9 |
Usually we only have a sequence of observations but we want to know the most likely
state. To determine the likelihood of a state given a sequence we can use a modified
version of the forward-backward algorithm described by Rabiner (1989). When we know
this likelihood we can use the transition probabilities to determine the most probable next
state.
2.1.2 Hierarchical Hidden Markov Model
One disadvantage of modeling behavior using an HMM is that it ignores the hierarchical
nature of behaviors. A behavior such as “making breakfast” might involve several sub-
behaviors such as going to the kitchen to get food and going to the living room to eat it.
On the other hand, some sub-behaviors, such as going from the kitchen to the living room,
might be part of more than one higher level behavior. If we can infer the behavior people
are involved in, we might be able to make more accurate predictions about the next sub-
behavior.
To capture this hierarchical structure we can use the Hierarchical Hidden Markov Model
described by Fine, Singer and Tishby (1998). This model consists of a layered structure
of Markov Models (MM). At the top level (the parent level) each state activates another
MM at the child level. This recursive structure ends at the lowest level, where the states
produce observations, as in the normal HMM. Each MM behaves as a regular MM until it
reaches an end state. From the end state, control of the system is returned to the layer
above. The end state does not produce observations but acts like a flag, signaling that the
sequence at the current level has ended.
In Figure 3 our example of the HMM is extended to a two level HHMM. States in the
parent layer (internal states) activate a child HMM in the lower level. The states in this
child HMM are production states, because they do produce observations as in a regular
HMM. Which sub-state is activated first is modeled using the initial sub-state probability
π. There are separate transition matrices for each child MM. In order to move back from
the child states to the parent state the probability of a transition to the end state is added
to the transition matrix. Once the end state is reached at the child level, the model proceeds
at the parent level where a transition between the internal parent states may occur.
Figure 3: Two-layer hierarchical hidden Markov model with two states on level 1 and four states on
level 2
So far this model is able to capture the hierarchical nature of behavior but not yet its
shared sub-structures. To do this we can use parameter tying. If we want to model that
states 1 and 3 represent the same location but for different activities, we can make the
observation probabilities for states 1 and 3 equal. With this model we can use the
generalized Viterbi algorithm described by Fine et al. (1998) to predict the most likely
next state given an observation sequence. This procedure will be described in the next
chapter.
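Parameter tying of this kind can be sketched as follows (a minimal illustration with invented numbers; in a full implementation the tied rows would also be updated jointly during training):

```python
import numpy as np

# Sketch of parameter tying: production states 1 and 3 (0-indexed rows 0 and 2)
# represent the same room reached during different activities, so they share
# one row of observation probabilities. All numbers are invented.
shared_row = np.array([0.8, 0.1, 0.1])  # same sensor model for both states
B = np.vstack([
    shared_row,          # state 1
    [0.1, 0.7, 0.2],     # state 2
    shared_row,          # state 3 (tied to state 1)
    [0.05, 0.15, 0.8],   # state 4
])
assert np.array_equal(B[0], B[2])  # the tied rows are identical
```

Tying rows rather than duplicating free parameters keeps the two states consistent and reduces the amount of training data needed.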
2.2 Hidden Markov Model
2.2.1 Definitions
A Hidden Markov Model can be described by the following parameters:

S_t   State at time t
O_t   Observation at time t
q_i   State i
T     Length of the sequence
N     Number of states
K     Number of observations
A     N x N transition matrix, where a_ij = P[S_{t+1} = q_j | S_t = q_i] and Σ_{j=1}^{N} a_ij = 1
B     N x K observation matrix, where b_ik = P[O_t = k | S_t = q_i] and Σ_{k=1}^{K} b_ik = 1
π     The initial state probability, where π_i = P[S_1 = q_i]
λ     The model parameters (A, B, π)
2.2.2 Training
To obtain the model parameters we use a procedure called batch learning.
A large data set is divided into two sets. One part, the training set, is used to determine
the model parameters. The model is then tested on the other part, the test set, to determine
model performance.
To estimate the transition probability a_ij we count the number of transitions from state i
to state j in the training set:

a_ij = n_ij / Σ_j n_ij

where n_ij is the number of transitions from state i to state j in the training set.
In a similar way we can estimate the observation probabilities by counting the number of
times k is observed while in state i:

b_ik = m_ik / Σ_k m_ik

where m_ik is the number of times k was observed in the training set while in state i.
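The counting procedure can be sketched as follows (an illustrative implementation, not the thesis code; the state and observation sequences are invented toy data):

```python
import numpy as np

# Batch estimation by counting, using the notation from the text:
# n[i, j] counts transitions i -> j, m[i, k] counts observation k in state i.
def estimate_parameters(states, observations, N, K):
    n = np.zeros((N, N))
    m = np.zeros((N, K))
    for s, s_next in zip(states, states[1:]):
        n[s, s_next] += 1
    for s, o in zip(states, observations):
        m[s, o] += 1
    A = n / n.sum(axis=1, keepdims=True)  # a_ij = n_ij / sum_j n_ij
    B = m / m.sum(axis=1, keepdims=True)  # b_ik = m_ik / sum_k m_ik
    return A, B

# Toy training sequence (invented): states and sensor readings over 6 steps.
states = [0, 0, 1, 1, 0, 1]
obs    = [0, 0, 1, 1, 0, 0]
A, B = estimate_parameters(states, obs, N=2, K=2)
assert np.allclose(A.sum(axis=1), 1.0) and np.allclose(B.sum(axis=1), 1.0)
```

A practical implementation would also guard against states that never occur in the training set, which would otherwise produce a division by zero.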
2.2.3 Prediction
To predict the most likely next state we first have to determine the most likely current
state. To do this we can use the forward variable α from the forward-backward algorithm
described in Rabiner (1989).
The forward variable represents the probability of the observation sequence O that ends
in state i at time t, given the model parameters λ:

α_t(i) = P[O, S_t = q_i | λ]

To estimate the probability of the current state for a given observation sequence and the
model parameters we can use a normalized α:

P[S_t = q_i | O, λ] = P[S_t = q_i, O | λ] / P[O | λ] = α_t(i) / Σ_{j=1}^{N} α_t(j)
The forward variable can be computed iteratively using the following formulas:

α_1(i) = π_i · b_i(O_1)

α_{t+1}(j) = ( Σ_{i=1}^{N} α_t(i) · a_ij ) · b_j(O_{t+1})
If we have observations until time T we can calculate α_T(i); to determine the most
probable state at time T+1 we need α_{T+1}(i).
Because we don’t have an observation at time T+1, we assume that all observations are
equally likely, so we set b_j(O_{T+1}) to 1/K for all j, where K is the number of observations.
Then the most likely next state j is:

j_mostprobable = argmax_j α_{T+1}(j)
               = argmax_j [ ( Σ_{i=1}^{N} α_T(i) · a_ij · (1/K) ) / ( Σ_{j=1}^{N} Σ_{i=1}^{N} α_T(i) · a_ij · (1/K) ) ]
Because we are interested in the relative probabilities, not the absolute ones, we can
simplify the previous expression by removing the normalization step (the denominator)
and the constant factor 1/K to obtain:

j_mostprobable = argmax_j Σ_{i=1}^{N} α_T(i) · a_ij
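The forward recursion and this prediction rule can be sketched together (an illustrative implementation with invented two-state parameters, not the matrices used in the thesis):

```python
import numpy as np

# Sketch of next-state prediction: run the forward recursion over the observed
# sequence, then score each candidate next state by sum_i alpha_T(i) * a_ij.
def predict_next_state(A, B, pi, obs):
    alpha = pi * B[:, obs[0]]          # alpha_1(i) = pi_i * b_i(O_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # alpha_{t+1}(j)
    scores = alpha @ A                 # sum_i alpha_T(i) * a_ij
    return int(np.argmax(scores))

# Two-state toy model (numbers invented for illustration).
A  = np.array([[0.9, 0.1], [0.2, 0.8]])
B  = np.array([[0.7, 0.3], [0.1, 0.9]])
pi = np.array([0.5, 0.5])
print(predict_next_state(A, B, pi, obs=[0, 0, 0]))  # prints 0
```

With three consecutive observations of the first room, the model stays confident in state 0 and predicts it again. For long sequences the unnormalized α underflows, so a practical implementation would rescale it at each step.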
2.2.4 Parameter updating
There are several methods to update the model parameters. One simple way to do this is
using an exponential moving average. The test set is processed in batches.
After a batch of data has been used for prediction, that batch is used to construct
observation and transition matrices A_update and B_update as explained in paragraph 2.2.2.
Then the transition matrix is updated using:

A_new = (1 − l) · A_old + l · A_update

where l is the learning rate, 0 ≤ l ≤ 1. The same is done for the observation matrix:

B_new = (1 − l) · B_old + l · B_update
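The update rule can be sketched as follows (illustrative values; note that because both matrices are row-stochastic, their convex combination is row-stochastic as well):

```python
import numpy as np

# Exponential moving average update as described above:
# A_new = (1 - l) * A_old + l * A_update, with learning rate 0 <= l <= 1.
def update(old, batch_estimate, l=0.1):
    return (1 - l) * old + l * batch_estimate

A_old    = np.array([[0.9, 0.1], [0.2, 0.8]])  # invented current parameters
A_update = np.array([[0.5, 0.5], [0.5, 0.5]])  # invented batch estimate
A_new = update(A_old, A_update, l=0.1)
assert np.allclose(A_new.sum(axis=1), 1.0)  # rows remain distributions
```

A small learning rate makes the model adapt slowly but robustly; l = 1 would discard all previously learned behavior after each batch.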
2.3 Hierarchical Hidden Markov Model
2.3.1 Definitions
The HHMM can be described by the following parameters:

S_t^d   State at time t at level d
q_i^d   State i at level d
O_t     Observation at time t
D       Number of layers
N_m     Number of states at layer m
K       Number of observations
A^m     N_m x N_{m+1} x N_{m+1} transition matrix, where
        a_ij^{q_n^d} = P[S_{t+1}^{d+1} = q_j^{d+1} | S_t^{d+1} = q_i^{d+1}, S_t^d = q_n^d]
        and Σ_{j=1}^{N_{d+1}} a_ij^{q_n^d} + a_{i,end}^{q_n^d} = 1
B       N x K observation matrix, where b_ik = P[O_t = k | S_t^D = q_i^D] and Σ_{k=1}^{K} b_ik = 1
π       The initial sub-state probability, where π_i^{q_n^{d-1}} = P[S_t^d = q_i^d | S_t^{d-1} = q_n^{d-1}]
2.3.2 Training
Just as for the HMM, the data is divided into a training and a test set. The training set is
used to estimate the model parameters. To estimate the transition matrix we use the same
method as for the HMM: we count the number of transitions from i to j given the higher-level
state n. Then for each element in the transition matrix we set:

a_ij^{q_n^d} = c_ij^d / Σ_j c_ij^d

where c_ij^d is the number of transitions from state i to state j on level d+1 while level d is
in state n. To construct the observation matrix we use the same method as for the HMM.
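The conditioned counting can be sketched as follows (an illustrative simplification that ignores end-state transitions at parent-state boundaries; all sequences are invented):

```python
import numpy as np

# Sketch: child-level transition counts c[n, i, j] conditioned on the parent
# state n, normalized per (parent, source) pair as described in the text.
def estimate_child_transitions(parent_states, child_states, N_parent, N_child):
    c = np.zeros((N_parent, N_child, N_child))
    for t in range(len(child_states) - 1):
        n, i, j = parent_states[t], child_states[t], child_states[t + 1]
        c[n, i, j] += 1
    totals = c.sum(axis=2, keepdims=True)
    totals[totals == 0] = 1  # avoid division by zero for unseen pairs
    return c / totals

# Toy sequences (invented): parent activity and child location per time step.
parents  = [0, 0, 0, 1, 1, 1]
children = [0, 1, 1, 0, 0, 1]
A_child = estimate_child_transitions(parents, children, N_parent=2, N_child=2)
assert A_child.shape == (2, 2, 2)
```

In a faithful implementation, steps where the parent state changes would instead contribute to the end-state probability a_{i,end}^{q_n^d} rather than to a regular child transition.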
2.3.3 Prediction
Similar to future state prediction for the HMM we first need to determine the most
probable current state.
Because of the hierarchical structure we need a number of steps to do this. Figure 4
shows the most probable state inference procedure schematically.
Figure 4: Most probable state inference procedure for HHMM
Again, just as for the HMM, we start with observations. From these we determine the
most probable states at the lower level (level 2) for each possible state at the parent level
(level 1). We use this to determine the most likely state at level 1 for each time step.
Finally we can determine the most likely state at level 2.
This procedure is described by Fine et al. (1998) as the generalized Viterbi algorithm.
For this algorithm they define three variables for each level. For the variables for states
on level 1 we can omit the dependency on the layer above (level 0), because there is
none.
• δ(t, t+k, q_i^d, q_h^{d-1}) is the likelihood of the most probable state sequence generating
o_t…o_{t+k}, assuming it was solely generated by a recursive activation that started at time
t from state q_h^{d-1} and ended at q_i^d, which returned to q^{d-1} at time t+k.
• ψ(t, t+k, q_i^d, q_h^{d-1}) is the most probable state to be activated by q_h^{d-1} before q_i^d. If
such a state does not exist (o_t…o_{t+k} was solely generated by q_i^d) we set
ψ(t, t+k, q_i^d, q_h^{d-1}) := 0.
• τ(t, t+k, q_i^d, q_h^{d-1}) is the time step at which q_i^d was most probably called by
q_h^{d-1}. If q_i^d generated the entire subsequence we set τ(t, t+k, q_i^d, q_h^{d-1}) := t.
All details about updating these variables can be found in Appendix A or in Fine et al.
(1998).
Because this algorithm uses the probability of sequences of states, not individual states,
determining the current state and predicting the next state can be done at the same time.
Just as for the HMM we have an observation sequence of length T and we want to predict
both the production and the internal state at time T+1.
Because we don’t know the observation at time T+1, we assume again that all observations are equally likely: b_j(O_{T+1}) = 1/K, where K is the number of observations.
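Under this uniform-observation assumption the observation term is a constant factor, so predicting the state at T+1 reduces to one transition step. A minimal sketch (sizes and values are illustrative):

```python
import numpy as np

# Illustrative sizes: N states, K observations.
N, K = 6, 7
rng = np.random.default_rng(0)
A = rng.random((N, N))
A /= A.sum(axis=1, keepdims=True)       # row-stochastic transition matrix

def predict_next_state(posterior_T, A, K):
    """Most likely state at time T+1 when the observation at T+1 is unknown.

    With b_j(O_{T+1}) = 1/K for every observation, the observation term is
    a constant factor, so it does not change which state is most likely."""
    p_next = (posterior_T @ A) / K
    return int(np.argmax(p_next))
```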
2.3.3.1 Prediction of internal states
To find the most probable state sequence at level 1 we first determine the most probable final state:

S^1_end = argmax_i { δ(1, T+1, q^1_i) }

Then we can find the most probable transition time using: t_switch = τ(1, T+1, S^1_end).
The most probable internal state before the switching time is ψ(1, t_switch, S^1_end).
We repeat this procedure until we have obtained the complete state sequence at level 1.
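The backtracking loop above can be sketched as follows. The table keys are simplified for illustration (delta[q] stands for δ(1, T+1, q), tau[(t, q)] for the switch time, psi[(t, q)] for the preceding state); this is not the full generalized Viterbi implementation:

```python
def backtrack_level1(delta, psi, tau, T):
    """Backtrack the most probable level-1 state sequence.

    psi == 0 marks 'no predecessor', as in the definitions above."""
    seq = []
    state = max(delta, key=delta.get)   # most probable final state
    t = T + 1
    while t > 1 and state != 0:
        seq.append(state)
        t_switch = tau[(t, state)]      # when `state` was entered
        state = psi[(t_switch, state)]  # state active before the switch
        t = t_switch
    return list(reversed(seq))
```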
2.3.3.2 Prediction of production states
Now that we have the most probable internal state sequence we can find the most
probable production state sequence.
Just as for the internal states we first find the most probable production state at time T+1:

S^2_{T+1} = argmax_i { δ(1, T+1, q^2_i, S^1_{T+1}) }

Then we can determine the most probable previous production state using:

S^2_{t-1} = ψ(t_switch, t, S^2_t, S^1_t)

We repeat this procedure until we have obtained the complete state sequence at level 2.
2.4 Data
2.4.1 Data used
To test both models we use data obtained from two COPD patients for the ITV project. This data consists of sensor readings indicating in which room a person was detected. Both participants lived in a similar apartment consisting of:
• Bedroom
• Bathroom
• Kitchen
• Living room with a chair
Each room had a motion sensor and the chair in the living room had a pressure sensor.
In Figure 5 a lay-out for one of the apartments is shown.
Figure 5: Apartment lay-out, doors are marked red and sensors are marked yellow
The number of states is 6 (5 locations and ‘away’) and the number of observations is 7 (5 locations, ‘unknown’ and ‘multiple’).
2.4.2 Data filtering
To make the data suitable for the HMM, several filtering steps were applied. Only days that have data between 6:30 and 23:59 are used. The data is first filtered to a timescale of 1 minute: for each minute, the location in which the person has been the longest is marked as the location for that minute.
There are two results that need special attention because they are not usable for training:
• A person is detected in none of the locations.
• A person is detected in multiple rooms.
We need to determine the “true” state for these cases.
No detection
If a person is not detected for less than 15 minutes, he is assumed to have stayed in the last room for which there was a sensor reading. If a person is not detected for more than 15 minutes, he is assumed to be away from the beginning of the undetected sequence.
Multiple detections
If a person is detected in more than one room at the same time, the following filter steps are applied: when a person is detected in room A and then in rooms A and B, the person is assumed to be in room A. If a person is detected in room A, then in rooms B and C, and finally in room C, the person is assumed to have been in room A and C.
After these steps the final results are two vectors, a state sequence vector and an observation sequence vector, which can be used by the HMM.
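The per-minute filtering and the no-detection rules can be sketched as follows. Names are illustrative, and the treatment of a gap of exactly 15 minutes (kept in the last room here) is an assumption, since the text only specifies "less than" and "more than":

```python
from collections import Counter

def minute_location(seconds_per_room):
    """The room the person was in longest during one minute.

    `seconds_per_room` maps room name to seconds of detection."""
    return Counter(seconds_per_room).most_common(1)[0][0]

def fill_no_detection(sequence, away="away", gap_limit=15):
    """No-detection rules: short gaps keep the last known room; gaps
    longer than `gap_limit` minutes become 'away' from the gap's start."""
    result, last, gap = [], None, 0
    for loc in sequence:
        if loc is None:
            gap += 1
        else:
            if gap:
                result.extend([last if gap <= gap_limit else away] * gap)
                gap = 0
            result.append(loc)
            last = loc
    if gap:
        result.extend([last if gap <= gap_limit else away] * gap)
    return result
```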
For the HHMM an additional step is needed: the definition of the higher level states.
Because the dataset used was not obtained for this purpose there is no information about
higher level behavior. Therefore we have to arbitrarily define them. We have chosen to
use the time of the day; we assume that activities are different for the morning, afternoon,
evening and night.
In Table 1 the higher level state for each time of the day can be found.
Table 1: Definition of higher states for HHMM

Time            State
6:30 – 10:59      1
11:00 – 15:59     2
16:00 – 19:59     3
20:00 – 23:59     4

2.4.3 Removal of self-transitions
When the average consecutive time spent in a room is much greater than the time step the most likely transition is the self-transition. Therefore often the most probable next state is
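The mapping in Table 1 can be expressed as a small helper (illustrative, not the thesis code; times outside 6:30–23:59 are removed by the day filter and return None here):

```python
def higher_level_state(hour, minute=0):
    """Map a time of day to the higher-level HHMM state of Table 1."""
    t = hour * 60 + minute
    if 6 * 60 + 30 <= t < 11 * 60:
        return 1      # morning
    if 11 * 60 <= t < 16 * 60:
        return 2      # afternoon
    if 16 * 60 <= t < 20 * 60:
        return 3      # evening
    if 20 * 60 <= t < 24 * 60:
        return 4      # night
    return None       # outside the filtered day
```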
the same as the current state, and the most successful prediction strategy is predicting that the person will stay in the same room. To obtain more useful predictions we can remove the self-transitions. This causes a loss of temporal information, but allows us to predict the next room a person will go to when he changes rooms.
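Removing self-transitions amounts to zeroing the diagonal of the transition matrix and renormalizing each row, as in this sketch (names are illustrative):

```python
import numpy as np

def remove_self_transitions(A):
    """Zero the diagonal of a transition matrix and renormalize each row,
    so the model predicts the next *different* room."""
    A = np.array(A, dtype=float)
    np.fill_diagonal(A, 0.0)
    rows = A.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0               # guard against empty rows
    return A / rows
```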
2.5 Test Method
To test the performance of the different models the following aspects were tested:
• Prediction accuracy
• Learning over time
• Adapting to different persons.
2.5.1 Prediction accuracy
For a given sequence the prediction accuracy is the ratio of the number of correctly
predicted states to the total number of states in the sequence.
To test the prediction accuracy the following method is used:
• The model is trained on 8 days of data.
• The trained model is used to predict the remaining days.
• This is repeated for both datasets.
For comparison, prediction was also done using the following control method:
• The observation matrix was obtained using 8 days of data.
• All transition probabilities were set to 1/(N−1), making every transition equally likely.
• Prediction was done again for both datasets.
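The accuracy measure and the control model's transition matrix can be sketched as follows (illustrative names, not the thesis implementation):

```python
import numpy as np

def prediction_accuracy(predicted, actual):
    """Ratio of correctly predicted states to the total number of states."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return float((predicted == actual).mean())

def control_transitions(n_states):
    """Control model: every transition to a different state has
    probability 1/(N-1); self-transitions are excluded."""
    A = np.full((n_states, n_states), 1.0 / (n_states - 1))
    np.fill_diagonal(A, 0.0)
    return A
```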
2.5.2 Learning over time
To test the effect of parameter updating on prediction accuracy the following method is used:
• The observation matrix is obtained from training data.
• All transition probabilities in the transition matrix are equal.
• An iterative sequence is started where:
o The system is used to predict one day of data.
o The system’s parameters are updated using that day.
o This procedure is repeated for each day in the dataset.
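One plausible form of such an update with learning parameter l = 0.05 is an exponential blend of the old matrix with the day's estimate; the exact update rule used in the thesis may differ, so treat this as a sketch:

```python
import numpy as np

def update_parameters(A_old, A_day, l=0.05):
    """Blend the current transition matrix with the estimate from one day
    of data. Rows stay normalized, because a convex combination of
    row-stochastic matrices is itself row-stochastic."""
    return (1.0 - l) * np.asarray(A_old) + l * np.asarray(A_day)
```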
2.5.3 Adapting to a person
To test how quickly and how well a system can adapt to a different user, the method for learning over time is used with one addition. After all the data from the first set have been used for prediction, the iterative procedure described in the previous paragraph is continued with the other dataset.
2.6 Results
After all the filtering steps 45 days of data are left in dataset 1 and 39 days of data in
dataset 2.
2.6.1 HMM
2.6.1.1 Accuracy
For each dataset the model was trained using the first 8 days of data and then tested on the remaining days. Figure 6 shows the results for dataset 1; the results for dataset 2 can be found in Figure 7. The results are summarized in Table 2. For both datasets the trained model outperforms the control model by a factor of 2.
Figure 6: HMM Prediction results for dataset 1
Figure 7: HMM Prediction results for dataset 2
Table 2: Results of HMM

            Trained   Control
Dataset 1    0.62      0.27
Dataset 2    0.38      0.17
2.6.1.2 Learning over time
For both systems a learning parameter of 0.05 was used.
In Figure 8 the prediction results for dataset 1 for the HMM with parameter updating are shown. We can see that after only one day the prediction accuracy has increased to the level of the trained HMM. The results for dataset 2 show the same pattern, as can be seen in Figure 9.
Figure 8: HMM Prediction results for dataset 1 using parameter updating (l=0.05)
Figure 9: HMM Prediction results for dataset 2 using parameter updating (l=0.05)
2.6.1.3 Adapting to another person
In Figure 10 the prediction results for the HMM that is updated first using dataset 1 and then using dataset 2 are shown. We can clearly see a drop in accuracy after switching to the other dataset. The model that is first updated using dataset 2 and then using dataset 1 shows the same behavior, as can be seen in Figure 11.
Figure 10: HMM Prediction results when trained on P1 (l=0.05)
Figure 11: HMM Prediction results when trained on P2 (l=0.05)
2.6.2 HHMM
2.6.2.1 Accuracy
For the HHMM we obtain a prediction accuracy for both the internal and the production states. In Figure 12 the results for dataset 1 can be seen and Figure 13 shows the results for dataset 2. For both datasets the prediction accuracies for internal and production states are quite similar.
Figure 12: HHMM Prediction results for dataset 1
Figure 13: HHMM Prediction results for dataset 2
2.6.2.2 Learning over time
For both systems a learning parameter of 0.05 was used.
The results for datasets 1 and 2 can be found in Figure 14 and Figure 15. For both datasets we can see that, just as with the HMM, the system quickly reaches the values of the trained systems. For dataset 1 there seems to be a larger fluctuation in the prediction accuracy for internal states compared to the production states; this is not the case for dataset 2.
Figure 14: HHMM Prediction results for dataset 1 using parameter updating (l=0.05)
Figure 15: HHMM Prediction results for dataset 2 using parameter updating (l=0.05)
2.6.2.3 Adapting to another person
In Figure 16 the prediction accuracy for the HHMM that is first updated using dataset 1 and then using dataset 2 is shown. The prediction accuracy for the model that is first updated using dataset 2 and then using dataset 1 can be seen in Figure 17. In both cases the prediction accuracy drops for both internal and production states after the transition between the datasets.
Figure 16: HHMM Prediction results when trained on P1 (l=0.05)
Figure 17: HHMM Prediction results when trained on P2 (l=0.05)
3 User Experience Test
To test the effect of movement anticipation on the user evaluation of a human-robot interaction, an experiment was performed.
3.1 Participants
Fourteen participants (11 male, 3 female) whose age varied between 19 and 32 (mean age
24.5, SD = 4.3) participated in the experiment. Participants were unfamiliar with the
research topic. They were all students or employees of the Technical University
Eindhoven.
3.2 Method
During the experiment, participants had to perform a task which required their interaction
with a robot. After each interaction they had to rate the robot’s behavior during that
interaction. For this the Godspeed questionnaire by Bartneck was used. It contains a total of 24 questions measuring 5 dimensions of human-robot interaction. Each question was a 5-point semantic differential, as shown below.
Bad O O O O O Good
The five concepts are:
• Anthropomorphism: the extent to which the robot is “humanlike”. Consisting of five items, e.g. Unconscious – Conscious.
• Animacy: the extent to which the robot is “alive”. Consisting of six items, e.g. Mechanical – Organic.
• Likeability: consisting of five items, e.g. Unkind – Kind.
• Perceived Intelligence: consisting of five items, e.g. Ignorant – Knowledgeable.
• Perceived Safety: how safe participants felt during the interaction. Consisting of four items, e.g. Agitated – Calm.
The questionnaire was extended with an item rating the appropriateness of Nao’s
behavior on a 5 point scale from inappropriate to appropriate.
3.3 Setup
The robot used for the experiment was Nao, the social assistive robot used in the KSERA
project. Nao is about 60 cm tall and is shown in Figure 18.
Figure 18: NAO, the robot used in the experiment
The robot was manually controlled by the experimenter from another room.
Because Nao is very slow (6 cm/s), participants had to be slowed down to enable Nao to keep up with them. For this purpose a distractor task was added: participants had to stop halfway and listen to a 30-second audio clip taken from the BBC Radio News. At the end of the trial they had to answer a question about it. This was only to give participants the impression that it was part of the task; the answers were not used for analysis.
The experiment took place in a living room setting; the lay-out of the room is shown in Figure 19. The set-up consisted of a dinner table (labeled A), a coffee table (labeled B) and a chair (labeled C).
Figure 19: Lay-out of experiment room where A is the dinner table, B is the coffee table and C is the
chair.
3.4 Experimental Design
The experiment follows a 3x3 design, with 3 ways of approaching and 3 scenarios. Participants were subjected to every condition in a random order.
The three ways of approaching are:
1. Follow: Nao follows the person. He announces his behavior by telling the participant “I will follow you”.
2. Intercept: Nao intercepts the person halfway between the coffee table and the chair.
3. Anticipate: Nao goes to the anticipated destination (the chair) and waits for the participant to arrive there. He announces his behavior by telling the participant: “I think you will go to your chair”.
The intercept and anticipate conditions are two possible implementations of movement anticipation.
The three ways of approaching result in three different robot trajectories which are shown
in Figure 20.
Figure 20: Different approaching behavior
To see whether the user evaluation of approaching behavior was influenced by the
circumstances three scenarios were devised based on use cases for the KSERA project.
The scenarios tested during the experiment are:
• Phone Call: the robot approaches the participant, says “There is a phone call for you” and starts making a ringing sound. The participants are instructed to reject the phone call by touching the robot’s head.
• Medical Emergency: the participant is told to imagine that he or she feels ill and has to wait at the coffee table for the robot to arrive. After arriving close by and in front of the participant, the robot says “Do not worry, I will call the doctor”.
• Exercise: Nao approaches the participant to tell him “this might be a good time to exercise”.
3.5 Task
The experiment consisted of nine trials. Each trial started at the dinner table (A in Figure
19) where the participants received their first instructions.
Participants were told that they were to imagine that they had just finished a meal at the
dinner table and wanted to read the paper in their favorite chair. To do this they first had
to pick up the paper at the coffee table (B in Figure 19). On the coffee table there was a
laptop which showed new instructions. The participants had to listen to a 30-second audio
clip from the BBC Radio news. They were instructed to listen carefully to the audio clip
because they would have to answer a question about it at the end of the task. After the
audio clip had finished they were instructed either to pick up the paper and walk to the
chair (C in Figure 19) or stay at the coffee table because they felt ill and wait for Nao to
come to them. When the participants had reached the chair or Nao had reached the participant, the trial was over. After filling out the questionnaire the next trial started.
3.6 Procedure
After arriving in the lab, each participant was asked to sit down at the coffee table and read and sign an informed consent form. The form can be found in Appendix A. After signing the form they received written instructions, which they were asked to read. These explained the context and instructed them to walk to the coffee table for further instructions. The complete set of instructions can be found in Appendix C. When they had read the instructions and had no further questions, they could start the trial. The experimenter controlled Nao from another room and could not be seen by the participant during trials.
After a trial was finished the participants had to fill out the questionnaire described in paragraph 3.2, in which they had to rate the robot’s behavior. It also contained the question about the news clip. The full questionnaire can be found in Appendix D. The participants filled out the questionnaire at the coffee table. They were told that after filling it out, they could immediately start the next trial.
After the end of the last trial the participant was asked for comments or remarks. The research topic was explained to them; they received their financial reward and were thanked for their participation.
3.7 Data analysis
To test the effects of our manipulations on the different dimensions of the Bartneck questionnaire we use a repeated measures analysis of variance (ANOVA). We test for both manipulations (approaching behavior & scenario) and their interaction.
Before using the data to compare the effects of the experimental manipulations, the internal consistency of the items of each scale was tested using Cronbach’s alpha. Items that had a strong detrimental effect on consistency were removed. Only the perceived safety dimension had a low consistency score (α=.647); this could be improved by removing the quiescent – surprised item. The number of items for each dimension that were averaged to form a reliable measure, and their consistency scores, can be found in Table 3.
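The consistency check above can be sketched with the standard formula for Cronbach's alpha (illustrative helper, not the analysis software actually used):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)
```

Perfectly correlated items give alpha = 1; an item that tracks the rest of the scale poorly pulls alpha down, which is why the quiescent – surprised item was removed.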
Table 3: Internal consistency reliability scores

                        Items   Cronbach’s Alpha
Anthropomorphism          5          .817
Animacy                   6          .806
Likeability               5          .883
Perceived Intelligence    5          .870
Perceived Safety          3          .831
This set of items was used for the repeated measures ANOVA. In the next section we provide the results of the analysis for each of the 5 dimensions (Anthropomorphism, Animacy, Likeability, Perceived Intelligence and Perceived Safety) and for the appropriateness rating.
4 Results
4.1 General results
In Table 4 the range, mean and standard deviation of the different dimensions can be found. We can see that the average value for all dimensions is higher than the midpoint of the scale.
Table 4: Descriptive statistics for the measured dimensions
Variable Minimum Maximum Mean SD
Anthropomorphism 2.00 4.80 3.50 .66
Animacy 2.17 5.00 3.69 .61
Likeability 2.20 5.00 4.07 .72
Perceived Intelligence 2.20 5.00 3.89 .69
Perceived Safety 2.25 5.00 3.87 .63
Appropriate 1.00 5.00 3.85 1.05
4.2 Animacy
Figure 21 shows the effect of the robot’s approaching behavior on the animacy ratings given by the participants. The difference between the ratings for the following behavior (M=3.69, SD=.66), the intercepting behavior (M=3.69, SD=.62) and the prediction behavior (M=3.71, SD=.55) is not significant, F(2,26)<1, p=.96.
Figure 22 shows the effect of the different scenarios on the animacy ratings. Although there is a difference in ratings between the exercise scenario (M=3.60, SD=.65) and the medical emergency (M=3.72, SD=.61) and phone call (M=3.76, SD=.56) scenarios, this effect is not significant, F(2,26)=1.64, p=.24. There is also no significant interaction effect between behavior and scenario on animacy ratings, F(4,52)<1, p=.78; see Figure 23.
Figure 21: The effect of approaching behavior on Animacy ratings
Figure 22: The effect of different scenarios on animacy ratings
Figure 23: The effect of approaching behavior on animacy ratings for each scenario
4.3 Anthropomorphism
Figure 24 shows the effect of the robot’s approaching behavior on the anthropomorphism ratings. Although there is a difference between the prediction behavior (M=3.60, SD=.63) and the following (M=3.47, SD=.68) and intercepting (M=3.43, SD=.68) behaviors, this effect is not significant, F(2,24)=2.41, p=.11.
Figure 25 shows the effect of the different scenarios on the anthropomorphism ratings given by the participants. The anthropomorphism ratings for the phone call (M=3.52, SD=.64), medical emergency (M=3.56, SD=.70) and exercise (M=3.42, SD=.66) scenarios are not significantly different, F(2,24)<1, p=.42.
There is also no significant interaction effect (Figure 26) between behavior and scenario on anthropomorphism ratings, F(4,52)<1, p=.70.
Figure 24: The effect of approaching behavior on anthropomorphism ratings
Figure 25: The effect of different scenarios on anthropomorphism ratings
Figure 26: The effect of approaching behavior on anthropomorphism ratings for each scenario
4.4 Likeability
The effects of approaching behavior on likeability ratings can be seen in Figure 27. There is no significant difference in likeability ratings between the following behavior (M=4.09, SD=.67) and the intercepting (M=3.99, SD=.78) and predicting (M=4.14, SD=.72) behaviors, F(2,24)=1.54, p=.24.
In Figure 28 the likeability ratings for the different scenarios can be found. There is a significant effect of scenario on likeability ratings, F(2,24)=3.43, p<.05. The robot is rated as less likeable in the exercise scenario (M=3.95, SD=.76) than in the medical emergency scenario (M=4.20, SD=.64). The ratings in the phone call scenario lie in between the other two scenarios (M=4.07, SD=.75).
There is no significant interaction effect (Figure 29) between behavior and scenario on likeability ratings, F(1.87,24.23)<1, p=.53.
Figure 27: The effect of approaching behavior on Likeability ratings
Figure 28: The effect of different scenarios on Likeability ratings
Figure 29: The effect of approaching behavior on likeability ratings for each scenario
4.5 Perceived Intelligence
There is no significant difference in perceived intelligence ratings between the following behavior (M=3.89, SD=.70) and the intercepting (M=3.85, SD=.76) and predicting (M=3.92, SD=.63) behaviors, F(2,26)<1, p=.72. These results can be seen in Figure 30.
Figure 31 shows the effect of the different scenarios on the perceived intelligence ratings. We can see that the ratings in the phone call scenario (M=3.85, SD=.65) are higher than in the exercise condition (M=3.66, SD=.60) but lower than in the medical emergency condition (M=4.15, SD=.60). This effect is significant, F(2,26)=7.56, p<.05.
For the interaction effect (Figure 32) we can see that the prediction behavior is perceived as the most intelligent in the phone call and exercise scenarios, but in the medical emergency scenario the following behavior is rated highest. However, this effect is not significant, F(4,52)<1, p=.81.
Figure 30: The effect of approaching behavior on perceived intelligence ratings
Figure 31: The effect of different scenarios on perceived intelligence ratings
Figure 32: The effect of approaching behavior on perceived intelligence ratings for each scenario
4.6 Perceived Safety
Figure 33 shows the effect of approaching behavior on the perceived safety ratings given by the participants. We can see that the intercepting behavior is rated as safer (M=3.94, SD=.66) than the following (M=3.83, SD=.70) and prediction (M=3.83, SD=.70) behaviors, but this effect is not significant, F(2,26)<1, p=.47.
The perceived safety ratings are lower in the exercise scenario (M=3.72, SD=.72) than in the phone call (M=3.92, SD=.61) and medical emergency (M=3.96, SD=.54) scenarios. This effect, shown in Figure 34, is not significant, F(2,26)=2.94, p=.07.
When we look at the interaction effect (Figure 35) we see that while the ratings in the phone call scenario are similar for all behaviors, prediction scores lowest in the medical emergency scenario and following scores lowest in the exercise scenario. This effect is not significant, F(4,52)<1, p=.66.
Figure 33: The effect of approaching behavior on Perceived Safety ratings
Figure 34: The effect of different scenarios on Perceived Safety ratings
Figure 35: The effect of approaching behavior on perceived safety ratings for each scenario
4.7 Appropriateness
The effect of approaching behavior on appropriateness ratings can be seen in Figure 36.
Although we can see that intercepting (M=3.88, SD=1.05) is rated lower than following (M=3.98, SD=.95) and higher than prediction (M=3.69, SD=1.14), this effect is not significant, F(2,24)<1, p=.76.
The effect of scenario on appropriateness ratings (Figure 37), on the other hand, is significant, F(2,24)=6.76, p<.05. The ratings in the exercise scenario (M=3.45, SD=1.15) are lower than in the phone call (M=3.90, SD=1.04) and medical emergency (M=4.19, SD=.80) scenarios. The interaction effect between behavior and scenario on appropriateness ratings can be seen in Figure 38. While prediction is the most appropriate behavior in the phone call scenario, it is the least appropriate behavior in the other two scenarios. However, this effect is not significant, F(4,48)=1.8, p=.13.
Figure 36: The effect of approaching behavior on Appropriateness ratings
Figure 37: The effect of different scenarios on Appropriateness ratings
Figure 38: The effect of approaching behavior on appropriateness ratings grouped by scenario
5 Discussion
5.1 Movement prediction
The results show that a simple HMM can already predict the next location of a person
with reasonable accuracy. There are a number of ways to improve this accuracy.
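As a simplified illustration (not the thesis implementation), the kind of next-location prediction a trained HMM supports can be sketched as a lookup in a learned transition matrix. The room names and probabilities below are hypothetical, and locations are treated as directly observed states:

```python
import numpy as np

# Hypothetical room labels and a learned transition matrix
# (rows: current room, columns: next room); values are illustrative.
rooms = ["bedroom", "bathroom", "kitchen", "living room", "hallway"]
A = np.array([
    [0.0, 0.5, 0.2, 0.2, 0.1],
    [0.4, 0.0, 0.3, 0.2, 0.1],
    [0.1, 0.1, 0.0, 0.6, 0.2],
    [0.2, 0.1, 0.3, 0.0, 0.4],
    [0.3, 0.2, 0.2, 0.3, 0.0],
])  # self-transitions removed, so the diagonal is zero

def predict_next_room(current: str) -> str:
    """Predict the most probable next room given the current one."""
    i = rooms.index(current)
    return rooms[int(np.argmax(A[i]))]

print(predict_next_room("kitchen"))  # prints "living room"
```

Note that, as in the models described above, the zero diagonal encodes the removal of self-transitions.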
First of all, because self-transitions have been removed, time information has been lost.
One way to solve this is to explicitly model state durations, as described by Rabiner
(1989).
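A hedged sketch of what explicit duration modeling adds: instead of the geometric stay times implied by self-transitions, each state draws its stay length from its own distribution. The rooms, transition probabilities, and mean stays below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

rooms = ["kitchen", "living room", "bedroom"]
# Hypothetical transition matrix without self-transitions.
A = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])
# Hypothetical per-room stay durations (Poisson means, in minutes);
# these replace the implicit geometric durations of a plain HMM.
mean_stay = np.array([15.0, 40.0, 480.0])

def simulate(start: int, steps: int):
    """Generate (room, stay_minutes) pairs from an explicit-duration model."""
    seq, state = [], start
    for _ in range(steps):
        stay = 1 + rng.poisson(mean_stay[state])    # explicit duration draw
        seq.append((rooms[state], int(stay)))
        state = rng.choice(len(rooms), p=A[state])  # then jump to a new room
    return seq

for room, minutes in simulate(0, 4):
    print(room, minutes)
```

With durations modeled explicitly, the model can also predict when a person is likely to leave a room, not only where they will go next.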
We expected that, in comparison to the simple HMM, an HHMM would increase
prediction accuracy. However, the results show that performance is roughly equal.
An important bottleneck could be the definition of the higher states. Instead of manually
defining the model structure, a self-organizing algorithm could be used. The fact that a
prediction accuracy of 40-60% was achieved even with these arbitrarily defined higher
states is promising.
We also hypothesized that both the HMM and the HHMM can flexibly adjust to changing
circumstances, including different people. The results for parameter updating show that
while adapting to another person can be done quickly, accuracy barely increases over
time. After a system has been trained, it quickly adapts to another user, converging to
the accuracy of a system trained for the new user.
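A minimal sketch of count-based online parameter updating, assuming a Dirichlet-style pseudo-count scheme; the class and numbers are illustrative, not the system used in the thesis:

```python
import numpy as np

class OnlineTransitionModel:
    """Count-based transition model that adapts as new observations
    of a (possibly different) user arrive. Illustrative sketch."""

    def __init__(self, n_states: int, pseudo_count: float = 1.0):
        # Pseudo-counts act as a prior; in practice they could be
        # copied from a previously trained user so adaptation starts warm.
        self.counts = np.full((n_states, n_states), pseudo_count)

    def update(self, prev: int, nxt: int) -> None:
        """Incorporate one observed transition."""
        self.counts[prev, nxt] += 1.0

    def transition_probs(self, state: int) -> np.ndarray:
        """Current transition estimate: normalized counts."""
        row = self.counts[state]
        return row / row.sum()

model = OnlineTransitionModel(n_states=3)
for prev, nxt in [(0, 1), (0, 1), (0, 2)]:
    model.update(prev, nxt)
print(model.transition_probs(0))  # estimates shift toward observed frequencies
```

As more observations of the new user arrive, the prior's influence fades and the estimates converge to that user's own transition frequencies, which mirrors the quick adaptation described above.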
For both the HMM and the HHMM the results might improve with more accurate data.
In the data used, the position of the person was only known at room level. When
different locations within one room can be distinguished, it could be easier to recognize
different behaviors. Another way to improve the usefulness of the data is to record
information about high-level behavior (Kasteren et al., 2008).
5.2 User experience test
To investigate the influence of anticipatory behavior on user experience in human-robot
interaction, we performed a study in which participants interacted with a robot in three
different scenarios, each with three possible approaching behaviors. After each
interaction the participants rated the robot's behavior on five aspects using a
questionnaire.
The high averages of the user ratings for all dimensions show a generally positive opinion
of the robot’s behavior.
We expected that a robot showing anticipatory approaching behavior would be rated
more positively than a robot that just follows the user. To investigate whether users
prefer different behaviors in different contexts, three scenarios differing in urgency were
used. We found neither a significant effect of approaching behavior on the user ratings
nor an interaction effect between approaching behavior and scenario, but we did find a
significant effect of scenario on two dimensions (Likeability and Perceived Intelligence)
and on the appropriateness ratings. In these, the medical emergency scenario is rated
highest, followed by the phone call scenario; the exercise scenario is rated lowest.
Several factors might explain the small differences in ratings between the approaching
behaviors. While the distractor task, listening to the radio, was necessary because of
Nao's low walking speed, it was perhaps too attention-demanding. Several participants
stated after the experiment that they found this secondary task very difficult. Possibly
this prevented them from observing the different approaching trajectories, especially
because the absolute path differences between these trajectories were small. As a
consequence of the experimental set-up, there was only one possible final destination.
It is therefore plausible that the participants perceived the inference of the final
destination as a not very impressive achievement.
Even though it does not differ significantly from the other two behaviors, the intercepting
behavior receives the lowest ratings on every dimension except perceived safety. This
could be due to several reasons. Unlike in the other two behaviors, the robot did not
verbally emphasize its behavior, which may have made the intercepting behavior too
ambiguous.
While there was no significant effect of approaching behavior on the user ratings, the
different scenarios yield significant differences for the likeability, perceived intelligence
and appropriateness ratings.
This could be due to differences in interaction style between the scenarios: in the
exercise condition the robot only talks to the person, in the phone call condition some
feedback is required from the user, and in the medical emergency condition the robot
seems to respond to the user's needs. These results suggest that the user's opinion of the
robot depends much more on how natural the interaction style is than on the approaching
behavior.
For future experiments the following aspects could be considered.
Predictive navigation can be made more salient and convincing if there are multiple
possible destinations. While in this relatively short experiment the approaching behaviors
did not influence the user ratings, this could be different in longer experiments: not only
is it more believable that the system has learned a person's behavior, but the novelty
effect will also decrease. This could lead to larger differences in ratings between the
approaching behaviors.
6 References
Althaus, P., Ishiguro, H., Kanda, T., Miyashita, T. & Christensen, H.I. (2004). Navigation
	for Human-Robot Interaction Tasks. Proceedings of the IEEE International
	Conference on Robotics and Automation, 1894-1900
Bartneck, C., Kanda, T., Mubin, O. & Al Mahmud, A. (2009). Does the Design of a
Robot Influence Its Animacy and Perceived Intelligence? International Journal of
Social Robotics 1(2), 195-204.
Bartneck, C., Kulic, D., Croft, E. & Zoghbi, S. (2009). Measurement Instruments for the
Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived
Safety of Robots. International Journal of Social Robotics 1(1), 71-81
Bruce, A. & Gordon, G. (2004). Better Motion Prediction for People-tracking.
Proceedings of International Conference on Robotics and Automation
Bennewitz, M., Cielniak, C., Burgard, W. & Thrun, S. (2005). Learning Motion Patterns of
	People for Compliant Robot Motion. The International Journal of Robotics
	Research 24(1), 31-48
Cuijpers, R.H., Schie, H.T. van, Koppen, M., Erlhagen, W. & Bekkering, H. (2006). Goals
	and means in action observation: A computational approach. Neural Networks
	19(3), 311-322
Dautenhahn, K., Walters, M.L., Woods, S., Koay, K.L., Nehaniv, C.L., Sisbot, A.,
	...Siméon, T. (2006). How May I Serve You?: A Robot Companion Approaching
	a Seated Person in a Helping Context. Proceedings of the 1st ACM
	SIGCHI/SIGART conference on Human-robot interaction, 172-179
Duong, T.V., Bui, H.H., Phung, D.Q. & Venkatesh, S. (2005). Activity Recognition and
Abnormality Detection with the Switching Hidden Semi-Markov Model.
Proceedings of IEEE International Conference on Computer Vision and Pattern
Recognition (CVPR), 838-845
Fine, S., Singer, Y. & Tishby, N. (1998). The Hierarchical Hidden Markov Model:
	Analysis and Applications. Machine Learning 32(1), 41-62
Foka, A.F. & Trahanias, P.E. (2010). Probabilistic Autonomous Robot Navigation in
Dynamic Environments with Human Motion Prediction. International Journal of
Social Robotics 2, 79–94
Gani, O., Sarwar, H. & Rahman, C.F. (2009). Prediction of State of Wireless Network
Using Markov and Hidden Markov Model. Journal of Networks, 4, 976-984
Kanda, T., Glas, D.F., Shiomi, M. & Hagita, N. (2009). Abstracting People’s Trajectories
for Social Robots to Proactively Approach Customers. IEEE Transactions on
Robotics 24, 1382-1396
Kasteren, T. van, Noulas, A.K., Englebienne, G. & Kröse, B.J.A. (2008). Accurate
Activity Recognition in a Home Setting. Proceedings of the 10th international
conference on Ubiquitous computing. 1-9
Nguyen, N.T., Bui, H.H., Venkatesh, S. & West, G.A.W. (2003). Recognising and
Monitoring High-Level Behaviours in Complex Spatial Environments.
Proceedings of IEEE International Conference on Computer Vision and Pattern
Recognition (CVPR), 620-625
Nguyen, N.T., Phung, D.Q., Venkatesh, S. & Bui, H. (2005). Learning and Detecting
Activities from Movement Trajectories Using the Hierarchical Hidden Markov
Model. Proceedings of IEEE International Conference on Computer Vision and
Pattern Recognition (CVPR) 2, 955-960
Pacchierotti, E., Christensen, H.I. & Jensfelt, P. (2005). Human-Robot Embodied
	Interaction in Hallway Settings: A Pilot User Study. Proceedings of the 14th IEEE
	International Workshop on Robot & Human Interactive Communication, 164-171
Rabiner, L.R. (1989). A Tutorial on Hidden Markov Models and Selected Applications in
	Speech Recognition. Proceedings of the IEEE 77(2), 257-286
Vasquez, D. & Fraichard, T. (2004). Motion Prediction for Moving Objects: A
Statistical Approach. Proceedings of the IEEE International Conference on
Robotics Automation 4, 3931–3936.
Appendix A Generalized Viterbi Algorithm
The generalized Viterbi algorithm is described by Fine et al. (1998) for inferring the most
probable state sequence in a Hierarchical Hidden Markov Model.
For this algorithm they define three variables for each level. For the variables of the
states on level 1 we can omit the dependency on the layer above it (level 0), because that
layer contains only the root state.
• $\delta(t, t+k, q_i^d, q_h^{d-1})$ is the likelihood of the most probable state sequence generating $o_t \ldots o_{t+k}$, assuming it was solely generated by a recursive activation that started at time $t$ from state $q_h^{d-1}$ and ended at $q_i^d$, which returned to $q_h^{d-1}$ at time $t+k$.
• $\psi(t, t+k, q_i^d, q_h^{d-1})$ is the most probable state to be activated by $q_h^{d-1}$ before $q_i^d$. If such a state does not exist ($o_t \ldots o_{t+k}$ was solely generated by $q_i^d$) we set $\psi(t, t+k, q_i^d, q_h^{d-1}) \stackrel{\text{def}}{=} 0$.
• $\tau(t, t+k, q_i^d, q_h^{d-1})$ is the time step at which $q_i^d$ was most probably called by $q_h^{d-1}$. If $q_i^d$ generated the entire subsequence we set $\tau(t, t+k, q_i^d, q_h^{d-1}) \stackrel{\text{def}}{=} t$.

The operator MAX returns both the maximum and the maximizing argument:

$\mathrm{MAX}_{l \in S}\{f(l)\} \stackrel{\text{def}}{=} \left( \max_{l \in S}\{f(l)\},\ \arg\max_{l \in S}\{f(l)\} \right)$
Production states
For the production states (level $D$) the three variables are initialized using these formulas:

$\delta(t, t, q_i^D, q^{D-1}) = \pi^{q^{D-1}}(q_i^D) \cdot b_{q_i^D}(o_t)$
$\psi(t, t, q_i^D, q^{D-1}) = 0$
$\tau(t, t, q_i^D, q^{D-1}) = t$

Recursive calculation is done using these formulas:

$\bigl(\delta(t, t+k, q_i^D, q^{D-1}),\ \psi(t, t+k, q_i^D, q^{D-1})\bigr) = \mathrm{MAX}_{1 \le j \le |q^{D-1}|} \bigl\{ \delta(t, t+k-1, q_j^D, q^{D-1}) \cdot a_{ji}^{q^{D-1}} \bigr\} \cdot b_{q_i^D}(o_{t+k})$
$\tau(t, t+k, q_i^D, q^{D-1}) = t + k$
Internal states
Because there is only one layer of internal states, we can omit the dependency on the layer
above it.
For the internal states the three variables are initialized using these formulas:

$\delta(t, t, q_i^1) = \max_{1 \le r \le |q_i^1|} \bigl\{ \pi^{q_i^1}(q_r^2) \cdot \delta(t, t, q_r^2, q_i^1) \cdot a_{r,\mathrm{end}}^{q_i^1} \bigr\}$
$\psi(t, t, q_i^1) = 0$
$\tau(t, t, q_i^1) = t$

Recursive calculation is done using these formulas, where $R(t')$ is the likelihood of the
most probable child of $q_i^1$ generating $o_{t'} \ldots o_{t+k}$ and then returning control:

$R(t') = \max_{1 \le r \le |q_i^1|} \bigl\{ \delta(t', t+k, q_r^2, q_i^1) \cdot a_{r,\mathrm{end}}^{q_i^1} \bigr\}$
$\bigl(\Delta(t'),\ \Psi(t')\bigr) = \mathrm{MAX}_{1 \le j \le |q^0|} \bigl\{ \delta(t, t'-1, q_j^1) \cdot a_{ji} \cdot R(t') \bigr\}$ for $t < t' \le t+k$

For $t' = t$, when $q_i^1$ generated the entire subsequence:

$\Delta(t) = \pi^{q^0}(q_i^1) \cdot \max_{1 \le r \le |q_i^1|} \bigl\{ \delta(t, t+k, q_r^2, q_i^1) \cdot a_{r,\mathrm{end}}^{q_i^1} \bigr\}$
$\Psi(t) = 0$

The three variables then follow from:

$\bigl(\delta(t, t+k, q_i^1),\ \tau(t, t+k, q_i^1)\bigr) = \mathrm{MAX}_{t \le t' \le t+k} \{\Delta(t')\}$
$\psi(t, t+k, q_i^1) = \Psi\bigl(\tau(t, t+k, q_i^1)\bigr)$
Appendix B Informed Consent Form
Title: Robot behavior
Experimenter: Maarten Bruna ([email protected])
Supervisor: Raymond Cuijpers ([email protected])
Description
You are invited to participate in an experiment on robot behavior.
In this experiment you will be asked to perform several tasks in which you will interact
with a social robot. You will judge its behavior on several aspects.
Method
The experiment will consist of 9 trials, each one followed by a questionnaire. Video
footage of each trial will also be recorded.
Confidentiality
The data obtained from the questionnaires will be used for analysis. The video footage
will only be used to check for and explain anomalies. All data will be processed
anonymously.
Your individual privacy will be maintained in all published and written data resulting
from the study.
Duration
The experiment will last approximately 45 minutes.
Reimbursement
You will receive € 7.50 for your participation.
Voluntary participation
Your participation in this experiment is voluntary and you have the right to withdraw
your consent or discontinue participation at any time without penalty or loss of benefits to
which you are otherwise entitled. You have the right to refuse to answer particular
questions.
I have read the foregoing information, or it has been read to me. I have had the
opportunity to ask questions about it and any questions I have asked have been
answered to my satisfaction. I consent voluntarily to be a participant in this study.
Name:
Signature:
Date:
Appendix C Instructions
Introduction
In this experiment you will perform 9 tasks in which you interact with Nao, an assistive
robot. During each task Nao will behave differently. After each task you will be asked to
rate Nao's behavior during this task on a number of aspects.
Please read the description below and try to imagine you are in the described situation.
You are an elderly person living in a smart home environment. Nao assists you in living
independently and helps you to stay in touch with other people. Because you are no
longer in very good condition, Nao monitors your health. He can suggest physical
exercise to improve your health and call a doctor in case of an emergency.
During the experiment the following events can happen:
There is a phone call for you; to reject the call, touch Nao's forehead.
Nao will suggest that it is time to exercise to improve your health; in this case you don't
have to do anything.
Please turn over the page.
Task
You have just finished a meal at the table (A on the map) and want to read the paper in
your favorite chair (C on the map), as you always do after dinner.
• Get up
• Put back your chair
• Walk to the coffee table (B on the map) where the paper is.
• Follow the instructions on the laptop.
[Map of the room, showing the table (A), the coffee table (B) and the favorite chair (C)]
Appendix D Questionnaire
Please rate your impression of the robot on the following scales:
Dead O O O O O Alive
Incompetent O O O O O Competent
Unconscious O O O O O Conscious
Unfriendly O O O O O Friendly
Machinelike O O O O O Humanlike
Unintelligent O O O O O Intelligent
Inert O O O O O Interactive
Unkind O O O O O Kind
Ignorant O O O O O Knowledgeable
Artificial O O O O O Lifelike
Dislike O O O O O Like
Stagnant O O O O O Lively
Moving rigidly O O O O O Moving elegantly
Fake O O O O O Natural
Awful O O O O O Nice
Mechanical O O O O O Organic
Unpleasant O O O O O Pleasant
Irresponsible O O O O O Responsible
Apathetic O O O O O Responsive
Foolish O O O O O Sensible
Please rate your emotional state during this task on these scales:
Anxious O O O O O Relaxed
Agitated O O O O O Calm
Quiescent O O O O O Surprised
Where did the bombing take place?