Machine Vision and Applications (2013) 24:351–370
DOI 10.1007/s00138-011-0351-y
ORIGINAL PAPER
Direction Kernels: using a simplified 3D model representation for grasping
Antonio Adán · Andrés S. Vázquez · Pilar Merchán ·
Ruben Heradio
Received: 11 April 2011 / Revised: 24 May 2011 / Accepted: 26 May 2011 / Published online: 17 June 2011
© Springer-Verlag 2011
Abstract Humans decide how to carry out a spontaneous
interaction with an object by using the whole geometric infor-
mation obtained from their eyes. The aim of this paper is to
present how our object representation model MWS (Adán in
Comput Vis Image Underst 79:281–307, 2000) can help a
robot manipulator to make a single and reliable interaction.
The contribution of this paper is particularly focused on the
grasp synthesis stage. The main idea is that the grasping sys-
tem, through MWS, can use non-strict-local features of the
contact points to find a consistent grasping configuration.
The Direction Kernels (DK) concept, which is integrated
into the MWS model, is used to define a set of candidate
contact-points and interaction regions. The set of DK is a
global feature which represents the principal normal vectors
of the object and their relative weight in a three-connectivity
mesh model. Our method calculates the optimal grasp points
(which are ordered according to the quality function) for two-
finger grippers, whilst maintaining the requirements of force
closure and safety of the grasp. Our strategy has been exten-
sively tested on real free-shape objects using a 6 DOF indus-
trial robot.
A. Adán (B) · A. S. Vázquez
Dpto Ingeniería E.E.A.C, Universidad de Castilla La Mancha,
Ciudad Real, Spain
e-mail: [email protected]
A. S. Vázquez
e-mail: [email protected]
P. Merchán
Escuela de ingenierías Industriales,
Universidad de Extremadura, Badajoz, Spain
e-mail: [email protected]
R. Heradio
Dpto de Ingeniería de Software y Sistemas Informáticos,
UNED, Madrid, Spain
e-mail: [email protected]
Keywords 3D model representation · Object recognition ·
Grasping
1 Contact-point selection in the object–robot interaction
problem
Finding appropriate stable grasps on an arbitrary object for a
robotic hand is a complex problem which has been addressed
through different strategies. In general, we can say that the difficulty
arises from the number of degrees-of-freedom and the geom-
etry of the object to be grasped. However, humans greatly
simplify the grasping problem by selecting a few prehensile
postures and contact-points, depending on the object and the
task to be performed. In [1], Cutkosky shows a grasp hierar-
chy which offers a classification scheme for typical human
grasps.
Humans unconsciously use not only the local properties
of the grasp points but also the whole geometry of the object
to infer the best grasp. Figure 1a illustrates examples in
which, despite the similarity between the local geometry on
the contact-points, a human will intuitively choose option A
as the most reliable grasp in comparison to options B and C.
Figure 1b shows various other examples in which the approx-
imate location of the best contact-points can be intuitively
selected. Assuming that all these hand configurations are pos-
sible, why do we select option A as the best choice if the local
conditions are equal in all cases? The answer may be that
the final decision is not only made by considering the local
properties on the contact point but also by considering an
extended geometry around the contact-point and the object’s
complete geometry. We are thus able to establish a priority
in the candidates and then check other aspects related to the
interaction.
Fig. 1 a The quality of different grasps can be intuitively evaluated
for different objects. Although the contact points in all cases have equal
local geometric properties, case A appears to be more secure and reliable
than cases B and C. b Using the same object, grasping A appears to be
the best option, and the local geometry is again the same in all cases
In this paper, we attempt to translate this idea to a robot
manipulator by using the information contained in the so-
called “object representation models”, which can provide
complete local/global information about the object. We par-
ticularly aim to limit the huge number of possible hand con-
figurations to a few reliable grasp candidates after using our
MWS model.
1.1 Previous works
A variety of approaches have been used to tackle the grasping
problem across a wide range of subjects (mechanical design,
automatic control, sensors and transducers, path planning,
perception of the environment, intelligent interaction, etc).
In this paper, our interest lies exclusively in the grasping syn-
thesis stage in which the contact-points are established. The
most widespread line of research in this area is directed towards
calculating the best grasp-points by using strict kinematic and
dynamic criteria. The selection is therefore obtained depend-
ing on the contact dynamic model which, in many cases, is
directly or indirectly related to form-closure and/or force-
closure requirements [2] and is subject to a specific quality
criterion [3–11]. In many cases, the grasping is highly sim-
plified owing to the fact that the models that represent the
objects are simple (e.g., planar objects) and ideal [6,9–11],
which makes these methods inappropriate or difficult to use
in real world robot interactions.
Other systems obtain the best contact-points by means
of experimental learning in a virtual or real environment.
Kawarazaki et al. [12] have developed a learning technique
that is based on a trial-error strategy. Their intention is to
carry out a non-invasive interaction in scenes with obstacles
by exploring the initial paths and poses of the grippers. The
principal idea is to drastically reduce the number of grasping
configurations through rule-based knowledge. This method
has two principal disadvantages: the learning process takes
a long time and it is only valid for a hand which has been
trained. Thus, if the grippers or the sensorial system alter, the
rules must be rewritten. In [13] a training set composed of
400 synthetic grasping tests is used. This work combines decision
trees and nearest-neighbor techniques to infer the best grasp.
Morales et al. [14] present a learning procedure in which the
grasping quality is classified in five levels. The quality levels
are established depending on the grasping success after mov-
ing the end-effector several times. The classification rules are
set by using a pattern with nine features and impose a KNN
(k-nearest neighbor) voting algorithm. Finally, Pelossof
et al. [15] infer the grasping quality by means of SVM (Sup-
port Vector Machine) learning. Various grasping patterns are
introduced in the training step and the quality measurement is
then calculated. This technique has been applied to synthetic
super-ellipsoids.
From another point of view, Michel et al.'s method [16]
relies on the primary natural axes. The authors of this method
argue that for a human grasp, the natural axis is defined
through the position/orientation of the palm and the fingers
which surround the object. The strategy consists of drasti-
cally reducing the number of contact-points and maintaining
those that are consistent with the natural axes. Two contact-
points are considered, and the method is clearly focused on
imitating human grasps.
1.2 3D models in grasp synthesis
To date, little has appeared in the literature in which 3D mod-
els are used to improve or simplify the grasping synthe-
sis problem in order to find the best contact-points. The
simplest approaches take a basic library of volumes, such
as spheres, cylinders, cones, cubes, and boxes for specific
applications. The method presented in [4] is designed to
grasp objects with four-finger grippers when the geomet-
rical model is available in advance. The experimentation is
reduced to four simple forms (sphere, cube, telephone and
banana). The works presented by Miller et al. [6,17] also
take 3D models corresponding to simple shapes to gener-
ate a set of pre-grasping initial positions which are further
refined. This approach allows the initial set of contact-points
to be reduced. Of the aforementioned techniques, none use a
formal 3D representation model in which the object’s local
and global properties can be explored, and free-shapes are not
dealt with. Instead, the few objects used are restricted
to polyhedral and simple shapes.
As was mentioned previously, this paper focuses on defin-
ing an easy interaction for an isolated object using the power
of an object model representation. We specifically intend
to provide reliable grasp solutions based on non-strict-local
geometrical properties of the object which are provided by
means of the MWS model.
Figure 2 illustrates the main differences between the
approaches mentioned above and ours. Techniques [4,6,17]
define the optimal grasp through a sampling process without
using a semantic model representation. In other words, the
3D model shown in Fig. 2a is simply seen as a mesh model
with an arbitrary topology that offers no additional informa-
tion. The power of the representation model is not therefore
exploited in the grasping process. In the brute-force search
algorithm, a grasp table is obtained by testing grasps from
different approach directions of the hand after force-closure
checking. This usually takes several minutes and has to be
performed for each new object to be grasped. Avoiding this
step may save a considerable amount of time and resources
in a real robot interaction.
In contrast, our grasping solution is obtained directly from
the MWS representation model (Fig. 2b, left) after consider-
ing a set of non-strict-local properties of the contact-points.
As will be detailed in Sect. 3, up to six properties are eval-
uated. These features allow us to evaluate important aspects
around the hypothetical contact-points (e.g., extended curva-
ture, relative location on the surface, neighbor normal vec-
tors, safety distance, etc) along with global properties of the
object (e.g., principal directions, dimension, etc), and to then
generate a set of grasp starting positions.
This paper attempts to demonstrate the utility of the MWS
representation model in the grasping problem. Section 2
provides a brief introduction to the MWS model represen-
tation, and presents the Direction Kernels feature as the
principal concept in the whole approach. Section 3 defines the
grasp parameters and presents the grasping quality measure.
Section 4 is devoted to showing the experimental results in
two parts. In the first part, the candidate grasp points for a set
of 3D free-shapes are presented. Part two shows the perfor-
mance of our method in a real experimental setup. We present
the results of 40 grasp actions using a two-finger gripper on
board an RX 90 Stäubli robot. Conclusions, improvements
and future lines of research are presented in Sect. 5.
2 Simplified normal representation
2.1 A short introduction to MWS models
We have recently directed our research towards the field of
geometric modeling in an attempt to solve robot-vision prob-
lems. In this section we aim to extract specific shape infor-
mation through a 3D representation model which is invariant
to solid transformations. A solid is usually represented as
a discrete mesh of nodes, and the features which character-
ize the solid are obtained by analyzing the geometry in local
areas. For example, high and low curvature points may define
sharp and flat local zones in the mesh. Such models recognize
the nodes as isolated items which are exclusively connected
to their neighbors, and consequently provide only local and
weak information.
In contrast to this local strategy, MWS models introduce a
new topology in meshes, which is able to connect each node
with the other nodes thus establishing a new inter-mesh rela-
tionship. This topology is maintained for all the nodes of a
canonical mesh which is fitted to the object surface. More-
over, all the MWS models come from the same mesh. This
mesh is obtained after the implosion of a tessellated spherical
mesh, which we shall denote as TI , over the object’s surface.
The fitted mesh, TR , therefore maintains the topology and
the number of nodes of TI .
In the aforementioned topology, a node is 3-connected
with its neighbors but is also recursively connected with its
neighbors’ neighbors. These structures appear to be concen-
tric rings (or wave-fronts) of nodes, and it is for this reason
that the whole structure is denominated as a Modeling Wave.
We can consequently state a multi-connectivity property in
the mesh depending on the wave-front level, and explore the
geometry of an object in a different manner, thus extracting
new descriptors and features. Finally, the MWS model stores
the object features in the canonical sphere TI . More infor-
mation concerning the MWS concept and properties can be
found in [18]. Figure 3 illustrates the principal stages through
which to achieve an MWS representation.
2.2 Direction Kernels
As was stated above, discrete values of 3D object features,
which are calculated on TR , can be mapped onto TI . For
example, it is possible to map local features such as discrete
curvature measurements, color features, distance to the ori-
gin of coordinates, etc. However, we are more concerned with
the fact that it is also possible to map global features, such as
n-connectivity features, the principal directions of the solid,
etc.
In this case, Direction Kernels are a set of global invari-
ant features that are defined over TI . The principal idea is
that the normal directions of the nodes of TR can be mapped
Fig. 2 a Grasping strategies based on brute-force search: grasp
approach directions (left), grasp table from sampled grasps (middle)
and optimal grasp (right). b Our grasping strategy. Simplified model in
MWS format with the safety region and the contact points highlighted
(left). Visualization of the optimal grasp with the gripper (right)
Fig. 3 MWS building process. After imploding a semi-regular tessellated mesh onto the object, a new 3D model with three-connectivity is obtained.
The model, called MWS, maintains the wave-front connectivity properties of the tessellated mesh
Fig. 4 Left Voting process in the tessellated sphere TI. The node NI,j
is voted when u(NI,j) is closest to any u(NR,i), NR,i ∈ TR. Right
Extraction of a Direction Kernel. The number of votes of the nodes in the
hexagonal patch is symbolized by the size of the node. Definition of the
Amplitude and the Normal of a DK
onto TI. A simplified map can be obtained by considering
only those zones of the sphere with a high density of normal
vectors. Figure 4 illustrates this idea. This concept will
be formally introduced in the following paragraphs.
Let TR be the mesh of a given object. We obtain the nor-
mal vector u(NR,i) for each node of the mesh NR,i ∈ TR
by computing the vector which is perpendicular to the plane
formed by the three neighbor nodes of NR,i. All the com-
puted normal vectors vote their corresponding nodes in TI,
so that for each node NI,j ∈ TI our model stores the num-
ber of nodes of TR whose normal vector is closest to the
normal at NI,j, u(NI,j). These values are denoted g(NI,j).
That is to say, g(NI,j) contains the number of votes that
normal u(NI,j) accumulates in the mapping process. Note
that we provide a global representation of the object through
the descriptor g by using the topological structure TI.
Figure 4 illustrates how the node NI,j is extensively voted
and is further able to generate a DK.
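To make the voting step concrete, the following minimal sketch (our own illustrative code, not the implementation used in the paper) lets each TR normal vote for the TI node whose normal is closest in direction; the function name `vote_normals` and the array layout are assumptions:

```python
import numpy as np

def vote_normals(u_TR, u_TI):
    """Sketch of the DK voting step: each unit normal of the fitted mesh TR
    votes for the node of the tessellated sphere TI whose normal u(N_I,j)
    is closest (largest dot product). Returns g, the vote count per TI node."""
    u_TR = u_TR / np.linalg.norm(u_TR, axis=1, keepdims=True)
    u_TI = u_TI / np.linalg.norm(u_TI, axis=1, keepdims=True)
    # cosine similarity between every TR normal and every TI normal
    sim = u_TR @ u_TI.T                      # shape (|TR|, |TI|)
    winners = np.argmax(sim, axis=1)         # closest TI node per TR normal
    g = np.bincount(winners, minlength=len(u_TI))
    return g
```

With three TI normals along the coordinate axes, two TR normals near the x axis and one along z would yield g = [2, 0, 1].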
This representation may exhibit false or unimportant
information. For example, nodes near to vertices, edges or
zones with a high curvature gradient have unreliable normal
values and should be considered as unimportant or errone-
ous information for grasping tasks. Furthermore, very low g
values are relatively common in the majority of the tessel-
lated sphere TI . This information is also negligible, since it
represents only small parts (local areas) of the mesh TR (see
Fig. 5b).
Our goal is to discover the most important normal direc-
tions of the object. We have therefore simplified the original
representation by filtering out the unimportant information
and maintaining that which is meaningful in order to gener-
ate what we term the Simplified Normal Representation. The
procedure is as follows:
Definition 1 Let k be a hexagonal patch of TI. Let kNI,s,
u(kNI,s) and g(kNI,s), s = 1, …, 6, be the respective nodes
of patch k, their associated normal vectors and weights. Patch
k is a Direction Kernel (DK) if at least one of the following
criteria is satisfied:

I. g(kNI,s) > 0 in at least four of the six nodes   (1)

II. Σ_{s=1..6} g(kNI,s) ≥ λ   (2)

λ being a parameter which depends on the mesh resolution.
We usually choose λ = 4% of n, where n is the number of nodes
of TI. The Direction Kernel concept is extended to two more
definitions.
Definition 2 The amplitude of a Direction Kernel k is
defined as follows:

m(k) = Σ_{s=1..6} g(kNI,s)   (3)
Definition 3 The normal of a Direction Kernel k is defined
as follows:

u(k) = [Σ_{s=1..6} g(kNI,s) u(kNI,s)] / m(k)   (4)

This is the weighted average of the normals u(kNI,s) of the
patch k. Figure 4 (right) symbolizes the generation of a DK.
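Definitions 1 to 3 can be sketched for a single hexagonal patch as follows (an illustrative sketch under the assumption that the patch is given as six vote counts and six normals; function names are ours):

```python
import numpy as np

def is_direction_kernel(g_patch, n_nodes, lam_frac=0.04):
    """Criteria of Definition 1 for a hexagonal patch of TI:
    (I) at least four of the six nodes have g > 0, or
    (II) the summed votes reach lambda = 4% of the n nodes of TI."""
    g_patch = np.asarray(g_patch, dtype=float)
    crit_I = np.count_nonzero(g_patch > 0) >= 4
    crit_II = g_patch.sum() >= lam_frac * n_nodes
    return crit_I or crit_II

def dk_amplitude_and_normal(g_patch, u_patch):
    """Definitions 2 and 3: amplitude m(k), Eq. (3), and the
    vote-weighted average normal u(k), Eq. (4)."""
    g_patch = np.asarray(g_patch, dtype=float)
    u_patch = np.asarray(u_patch, dtype=float)
    m = g_patch.sum()                                  # Eq. (3)
    u = (g_patch[:, None] * u_patch).sum(axis=0) / m   # Eq. (4)
    return m, u
```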
Note that Eq. (3) yields absolute values. However, the rel-
ative percentage value 100 · m(k)/n, where n is the number
of nodes, is used in practice. This simple representation is
general and can be used with any kind of 3D shape. It is eas-
ier to understand this type of representation for polyhedral
objects than for curved objects. In the first case, only a few
DKs with significant m(k) values are expected, the rest being
zero. For example, Fig. 5 shows a polyhedral object which
has seven DKs. However, for free-shape objects, a multitude
of DKs might appear (see Fig. 13c). On the other hand, since
neighboring DKs are possible (for example, two DKs may
share a node of high weight g(NI,j)), an algorithm has been
introduced to definitively establish disjoint DKs in the
Simplified Normal Representation.
The key property of this kind of global representation,
which makes it particularly interesting for robot interaction
applications, consists of the fact that the set of Direction
Kernels are invariant to solid transformations. Let us assume
a Simplified Normal Representation of an object Ω, which has
Fig. 5 a Data points of the object and mesh model TR . b Normal
vectors mapped onto TI . A meaningful region is marked with a circle and
unimportant information regarding the object appears with g values
of 1 and 2. c Direction Kernels painted in different grey levels. d
Simplified Normal Representation: Correspondence between Direction
Kernel on TI and regions on TR
h Direction Kernels. There are three parameters that must be
conserved whatever the pose of Ω is:
1. The number of DKs: h.
2. The set of amplitudes: m(k), k = 1, …, h.
3. The set of angles between pairs of normals: θij = angle(u(ik), u(jk)), i, j = 1, …, h.
In practice, small oscillations may appear in cases 2 and
3 as a result of the use of discrete meshes. Figure 5 shows
an example of the different stages in a DK building process.
In part a we show the original geometric mesh with thou-
sands of points, along with the mesh TR with 1280 nodes.
Note that TR is then a simplified version with an MWS
topology. Part b illustrates the mapping process in which
the number of normal vectors associated with each node is
plotted onto TI . A meaningful region is marked with a circle.
Note that unimportant information corresponds to nodes with
low g values. Direction Kernels in different colors, together
with their corresponding Amplitudes, are shown in part c.
Figure 5d presents the regions associated with different
Direction Kernels, in which the color of their correspond-
ing DKs is maintained.
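The third invariant above can be checked numerically: the pairwise angles between DK normals do not change under a rigid rotation of the object. A minimal sketch (our own illustrative code):

```python
import numpy as np

def dk_pairwise_angles(u_dks):
    """Angles theta_ij between the normals of every pair of DKs.
    Together with the number h of DKs and the amplitudes m(k),
    these angles are (up to discretization noise) pose invariant."""
    u = np.asarray(u_dks, dtype=float)
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    cos = np.clip(u @ u.T, -1.0, 1.0)
    return np.arccos(cos)
```

Rotating all normals by the same rotation matrix leaves the returned angle matrix unchanged, which is the invariance the text relies on.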
As has already been mentioned, since the mesh model
is obtained after the implosion of a tessellated spherical
mesh over the object’s surface, the fitted mesh maintains
the original topology and the properties of the MWS struc-
ture. The object model therefore maintains the same topology
as a tessellated sphere. Nevertheless, this does not neces-
sarily imply that the method must work with simple shapes
or quasi-spherical shapes. In fact, a wide variety of shapes
have been used in our experiments. Admittedly, the approach
is currently applied to objects with genus zero (in other
words, without holes), but we believe that it would
be possible to extend it to non-genus zero objects. This
question is dealt with at the end of Sect. 5 as a future extension
of the method.
3 Interaction statement
3.1 Feature extraction
Bear in mind that having obtained the Simplified Normal Rep-
resentation, we can then extract the nodes from the surface
of the object which correspond to each DK. Nodes which
come from one DK are usually connected in mesh TR . For
example, in polyhedral shapes, nodes belonging to the same
DK define polygonal regions in TR (Fig. 5d). Note also that
points originating in the Direction Kernel are potential can-
didates to be taken as grasp-points. This first intuitive idea
will be developed in detail in this section.
First, and following a minimalist viewpoint, our goal
is to synthesize a two-contact interaction. In order to select
a set of grasping points, we define six parameters related
to each pair of contact points on the object model. These
parameters are as follows.
(i) τ: Force-closure requirement From the force-closure defi-
nition it follows that a grasp with at least two soft-
finger contacts is force-closure if and only if the angle β
between the two planes of contact is strictly less than the
aperture of the friction cone 2θ (see Fig. 6a). The same force-
closure condition can be found in [2]. The line crossing P
and Q lies inside the friction cones if β/2 < θ.
Let P, Q be two nodes of the model TR, and let up and uq be
their respective normals, θ be the friction cone angle,
and β be the angle between the two planes of contact. By
assuming a soft-finger contact model, we introduce a binary
parameter as follows:

τ(P, Q) = 1 if β ≤ 2θ;  0 if β > 2θ   (5)
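As an illustrative sketch (not the authors' code), Eq. (5) can be evaluated from the two contact normals, reading β as the angle between up and the reversed normal −uq, which is zero for a perfectly antipodal pair:

```python
import numpy as np

def tau(u_p, u_q, theta):
    """Binary force-closure check of Eq. (5) for a soft-finger contact
    pair. Assumption: beta is computed as the angle between u_p and -u_q,
    so an ideal antipodal pair (u_q = -u_p) gives beta = 0."""
    u_p = np.asarray(u_p, float)
    u_q = np.asarray(u_q, float)
    u_p = u_p / np.linalg.norm(u_p)
    u_q = u_q / np.linalg.norm(u_q)
    beta = np.arccos(np.clip(np.dot(u_p, -u_q), -1.0, 1.0))
    return 1 if beta <= 2 * theta else 0
```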
(ii) σ: DK membership Given the Simplified Normal Repre-
sentation Ψ of an object, we find the nodes which have been
Fig. 6 a The soft-finger contact model. b Regions associated with DKs
voted to any DK. If a node N belongs to a DK then N will be
a potential contact point. Thus, for a pair of contact points,
this feature can be formalized through the parameter σ .
Let P, Q be two nodes of the model TR, with P ∈ ik and Q ∈ jk.
We define:

σ(P, Q) = 1 if (i ≠ j and ik, jk ∈ Ψ);  0 if (i = j or ik ∉ Ψ or jk ∉ Ψ)   (6)
This constraint helps to discard points of extreme curva-
ture and to filter out isolated contact regions. It therefore
acts as a high-frequency filter on a 3D curvature variable.
Figure 6b shows two examples
of the regions associated with Direction Kernels, which are
depicted in different colors.
(iii) α: Facing angle Let P, Q be two nodes of the model
TR and let up, uq be their respective normals, defined by the
position of their three neighbors in the mesh. Let d be the
unit vector from P to Q. The angles αp and αq between d
and up, uq can be calculated by:

αp = arccos(−up^T · d),  αq = arccos(uq^T · d)   (7)
We define the facing angle for points P and Q as:
α(P, Q) = max(αp, αq) (8)
It is obvious that if α(P, Q) = 0, then P and Q ideally
face each other. In practice, we say that they face each other
if α(P, Q) ≤ δ, δ being a tolerance parameter that can
be associated with the grasping friction coefficient (that is,
related to the friction cone aperture).
Figure 7a shows pairs of points with different facing angle
values in which the normals have been plotted in blue and
green.
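Eqs. (7) and (8) can be sketched directly (illustrative code; the function name and the assumption that all inputs are 3-vectors are ours):

```python
import numpy as np

def facing_angle(P, Q, u_p, u_q):
    """Facing angle of Eqs. (7)-(8): alpha_p is the angle between -u_p
    and the unit vector d from P to Q; alpha_q between u_q and d.
    alpha(P, Q) = 0 means the two contact normals ideally face each other."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    d = Q - P
    d = d / np.linalg.norm(d)
    u_p = np.asarray(u_p, float) / np.linalg.norm(u_p)
    u_q = np.asarray(u_q, float) / np.linalg.norm(u_q)
    a_p = np.arccos(np.clip(np.dot(-u_p, d), -1.0, 1.0))
    a_q = np.arccos(np.clip(np.dot(u_q, d), -1.0, 1.0))
    return max(a_p, a_q)
```

Two opposed outward normals aligned with the segment PQ give a facing angle of zero; tilting one of them by 45 degrees raises the result to π/4.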
(iv) λ: Distance to the centroid Let O be the centroid of
the object and let P, Q be two nodes of the model TR .
The distance to the centroid is defined as the distance from
O to the straight line through P and Q. Formally:

λ(P, Q) = d(O, PQ)   (9)
λ ∈ [0, 1], since our model is normalized and centered on
the centroid of the object.
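Eq. (9) is the standard point-to-line distance, which can be sketched with a cross product (illustrative code, our own naming):

```python
import numpy as np

def dist_to_centroid(O, P, Q):
    """Eq. (9): distance from the centroid O to the line through P and Q,
    via the cross-product point-line distance formula."""
    O, P, Q = (np.asarray(x, float) for x in (O, P, Q))
    d = Q - P
    return float(np.linalg.norm(np.cross(d, O - P)) / np.linalg.norm(d))
```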
(v) χ: Cone Curvature at the contact-points The Cone
Curvature (CC) is defined as a measure of curvature in the
MWS models for which a specific CC order can be chosen.
Roughly speaking, as the order increases, the CC provides a
measure of curvature for a wider region around the selected
contact point. The range of CC values lies in the interval
[−π/2, π/2], going from concave to convex. In our case we
use CCs of order 2 and 3. In other words, we use the 2nd and
3rd wave-fronts around the nodes P and Q. A more detailed
description of the CC can be found in [19]. Figure 8 illustrates
the CC values for a set of orders following a color scale.
Given two nodes, P, Q, the parameter χ is defined as the
maximum value of order 2 and 3 CC of P and Q. In this case
we take the most adverse result for grasping purposes, which
corresponds to the highest value.
(vi) ρ: Safety distance This parameter is introduced to over-
come the difficulties involved in real grasping arising from:
– the uncertainty in the fingers’ positioning, which always
has an error range.
– the real size of the finger contact region, which is not
precise in a real case.
– the proximity of the contact points to the edges, which
may cause the gripper to slip.
In the MWS topology, the wave-front concept has also been
used to characterize the grasping security region. Thus, for a
node P , a security zone around P includes the complete con-
secutive WFs which belong to the same grasping region. For
a pair of grasping points P, Q, the safety distance parameter
is defined as:
ρ(P, Q) = min(ρ(P), ρ(Q)) (10)
Fig. 7 a Example of
contact-points and the facing
angle value. b Example of
distance to the centroid
Fig. 8 Illustration of second,
fifth and tenth CCs calculated
in three models
Fig. 9 Examples of security
regions for several grasping
points. The second example
shows that the pair of points is
close to the edges
ρ(P) being the distance from P to the nearest node of the
largest complete WF. That is:
ρ(P) = min(PP′, P′ ∈ WFmax(P))   (11)
Since our model is normalized in a unitary sphere, ρ ∈ [0, 2].
Figure 9 shows some examples of security regions for sev-
eral points of the object on which the last complete WF is
marked.
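Eqs. (10) and (11) can be sketched as follows. The sketch is ours: the set WFmax(P) of nodes of the largest complete wave-front, which in the paper comes from the MWS topology, is passed in as a plain array of coordinates:

```python
import numpy as np

def safety_distance(P, wf_nodes):
    """Eq. (11) sketch: rho(P) is the distance from P to the nearest node
    of the largest complete wave-front around P that stays inside P's
    grasping region. `wf_nodes` stands in for WF_max(P), which the MWS
    topology would provide (hypothetical input here)."""
    P = np.asarray(P, float)
    wf = np.asarray(wf_nodes, float)
    return float(np.min(np.linalg.norm(wf - P, axis=1)))

def pair_safety(P, wf_P, Q, wf_Q):
    """Eq. (10): the grasp pair takes the worse (smaller) of the two margins."""
    return min(safety_distance(P, wf_P), safety_distance(Q, wf_Q))
```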
3.2 Grasping evaluation
The parameters presented in the previous section are used
to define a grasping quality function. As a result, two opti-
mal contact points can eventually be selected for grasping
purposes.
The criterion of a grasping evaluation is implemented
through a non-linear function F depending on all the parame-
ters defined in the previous section. The function F is defined
as the product of six normalized functions, each of which
Fig. 10 a Graphical representation of functions Fα, Fλ, Fχ , Fρ . b Precision–recall curves for parameter selection experiments: b1 facing angle
parameter, b2 cone-curvature parameter, b3 safety distance parameter
is defined over one parameter. Each function models the
influence of each parameter on the grasping global quality
according to the specifications imposed on our environment.
Therefore, for each pair of grasping points (P, Q) the qual-
ity function is defined as follows:
F(P, Q) = Fτ · Fσ · Fα · Fλ · Fχ · Fρ   (12)
being:

Fτ = τ
Fσ = σ
Fα = e^(−aα), a > 1
Fλ = 1 − λ²
Fχ = e^(−bχ²), b > 1
Fρ = 1 − e^(−cρ), c > 1   (13)
Figure 10a plots the above functions. The following com-
ments can be made about their expressions:
– Fτ is an on/off function. The inclusion of Fτ implies that
only force-closure grasps are evaluated.
– Fσ is an on/off discrete function which controls the mem-
bership to DK.
– Fα drastically penalizes the increase of the facing angle.
The parameter a > 1 regulates the sensitivity of this
function.
– Fλ smoothly penalizes the distance from the centroid of
the object to the grasping points. In our case this is not an
essential factor.
– Fχ drastically controls grasping points with high values
of CC and favors those whose values are close to zero.
Parameter b > 1 modulates this action. For a point of
CC near to π/2 (high convexity) the grasp will be highly
unstable. However, if CC is near to −π/2, although the
grasping stability is ensured, these points will, in prac-
tice, be inaccessible owing to the gripper’s dimension.
We therefore introduce a square term for χ .
– Fρ considerably increases the security in the global grasp-
ing function. Likewise, the parameter c > 1 controls the
sensitivity of this characteristic.
Finally, the optimal grasp and the determination of its cor-
responding contact points are obtained by optimizing the
function F for all pairs of points in the mesh.
F(P, Q)opt = max {F(P, Q), ∀ P, Q ∈ TR}   (14)
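Eqs. (12)-(14) can be sketched as a single scoring function (illustrative code; the default a, b, c are the values reported later in this section, and the form Fλ = 1 − λ² is our reading of the original, garbled, expression):

```python
import numpy as np

def grasp_quality(tau, sigma, alpha, lam, chi, rho, a=3.9, b=5.8, c=4.1):
    """Eqs. (12)-(13): product of the six factor functions.
    tau, sigma are the binary on/off checks; alpha, lam, chi, rho are the
    facing angle, centroid distance, cone curvature and safety margin."""
    return (tau * sigma
            * np.exp(-a * alpha)         # F_alpha: penalizes the facing angle
            * (1.0 - lam ** 2)           # F_lambda: centroid distance (assumed form)
            * np.exp(-b * chi ** 2)      # F_chi: cone curvature near zero is best
            * (1.0 - np.exp(-c * rho)))  # F_rho: rewards a wide safety margin
```

Optimizing Eq. (14) then amounts to evaluating this product for every candidate pair (P, Q) and keeping the maximum.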
We conducted a set of experiments to determine the effect
of parameters a, b and c on algorithm performance. This
study was evaluated by means of the F-measure since it
integrates Precision and Recall concepts. Our idea is to
use Precision and Recall to compare a pair of associated
query/ground-truth contact-points. We have therefore adapted
these statistical measures to our environment, thus quanti-
fying the goodness of our method. Precision, Recall and
F-measure values can be calculated with the following
equations:
Precision = tp / (tp + fp),  Recall = tp / (tp + fn)   (15)

F-measure = 2 · Precision · Recall / (Precision + Recall)

where tp, fp, and fn are the numbers of true positives, false
positives, and false negatives.
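Eq. (15) and the harmonic mean used for tuning can be sketched directly (illustrative code, our own naming):

```python
def f_measure(tp, fp, fn):
    """Eq. (15): precision, recall, and their harmonic mean (F-measure),
    used here to tune the parameters a, b, c against ground-truth grasps."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f
```

For example, 8 true positives with 2 false positives and 2 false negatives give precision, recall and F-measure all equal to 0.8.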
The setting of parameters a, b and c was carried out by
using a model database with 25 objects on which the ground-
truth contact points were manually established in advance.
This procedure is as follows.
Given a ground truth, we must first determine the asso-
ciation between a pair of points obtained by our algorithm
and the ground truth. This association is made by matching
each calculated contact-point with the closest ground-truth
contact-point, only if both lie in the same DK region. A pair
of detected contact points which are both in the first wave
front of their associated ground-truth points thus counts as
a true positive. If either of them is situated in a farther wave
front, the pair counts as a false positive. Finally,
when the query and ground truth contact-points are located
in different DK regions, no association occurs and this case
is labeled as a false negative. Once the labeling has been
completed for all the objects in the database, the numbers
of true positives, false positives and false negatives are com-
puted, and Precision and Recall values are obtained from Eq. (15).
Finally, the parameter is set for the best F-measure.
Figure 10b highlights the precision–recall curves for those
experiments in which parameters a, b and c vary in the range
[0, 10]. Based on these experiments, we set the parameters
as follows: facing angle parameter a = 3.9, cone-curvature
parameter b = 5.8, and safety distance parameter c = 4.1.
3.3 Grasping planner
Aspects such as robot sensing, perception and recognition
must obviously be considered in a real robot interaction.
Figure 11 shows an overview of all the agents involved in
a grasping procedure. The phases in which our 3D model
representation plays an important role are highlighted. In
general, our interaction system can be summarized in the
set of blocks shown in Fig. 11.
Fig. 11 Outline of processes in an automatic grasp. The phases on
which this paper is focused are highlighted
The first stage deals with the sensory processing. In
this stage, we first analyze the information retrieved by a
range finder sensor and find the position of the object in
the scene. We used the DGI-BS (Depth Gradient Image
Based on Silhouette) technique, which can obtain both
a complete model and a partial model of an object in
occluded scenes. This property allows us to recognize and
pose objects in complex scenes in which only incomplete
surfaces of the objects are available. The complete DGI-
BS version synthesizes surface information (through depth
images) and shape information (through contours) of the
whole object in a single image which is smaller than 1 megapixel. Object pose estimation is carried out by means of a simple matching algorithm in the DGI-BS space which yields a point-to-point correspondence between scene and model.
Details of the DGI-BS technique can be found in references
[20,21].
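DGI-BS itself is defined in [20,21]; purely as a loose illustration of matching one descriptor image against a model database (the DGI-BS descriptor construction and the point-to-point correspondence step are not reproduced here), a normalized-correlation comparison could look like this:

```python
import numpy as np

def normalized_correlation(a, b):
    """Similarity in [-1, 1] between two equally sized descriptor images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def recognize(scene_descriptor, model_descriptors):
    """Return the index of the best-matching model and its score."""
    scores = [normalized_correlation(scene_descriptor, m)
              for m in model_descriptors]
    best = int(np.argmax(scores))
    return best, scores[best]
```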
The grasping synthesis technique presented in Sects. 3.1
and 3.2 represents an important part of the robot interac-
tion. However, there are more phases and tools which are
necessary to plan robot manipulation tasks in real envi-
ronments. For example, the scenario will determine the
robot’s path. The shape and size of the gripper and the
object’s surroundings may also influence the grasping selec-
tion. The optimal grasp obtained from our analysis might,
therefore, be unreliable as a result of collisions between
the gripper and the environment, or of kinematics problems (e.g., no inverse kinematics solution at the grasping location). In order to solve collision problems, our planner again takes information from 3D models of the whole
scenario. A short explanation of this procedure is shown in
Fig. 12.
The optimal grasp, which corresponds to the pair of grasp-
ing points (P, Q) that maximizes Eq. (12), is first simulated
in a virtual setting which includes the 3D models of scenario,
object, gripper and robot. Note that the object has been previ-
ously posed in the scenario thanks to the DGI-BS algorithm,
and that the coordinates of (P, Q) in the robot reference
system are therefore known.
123
Direction Kernels: using a simplified 3D model representation for grasping 361
Fig. 12 a Grasping planner diagram. b Path planning simulation tool
The inverse kinematics is calculated for (P, Q). If there is no kinematic solution, or no collision-free path through the environment, this wrist pose is rejected and the next pair of
candidate contact-points on the list is taken. This process
continues until a reliable wrist pose solution is obtained.
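This rejection loop can be sketched as follows; `inverse_kinematics` and `collision_free` are hypothetical callbacks standing in for the real kinematics solver and the 3D-model collision check:

```python
def select_feasible_grasp(candidates, inverse_kinematics, collision_free):
    """Walk the quality-ordered candidate list until a reliable pose is found.

    `candidates` are (P, Q) contact-point pairs sorted by the quality
    function F; `inverse_kinematics` returns a joint solution or None,
    and `collision_free` checks the resulting pose against the scenario.
    """
    for pq in candidates:
        joints = inverse_kinematics(pq)
        if joints is None:               # no kinematic solution: reject
            continue
        if not collision_free(joints):   # gripper would hit the scene
            continue
        return pq, joints                # first reliable wrist pose
    return None                          # no executable grasp in the list
```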
The next step consists of a path-planning algorithm which
guarantees a smooth and reliable path for the robot through
the scenario from the initial position to the grasping position. We used a path-planning method for manipulation environments based on interpolated walks. Our approach is based on two
points. Firstly, unlike most sampling techniques, we have
defined a non-random local sampling, which greatly reduces
the computational time. Secondly, a set of interpolated
walks is obtained through cubic splines which guarantee
the smoothness and continuity of the walks. This technique
allows us to solve the collisions and to generate a semi-
directed distribution of the search area in the C-space. This
last characteristic is used to avoid local minima. Figure 12b
shows the trajectories generated by the planner algorithm
using W =5, which signifies that each time a collision occurs,
the local planner generates 5 new walks. Complete informa-
tion concerning path planning can be found in [22].
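A minimal sketch of one such interpolated walk, using cubic splines over joint-space waypoints, is shown below; the non-random local sampling and the regeneration of W new walks on collision belong to the planner in [22] and are not reproduced here:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolated_walk(waypoints, samples=50):
    """Smooth C-space walk through joint-space waypoints via cubic splines.

    `waypoints` is an (n, dof) array of joint configurations; the spline
    guarantees the smoothness and continuity of the walk between them.
    """
    waypoints = np.asarray(waypoints, dtype=float)
    t = np.linspace(0.0, 1.0, len(waypoints))
    spline = CubicSpline(t, waypoints, axis=0)
    return spline(np.linspace(0.0, 1.0, samples))
```

The spline interpolates every waypoint exactly, so a walk that was sampled through collision-free configurations stays anchored to them while smoothing the motion in between.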
After all these grasping components have been calculated,
the grasp is simulated on a computer simulation tool and is
finally executed in the robotic cell. Implementation details
concerning both the simulation tools and the execution pro-
cess can be found in [23].
4 Experimental results
This section is devoted to showing the experimental results
in two parts. In the first part, the candidate grasp points for a
set of 3D free-shapes are presented, whereas part two shows
the performance of our method in a real experimental setup.
Fig. 13 a Examples of absolute optimal grasps. b Examples of optimal grasps constructed from pairs of regions associated with DKs. c Examples of grasps on free-form shapes
Table 1 Grasp evaluation test. For each object-location the table shows the grasping solution, the initial grasp and the stability test
Fig. 14 a Grasp-points obtained with our method. b Robot hand configuration found for the grasp-points. c Initial grasping. d Final grasping
4.1 Examples of grasp synthesis
The approach presented in this paper has been used to carry out experiments with real objects which have been digitized with a MINOLTA VI910 laser scanner. The
Simplified Normal Representations are obtained in the first
stage. The functions Fτ , Fσ , Fα, Fλ, Fχ , Fρ and the quality
function F are then defined. The highest quality grasping has
been determined by following two strategies.
The contact-point grasps are first sorted according to function F, that is, regardless of the DK to which they belong. In this strategy, the first best candidates usually
correspond to the same pairs of DKs. This might be a dis-
advantage when the grasp is executed since the chosen pair
of zones may be inaccessible to the gripper. In this situation,
the system must explore the sorted grasp list until it chooses
the first executable grasps associated with other DK pairs.
Figure 13a shows the optimal grasping points for several
objects. The magnitude of parameters σ, α, λ, χ, ρ can
be visually evaluated. Note that the regions associated with
the DKs appear in different colors.
The second strategy consists of establishing all possible
pairs of DKs and choosing the best grasp for each pair of
associated zones. This method favors grasp execution plan-
ning, since the zones associated with DKs which cannot be
accessed by the robot are discarded beforehand, thus opti-
mizing the search procedure for only the accessible pairs.
Figure 13b illustrates this option, taking the three best grasps
for each pair of DKs.
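The two ranking strategies can be sketched as follows; `quality` and `dk_pair` are illustrative callbacks, not the paper's actual functions:

```python
from collections import defaultdict

def rank_global(grasps, quality):
    """Strategy 1: sort all candidate grasps by quality F, ignoring DK pairs."""
    return sorted(grasps, key=quality, reverse=True)

def rank_per_dk_pair(grasps, quality, dk_pair, k=3):
    """Strategy 2: keep the best k grasps for each pair of DKs.

    `dk_pair(g)` maps a grasp to its pair of DK regions; pairs that the
    robot cannot access can simply be filtered out before calling.
    """
    groups = defaultdict(list)
    for g in grasps:
        groups[dk_pair(g)].append(g)
    return {pair: sorted(gs, key=quality, reverse=True)[:k]
            for pair, gs in groups.items()}
```

Strategy 2 trades a slightly worse global optimum for a search restricted to the accessible DK pairs, which is what favors grasp execution planning.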
This strategy has been applied to a wide set of 3D shapes.
The system not only works on polyhedral objects, but is also
capable of working on free-shape objects. Figure 13c shows
results for free-form objects. Note the increase in the num-
ber of DKs and the reduction of the safety region size for
the spinning top. Observe also that the grasping regions on
the dinosaur head are scattered owing to the surface’s curved
nature. However, note that a clear region can be perceived
in red at the base of the head, and that the marked security
distance does not surpass this region.
In order to test the quality of the grasps that our method
achieves, we have set up a test bench with real grasps. In this experiment we tested the stability of the grasps of a set of ten objects (see Table 1) in different locations. The objects
chosen for this experiment were made of wood (objects 1,
2, 3, 8), plastic (objects 6, 7, 9, 10) and cardboard (objects
4 and 5). For each object at each location we first calculated
the grasping points (Fig. 14a) and we then obtained the hand
configuration for grasping points, bearing in mind that only
those grasps with a robot kinematics solution were checked
(Fig. 14b). The real grasp was then executed by the robot
(Fig. 14c). Finally, the robot was moved in order to verify
the stability of the grasp. A grasp is considered to be sta-
ble if the object is still grasped after the movement and its
location in the gripper has not altered. We have used a pneu-
matic parallel gripper, setting the maximum force applied to
1,000 mN. Rubber pads are used on the gripper's fingers to satisfy the soft-finger contact assumption stated in Sect. 3.1. More
information about the experimental setting is included in the
following section.
Table 1 shows the results obtained during the grasp solu-
tion, initial grasp and final grasp for each object-location.
As can be seen, only one of the 26 grasps became unstable
after moving the robot. This indicates that the reliability of
our grasp method is higher than 96%.
4.2 Robot interaction in a robotic cell
The purpose of this section is to show the results of a set
of robot interaction tests, following the scheme shown in
Sect. 3.3 (Figs. 11 and 12). The interaction will consist of
the robot autonomously moving an object from position A
to position B. Figure 15a shows our experimental setup, in
which scene information is obtained via a structured light
sensor (Gray Range Finder) that provides the coordinates
of the visible surface. Recognition and positioning of the object was solved by following the method of Merchán et al. [20,21]. This technique can be used to determine the pose of a fixed object in the reference system of the scene.
Fig. 15 a Position of object in the experimental cell. b 3D data acquisition of a partial view and spatial localization and recognition algorithm. c Representation in the simulation environment. d Process of localization of grasping points. e Calculation of paths. f Simulated manipulation. g Real manipulation
Table 2 Objects and poses in the experimental test
Figure 15 shows the full sequence of steps in the auton-
omous manipulation tasks. Part a presents the experimental
setup, composed of the robot, the range finder sensor and the
scene. Note that the object is placed in front of the range sen-
sor. In part b (above) we show the point cloud of the object
in the scene coordinate system and part b (below) illustrates
the geometric model of the recognized object in the calcu-
lated pose. The model is superimposed onto the range data.
In order to carry out the grasping simulation, the synthetic
model is inserted into the virtualized setup (part c) and the
assignment of the best contact-points is made. Figure 15d
shows the contact-points and the Z-axis of the end-effector.
A simulation of the wrist pose and the gripper touching the
points is also presented. Part (e) concerns the path planning
for each DOF of the robot, and part (f) illustrates several
shots corresponding to the grasping simulation. Finally, the
grasping is executed in the real environment (part g).
The test was carried out on ten objects from our data-
base, and four random poses were taken for each of them.
The manipulator was a Staübli RX90 robot with a pneumatic
gripper and a gray range finder sensor that yielded the 3D
information from the scene. Table 2 shows the collection of
objects used in different poses, and Table 3 shows the results
according to the success in each stage (recognition, pose,
grasping synthesis, path planning and grasping execution).
The errors are classified as critical (in red) or non-critical (in
blue). A non-critical error is one in which an unforeseen circumstance occurs in the real execution but the rest of the execution is still completed. The errors
are numbered within the table to facilitate their subsequent
analysis in the paragraph concerning error analysis.
Figure 16 (top) shows the point cloud (in blue), provided by the range finder, superimposed on the complete model in the calculated pose. Figure 16 (bottom) shows details of the grasping execution stage.
Errors 1 and 2 have been overcome by improving the pre-
cision of the modeling of the gripper in the simulator. As was
confirmed in the virtual version, the opening of the gripper
was slightly greater than in the real version, which was why
these errors occurred. We intend to carry out research into
solutions to the errors caused by the grasping algorithm, such
as those in errors 3, 6, and 13, at a later date. The solution to
errors 5, 9, 11 and 12 can be found via a revision of the 3D
models and recognition algorithm used, or via a second shot
from another position of the sensor, which would allow us to
solve any ambiguous cases.
5 Conclusions and future research
3D object-model based grasping solutions are relatively new.
Only a few works on this subject, which do not really exploit
the power of the proposed models, can be found in jour-
nals and conferences. In this paper, we introduce a simplified
3D representation model which helps a robot manipulator to
make easy and reliable interactions with an object. We spe-
cifically propose a method which calculates the best pairs of
contact-points using MWS models.
A MWS model is generated through a three-connectivity
spherical mesh which is imploded on the object’s surface.
The topology imposed in the MWS model allows us to
define non-strict-local features which provide an extended
geometrical knowledge around the contact points. Thus, in
Table 3 Summary of results
Recognition | Pose | Grasping synthesis (O → A, A → B) | Path-planning (O → A, A → B) | Real robot interaction (Grasping, A → B)
Object 1-a OK OK OK OK OK OK ERROR-1 –
Object 1-b OK OK OK OK OK OK ERROR-2 OK
Object 1-c OK OK OK ERROR-3 – – – –
Object 1-d OK OK OK OK OK OK OK OK
Object 2-a OK OK OK OK OK OK OK OK
Object 2-b OK OK OK OK OK OK OK OK
Object 2-c OK OK OK OK OK OK OK OK
Object 2-d OK OK OK OK OK OK OK OK
Object 3-a OK OK ERROR-4 – – – – –
Object 3-b OK OK OK OK OK OK OK OK
Object 3-c OK OK OK OK OK OK OK OK
Object 3-d OK OK OK OK OK OK OK OK
Object 4-a OK OK OK OK OK OK OK OK
Object 4-b OK ERROR-5 – – – – – –
Object 4-c OK OK OK OK OK OK OK OK
Object 4-d OK OK OK OK OK OK OK OK
Object 5-a OK OK OK ERROR-6 – – – –
Object 5-b OK OK ERROR-7 – – – – –
Object 5-c OK OK ERROR-8 – – – – –
Object 5-d OK OK OK OK OK OK OK OK
Object 6-a OK OK OK OK OK OK OK OK
Object 6-b OK OK OK OK OK OK OK OK
Object 6-c OK OK OK OK OK OK OK OK
Object 6-d ERROR-9* – – – – – – –
Object 7-a OK OK OK OK OK OK OK OK
Object 7-b OK OK OK OK OK OK OK OK
Object 7-c OK OK ERROR-10 – – – – –
Object 7-d OK OK OK OK OK OK OK OK
Object 8-a OK OK OK OK OK OK OK OK
Object 8-b OK ERROR-11 – – – – – –
Object 8-c ERROR-12 – – – – – – –
Object 8-d OK OK OK OK OK OK OK OK
Object 9-a OK OK OK OK OK OK OK OK
Object 9-b OK OK OK OK OK OK OK OK
Object 9-c OK OK OK ERROR-13 – – – –
Object 9-d OK OK OK OK OK OK OK OK
Object 10-a OK OK OK OK OK OK OK OK
Object 10-b OK OK OK OK OK OK OK OK
Object 10-c OK OK OK OK OK OK OK OK
Object 10-d OK OK OK OK OK OK OK OK
this framework, the optimal selection of grasping points is
based on a set of geometrical properties.
Direction Kernels are presented in the paper as non-strict-
local features which discriminate zones of the object in which
reliable grasps can take place. DKs are used as a discriminative factor, and up to six other features are used in order to precisely establish a contact-point ranking, which is imposed by a grasping quality function.
This approach has been tested on a wide set of objects
in different poses in a real environment.
Fig. 16 Top Recognition and pose of Object 10. Bottom Grasping execution
The system successfully calculates the optimal grasping points which are
ordered according to the quality function F for two finger
grippers while maintaining the constraints of grasping secu-
rity. The correct performance of the method on free or smooth
shaped objects is original with regard to the grasping methods
normally used on polyhedral objects.
Future improvements to the method will be addressed in
various lines. Firstly, in order to improve the efficiency of the
algorithms and make them more robust, a more extended test
should be carried out in other scenarios and on other objects
over the next few months. Secondly, we aim to implement this
technique in an industrial environment in which a manipula-
tor will have to carry out an easy interaction with free-shapes.
For example, pick and place actions could be performed by
following the guidelines of our method, thus avoiding other
complex solutions based on purely kinematic and dynamic
requirements.
A further interesting aspect is that our strategy can be
extended to robotic grippers with more than two fingers. Note
that the DKs determine the potential safety grasping regions
on the object and that these are established whatever the num-
ber of contacts is. For more than two contact points, we would
simply need to maintain the requirements of force closure and
safety of the new grasps, which implies adapting the features
presented in Sect. 3.1. We consider this issue to be a future
development. In summary, we believe that this is a prom-
ising line for robot interaction that could be considered by
other researchers and developed for more complex grasping
problems in the future.
Finally, the extension of the method to non-genus-zero objects is a future research line on which we feel encouraged to work. For a non-genus-zero object, the method proposed here would
yield a coarse model of the object in which zones fitted to
the data and zones generated by holes could be clearly delimited. In this respect, like other works [17,4] which use
coarse models, a simplified model would be used to calculate
the Direction Kernels by taking the first type of zones. This
is an idea on which we are currently working. Shape restric-
tions and simple shapes (spheres, cylinders, cones, cubes,
parallelepipeds…) are used in most of the grasping references
cited in this paper [17,4,16], including simulated models and
topologies imposed by the users [6,9–11]. The method pre-
sented here is not restricted to a basic library of volumes. It
has been tested on a multitude of genus-zero shapes and we
believe that it could be adapted to non-genus-zero shapes in
the future.
Acknowledgments This research has been supported by the CICYT
Spanish projects DPI2009-14024-C02-01 and PEII11-0113-2590.
References
1. Cutkosky, M.R.: On grasp choice, grasp models and the
design of hands for manufacturing tasks. IEEE Trans. Robot.
Autom. 5(3), 269–279 (1989)
2. Nguyen, V.D.: Constructing stable grasps in 3D. In: IEEE Interna-
tional Conference on Robotics and Automation, vol. 4 (1987)
3. Ferrari, C., Canny, J.: Planning optimal grasps. IEEE ICRA, Nice,
pp. 2290–2295 (1992)
4. Borst, C., Fisher, M., Hirzinger, G.: A fast and robust grasp planner for arbitrary 3D objects. IEEE ICRA, Detroit, pp. 1890–1896 (1999)
5. Pollard, N.: Synthesizing grasps from generalized prototypes. IEEE
ICRA, Minneapolis, pp. 2124–2130 (1996)
6. Miller, A.T., Allen, P.K.: Examples of 3D Grasp Quality compu-
tations. In: IEEE International Conference on Robotics and Auto-
mation (ICRA), pp. 1240–1246 (1999)
7. Ponce, J. Faverjon, B.: On computing three finger force-closure
grasps of polyhedral objects. IEEE Trans. Robot. Autom. 11(6),
868–881 (1995)
8. Ding, D. Liu Y., Wang S.: Computing 3D optimal form-closure
grasps. IEEE ICRA, San Francisco, pp. 3573–3578 (2000)
9. Chinellato, E., Fisher, R.B., Morales, A., Pobil, A.P.: Ranking
planar grasp configurations for a three-finger hand. In: IEEE
International Conference on Robotics and Automation (ICRA),
pp. 1133–1139 (2003)
10. Prado-Gardini, R., Suárez, R.: Heuristic approach to construct
3-finger force-closure grasp for polyhedral objects. In: 7th IFAC
Symposium on Robot Control (SYROCO’2003), pp. 387–392
(2003)
11. Sanz, P.J., Marín, R., Dabic, S.: Improving 2D visually-guided grasping performance by means of new geometric computation techniques. In: Intelligent Manipulation and Grasping, Genova, Italy,
pp. 559–654 (2004)
12. Kawarazaki, N., Hesegawa, T., Nishihara, Z.: Grasp planning algo-
rithm for a multifingered hand-arm robot. In: IEEE ICRA, Leuven,
pp. 933–939 (1998)
13. Fernandez, C., Vicente, A., Reinoso, O., Aracil, R.: A decision tree
based approach to grasp synthesis. In: Intelligent Manipulation and
Grasping, Genova, Italy, pp. 486–491 (2004)
14. Morales, A., Chinellato, E., Sanz, P.J., Fagg, A.H., del Pobil, A.P.:
Vision based planar grasp synthesis and reliability assessment for a
multifinger robot hand: a learning approach. In: Intelligent Manip-
ulation and Grasping, pp. 566–570 (2004)
15. Pelossof, R., Miller, A., Allen, P., Jebara, T.: An SVM learning
approach to robotic grasping. In: IEEE International Conference
on Robotics and Automation (ICRA), pp. 3215–3218 (2004)
16. Michel, C., Remond, C., Perdereau, V., Drouin, M.: A robotic plan-
ner based on the natural grasping axis. In: Intelligent Manipulation
and Grasping, Genova, Italy, pp. 492–497 (2004)
17. Miller, A., Knoop, S., Christensen, H.I., Allen, P.K.: Automatic
grasp planning using primitives. In: IEEE International Conference
on Robotics and Automation (ICRA), pp. 1824–1829 (2003)
18. Adán, A., Cerrada, C., Feliu, V.: Modeling wave set: definition
and application of a new topological organization for 3D object
modeling. Comput. Vis. Image Underst. 79, 281–307 (2000)
19. Adán, A., Adán, M.: A flexible similarity measure for 3D shapes recognition. IEEE Trans. Pattern Anal. Mach. Intell. (Nov. 2004)
20. Merchán, P., Adán, A.: Exploration Trees on Highly Complex
Scenes: A New Approach for 3D Segmentation. Pattern Recognit.
40(7), 1879–1898 (2007)
21. Merchan, P., Adán, A., Salamanca, S.: Recognition of free-form
objects in complex scenes using DGI-BS models. In: Third Interna-
tional Symposium on 3D Data Processing, Visualization and Trans-
mission, 3DPVT, Chapel Hill, USA (2006)
22. Vázquez, A.S., Torres, R., Adán, A., Cerrada, C.: Path planning for
manipulation environments through interpolated walks. In: Inter-
national Conference on Intelligent Robots and Systems (IROS’06), Beijing, China (2006)
23. Vazquez, A.: Robot interaction in 3D environment: new approaches
on integration architectures, path planning and grasping. PhD
thesis, Castilla La Mancha University (2008)
Author Biographies
Antonio Adán received the MSc
degree in Physics from both Uni-
versidad Complutense of Madrid
and Universidad Nacional de
Educación a Distancia (UNED),
Spain in 1983 and 1990 respec-
tively. He received the Ph.D.
degree with honours in Industrial
Engineering. Since 1990 he has been
an Associate Professor at Castilla
La Mancha University (UCLM)
and leader of the 3D Visual Com-
puting Group. His research inter-
ests are in Pattern Recognition,
3D Object Representation, 3D
Segmentation, 3D Sensors and
Robot Interaction on Complex Scenes. During this time he has made
more than 100 international technical contributions in prestigious journals and conferences. From 2009 to 2010 he was a Visiting Faculty
at the Robotics Institute, Carnegie Mellon University, Pittsburgh, PA,
USA. Dr. Adán was awarded the Twenty Eighth Annual Pattern
Recognition Society Award corresponding to the best paper published
in Pattern Recognition Journal during 2001. Dr Adán is a member of
the Institute of Electrical and Electronic Engineers (IEEE).
Andrés S. Vázquez received
his MSc in Computer Science
from the University of Castilla-
La Mancha, Spain, in 2002. He
received his Ph.D. in Mecha-
tronics in 2008. He worked as
a research assistant for UNED
University from 2001 to 2002.
Since 2002, he has been work-
ing as a research assistant for the
Computing Vision and Robotics
Group at the School of Comput-
ing Science at Castilla La Man-
cha University (UCLM). In 2008/2009 he pursued a postdoctoral fellowship, funded by the Castilla-La Mancha regional government, at the Robotics Institute, Carnegie Mellon University. Currently he is an assistant professor and researcher,
and has been teaching Control Theory, Introduction to Computing and
Industrial Robotics since 2003.
Pilar Merchán received the
MSc degree in Physics and the
Ph.D. degree in Industrial Engi-
neering, from the Universidad de
Extremadura, Spain, in 1996 and 2007, respectively. She has worked as a researcher since 1997 and as an assistant professor since 2000 at Universidad de Extremadura.
She has made more than 50 international technical contributions in prestigious journals and conferences. Her research interests
are in complex scene segmenta-
tion and retrieval, pattern recog-
nition, 3D sensors and 3D scene modeling and representation.
Ruben Heradio completed his
Bachelor of Computer Science
at the Polytechnic University
of Madrid (UPM), Spain, in
2001. He subsequently joined the
Department of Software Engi-
neering and Control Systems at
the National University of Dis-
tance Education (UNED), Spain,
where he received his Ph.D. in
2007. Currently, he works as a teaching assistant in that department.