
Task-Space Inverse Dynamics

Quadratic-Programming based Control for Legged Robots

Andrea Del Prete

University of Trento

Introduction

This document explains the control framework known as Task-Space Inverse Dynamics (TSID).

TSID is a popular control framework for legged robots.

It all started in 1987 with a paper by Oussama Khatib: "A unified approach for motion and force control of robot manipulators: The operational space formulation" [6].

It was a very active research topic between 2004 and 2015 [11, 7, 8, 10, 4, 3]. It is not so active anymore (i.e., the problem is largely solved), but it remains widely used.


Schedule

1. Theory (≈ 1–1.5 hours)

2. Implementation (≈ 0.5–1 hour)

3. Coding (≈ 0.5–1 hour)

Options for coding:

• install TSID in Nicolas's VM (recommended):
  • go to https://github.com/stack-of-tasks/tsid/issues
  • open issue 28 (should be the latest)
  • execute the list of commands posted there
• use my 11 GB VM (prepared with VMware Fusion)
• install TSID and its dependencies on your machine:
  • TSID branch master → Pinocchio branch master (same as the robotpkg binaries)
  • TSID branch pinocchio-v2 → Pinocchio branch devel


Notation & Definitions

The state of the system is denoted x. The control inputs are denoted u.

The identity matrix is denoted I; the zero matrix is denoted 0. When needed, the size of the matrix is written as a subscript, e.g., I_3.

A system is fully actuated if the number of actuators is equal to the number of degrees of freedom (e.g., a manipulator).

A system is underactuated if the number of actuators is less than the number of degrees of freedom (e.g., a legged robot, a quadrotor).


Table of contents

1. Actuation Models

2. Task Models

3. Optimization-Based Control

4. Multi-Task Control

5. Computational Aspects


Actuation Models

Actuation Models

Everything starts with a model (not really, but here it does).

The appropriate model choice depends on both the robot and the task.

Let us discuss three models for the robot actuators:

• velocity source
• acceleration source
• torque source


Velocity Control

Assume motors are velocity sources.

• Good approximation for hydraulic actuators.
• Good approximation for electric motors only in certain conditions (e.g., industrial manipulators, not legged robots).

The robot state x is described by its configuration q.
The control inputs u are the robot velocities v_q.
The dynamic of a fully-actuated system is a simple integrator:

v_q = u


Acceleration Control

Assume motors are acceleration sources.

• Good approximation for electric motors as long as large contact forces are not involved.

The robot state x is described by its configuration q and its velocity v_q:

x ≜ (q, v_q)

The control inputs u are the robot accelerations v̇_q.
The dynamic of a fully-actuated system is a double integrator:

[ q̇ ; v̇_q ] = [ 0 I ; 0 0 ] [ q ; v_q ] + [ 0 ; I ] u
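As a sketch, the double-integrator model above can be simulated with a simple Euler loop (toy 2-DoF robot; all values invented for illustration):

```python
import numpy as np

# Acceleration-source actuation: u are the joint accelerations, and the
# state (q, vq) evolves as a double integrator. Minimal Euler simulation.
nq = 2
q = np.zeros(nq)
vq = np.zeros(nq)
u = np.array([1.0, -0.5])        # constant acceleration command
dt, T = 1e-3, 1.0

for _ in range(int(T / dt)):
    q = q + dt * vq              # q_dot  = vq
    vq = vq + dt * u             # vq_dot = u

print(q, vq)                     # vq ≈ u*T, q ≈ 0.5*u*T^2
```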


Torque Control

Assume motors are torque sources. This is a good approximation for electric motors.

The robot state x is described by its configuration q and its velocity v_q:

x ≜ (q, v_q)

The control inputs u are the motor torques τ.


Torque Control: Fully-Actuated Dynamic

The dynamic equation of a fully-actuated mechanical system is:

M(q) v̇_q + h(q, v_q) = τ + J(q)ᵀ f,

where M(q) ∈ R^{nv×nv} is the mass matrix, h(q, v_q) ∈ R^{nv} are the bias forces, τ ∈ R^{nv} are the joint torques, f ∈ R^{nf} are the contact forces, and J(q) ∈ R^{nf×nv} is the contact Jacobian.

The bias forces are sometimes decomposed into two components:

h(q, v_q) = C(q, v_q) v_q + g(q)

• C(q, v_q) v_q contains the Coriolis and centrifugal effects
• g(q) contains the gravity forces
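The equation above can be read as a recipe for inverse dynamics: given a desired acceleration v̇_q and the contact forces f, solve for τ. A toy numpy sketch (all matrices and numbers invented for illustration, not from any real robot):

```python
import numpy as np

# Toy 2-DoF inverse dynamics: tau = M(q) @ dvq + h - J.T @ f
M = np.array([[2.0, 0.1],
              [0.1, 1.0]])       # mass matrix (symmetric positive definite)
h = np.array([0.5, -9.8])        # bias forces (Coriolis/centrifugal + gravity)
J = np.array([[1.0, 0.0]])       # 1-D contact Jacobian
f = np.array([3.0])              # contact force
dvq_des = np.array([0.2, 0.0])   # desired joint accelerations

tau = M @ dvq_des + h - J.T @ f
print(tau)
```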


Torque Control: Under-Actuated Systems

Underactuated systems (such as legged robots) have fewer actuators than degrees of freedom (DoFs). Calling nva the number of actuators and nv the number of DoFs, we have nva < nv.

Assume the elements of q are ordered, q ≜ (q_u, q_a), where:

• q_u ∈ R^{nqu} are the passive (unactuated) joints,
• q_a ∈ R^{nqa} are the actuated joints.

Similarly, v_q ≜ (v_u, v_a), where v_u ∈ R^{nvu} and v_a ∈ R^{nva}.

S ≜ [ 0_{nva×nvu}  I_{nva} ] is the selection matrix associated to the actuated joints:

v_a = S v_q
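The selection matrix simply extracts the actuated part of v_q. A quick numpy check (6 passive floating-base DoFs and 3 actuated joints, dimensions chosen arbitrarily):

```python
import numpy as np

nvu, nva = 6, 3                                     # passive and actuated DoFs
S = np.hstack([np.zeros((nva, nvu)), np.eye(nva)])  # S = [0  I]

vq = np.arange(nvu + nva, dtype=float)              # full velocity vector
va = S @ vq                                         # actuated joint velocities
print(va)
```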


Torque Control: Under-Actuated Dynamic

The dynamic of an under-actuated mechanical system is:

M(q) v̇_q + h(q, v_q) = Sᵀ τ + J(q)ᵀ f,

where, contrary to the fully-actuated case, τ ∈ R^{nva}.

This dynamic is often decomposed into unactuated and actuated parts:

M_u(q) v̇_q + h_u(q, v_q) = J_u(q)ᵀ f
M_a(q) v̇_q + h_a(q, v_q) = τ + J_a(q)ᵀ f        (1)

where

M = [ M_u ; M_a ]    h = [ h_u ; h_a ]    J = [ J_u  J_a ]        (2)


Task Models

Task-Function Approach

IDEA: describe the task to be performed (i.e., the control objective) as a function to be minimized (similar to optimal control).

Without loss of generality, assume this function measures an error between the real and reference values of an output y ∈ R^m:

e(x, u, t) = y(x, u) − y*(t)
  (error)      (real)    (reference)

N.B.
Contrary to an optimal-control cost function, e does not depend on the state-control trajectory, but only on the instantaneous state-control values.


Task-Function Types

Consider three kinds of task functions:

• affine functions of the control inputs: e(u, t) = A_u u − a(t)
• nonlinear functions of the robot velocities: e(v_q, t) = y(v_q) − y*(t)
• nonlinear functions of the robot configuration: e(q, t) = y(q) − y*(t)

Issue
The control inputs u can be changed instantaneously, but that is not the case for the state x.

Solution
Impose a dynamic on the task function e(x, t) such that lim_{t→∞} e(x, t) = 0.


Velocity Task-Function

Consider a task function e(v_q, t) = y(v_q) − y*(t).

Let us impose a first-order linear dynamic on it:

ė = −K e
J v̇_q − ẏ* = −K e        (J ≜ ∂y/∂v_q is the task Jacobian)
J v̇_q = ẏ* − K e        (3)

We obtained an affine function of the accelerations v̇_q, with A_v ≜ J and a ≜ ẏ* − K e.

N.B.
We could also impose a nonlinear dynamic, but in practice a linear dynamic is fine for most cases.


Configuration Task-Function

Consider a task function e(q, t) = y(q) − y*(t).

Let us impose a second-order linear dynamic on it:

ë = −K e − D ė
J v̇_q + J̇ v_q − ÿ* = −K e − D ė
J v̇_q = ÿ* − J̇ v_q − K e − D ė        (4)

Again, we obtained an affine function of the accelerations v̇_q, with A_v ≜ J and a ≜ ÿ* − J̇ v_q − K e − D ė.

N.B.
We could also impose a nonlinear dynamic, but in practice a linear dynamic is fine for most cases.
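As a sketch of equation (4), here is how A_v and a could be assembled for a toy task, with hypothetical gains and Jacobians (all values invented for the example):

```python
import numpy as np

# Second-order task dynamics: J dvq = ddy_star - dJ vq - K e - D de
J = np.array([[1.0, 0.5],
              [0.0, 1.0]])       # task Jacobian
dJ = np.zeros((2, 2))            # its time derivative (assumed zero here)
K = 100.0 * np.eye(2)            # proportional gain
D = 20.0 * np.eye(2)             # derivative gain

vq = np.array([0.1, -0.2])       # current joint velocities
e = np.array([0.05, 0.0])        # task error y - y_star
de = J @ vq                      # error rate (constant reference assumed)
ddy_star = np.zeros(2)           # reference acceleration

Av = J
a = ddy_star - dJ @ vq - K @ e - D @ de

# Desired accelerations solving Av dvq = a (J invertible in this toy case):
dvq = np.linalg.solve(Av, a)
print(a, dvq)
```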


Task-Function Types: Summary

Task functions can depend either on u, or on x ≜ (q, v_q).

Functions of u must be affine.

Functions of x can be nonlinear, but cannot be directly imposed.

• For functions of v_q we can impose the first derivative.
• For functions of q we can impose the second derivative.

In any case, we end up with an affine function of v̇_q and u:

g(y) ≜ [ A_v  A_u ] [ v̇_q ; u ] − a = A y − a

where A ≜ [ A_v  A_u ] and y ≜ (v̇_q, u).


Optimization-Based Control

Control as an Optimization Problem

IDEA: formulate the control problem as an optimization problem (similar to optimal control).

The key elements are:

• state: x ≜ (q, v_q)
• control: u ≜ τ
• dynamic (no contacts): M v̇_q + h = Sᵀ τ
• task function to minimize: ||g(y)||² ≜ ||A y − a||²


Task-Space Inverse Dynamics (TSID)

Formulate an optimization problem to find the control inputs that minimize the task function:

minimize over y = (v̇_q, τ):    ||A y − a||²
subject to:    [ M  −Sᵀ ] y = −h        (5)

The equality constraints are affine and the cost function is convex quadratic → the problem is a Quadratic Program (QP).

N.B.
To be precise, the cost function is the squared 2-norm of an affine function, which is a special kind of convex quadratic function (the linear term Aᵀa lies in the range space of the Hessian AᵀA) → the problem is a Least-Squares Problem (LSP).
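Problem (5) can be solved as an equality-constrained least-squares program via its KKT system. A minimal numpy sketch with random toy data (real controllers use a dedicated QP solver; S = I here, i.e., a fully-actuated toy):

```python
import numpy as np

# Equality-constrained least squares: min ||Ay - a||^2  s.t.  By = b
rng = np.random.default_rng(0)
nv = 4                                 # DoFs (fully actuated => S = I)
ny = 2 * nv                            # decision variable y = (dvq, tau)

A = rng.standard_normal((10, ny))      # stacked task matrix
a = rng.standard_normal(10)
Mroot = np.eye(nv) + 0.1 * rng.standard_normal((nv, nv))
M = Mroot @ Mroot.T                    # symmetric positive-definite mass matrix
h = rng.standard_normal(nv)
B = np.hstack([M, -np.eye(nv)])        # dynamics constraint [M  -S^T]
b = -h

# KKT system:  [A^T A  B^T] [y  ]   [A^T a]
#              [B      0  ] [lam] = [b    ]
KKT = np.block([[A.T @ A, B.T],
                [B, np.zeros((nv, nv))]])
rhs = np.concatenate([A.T @ a, b])
y = np.linalg.solve(KKT, rhs)[:ny]
print(np.linalg.norm(B @ y - b))       # dynamics satisfied (≈ 0)
```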


TSID for Robots in Soft Contact

If the system is in contact with the environment, its dynamic must account for the contact forces f.

If the contacts are soft, the measured/estimated contact forces f̂ can easily be included:

minimize over y = (v̇_q, τ):    ||A y − a||²
subject to:    [ M  −Sᵀ ] y = −h + Jᵀ f̂        (6)


TSID for Robots in Rigid Contact

If the contacts are rigid, they constrain the motion. Let us model the rigid contact constraints as nonlinear functions:

c(q) = 0    ⇐⇒    contact points do not move

To express the constraints as functions of the problem variables, we must differentiate them twice:

J v_q = 0    ⇐⇒    contact-point velocities are null
J v̇_q + J̇ v_q = 0    ⇐⇒    contact-point accelerations are null

Introduce the contact forces and contact constraints in the optimization problem:

minimize over y = (v̇_q, f, τ):    ||A y − a||²
subject to:    [ J  0  0 ; M  −Jᵀ  −Sᵀ ] y = [ −J̇ v_q ; −h ]        (7)
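The block constraint in (7) stacks the contact and dynamics equations. A toy assembly in numpy, with invented dimensions (nv = 4 DoFs, nf = 2 contact rows, S = I for a fully-actuated toy):

```python
import numpy as np

# Assembling the rigid-contact constraint of problem (7):
# [J 0 0; M -J^T -S^T] y = [-dJ vq; -h], with y = (dvq, f, tau).
rng = np.random.default_rng(1)
nv, nf = 4, 2                        # DoFs and contact-constraint rows
M = np.eye(nv)                       # toy mass matrix
h = rng.standard_normal(nv)          # bias forces
J = rng.standard_normal((nf, nv))    # contact Jacobian
dJ = rng.standard_normal((nf, nv))   # its time derivative
S = np.eye(nv)                       # selection matrix (fully actuated toy)
vq = rng.standard_normal(nv)

B = np.block([[J, np.zeros((nf, nf)), np.zeros((nf, nv))],
              [M, -J.T, -S.T]])
b = np.concatenate([-dJ @ vq, -h])

# Any y with B y = b is dynamically consistent; check with least squares:
y = np.linalg.lstsq(B, b, rcond=None)[0]
print(B.shape, np.linalg.norm(B @ y - b))
```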


QP vs Pseudo-inverses

So far we have seen only Equality-Constrained LSPs (ECLSPs).

An unconstrained LSP can be solved using pseudo-inverses, for instance:

y* = argmin_y ||A y − a||²    ⇐⇒    y* = A† a

An ECLSP can also be solved using pseudo-inverses, for instance:

y* = argmin_{y : B y = b} ||A y − a||²    ⇐⇒    y* = B† b + N_B (A N_B)† (a − A B† b)

where N_B = I − B† B is the null-space projector of B.

QUESTION: if we can solve an ECLSP with pseudo-inverses, why should we use a QP solver?
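The null-space formula above can be checked numerically with random data (pinv denotes the Moore–Penrose pseudo-inverse):

```python
import numpy as np

# Null-space pseudo-inverse solution of an ECLSP:
# y* = B^+ b + N_B (A N_B)^+ (a - A B^+ b),  N_B = I - B^+ B
rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((4, n)); a = rng.standard_normal(4)
B = rng.standard_normal((2, n)); b = rng.standard_normal(2)

Bp = np.linalg.pinv(B)
NB = np.eye(n) - Bp @ B                      # null-space projector of B
y = Bp @ b + NB @ np.linalg.pinv(A @ NB) @ (a - A @ Bp @ b)

print(np.linalg.norm(B @ y - b))             # constraint satisfied (≈ 0)
```

The second test below checks optimality: at the minimum, the cost gradient projected onto the null space of B must vanish.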


Inequality Constraints

The main benefit of QP solvers (over pseudo-inverses) is that they can handle inequality constraints.

We can account for any inequality that is affine in the problem variables y, such as:

• joint torque bounds: τ^min ≤ τ ≤ τ^max
• (linearized) friction cones on the contact forces: B f ≤ 0
• joint position-velocity bounds (after a nontrivial transformation into acceleration bounds [1]): v_q^min ≤ v_q ≤ v_q^max
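With box constraints such as torque bounds, the problem is a general QP. As a didactic substitute for the dedicated solvers (active-set, interior-point) used in real controllers, here is a plain projected-gradient sketch on random toy data:

```python
import numpy as np

# Box-constrained least squares: min ||Ay - a||^2  s.t.  |y_i| <= tau_max,
# solved by projected gradient descent (illustrative, not a real-time solver).
rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((6, n)); a = rng.standard_normal(6)
tau_max = 0.5                          # torque bound

L = np.linalg.norm(A.T @ A, 2)         # Lipschitz constant of A^T(Ay - a)
y = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ y - a)           # half-gradient of the cost
    y = np.clip(y - grad / L, -tau_max, tau_max)  # step + project onto box

print(np.max(np.abs(y)))               # always within the bound
```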


Multi-Task Control

Multi-Objective Optimization

Complex robots are typically redundant with respect to the main task they must perform, for instance:

• a 7-DoF manipulator that has to control its end-effector placement (6 DoFs) has 1 DoF of redundancy
• an 18-DoF biped robot that has to control the placement of its two feet (12 DoFs) has 6 DoFs of redundancy

Redundancy can be used to execute secondary tasks, but how do we incorporate them in the optimization problem?


Weighted Multi-Objective Optimization

Assume the robot must perform N tasks, each defined by a task function

gi(y) = ||Ai y − ai||²,   i = 1 … N

Simplest strategy: sum all task functions using user-defined weights wi:

minimize_{y=(v̇q, f, τ)}   Σ_{i=1..N} wi gi(y)

subject to   [ J    0     0  ]       [ −J̇ vq ]
             [ M   −Jᵀ   −Sᵀ ]  y =  [ −h    ]

PROS Problem remains a standard, computationally-efficient LSP.

CONS Finding proper weights can be hard; too large/small weights can
lead to numerical issues.

24
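The weighted formulation is an equality-constrained least-squares problem, solvable in closed form via its KKT system. A minimal numpy sketch (my own illustration; `tasks`, `E`, `e` are hypothetical names for the weighted task list and the stacked dynamics constraint E y = e):

```python
import numpy as np

def weighted_tsid_qp(tasks, E, e):
    """Solve  min_y  sum_i w_i ||A_i y - a_i||^2   s.t.  E y = e
    via the KKT system of the equality-constrained least-squares problem.
    `tasks` is a list of (w_i, A_i, a_i) triples."""
    n, m = E.shape[1], E.shape[0]
    # Normal-equation data of the weighted sum of task costs
    H = sum(w * A.T @ A for (w, A, a) in tasks)
    g = sum(w * A.T @ a for (w, A, a) in tasks)
    # KKT system: [H E^T; E 0] [y; lambda] = [g; e]
    K = np.block([[H, E.T], [E, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([g, e]))
    return sol[:n]
```

In a real controller the KKT solve would be replaced by a QP solver once inequality constraints (cones, bounds) are added; this sketch only covers the equality-constrained core.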


Hierarchical Multi-Objective Optimization

Alternative strategy: order the task functions according to priority, that is:

• task 1 is infinitely more important than task 2
• ...
• task N−1 is infinitely more important than task N

Solve a sequence (cascade) of N optimization problems, for i = 1 … N:

g*i = minimize_{y=(v̇q, f, τ)}   gi(y)

      subject to   [ J    0     0  ]       [ −J̇ vq ]
                   [ M   −Jᵀ   −Sᵀ ]  y =  [ −h    ]

                   gj(y) = g*j   ∀ j < i

PROS Finding priorities is easier than finding weights.

CONS Solving several QPs can be too computationally expensive.

25
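For the unconstrained case (no dynamics or inequality constraints), the cascade reduces to the classic null-space projection scheme, which can be sketched in a few lines of numpy (my own illustration; real hierarchical solvers handle the constrained case, see e.g. [4]):

```python
import numpy as np

def hierarchical_lsq(tasks):
    """Lexicographic least-squares via null-space projection: each task is
    solved without degrading the optimum of the higher-priority ones.
    `tasks` is a list of (A_i, a_i), highest priority first. The dynamics
    equality constraint is omitted for brevity (it could be handled as a
    priority-0 task)."""
    n = tasks[0][0].shape[1]
    y = np.zeros(n)
    N = np.eye(n)  # null-space projector of all higher-priority tasks
    for A, a in tasks:
        AN = A @ N
        # Minimize ||A (y + N z) - a|| over z, then update y
        z, *_ = np.linalg.lstsq(AN, a - A @ y, rcond=None)
        y = y + N @ z
        # Shrink the projector: N <- N (I - (A N)^+ (A N))
        N = N @ (np.eye(n) - np.linalg.pinv(AN) @ AN)
    return y
```

Each pass optimizes the current task only inside the null space of the tasks above it, which is exactly the "infinitely more important" semantics of the slide.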


Computational Aspects

Computational Complexity of TSID

TSID needs to solve a QP at each control loop (embedded optimization,

same spirit as MPC).

→ Limited computation time (1-10 ms).

For nv DoFs, nva motors, and nf contact constraints the QP has:

• nv + nva + nf variables (≈ 70 for humanoid)

• nv + nf equality constraints (≈ 40 for humanoid)

• nv + nva + (4/3)·nf inequality constraints (assuming each friction cone
is approximated with a 4-sided pyramid, i.e. 4 inequalities per 3D contact
force)

Computational cost is dominated by the Hessian (Cholesky) decomposition:
O(n³), with n the number of variables.

QUESTIONS

• Can we solve such a problem in 1 ms?

• Is there a way to speed up computation?

26
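The counting above can be written down directly. A small helper (my own sketch; it assumes the (4/3)·nf pyramid-cone count from the slide, and the humanoid sizing in the example is only one plausible choice, not the slides' exact numbers):

```python
def tsid_qp_size(nv, nva, nf):
    """QP dimensions for TSID with nv DoFs, nva motors, and nf
    contact-constraint (force) variables. Assumes each 3D contact force
    contributes 4 pyramid inequalities, i.e. (4/3)*nf cone rows."""
    n_vars = nv + nva + nf
    n_eq = nv + nf
    n_ineq = nv + nva + (4 * nf) // 3
    return n_vars, n_eq, n_ineq

# Example sizing: 6D floating base + 30 actuated joints, and 12 contact
# force variables (a reduced contact model for two feet).
print(tsid_qp_size(nv=36, nva=30, nf=12))
```

Since the Cholesky cost grows as O(n³) in the number of variables, shrinking n_vars (next slides) pays off cubically.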


Reformulating Optimization Problem

IDEA: exploit the structure of the problem to make computation faster.

The equality constraints have a special structure:

[ J     0      0 ]  [ v̇q ]     [ −J̇ vq ]
[ Mu  −Juᵀ     0 ]  [ f  ]  =  [ −hu   ]
[ Ma  −Jaᵀ    −I ]  [ τ  ]     [ −ha   ]

The identity matrix is easy to invert → we can easily express τ as an
affine function of the other variables:

        [ I     0   ]               [ 0  ]
y   =   [ 0     I   ]  [ v̇q ]   +   [ 0  ]   =   D ȳ + d
        [ Ma  −Jaᵀ  ]  [ f  ]       [ ha ]

27
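Constructing D and d from the actuated rows of the dynamics is mechanical. A numpy sketch (my own; function and argument names are hypothetical, and Ja_T stands for Jaᵀ):

```python
import numpy as np

def condense_actuated_rows(Ma, Ja_T, ha):
    """Build D, d such that y = (dv, f, tau) = D @ [dv; f] + d, using
    tau = Ma dv - Ja^T f + ha from the actuated dynamics rows.
    Assumed shapes: Ma (na, nv), Ja_T (na, nf), ha (na,)."""
    na, nv = Ma.shape
    nf = Ja_T.shape[1]
    D = np.block([
        [np.eye(nv),          np.zeros((nv, nf))],
        [np.zeros((nf, nv)),  np.eye(nf)],
        [Ma,                  -Ja_T],
    ])
    d = np.concatenate([np.zeros(nv + nf), ha])
    return D, d
```

Given the reduced solution ȳ = (v̇q, f), the torque command is then recovered for free as the last na entries of D ȳ + d.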


Reformulating Optimization Problem

Original problem:

minimize_y   ||A y − a||²

subject to   B y ≤ b

             [ J     0      0 ]       [ −J̇ vq ]
             [ Mu  −Juᵀ     0 ]  y =  [ −hu   ]
             [ Ma  −Jaᵀ    −I ]       [ −ha   ]

Use y = D ȳ + d to reformulate the problem [5]:

minimize_ȳ   ||A D ȳ + A d − a||²

subject to   B D ȳ ≤ b − B d

             [ J     0  ]  [ v̇q ]     [ −J̇ vq ]
             [ Mu  −Juᵀ ]  [ f  ]  =  [ −hu   ]

We have removed nva variables and nva equality constraints.

28
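The substitution y = D ȳ + d transforms the cost and inequality data purely by matrix products, so it is a one-liner to condense the problem (my own sketch; the function name is hypothetical):

```python
import numpy as np

def condense_qp(A, a, B, b, D, d):
    """Substitute y = D ybar + d into  min ||A y - a||^2  s.t.  B y <= b,
    yielding  min ||(A D) ybar - (a - A d)||^2  s.t.  (B D) ybar <= b - B d.
    The remaining equality constraints are condensed the same way."""
    return A @ D, a - A @ d, B @ D, b - B @ d
```

Residuals are preserved exactly: for any ȳ, the original cost and constraint slacks evaluated at y = D ȳ + d equal the condensed ones at ȳ, so the two QPs have the same minimizers.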


Reformulating Optimization Problem: Can We Do Better?

Can we improve even more?

In theory, yes:

• for floating-base robots, remove the first 6 variables of v̇q by exploiting
the structure of the first 6 columns of Mu

• remove (either all [7, 9] or some [2]) force variables by projecting the
dynamics into the null space of J

BUT these tricks either limit the expressiveness of the problem, or lead to
small improvements (while making the software more complex).

My opinion: probably not worth it!

29


From Euclidean Spaces to Lie Groups

So far we have assumed an output function y(x, u) ∈ R^m.

What if instead y(x, u) ∈ SE(3)? (very common in practice for y(q))

SOLUTION Represent SE(3) elements as homogeneous matrices y ∈ R^{4×4}
and redefine the error function:

e(q, t) = log(y*(t)⁻¹ y(q)),

where log is the inverse of the matrix exponential (i.e. of the exponential
map): it transforms a displacement into a twist.

30
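This error function can be implemented with the standard closed-form SE(3) logarithm. A self-contained numpy sketch of my own (in practice one would call a rigid-body library such as Pinocchio rather than hand-rolling this; valid for relative rotations with angle < π):

```python
import numpy as np

def so3_log(R):
    """Axis-angle vector omega of a rotation matrix R (angle in [0, pi))."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-10:
        return np.zeros(3)
    w_hat = (R - R.T) * (theta / (2.0 * np.sin(theta)))
    return np.array([w_hat[2, 1], w_hat[0, 2], w_hat[1, 0]])

def se3_error(y_des, y):
    """e = log(y_des^{-1} y) as a 6D twist (v, omega), for homogeneous
    matrices y_des, y in R^{4x4}."""
    T = np.linalg.solve(y_des, y)      # y_des^{-1} @ y
    R, p = T[:3, :3], T[:3, 3]
    omega = so3_log(R)
    theta = np.linalg.norm(omega)
    if theta < 1e-10:
        return np.concatenate([p, omega])
    w_hat = np.array([[0.0, -omega[2], omega[1]],
                      [omega[2], 0.0, -omega[0]],
                      [-omega[1], omega[0], 0.0]])
    # V^{-1} maps the translation part of T to the linear part of the twist
    V_inv = (np.eye(3) - 0.5 * w_hat
             + (1.0 / theta**2
                - (1.0 + np.cos(theta)) / (2.0 * theta * np.sin(theta)))
             * (w_hat @ w_hat))
    return np.concatenate([V_inv @ p, omega])
```

The resulting 6D error feeds the same PD-style task dynamics used for Euclidean outputs, which is why the rest of the TSID machinery is unchanged.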


References i

[1] A. Del Prete. Joint Position and Velocity Bounds in Discrete-Time
Acceleration/Torque Control of Robot Manipulators. IEEE Robotics and
Automation Letters, 3(1), 2018.

[2] A. Del Prete, N. Mansard, F. Nori, G. Metta, and L. Natale. Partial Force
Control of Constrained Floating-Base Robots. In Intelligent Robots and
Systems (IROS 2014), IEEE International Conference on, 2014.

[3] A. Del Prete, F. Nori, G. Metta, and L. Natale. Prioritized Motion-Force
Control of Constrained Fully-Actuated Robots: "Task Space Inverse
Dynamics". Robotics and Autonomous Systems, 63:150–157, 2015.

31

References ii

[4] A. Escande, N. Mansard, and P.-B. Wieber. Hierarchical Quadratic
Programming: Fast Online Humanoid-Robot Motion Generation.
International Journal of Robotics Research, 33(7):1006–1028, 2014.

[5] A. Herzog, N. Rotella, S. Mason, F. Grimminger, S. Schaal, and
L. Righetti. Momentum control with hierarchical inverse dynamics on a
torque-controlled humanoid. Autonomous Robots, 40(3):473–491, 2016.

[6] O. Khatib. A unified approach for motion and force control of robot
manipulators: The operational space formulation. IEEE Journal on
Robotics and Automation, 3(1):43–53, Feb 1987.

32

References iii

[7] M. Mistry, J. Buchli, and S. Schaal. Inverse dynamics control of floating
base systems using orthogonal decomposition. 2010 IEEE International
Conference on Robotics and Automation, (3):3406–3412, May 2010.

[8] L. Righetti, J. Buchli, M. Mistry, M. Kalakrishnan, and S. Schaal.
Optimal distribution of contact forces with inverse dynamics control.
The International Journal of Robotics Research, Jan 2013.

[9] L. Righetti, J. Buchli, M. Mistry, and S. Schaal. Inverse dynamics
control of floating-base robots with external constraints: A unified view.
2011 IEEE International Conference on Robotics and Automation, pages
1085–1090, May 2011.

33

References iv

[10] L. Saab, O. E. Ramos, N. Mansard, P. Soueres, and J.-Y. Fourquet.
Dynamic Whole-Body Motion Generation under Rigid Contacts and other
Unilateral Constraints. IEEE Transactions on Robotics, 29(2):346–362,
2013.

[11] L. Sentis and O. Khatib. Synthesis of whole-body behaviors through
hierarchical control of behavioral primitives. International Journal of
Humanoid Robotics, 2(4):505–518, 2005.

34

Historical Notes: Dynamically-Consistent Pseudo-Inverses

Early literature on TSID is rich in misleading claims, supported by
convoluted math.

For instance, when using pseudo-inverses, it was believed that the only
way to ensure consistency with the dynamics was to use M⁻¹ as the
weight matrix.

This has been shown not to be the case, but not everybody is aware
of/agrees with this, so... beware!

35

