Primbs 1
Receding Horizon Control Methods in
Financial Engineering
James A. Primbs
Management Science and Engineering
Stanford University
CRFMS Seminar, UCSB
May 21, 2007
Primbs 2
Outline

Motivation and Background
Receding Horizon Control
Semi-Definite Programming Formulation
Theoretical Properties
Numerical Examples
Conclusions and Future Work
Primbs 3
Asset Price and Wealth Dynamics in Discrete Time:

Let S_i(k) denote the price of asset i at time k.

S_i(k+1) = (1 + μ_i + w_i(k)) S_i(k),   i = 1, ..., n

where μ_i is the expected return and w_i is an iid noise term with

E[w] = 0,   E[w w^T] = Σ,   w = [w_1, ..., w_n]^T
Primbs 4
Asset Price and Wealth Dynamics in Discrete Time:

S_i(k+1) = (1 + μ_i + w_i(k)) S_i(k),   i = 1, ..., n

Let u_i(k) denote the dollar amount invested in S_i(k).
Assume that we can invest in l ≤ n of the S_i(k).
Let r_f be a risk free rate of interest.
Let W(k) denote your wealth at time k. Then

W(k+1) = (1 + r_f) W(k) + sum_{i=1}^{l} (μ_i + w_i(k) - r_f) u_i(k)
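As a quick illustration, the price and wealth recursions above can be simulated directly. This is a sketch with illustrative parameter values; the returns, noise level, risk-free rate, and allocation rule below are my assumptions, not figures from the talk:

```python
import numpy as np

# Simulate the asset and wealth recursions above (illustrative parameters).
rng = np.random.default_rng(0)

n, l = 3, 3                               # n assets, all l = n of them traded
mu = np.array([0.08, 0.06, 0.05]) / 52    # weekly expected returns
sig = 0.03                                # weekly noise std (iid across assets here)
rf = 0.05 / 52                            # weekly risk-free rate

S = np.full(n, 100.0)                     # prices S_i(0)
W = 100.0                                 # wealth W(0)

for k in range(52):
    u = np.full(l, W / (l + 1))           # some dollar allocation u_i(k)
    w = sig * rng.standard_normal(n)      # noise w(k), E[w] = 0
    # W(k+1) = (1 + r_f) W(k) + sum_i (mu_i + w_i(k) - r_f) u_i(k)
    W = (1 + rf) * W + np.sum((mu[:l] + w[:l] - rf) * u)
    # S_i(k+1) = (1 + mu_i + w_i(k)) S_i(k)
    S = (1 + mu + w) * S
```

Any other allocation rule can be substituted for the equal-dollar split used here.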
Primbs 5
Asset Price and Wealth Dynamics in Discrete Time:

S_i(k+1) = (1 + μ_i + w_i(k)) S_i(k),   i = 1, ..., n

W(k+1) = (1 + r_f) W(k) + sum_{i=1}^{l} (μ_i + w_i(k) - r_f) u_i(k)
Primbs 6
Finance Problem #1: Index Tracking

S_i(k+1) = (1 + μ_i + w_i(k)) S_i(k),   i = 1, ..., n

W(k+1) = (1 + r_f) W(k) + sum_{i=1}^{l} (μ_i + w_i(k) - r_f) u_i(k)

An index is a weighted average of n stocks:

I(k) = sum_{i=1}^{n} a_i S_i(k)

The index tracking problem is to trade l < n of the stocks and a
risk free bond in order to track the index as "closely as possible".

One possible measure of how closely we track the index
is given by an infinite horizon discounted quadratic cost:

min_u E[ sum_{k=0}^{∞} α^k (W(k) - I(k))^2 ]
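The discounted tracking cost can be estimated by Monte Carlo for any fixed allocation policy. A minimal sketch, assuming illustrative per-step dynamics parameters and a discount factor alpha; every name and value below is my assumption:

```python
import numpy as np

def tracking_cost(policy, alpha=0.99, n_paths=100, horizon=200, seed=0):
    """Monte Carlo estimate of E[ sum_k alpha^k (W(k) - I(k))^2 ] under a
    given allocation policy u = policy(S, W)."""
    rng = np.random.default_rng(seed)
    n, l = 5, 3                          # 5 stocks in the index, trade 3
    mu = np.full(n, 0.001)               # per-step expected returns
    sig = 0.02                           # per-step noise std
    rf = 0.0005                          # per-step risk-free rate
    a = np.full(n, 0.2)                  # index weights (equally weighted)
    total = 0.0
    for _ in range(n_paths):
        S = np.full(n, 100.0)
        W = 100.0
        for k in range(horizon):
            I = a @ S                    # index level I(k)
            total += alpha**k * (W - I)**2
            u = policy(S, W)             # dollar allocation u(k)
            w = sig * rng.standard_normal(n)
            W = (1 + rf) * W + (mu[:l] + w[:l] - rf) @ u
            S = (1 + mu + w) * S
    return total / n_paths

# e.g. split all wealth equally across the three traded stocks:
cost = tracking_cost(lambda S, W: np.full(3, W / 3))
```

The receding horizon controller developed later replaces the fixed lambda policy here.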
Primbs 7
Finance Problem #1: Index Tracking

S_i(k+1) = (1 + μ_i + w_i(k)) S_i(k),   i = 1, ..., n

W(k+1) = (1 + r_f) W(k) + sum_{i=1}^{l} (μ_i + w_i(k) - r_f) u_i(k)

min_u E[ sum_{k=0}^{∞} α^k (W(k) - I(k))^2 ]

Limits on short selling:   u_i(k) ≥ 0
Limits on wealth invested:   u_i(k) ≤ ν_i W(k)
Value-at-Risk:   Pr( W(k) ≥ I(k) - δ ) ≥ 1 - ε
etc…
Primbs 8
Finance Problem #2: Dynamic Hedging of Options

[Figure: simulated stock paths from time 0 to time N, and the call option payoff at strike K as a function of S(N)]

S_i(k+1) = (1 + μ_i + w_i(k)) S_i(k),   i = 1, ..., n

W(k+1) = (1 + r_f) W(k) + sum_{i=1}^{l} (μ_i + w_i(k) - r_f) u_i(k)

Trade a portfolio to replicate an option payoff (S(N) - K)^+.
Primbs 9
Finance Problem #2: Dynamic Hedging of Options

[Figure: simulated stock paths from time 0 to time T, and the call option payoff at strike K as a function of S(N)]

S_i(k+1) = (1 + μ_i + w_i(k)) S_i(k),   i = 1, ..., n

W(k+1) = (1 + r_f) W(k) + sum_{i=1}^{l} (μ_i + w_i(k) - r_f) u_i(k)

Trade a portfolio to replicate an option payoff (S(N) - K)^+.
Finance Problem #2: Dynamic Hedging of Options

S_i(k+1) = (1 + μ_i + w_i(k)) S_i(k),   i = 1, ..., n

W(k+1) = (1 + r_f) W(k) + sum_{i=1}^{l} (μ_i + w_i(k) - r_f) u_i(k)

One approach is a mean-variance criterion

max_u  E[W(N)] - γ Var[W(N)]

which, with the option payoff included, becomes

max_u  E[ W(N) - (S(N) - K)^+ ] - γ Var[ W(N) - (S(N) - K)^+ ]

for a well chosen γ!

It could also be subject to short selling constraints,
transaction costs, etc.
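For a fixed hedging strategy, this mean-variance objective can be evaluated from simulated terminal samples. A small sketch; the objective formula follows the slide, but the sample arrays below are fabricated placeholders rather than real hedging paths:

```python
import numpy as np

def hedging_objective(WN, SN, K, gamma):
    """Sample analogue of E[W(N) - (S(N)-K)^+] - gamma * Var[W(N) - (S(N)-K)^+]."""
    err = WN - np.maximum(SN - K, 0.0)   # replication error at expiry
    return err.mean() - gamma * err.var()

# Toy check on fabricated terminal samples:
rng = np.random.default_rng(1)
WN = 10.0 + 0.5 * rng.standard_normal(1000)
SN = 10.0 * np.exp(0.3 * rng.standard_normal(1000))
val = hedging_objective(WN, SN, K=10.0, gamma=1.0)
```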
Linear Systems with Multiplicative Noise

S_i(k+1) = (1 + μ_i + w_i(k)) S_i(k),   i = 1, ..., n

W(k+1) = (1 + r_f) W(k) + sum_{i=1}^{l} (μ_i + w_i(k) - r_f) u_i(k)

Stacking the prices and wealth into a state vector

x(k) = [ S_1(k), ..., S_n(k), W(k) ]^T

puts the dynamics in the form of a linear system with multiplicative noise:

x(k+1) = A x(k) + B u(k) + sum_{i=1}^{q} ( C_i x(k) + D_i u(k) ) w_i(k)
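Writing out which entries of A, B, C_i, D_i the embedding produces can make this concrete. A sketch, assuming x = [S_1, ..., S_n, W]^T with the dynamics above and one noise term per asset; the helper name is mine:

```python
import numpy as np

def build_state_space(mu, rf, l):
    """Assemble A, B and the noise matrices C_j, D_j for the state
    x = [S_1, ..., S_n, W]^T under the price/wealth dynamics above."""
    n = len(mu)
    A = np.diag(np.append(1.0 + mu, 1.0 + rf))   # price growth, risk-free growth
    B = np.zeros((n + 1, l))
    B[n, :] = mu[:l] - rf                        # wealth row: (mu_i - rf) u_i
    C, D = [], []
    for j in range(n):                           # one noise term w_j per asset
        Cj = np.zeros((n + 1, n + 1))
        Cj[j, j] = 1.0                           # w_j(k) S_j(k) in price row j
        Dj = np.zeros((n + 1, l))
        if j < l:
            Dj[n, j] = 1.0                       # w_j(k) u_j(k) in the wealth row
        C.append(Cj)
        D.append(Dj)
    return A, B, C, D

# e.g. two assets, trade only the first:
A, B, C, D = build_state_space(np.array([0.01, 0.02]), 0.005, 1)
```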
Primbs 12
I will study an expectation constrained SLQ type problem:

min_u  E[ sum_{k=0}^{∞} [x(k); u(k)]^T M [x(k); u(k)] ]

subject to:

x(k+1) = A x(k) + B u(k) + sum_{i=1}^{q} ( C_i x(k) + D_i u(k) ) w_i(k)

where [x; u] denotes the stacked state and control vector and

M = [ Q  0 ]
    [ 0  R ],     Q ⪰ 0,  R ≻ 0

with quadratic expectation constraints of the form

E[ [x(k); u(k)]^T H [x(k); u(k)] + f^T [x(k); u(k)] ] ≤ g

Ideally, I would also consider chance constraints

P( a^T x(k) + b^T u(k) ≤ c ) ≥ 1 - d

but these are too difficult computationally…
Primbs 13
In this talk, I will present:
Newly developed SDP based stochastic receding horizon
control methods for constrained linear systems with
multiplicative noise.
Theoretical and practical challenges in stochastic
formulations of constrained RHC.
Numerical results for financial engineering problems.
Primbs 14
References:

Unconstrained SLQ Problem: Willems and Willems '76; El Ghaoui '95; Ait Rami and Zhou '00; Yao et al. '01; Kleinman '69; Wonham '67, '68; McLane '71.

Stochastic Receding Horizon Control:
Dynamic resource allocation: Castanon and Wohletz '02
Portfolio optimization: Herzog et al. '06; Herzog '06
Dynamic hedging: Meindl and Primbs '04
Supply chains: Seferlis and Giannelos '04
Control theory: van Hessem and Bosgra '01, '02, '03, '04; Couchman et al. '06; Li et al. '02; Yan and Bitmead '05; Primbs '07
Stochastic programming approach: Felt '03
Operations management: Chand, Hsu, and Sethi '02
Primbs 15
Outline

Motivation and Background
Receding Horizon Control
Semi-Definite Programming Formulation
Theoretical Properties
Numerical Examples
Conclusions and Future Work
Primbs 16
Ideally, one would solve the optimal infinite horizon problem...

Instead, receding horizon control repeatedly solves finite horizon problems: solve a horizon-N problem, implement the initial control action, then resolve a new horizon-N problem from the resulting state.
Primbs 17
Instead, receding horizon control repeatedly solves finite horizon problems...

So, RHC involves 2 steps:
1) Solve finite horizon optimizations on-line
2) Implement the initial control action
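The two-step loop can be sketched generically, with the on-line optimization left as a placeholder. Both callables below stand in for the SDP machinery developed later in the talk; the names are mine:

```python
def receding_horizon(x0, solve_finite_horizon, step, T):
    """Generic receding horizon loop.  At each time: (1) solve a finite
    horizon problem from the current state, (2) implement only the initial
    control action, then resolve from the resulting state."""
    x = x0
    traj = [x0]
    for k in range(T):
        u_plan = solve_finite_horizon(x)   # step 1: solve on-line
        x = step(x, u_plan[0])             # step 2: apply u_plan[0] only
        traj.append(x)
    return traj

# Toy usage: the "solver" plans to cancel half of a scalar state.
traj = receding_horizon(8.0,
                        lambda x: [-0.5 * x] * 5,   # horizon-5 plan
                        lambda x, u: x + u,         # deterministic step
                        T=4)
```

Any finite-horizon solver, such as the SDPs below, can be dropped in for the lambda.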
Primbs 18
Outline

Motivation and Background
Receding Horizon Control
Semi-Definite Programming Formulation
Theoretical Properties
Numerical Examples
Conclusions and Future Work
Primbs 19
Instead, receding horizon control repeatedly solves finite horizon problems...

So, RHC involves 2 steps:
1) Solve finite horizon optimizations on-line
2) Implement the initial control action
Primbs 20
In receding horizon control, we consider a finite horizon optimal control problem. Let N denote the horizon length. From the current state x(k), let:

u_N(i|k), 0 ≤ i ≤ N-1, denote the predicted control
x_N(i|k), 0 ≤ i ≤ N, denote the predicted state

Note that: x_N(0|k) = x(k)

We will impose an open loop plus linear feedback structure:

u_N(0|k) = ū_N(0|k)
u_N(i|k) = ū_N(i|k) + K_N(i|k) [ x_N(i|k) - x̄_N(i|k) ]

where x̄_N(i|k) = E_{x(k)}[ x_N(i|k) ].
Primbs 21
We will use quadratic expectation constraints (instead of probabilistic constraints) in the on-line optimizations. Constraints will be in the form of quadratic expectation constraints:

E_{x(k)}[ [x_N(i|k); u_N(i|k)]^T H [x_N(i|k); u_N(i|k)] + f^T [x_N(i|k); u_N(i|k)] ] ≤ g,   0 ≤ i ≤ N-1

E_{x(k)}[ x_N(i|k)^T H^X x_N(i|k) + (f^X)^T x_N(i|k) ] ≤ g^X,   1 ≤ i ≤ N

Note that state-only constraints are not imposed at time i = 0.

We denote this constraint set by C(x(k), N).
Primbs 22
Receding Horizon On-Line Optimization:

V_N(x(k)) = min_{ū_N(·|k), K_N(·|k)}  E_{x(k)}[ sum_{i=0}^{N-1} [x_N(i|k); u_N(i|k)]^T M [x_N(i|k); u_N(i|k)] + x_N(N|k)^T Φ x_N(N|k) ]

subject to:

x_N(i+1|k) = A x_N(i|k) + B u_N(i|k) + sum_{j=1}^{q} ( C_j x_N(i|k) + D_j u_N(i|k) ) w_j(i)

E_{x(k)}[ x_N(N|k)^T Φ x_N(N|k) ] ≤ α

u_N(0|k) = ū_N(0|k),   u_N(i|k) = ū_N(i|k) + K_N(i|k) [ x_N(i|k) - x̄_N(i|k) ]

( x_N(·|k), u_N(·|k) ) ∈ C(x(k), N)

We call this problem P(x(k), N). We denote the optimizing predicted state and control sequence by u*_N(i|k) and x*_N(i|k).

The receding horizon control policy for state x(k) is u*_N(0|k).
Primbs 23
This problem has a linear-quadratic structure to it. Hence, it essentially depends on the mean and variance of the controlled dynamics.

For convenience, let ū(i) = ū_N(i|k), x̄(i) = x̄_N(i|k), and

Σ(i) = E_{x(k)}[ ( x(i) - x̄(i) )( x(i) - x̄(i) )^T ]

The mean satisfies:

x̄(i+1) = A x̄(i) + B ū(i)

The covariance satisfies:

Σ(i+1) = ( A + B K(i) ) Σ(i) ( A + B K(i) )^T
       + sum_{j=1}^{q} ( C_j x̄(i) + D_j ū(i) )( C_j x̄(i) + D_j ū(i) )^T
       + sum_{j=1}^{q} ( C_j + D_j K(i) ) Σ(i) ( C_j + D_j K(i) )^T
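These two recursions translate directly into code. A sketch, assuming the matrices are supplied as numpy arrays and C, D are lists of the q noise matrices; function and variable names are mine:

```python
import numpy as np

def propagate(xbar, ubar, Sigma, K, A, B, C, D):
    """One step of the predicted mean and covariance under the open loop
    plus linear feedback policy, transcribing the two recursions above."""
    xbar_next = A @ xbar + B @ ubar
    M = A + B @ K
    Sigma_next = M @ Sigma @ M.T
    for Cj, Dj in zip(C, D):
        v = Cj @ xbar + Dj @ ubar
        Sigma_next += np.outer(v, v)          # (C_j xbar + D_j ubar)(...)^T
        Nj = Cj + Dj @ K
        Sigma_next += Nj @ Sigma @ Nj.T       # (C_j + D_j K) Sigma (...)^T
    return xbar_next, Sigma_next

# quick check on random data (q = 1):
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)); B = rng.standard_normal((2, 1))
C = [rng.standard_normal((2, 2))]; D = [rng.standard_normal((2, 1))]
K = rng.standard_normal((1, 2))
xb, Sg = propagate(np.ones(2), np.ones(1), 0.1 * np.eye(2), K, A, B, C, D)
```

By construction the propagated covariance stays symmetric positive semidefinite.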
Primbs 24
Receding Horizon On-Line Optimization as an SDP (for q=1):

V_N(x(k)) = min_{ū(·|k), K(·|k)}  sum_{i=0}^{N-1} Tr( M P(i) ) + Tr( Φ P^X(N) )

subject to:

Tr( Φ P^X(N) ) ≤ α

x̄(i+1) = A x̄(i) + B ū(i)

[ Σ(i+1)                 A Σ(i) + B U(i)     C Σ(i) + D U(i)     C x̄(i) + D ū(i) ]
[ ( A Σ(i) + B U(i) )^T  Σ(i)                0                   0                ]  ⪰ 0
[ ( C Σ(i) + D U(i) )^T  0                   Σ(i)                0                ]
[ ( C x̄(i) + D ū(i) )^T 0                   0                   1                ]

[ P(i)                  [ Σ(i); U(i) ]      [ x̄(i); ū(i) ] ]
[ [ Σ(i)  U(i)^T ]      Σ(i)                0              ]  ⪰ 0,     P^X(i) = [ I  0 ] P(i) [ I  0 ]^T
[ [ x̄(i)^T  ū(i)^T ]   0                   1              ]

Tr( H P(i) ) + f^T [ x̄(i); ū(i) ] ≤ g,   0 ≤ i ≤ N-1

Tr( H^X P^X(i) ) + (f^X)^T x̄(i) ≤ g^X,   1 ≤ i ≤ N

We call this SDP RHC(x(k), N).
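One way to see that the first LMI encodes the covariance recursion is a numerical Schur-complement check: with the change of variables U = KΣ, setting Σ(i+1) to the exact covariance update makes the Schur complement of the lower-right block zero, so the block matrix is positive semidefinite. A sanity-check sketch of my own, on random data with q = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nu = 2, 1
A = rng.standard_normal((nx, nx)); B = rng.standard_normal((nx, nu))
C = rng.standard_normal((nx, nx)); D = rng.standard_normal((nx, nu))
K = rng.standard_normal((nu, nx))
xbar = rng.standard_normal((nx, 1)); ubar = rng.standard_normal((nu, 1))
L = rng.standard_normal((nx, nx))
Sigma = L @ L.T + np.eye(nx)                  # Sigma(i) > 0
U = K @ Sigma                                 # change of variables U = K Sigma

M1 = A @ Sigma + B @ U                        # = (A + B K) Sigma
M2 = C @ Sigma + D @ U                        # = (C + D K) Sigma
v = C @ xbar + D @ ubar
Si = np.linalg.inv(Sigma)
Sigma_next = M1 @ Si @ M1.T + M2 @ Si @ M2.T + v @ v.T   # exact update, q = 1

big = np.block([
    [Sigma_next, M1, M2, v],
    [M1.T, Sigma, np.zeros((nx, nx)), np.zeros((nx, 1))],
    [M2.T, np.zeros((nx, nx)), Sigma, np.zeros((nx, 1))],
    [v.T, np.zeros((1, nx)), np.zeros((1, nx)), np.eye(1)],
])
min_eig = np.linalg.eigvalsh(big).min()       # ~0: PSD with equality
```

Relaxing Σ(i+1) upward keeps the matrix feasible, which is what lets the SDP treat the recursion as a constraint.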
Primbs 25
By imposing structure in the on-line optimizations we are able to:

Formulate them as semi-definite programs.
Use closed loop feedback over the prediction horizon.
Incorporate constraints in an expectation form.
Primbs 26
Instead, receding horizon control repeatedly solves finite horizon problems...

So, RHC involves 2 steps:
1) Solve finite horizon optimizations on-line
2) Implement the initial control action
Primbs 27
Outline

Motivation and Background
Receding Horizon Control
Semi-Definite Programming Formulation
Theoretical Properties
Numerical Examples
Conclusions and Future Work
Primbs 28
Instead, receding horizon control repeatedly solves finite horizon problems...

So, RHC involves 2 steps:
1) Solve finite horizon optimizations on-line
2) Implement the initial control action
Primbs 29
Three important questions:

Stability: Does the receding horizon approach guarantee asymptotic stability?

Performance: What can be said about the performance of the receding horizon strategy versus the optimal infinite horizon strategy?

Constraint Satisfaction: What can be said about the satisfaction of constraints over the infinite horizon under the receding horizon strategy?
Primbs 30
To characterize stability properties for hard constraints, we:

1) Impose a terminal cost at the end of the finite horizon:

V_N(x(k)) = min_{ū_N(·|k)}  E_{x(k)}[ sum_{i=0}^{N-1} [x_N(i|k); u_N(i|k)]^T M [x_N(i|k); u_N(i|k)] + x_N(N|k)^T Φ x_N(N|k) ]

where x^T Φ x is a stochastic Lyapunov function.

2) Impose a terminal constraint at the end of the horizon.

Terminal Constraint:   E_{x(k)}[ x_N(N|k)^T Φ x_N(N|k) ] ≤ α

3) Address feasibility issues…
Primbs 31
A feasibility issue:

Since we have stochastic dynamics, there is some probability that we may encounter infeasible states for the on-line optimization problems.

Two ways to deal with this infeasibility:
1) Stop the system when/if it goes infeasible.
2) Define a control policy when infeasible states are encountered.

We take the second route...
Primbs 32
Consider the following policy. Define:

X_N = { x : V_N(x) ≤ β }

Let τ be the stopping time:

τ = min { k : x(k) ∉ X_N }

And define the control policy:

ũ(k) = u*_N(0|k) 1_{k < τ} + ū(k) 1_{k ≥ τ}

where u*_N(0|k) is the receding horizon based policy with finite horizon cost V_N(x), and ū(k) is the optimal unconstrained policy with infinite horizon cost V^un(x). That is, the RHC policy is used while the state remains in X_N, and we switch to the optimal unconstrained policy once it leaves.
Primbs 33
More Feasibility Issues

Suppose the on-line problem RHC(x, N) is solved at time k from state x, and the state moves to y at time k+1. Is the tail sequence [ u*_N(i|k), i = 1, ..., N-1 ] feasible for RHC(y, N-1)?

F_x describes the set of states y for which the solution at x will be feasible at the next time step. The probability of this set will play a role in our stability results!
Primbs 34
Stability:

Theorem: Assume that for all x ∈ X_N we have

x^T Q x - β P_x( G_x^c ) ≥ φ( ||x|| )

where φ is a class K function and G_x = F_x ∩ X_N.

Then under the control policy ũ, x(k) → 0 with probability one.

The proof involves showing that V(k) given by

V(k) = V_N( x(k) ) 1_{k < τ} + V^un( x(k) ) 1_{k ≥ τ}

is a stochastic Lyapunov function (i.e. is a supermartingale).
Primbs 35
Performance:

Theorem: For all x ∈ X_N,

0 ≤ V^rhc(x) ≤ V_N(x) + P_x( G_{x(k)}^c for some k ) V^un(x)

where V^rhc(x) is the infinite horizon cost associated with ũ.

Note that when feasibility is not an issue (i.e. P_x(F_x) = 1), then the finite horizon cost is a bound on the infinite horizon performance.

This result follows from the stability result.
Primbs 36
Constraint Satisfaction:

Theorem: Let β > 0 be such that for x ∈ X_N we have x^T Q x - β P_x( F_x^c ) ≥ 0.

Then

P_x( sup_{k ≥ 0} V_N( x(k) ) ≥ β ) ≤ V_N(x) / β

Thus, the random paths stay in X_N with probability at least 1 - V_N(x)/β, and hence the receding horizon policy is applied over the infinite horizon with at least this probability. Furthermore, under the pure receding horizon policy u*(0|k) we have that

P_x( [x(k); u*(0|k)]^T H [x(k); u*(0|k)] + f^T [x(k); u*(0|k)] ≤ g for all k ) ≥ 1 - V_N(x)/β

A similar result can be stated for the state-only constraints, but with respect to modified feasibility sets that impose state-only constraints at i = 0.
Primbs 37
To circumvent these feasibility issues, one may use a soft constraint approach.

Replace the hard constraint

E_{x(k)}[ [x_N(i|k); u_N(i|k)]^T H [x_N(i|k); u_N(i|k)] + f^T [x_N(i|k); u_N(i|k)] ] ≤ g

with a soft constraint

E_{x(k)}[ [x_N(i|k); u_N(i|k)]^T H [x_N(i|k); u_N(i|k)] + f^T [x_N(i|k); u_N(i|k)] ] ≤ g + ε,   ε ≥ 0

and penalize the violation ε in the objective:

min_{ū_N(·|k), ε}  E_{x(k)}[ sum_{i=0}^{N-1} [x_N(i|k); u_N(i|k)]^T M [x_N(i|k); u_N(i|k)] ] + ρ ε

This can be solved as an SDP, and an always feasible terminal constraint can be used to guarantee stability with probability 1.

(See Primbs '07 CDC submission)
Primbs 38
Outline

Motivation and Background
Receding Horizon Control
Semi-Definite Programming Formulation
Theoretical Properties
Numerical Examples
Conclusions and Future Work
Primbs 39
Example Problem:

Dynamics:

x(k+1) = A x(k) + B u(k) + ( C x(k) + D u(k) ) w(k)

A = [ 1.02  0.1  ]      B = [ 0.05  0.01 ]
    [ 0.1   0.98 ]          [ 0     0.01 ]

C = [ 0.04  0    ]      D = [ 0.04  0.008 ]
    [ 0     0.04 ]          [ 0     0.004 ]

Cost Parameters:

Q = [ 2  0 ]      R = [ 5  0  ]
    [ 0  1 ]          [ 0  20 ]
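The uncontrolled dynamics, u(k) = 0, can be simulated in a few lines; this mirrors the uncontrolled-dynamics simulations shown on the later slides. A sketch using the parameter values above as reconstructed here (the exact entries should be checked against the original slides):

```python
import numpy as np

# Simulate x(k+1) = A x(k) + C x(k) w(k), i.e. u(k) = 0, with scalar noise w(k).
A = np.array([[1.02, 0.10],
              [0.10, 0.98]])
C = np.array([[0.04, 0.00],
              [0.00, 0.04]])

rng = np.random.default_rng(0)
x = np.array([-0.3, 1.2])        # initial condition used in the later plots
path = [x]
for k in range(50):
    x = A @ x + (C @ x) * rng.standard_normal()
    path.append(x)
path = np.array(path)            # (51, 2) array of states
```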
Primbs 40
Example Problem:

State Constraint:   E[ 2 x_1(k) + x_2(k) ] ≤ 2.3

Optimal Unconstrained Cost to go:   V( x(k) ) = x(k)^T Φ x(k),

Φ = [ 41.0331   5.7929 ]
    [  5.7929  54.3889 ]

Terminal Constraint:   E_{x(k)}[ x(N|k)^T Φ x(N|k) ] ≤ 45

Horizon:   N = 10
Primbs 41
Level Sets of the State and Terminal Constraints.
Primbs 42
75 Random Simulations from Initial Condition [-0.3,1.2].
Uncontrolled Dynamics
Primbs 43
75 Random Simulations from Initial Condition [-0.3,1.2].
Optimal Unconstrained Dynamics
Primbs 44
75 Random Simulations from Initial Condition [-0.3,1.2].
RHC Dynamics (Hard Constraint)
Primbs 45
RHC Dynamics (Soft Constraint)
75 Random Simulations from Initial Condition [-0.3,1.2].
Primbs 46
Ex #1: Index Tracking Example:
Five Stocks: IBM, 3M, Altria, Boeing, AIG
Means, variances and covariances estimated from 15 years of
weekly data beginning in 1990 from Yahoo! finance.
Initial prices and wealth assumed to be $100.
Risk free rate of 5%
Horizon of N = 5 with time step equal to 1 month.
Primbs 47
Ex #1: Index Tracking Example:

Five Stocks: IBM, 3M, Altria, Boeing, AIG

The index is an equally weighted average of the 5 stocks.
Initial value of the index is assumed to be $100.
The index will be tracked with a portfolio of the first 3 stocks: IBM, 3M, and Altria.
We place a constraint that the fraction invested in 3M cannot exceed 10%.
Primbs 48
Index – Green, Optimal Unconstrained - Cyan, RHC - Red
Primbs 49
Optimal Unconstrained Allocations
Blue – IBM, Red – 3M, Green - Altria
RHC Allocations
Primbs 50
Index – Green, Optimal Unconstrained - Cyan, RHC - Red
Primbs 51
Optimal Unconstrained Allocations
Blue – IBM, Red – 3M, Green - Altria
RHC Allocations
Primbs 52
Ex #2: Dynamic Hedging
Dynamic Hedging of a European Call Option under Trans. Costs
Initial price of stock is $10. Strike price of $10.
Risk free rate of 5%. Expiration in 2 weeks.
RHC horizon of N = 5 with time step equal to 1 day.
Initial wealth equal to Black-Scholes price.
Underlying asset follows geometric Brownian motion

dS = μ S dt + σ S dz

with μ = 9.16% and σ = 30.66% (estimated from IBM).
Different levels of proportional transaction costs.
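The initial wealth is the Black-Scholes price of the call. A self-contained sketch of that computation with the parameters above; reading "2 weeks" as 2/52 of a year is my interpretation of the setup:

```python
import math

def black_scholes_call(S0, K, r, sigma, T):
    """Black-Scholes price of a European call."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

# Initial wealth: at-the-money call, S0 = K = 10, r = 5%, sigma = 30.66%, 2 weeks.
W0 = black_scholes_call(S0=10.0, K=10.0, r=0.05, sigma=0.3066, T=2.0 / 52.0)
```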
Primbs 54
To implement transaction costs, adjust the wealth dynamics:

W(k+1) = (1 + r_f) [ W(k) - sum_{i=1}^{l} Δ_i(k) ] + sum_{i=1}^{l} (μ_i + w_i(k) - r_f) u_i(k)

where the transaction cost term

Δ_i(k) = c_i | u_i(k) - (1 + μ_i) u_i(k-1) |

is an approximation of the expected transaction cost.
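As a small sketch, the approximate expected transaction cost can be computed per asset as follows. The helper name, and the exact form of Δ_i(k) as reconstructed above, are assumptions:

```python
import numpy as np

def expected_transaction_cost(u_now, u_prev, mu, c):
    """Approximate expected transaction cost per asset,
    Delta_i(k) = c_i |u_i(k) - (1 + mu_i) u_i(k-1)|:
    the previous dollar position is grown at its expected return before
    being compared with the new position (c_i = proportional cost rate)."""
    u_now, u_prev = np.asarray(u_now), np.asarray(u_prev)
    return np.asarray(c) * np.abs(u_now - (1.0 + np.asarray(mu)) * u_prev)

# e.g. growing a $100 position to $110 at 1% proportional cost:
delta = expected_transaction_cost([110.0], [100.0], [0.0], [0.01])
```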
Primbs 55
Recall the dynamic hedging picture: simulated stock paths from time 0 to time N, and the call option payoff at strike K plotted against S(N).
Primbs 56
Black-Scholes vs. RHC, 0% Transaction Cost (plot of W(N) vs. S(N))
Primbs 57
Black-Scholes vs. RHC, 1% Transaction Cost (plot of W(N) vs. S(N))
Primbs 58
Black-Scholes vs. RHC, 2% Transaction Cost (plot of W(N) vs. S(N))
Primbs 59
Black-Scholes vs. RHC, 3% Transaction Cost (plot of W(N) vs. S(N))
Primbs 60
Black-Scholes vs. RHC, 4% Transaction Cost (plot of W(N) vs. S(N))
Primbs 61
Black-Scholes vs. RHC, 5% Transaction Cost (plot of W(N) vs. S(N))
Primbs 62
Leland vs. RHC, 0% Transaction Cost (plot of W(N) vs. S(N))
Primbs 63
Leland vs. RHC, 1% Transaction Cost (plot of W(N) vs. S(N))
Primbs 64
Leland vs. RHC, 2% Transaction Cost (plot of W(N) vs. S(N))
Primbs 65
Leland vs. RHC, 3% Transaction Cost (plot of W(N) vs. S(N))
Primbs 66
Leland vs. RHC, 4% Transaction Cost (plot of W(N) vs. S(N))
Primbs 67
Leland vs. RHC, 5% Transaction Cost (plot of W(N) vs. S(N))
Primbs 68
This can easily be applied to…
a 5 dimensional basket option with
transaction costs,
short selling constraints,
restrictions on which assets can be traded,
etc.
Primbs 69
Ex #2: Dynamic Hedging of a Basket Option:

Five Stocks: IBM, 3M, Altria, Boeing, AIG

The basket is an equally weighted average of the 5 stocks.
Initial value of all stocks and strike is assumed to be $10.
Initial wealth of the hedging portfolio is the Method of Moments price (uses first two moments of the basket in the Black-Scholes formula).
Different levels of proportional transaction costs.
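A sketch of the Method of Moments price: match the first two moments of the basket at expiry to a lognormal, then apply the Black-Scholes formula. The function name and the risk-neutral drift (each stock grown at rate r) are my assumptions:

```python
import math
import numpy as np

def mom_basket_call(S0, a, Sigma, K, r, T):
    """Method of Moments basket call price: match the first two moments of
    B(T) = sum_i a_i S_i(T) to a lognormal, then use Black-Scholes.
    S_i are lognormal with drift r and log-return covariance Sigma."""
    a = np.asarray(a, dtype=float)
    S0 = np.asarray(S0, dtype=float)
    F = float(a @ S0) * math.exp(r * T)      # first moment E[B(T)]
    w = a * S0
    # second moment: E[S_i S_j] = S_i S_j e^{2rT} e^{Sigma_ij T} (elementwise exp)
    m2 = math.exp(2.0 * r * T) * float(w @ np.exp(np.asarray(Sigma) * T) @ w)
    s2 = math.log(m2 / F**2)                 # matched lognormal log-variance
    s = math.sqrt(s2)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(F / K) + 0.5 * s2) / s
    d2 = d1 - s
    return math.exp(-r * T) * (F * N(d1) - K * N(d2))

# Equally weighted basket of 5 stocks at $10, strike $10, 2 weeks, with an
# illustrative diagonal covariance (sigma = 30.66% per stock):
price = mom_basket_call([10.0] * 5, [0.2] * 5, 0.3066**2 * np.eye(5),
                        K=10.0, r=0.05, T=2.0 / 52.0)
```

With independent stocks the matched basket volatility is roughly sigma/sqrt(5), so the basket call is cheaper than the single-stock call, as expected from diversification.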
Primbs 70
5 Dimensional Basket Option: Method of Moments vs. RHC, 0% Transaction Cost (plot of W(N) vs. basket value at N)
Primbs 71
5 Dimensional Basket Option: Method of Moments vs. RHC, 1% Transaction Cost (plot of W(N) vs. basket value at N)
Primbs 72
5 Dimensional Basket Option: Method of Moments vs. RHC, 2% Transaction Cost (plot of W(N) vs. basket value at N)
Primbs 73
Outline

Motivation and Background
Receding Horizon Control
Semi-Definite Programming Formulation
Theoretical Properties
Numerical Examples
Conclusions and Future Work
Primbs 74
Conclusions
By exploiting and imposing problem structure, constrained stochastic
receding horizon control can be implemented in a computationally
tractable manner for a number of problems.
Preliminary results suggest that stochastic receding horizon control
can address a number of financial engineering problems with success.
As future work, we are looking at formulations that address a wider
class of dynamics while remaining computationally tractable.
Primbs 75
References
J. Primbs, "Portfolio Optimization Applications of Stochastic Receding Horizon Control", to appear, ACC 2007.

J. Primbs, "Stochastic Receding Horizon Control of Constrained Linear Systems with State and Control Multiplicative Noise", to appear, ACC 2007.

J. Primbs, "A Soft Constraint Approach to Stochastic Receding Horizon Control", submitted to CDC 2007.

J. Primbs, S. Mudchanatongsuk, and W. Wong, "A Receding Horizon Control Formulation of European Basket Option Hedging", submitted to FEA 2007.