Probabilistic Reasoning over Time
Russell and Norvig: ch 15
Temporal Probabilistic Agent
[Diagram: a temporal probabilistic agent interacting with its environment through sensors and actuators over time steps t1, t2, t3, …]
Time and Uncertainty
The world changes; we need to track and predict it.
Examples: diabetes management, traffic monitoring.
Basic idea: copy state and evidence variables for each time step.
Xt – set of unobservable state variables at time t
e.g., BloodSugart, StomachContentst
Et – set of evidence variables at time t
e.g., MeasuredBloodSugart, PulseRatet, FoodEatent
Assumes discrete time steps
States and Observations
The process of change is viewed as a series of snapshots, each describing the state of the world at a particular time.
Each time slice involves a set of random variables indexed by t:
1. the set of unobservable state variables Xt
2. the set of observable evidence variable Et
The observation at time t is Et = et for some set of values et
The notation Xa:b denotes the set of variables from Xa to Xb
Stationary Process/Markov Assumption
Markov Assumption: Xt depends on some bounded subset of the previous states.
First-order Markov process: P(Xt|X0:t-1) = P(Xt|Xt-1)
kth-order: depends on the previous k time steps
Sensor Markov assumption: P(Et|X0:t, E0:t-1) = P(Et|Xt)
Assume a stationary process: the transition model P(Xt|Xt-1) and the sensor model P(Et|Xt) are the same for all t.
In a stationary process, the changes in the world state are governed by laws that do not themselves change over time.
Which model is right is determined by the domain. E.g., if a particle is executing a random walk along the x-axis, changing its position by ±1 at each time step, then using the x-coordinate as the state gives a first-order Markov process.
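The random-walk example can be sketched in a few lines of Python (an illustrative script, not from the text; the helper name random_walk is mine):

```python
import random

def random_walk(steps, x0=0, seed=42):
    """Simulate a first-order Markov random walk on the integers.

    The next position depends only on the current position:
    P(Xt | X0:t-1) = P(Xt | Xt-1), with Xt = Xt-1 +/- 1.
    """
    random.seed(seed)
    positions = [x0]
    for _ in range(steps):
        positions.append(positions[-1] + random.choice([-1, 1]))
    return positions

path = random_walk(10)
# Every step changes the position by exactly +/-1.
assert all(abs(b - a) == 1 for a, b in zip(path, path[1:]))
```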
Complete Joint Distribution
Given:
Transition model: P(Xt|Xt-1)
Sensor model: P(Et|Xt)
Prior probability: P(X0)
Then we can specify complete joint distribution:
P(X0, X1, …, Xt, E1, …, Et) = P(X0) ∏_{i=1}^{t} P(Xi | Xi-1) P(Ei | Xi)
Example
[Bayesian network for the umbrella world: Raint-1 → Raint → Raint+1, with each Raint having the child Umbrellat]
Transition model P(Rt | Rt-1):
  Rt-1 = T: 0.7
  Rt-1 = F: 0.3

Sensor model P(Ut | Rt):
  Rt = T: 0.9
  Rt = F: 0.2
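Using the transition and sensor tables above (and assuming the uniform prior P(R0) = <0.5, 0.5> used later in these notes), the complete joint distribution formula can be sketched as:

```python
# Umbrella-world model from the tables above; the prior P(R0) = <0.5, 0.5>
# is an assumption (it appears later in these notes).
P_R0 = {True: 0.5, False: 0.5}
P_trans = {True: 0.7, False: 0.3}   # P(rain_t = True | rain_t-1 = key)
P_sense = {True: 0.9, False: 0.2}   # P(umbrella_t = True | rain_t = key)

def joint(rains, umbrellas):
    """P(X0, X1..Xt, E1..Et) = P(X0) * prod_i P(Xi|Xi-1) P(Ei|Xi).

    rains: [r0, r1, ..., rt]; umbrellas: [u1, ..., ut] (booleans).
    """
    p = P_R0[rains[0]]
    for prev, cur, u in zip(rains, rains[1:], umbrellas):
        p_cur = P_trans[prev] if cur else 1 - P_trans[prev]
        p_u = P_sense[cur] if u else 1 - P_sense[cur]
        p *= p_cur * p_u
    return p

# P(r0, r1, u1) = 0.5 * 0.7 * 0.9 = 0.315
print(joint([True, True], [True]))
```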
Assumption modification
A first-order Markov process was used in the previous example. There are two possible fixes if the approximation proves too inaccurate:
1. Increasing the order of the Markov process model. Use a second-order model by adding Raint-2 as a parent of Raint, which might give slightly more accurate predictions (for example, in Palo Alto it very rarely rains more than two days in a row).
2. Increasing the set of state variables. For example, we could add Seasont to allow us to incorporate historical records of rainy seasons, or we could add Temperaturet, Humidityt and Pressuret to allow us to use a physical model of rainy conditions.
Note
Adding state variables might improve the system's predictive power, but it also increases the prediction requirements: we now have to predict the new variables as well, and we may need new sensors (e.g., measurements of temperature and pressure).
Example: robot wandering
Problem: tracking a robot wandering randomly on the X-Y plane.
State variables: position and velocity.
The new position can be calculated by Newton's laws; the velocity may change unpredictably (e.g., a battery-powered robot's velocity may be affected by battery exhaustion).
Battery exhaustion depends on how much power was used by all previous maneuvers, so the Markov property is violated.
Fix: add the charge level Batteryt as a state variable, a new model to predict Batteryt from Batteryt-1 and the velocity, and new sensors for the charge level.
Inference Tasks
Filtering or monitoring: P(Xt|e1,…,et)
Computing the current belief state, given all evidence to date.
Prediction: P(Xt+k|e1,…,et) for k > 0
Computing the probability of some future state.
Smoothing: P(Xk|e1,…,et) for 0 ≤ k < t
Computing the probability of a past state (hindsight).
Most likely explanation: argmax_{x1,…,xt} P(x1,…,xt|e1,…,et)
Given a sequence of observations, find the sequence of states that is most likely to have generated those observations.
Examples
Filtering: What is the probability that it is raining today, given all the umbrella observations up through today?
Prediction: What is the probability that it will rain the day after tomorrow, given all the umbrella observations up through today?
Smoothing: What is the probability that it rained yesterday, given all the umbrella observations through today?
Most likely explanation: If the umbrella appeared on the first three days but not on the fourth, what is the most likely weather sequence to have produced these umbrella sightings?
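Prediction can be sketched as repeated application of the transition model with no evidence update (an illustrative Python sketch; the function name predict is mine). Note how the predictions decay toward the uniform stationary distribution of the umbrella world:

```python
def predict(belief, T, k):
    """Compute P(X_{t+k} | e_1:t) by applying the transition model k times
    with no new evidence. belief = [P(rain), P(not rain)]."""
    for _ in range(k):
        belief = [sum(belief[i] * T[i][j] for i in range(len(T)))
                  for j in range(len(T[0]))]
    return belief

T = [[0.7, 0.3], [0.3, 0.7]]       # umbrella-world transition model
print(predict([1.0, 0.0], T, 1))   # [0.7, 0.3]
print(predict([1.0, 0.0], T, 20))  # converges toward [0.5, 0.5]
```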
Filtering & Prediction
We use recursive estimation to compute P(Xt+1 | e1:t+1) as a function of et+1 and P(Xt | e1:t). We can write this as follows:
P(et+1 | Xt+1) is obtainable directly from the sensor model
This leads to a recursive definition: f1:t+1 = FORWARD(f1:t, et+1)
P(Xt+1 | e1:t+1) = P(Xt+1 | e1:t, et+1)                             (dividing up the evidence)
                 = α P(et+1 | Xt+1, e1:t) P(Xt+1 | e1:t)            (using Bayes' rule)
                 = α P(et+1 | Xt+1) P(Xt+1 | e1:t)                  (by the Markov property of evidence)
                 = α P(et+1 | Xt+1) Σ_{xt} P(Xt+1 | xt) P(xt | e1:t) (using the Markov property)
Example: rain & umbrella (R&N p. 543)
Model: P(R0) = <0.5, 0.5>
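A minimal sketch of the forward (filtering) recursion for the umbrella model, assuming the tables given earlier; it reproduces the textbook values <0.818, 0.182> for day 1 and roughly <0.883, 0.117> for day 2 (function names are mine):

```python
def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def forward(f, u, T, Sensor):
    """One filtering step: f_{1:t+1} = alpha * P(e_{t+1}|X_{t+1}) * sum_xt ...
    f = [P(rain), P(not rain)]; u = whether the umbrella was observed."""
    # Prediction: sum over the previous state.
    pred = [sum(f[i] * T[i][j] for i in range(2)) for j in range(2)]
    # Update: multiply by the likelihood P(u | state) and renormalize.
    like = [Sensor[j] if u else 1 - Sensor[j] for j in range(2)]
    return normalize([like[j] * pred[j] for j in range(2)])

T = [[0.7, 0.3], [0.3, 0.7]]      # transition model P(Rt | Rt-1)
Sensor = [0.9, 0.2]               # P(umbrella | rain), P(umbrella | no rain)

f = [0.5, 0.5]                    # prior P(R0)
f = forward(f, True, T, Sensor)   # day 1, umbrella seen -> ~<0.818, 0.182>
f = forward(f, True, T, Sensor)   # day 2, umbrella seen -> ~<0.883, 0.117>
print(f)
```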
Smoothing
Compute P(Xk|e1:t) for 0 ≤ k < t
Using a backward message bk+1:t = P(Ek+1:t | Xk), we obtain
P(Xk|e1:t) = α f1:k × bk+1:t
The backward message can be computed using
This leads to a recursive definition: bk+1:t = BACKWARD(bk+2:t, ek+1)
bk+1:t = P(ek+1:t | Xk) = Σ_{xk+1} P(ek+1 | xk+1) P(ek+2:t | xk+1) P(xk+1 | Xk)    (15.7)
Example from R&N: p. 545
Computing the smoothed estimate for the probability of rain at t = 1, given the umbrella observations on days 1 and 2.
Plugged into Eq. 15.8: P(R1 | u1, u2) = α f1:1 × b2:2
According to Eq. 15.7:
bk+1:t = P(ek+1:t | Xk) = Σ_{xk+1} P(ek+1 | xk+1) P(ek+2:t | xk+1) P(xk+1 | Xk)
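The backward message and the smoothed estimate for this example can be sketched as follows (assuming the day-1 filtering result f1:1 = <0.818, 0.182> from the forward pass; function names are mine):

```python
def backward(b, u, T, Sensor):
    """One backward step (Eq. 15.7 in these notes):
    b_{k+1:t}(i) = sum_j P(e_{k+1} | j) * b_{k+2:t}(j) * P(j | i)."""
    like = [Sensor[j] if u else 1 - Sensor[j] for j in range(2)]
    return [sum(like[j] * b[j] * T[i][j] for j in range(2)) for i in range(2)]

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

T = [[0.7, 0.3], [0.3, 0.7]]      # transition model P(Rt | Rt-1)
Sensor = [0.9, 0.2]               # P(umbrella | rain), P(umbrella | no rain)

f11 = [0.818, 0.182]              # filtered P(R1 | u1) from the forward pass
b22 = backward([1.0, 1.0], True, T, Sensor)  # P(u2 | X1) = <0.69, 0.41>
smoothed = normalize([f11[i] * b22[i] for i in range(2)])
print(smoothed)                   # ~<0.883, 0.117>
```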
Probabilistic Temporal Models
Hidden Markov Models (HMMs)
Kalman Filters
Dynamic Bayesian Networks (DBNs)
Hidden Markov Models (HMMs)
Further simplification: only one (discrete) state variable, so we can now use matrices.
Ti,j = P(Xt = j | Xt-1 = i): the probability of a transition from state i to state j.
Transition matrix for the umbrella world:
T = ( 0.7  0.3
      0.3  0.7 )
Sensor model: a diagonal matrix Ot whose diagonal entries are given by the values P(et | Xt = i); the other entries are 0.
Umbrella world: on a day the umbrella is observed, O = diag(0.9, 0.2).
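In matrix form, each forward step becomes a pair of matrix-vector products, f1:t+1 = α O_{t+1} T^T f1:t. A NumPy sketch with the umbrella-world matrices (illustrative; variable names are mine):

```python
import numpy as np

T = np.array([[0.7, 0.3],
              [0.3, 0.7]])            # Tij = P(Xt = j | Xt-1 = i)
O_umbrella = np.diag([0.9, 0.2])      # sensor matrix when the umbrella is seen
O_no_umbrella = np.diag([0.1, 0.8])   # sensor matrix when it is not

def forward_matrix(f, O):
    """Matrix form of the forward update: f_{1:t+1} = alpha * O_{t+1} T^T f_{1:t}."""
    f = O @ T.T @ f
    return f / f.sum()                # alpha is the normalization

f = np.array([0.5, 0.5])              # prior P(R0)
f = forward_matrix(f, O_umbrella)     # day 1 -> ~[0.818, 0.182]
f = forward_matrix(f, O_umbrella)     # day 2 -> ~[0.883, 0.117]
print(f)
```

This is the same computation as the earlier filtering recursion, just packaged so that a whole update is two matrix multiplications.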
Example: speech recognition