Probabilistic Robotics, Chapter 2
Leon F. Palafox
Probability Distributions

Probability
p(x): probability of x
p(x, y): probability of x AND y (the joint distribution)

Joint Distribution
p(x, y) = p(x) p(y)   if x and y are independent

Conditional Probability
p(x | y) = p(x, y) / p(y)
Total Probability

p(x) = Σ_y p(x | y) p(y)   (discrete case)
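As a quick sketch of the total probability rule (the numbers reuse the door-sensor probabilities that appear later in these slides; the variable names are illustrative):

```python
# Total probability: p(x) = sum over y of p(x | y) * p(y)
# Hypothetical setup: x = sensor reads "open", y = true door state.
p_y = {"is_open": 0.5, "is_closed": 0.5}          # prior p(y)
p_x_given_y = {"is_open": 0.6, "is_closed": 0.2}  # p(x | y)

p_x = sum(p_x_given_y[y] * p_y[y] for y in p_y)
print(p_x)  # 0.6*0.5 + 0.2*0.5 = 0.4
```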
Bayes Rule

p(x | y) = p(y | x) p(x) / p(y)
         = p(y | x) p(x) / Σ_{x'} p(y | x') p(x')

p(x | y) = η p(y | x) p(x)

where η is a normalizing factor (η = 1 / p(y)).
Example (drug test)

- P(D) = 0.005, since 0.5% of the population are users. This is the prior probability of D.
- P(N) = 1 − P(D) = 0.995.
- P(+ | D) = 0.99, since the test is 99% accurate.
- P(+ | N) = 0.01, since the test produces a false positive for 1% of non-users.
- P(+) = 0.0149, or 1.49%, found by adding the probability of a true positive (99% × 0.5% = 0.495%) and the probability of a false positive (1% × 99.5% = 0.995%). This is the prior probability of +.
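Plugging these numbers into Bayes rule gives the posterior probability of being a user given a positive test; a minimal sketch:

```python
# Bayes rule: P(D | +) = P(+ | D) * P(D) / P(+)
p_D = 0.005          # prior probability of being a user
p_N = 1 - p_D        # prior probability of being a non-user
p_pos_given_D = 0.99
p_pos_given_N = 0.01

# Total probability: P(+) = P(+|D)P(D) + P(+|N)P(N)
p_pos = p_pos_given_D * p_D + p_pos_given_N * p_N   # 0.0149
p_D_given_pos = p_pos_given_D * p_D / p_pos
print(round(p_D_given_pos, 3))  # 0.332
```

Despite the 99% accurate test, a positive result is still more likely to be a false alarm than a true detection, because the prior P(D) is so small.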
State

- Information about the environment and the robot
- May change over time or remain static
- Will be denoted with the variable x_t
- x_t is called complete if it is the best predictor of the future (no earlier data would improve the prediction)

State Variables
- Robot pose
- Actuators
- Robot velocity
- Features in its surroundings
Belief

Definition: the robot's internal knowledge of the state. The state itself cannot be measured directly, so the robot must infer it.
Distributions

bel(x_t) = p(x_t | z_{1:t}, u_{1:t})       (after incorporating the measurement z_t)

bel̄(x_t) = p(x_t | z_{1:t−1}, u_{1:t})     (prediction, before the measurement)
Bayes Algorithm

For all x_t:
  bel̄(x_t) = ∫ p(x_t | u_t, x_{t−1}) bel(x_{t−1}) dx_{t−1}    (step 3: control update)
  bel(x_t) = η p(z_t | x_t) bel̄(x_t)                          (step 4: measurement update)

Step 3 processes the control u_t: p(x_t | u_t, x_{t−1}) is the probability that u_t induces a change in the state. Step 4 incorporates the observation z_t and generates the new belief.
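For a discrete state space the two updates can be sketched in a few lines of Python (a sketch only; the function and argument names are illustrative, not from the book):

```python
def bayes_filter(bel, u, z, motion_model, sensor_model, states):
    """One step of a discrete Bayes filter.

    bel: dict mapping each state to its current belief.
    motion_model(x, u, x_prev): p(x | u, x_prev), assumed given.
    sensor_model(z, x): p(z | x), assumed given.
    """
    # Control update: bel_bar(x) = sum over x' of p(x | u, x') * bel(x')
    bel_bar = {
        x: sum(motion_model(x, u, x_prev) * bel[x_prev] for x_prev in states)
        for x in states
    }
    # Measurement update: bel(x) = eta * p(z | x) * bel_bar(x)
    unnorm = {x: sensor_model(z, x) * bel_bar[x] for x in states}
    eta = 1.0 / sum(unnorm.values())  # normalizer so the belief sums to 1
    return {x: eta * v for x, v in unnorm.items()}
```

Applying this function once per time step, with the motion and sensor models supplied as callables, implements the loop above.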
Example

A robot encounters a door and is able to open it using its manipulator.

Initial State

The robot does NOT know the initial state of the door (which is, in fact, open). Suppose the door is open and the robot uses its manipulator.
Probabilities

Initial belief:
bel(X_0 = is_open) = 0.5
bel(X_0 = is_closed) = 0.5

Measurement model (very noisy sensors):
p(Z_t = sense_open   | X_t = is_open)   = 0.6
p(Z_t = sense_closed | X_t = is_open)   = 0.4
p(Z_t = sense_open   | X_t = is_closed) = 0.2
p(Z_t = sense_closed | X_t = is_closed) = 0.8

Motion model for U_t = push:
p(X_t = is_open   | U_t = push, X_{t−1} = is_open)   = 1
p(X_t = is_closed | U_t = push, X_{t−1} = is_open)   = 0
p(X_t = is_open   | U_t = push, X_{t−1} = is_closed) = 0.8
p(X_t = is_closed | U_t = push, X_{t−1} = is_closed) = 0.2

Motion model for U_t = do_nothing (we are not doing anything, so the state is unchanged):
p(X_t = is_open   | U_t = do_nothing, X_{t−1} = is_open)   = 1
p(X_t = is_closed | U_t = do_nothing, X_{t−1} = is_open)   = 0
p(X_t = is_open   | U_t = do_nothing, X_{t−1} = is_closed) = 0
p(X_t = is_closed | U_t = do_nothing, X_{t−1} = is_closed) = 1
Solution

Prediction with u_1 = do_nothing:
bel̄(x_1) = Σ_{x_0} p(x_1 | u_1, x_0) bel(x_0)
         = p(x_1 | U_1 = do_nothing, X_0 = is_open) bel(X_0 = is_open)
         + p(x_1 | U_1 = do_nothing, X_0 = is_closed) bel(X_0 = is_closed)

bel̄(X_1 = is_open) = 0.5
bel̄(X_1 = is_closed) = 0.5

Measurement update with z_1 = sense_open:
bel(x_1) = η p(Z_1 = sense_open | x_1) bel̄(x_1)

bel(X_1 = is_open) = 0.75
bel(X_1 = is_closed) = 0.25

Repeating with u_2 = push and z_2 = sense_open:
bel̄(X_2 = is_open) = 0.95
bel̄(X_2 = is_closed) = 0.05

bel(X_2 = is_open) = 0.983
bel(X_2 = is_closed) = 0.017
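The numbers above can be verified with a few lines of Python (a sketch; the dictionaries simply encode the probability tables from the previous slide):

```python
states = ("is_open", "is_closed")

# Motion model p(x | u, x_prev), keyed by (x, x_prev)
motion = {
    "push":       {("is_open", "is_open"): 1.0, ("is_open", "is_closed"): 0.8,
                   ("is_closed", "is_open"): 0.0, ("is_closed", "is_closed"): 0.2},
    "do_nothing": {("is_open", "is_open"): 1.0, ("is_open", "is_closed"): 0.0,
                   ("is_closed", "is_open"): 0.0, ("is_closed", "is_closed"): 1.0},
}
# Measurement model p(z | x)
sensor = {"sense_open":   {"is_open": 0.6, "is_closed": 0.2},
          "sense_closed": {"is_open": 0.4, "is_closed": 0.8}}

bel = {"is_open": 0.5, "is_closed": 0.5}  # uniform initial belief
for u, z in [("do_nothing", "sense_open"), ("push", "sense_open")]:
    # Control update: bel_bar(x) = sum over x' of p(x | u, x') * bel(x')
    bel_bar = {x: sum(motion[u][(x, xp)] * bel[xp] for xp in states) for x in states}
    # Measurement update: bel(x) = eta * p(z | x) * bel_bar(x)
    unnorm = {x: sensor[z][x] * bel_bar[x] for x in states}
    eta = 1.0 / sum(unnorm.values())
    bel = {x: eta * p for x, p in unnorm.items()}

print(round(bel["is_open"], 3), round(bel["is_closed"], 3))  # 0.983 0.017
```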
Final Notes

- The states form a Markov chain: future data are independent of past data given the current state.
- Even though the Bayes filter may encounter situations that violate the Markov assumption, it proves robust in practice.
- Overall points to take into account:
  - Computational efficiency
  - Accuracy of the approximation
  - Ease of implementation