CS 486/686 Assignment 2
Winter 2022
(130 marks)
Blake VanBerlo
Due Date: 11:59 PM ET on Wednesday, March 2, 2022
Changes
- v1.1: in Q2 description, updated probability of unintended action to (1 − γ)/3
- v1.2: in Q2 description, updated example for robot transition probabilities
CS 486/686 Winter 2022 Assignment 2
Academic Integrity Statement
If your written submission on Learn does not include this academic integrity statement with
your signature (typed name), we will deduct 5 marks from your final assignment mark.
I declare the following statements to be true:
The work I submit here is entirely my own.
I have not shared and will not share any of my code with anyone at any point.
I have not posted and will not post my code on any public or private forum or website.
I have not discussed and will not discuss the contents of this assessment with anyone
at any point.
I have not posted and will not post the contents of this assessment and its solutions
on any public or private forum or website.
I will not search for assessment solutions online.
I am aware that misconduct related to assessments can result in significant penalties,
possibly including failure in the course and suspension. This is covered in Policy 71:
https://uwaterloo.ca/secretariat/policies-procedures-guidelines/policy-71.
Failure to accept the integrity policy will result in your assignment not being graded.
By typing or writing my full legal name below, I confirm that I have read and understood
the academic integrity statement above.
Instructions
Submit any written solutions in a file named writeup.pdf to the A2 Dropbox on Learn.
If your written submission on Learn does not contain one file named writeup.pdf, we
will deduct 5 marks from your final assignment mark.
Submit any code to Marmoset at https://marmoset.student.cs.uwaterloo.ca/.
Grades for the programming component are determined by the unit tests in the “Assignment 2 (Final)” project on Marmoset. The “Assignment 2 (Weeks 1 & 2)” and “Assignment 2 (Week 3)” projects contain ungraded public test suites meant to help you debug, and they are only temporarily available.
No late assignment will be accepted. This assignment is to be done individually.
I strongly encourage you to complete your write-up in LaTeX, using this source file.
If you do, in your submission, please replace the author with your name and student
number. Please also remove the due date, the Instructions section, and the Learning
goals section. Thanks!
Lead TAs:
– Question 1: Kelechi Ogueji (kelechi.ogueji@uwaterloo.ca)
– Question 2: Steven Lawrence (steven.lawrence@uwaterloo.ca)
The TAs’ office hours will be scheduled on MS Teams.
Learning goals
Inference in Bayesian Networks
Define factors. Manipulate factors using the operations restrict, sum out, multiply and
normalize.
Trace the execution of and implement the variable elimination algorithm for calculating
a prior or a posterior probability given a Bayesian network.
Inference in Hidden Markov Models
Construct a hidden Markov model given a complex scenario.
Perform filtering and smoothing by executing the forward-backward algorithm.
1 Independence and Inference in Bayesian Networks
(30 marks)
1. Recall that random variables X and Y are conditionally independent given a third
variable Z if and only if:
P (X|Y ∧ Z) = P (X|Z) (1)
P (Y |X ∧ Z) = P (Y |Z) (2)
P (X ∧ Y |Z) = P (X|Z)P (Y |Z) (3)
Show that Equation 3 follows from Equations 1 and 2. Justify each step of your proof.
Hint: you may find some of the probability rules from Lecture 6 to be useful.
Marking Scheme: (6 marks)
(4 marks) Correct proof
(2 marks) Each step is justified
2. Not all of the random variables in a Bayesian network are required to answer a
probabilistic query. In fact, all variables that are not ancestors of the query variables or
evidence variables are irrelevant to the query. Let Q = {Q1, . . . , Qm} be the set of
query variables and e = {e1, . . . , en} be the set of evidence variables. Prove that the
Variable Elimination Algorithm (VEA) returns the same distribution for the query
P (Q|e) if all irrelevant variables are pruned from the Bayesian network.
Hint: try a direct proof. Show that a particular ordering of hidden variable elimination
results in the same final distribution returned by the normalization operation.
Marking Scheme: (10 marks)
(8 marks) Correct proof
(2 marks) Proof is succinct and easy to understand
3. Below is a Bayesian network that appears in a recent review of uncertainties in Bayesian
networks with discrete variables (Rohmer, 2020). The network is a probabilistic model
of brain cancer diagnosis. Note that the example is meant to be illustrative, as the
network is simple and the conditional probability tables are fictional. The random
variables are defined below:
Random Variable Definition
MC Metastatic cancer
B Brain tumour
CT CT scan is positive for brain tumour
SH Severe headache
ISC Increased serum calcium
C Patient falls into coma
Execute the Variable Elimination Algorithm to determine the probability that a pa-
tient has a brain tumour, given that their blood work shows an increased calcium
concentration, and they do not report having severe headaches. Eliminate variables
in reverse alphabetical order. For example, you would eliminate MC before ISC and
CT before C.
You may denote False as 0 and True as 1. First, define your factors and write them
in tabular format. For example, write a factor containing random variables A and B
as follows:
f1(A, B):
A B Value
0 0 0.6
0 1 0.4
1 0 0.8
1 1 0.4
Write out each VEA operation, along with the factor that is produced by the operation.
For example, if you were to restrict f1(A,B) to A = 0, you would write:
Restrict f1(A, B) to A=0 to produce f2(B).
f2(B):
B Value
0 0.6
1 0.4
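For reference, here is a minimal Python sketch of how a factor over binary variables could be stored and restricted, assuming a hypothetical Factor class built on numpy. It is not a required interface for this assignment; it only illustrates the restrict operation shown above.

import numpy as np

class Factor:
    # Hypothetical illustration: one numpy axis per variable, in the listed order.
    def __init__(self, variables, values):
        self.variables = list(variables)
        self.values = np.asarray(values, dtype=float)

def restrict(factor, variable, value):
    # Fix `variable` to `value` (0 or 1) and drop its axis from the table.
    axis = factor.variables.index(variable)
    remaining = [v for v in factor.variables if v != variable]
    return Factor(remaining, np.take(factor.values, value, axis=axis))

# f1(A, B) from the example above; restricting A = 0 yields f2(B) = [0.6, 0.4].
f1 = Factor(["A", "B"], [[0.6, 0.4], [0.8, 0.4]])
f2 = restrict(f1, "A", 0)
print(f2.variables, f2.values)  # ['B'] [0.6 0.4]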
Hint: before you start executing VEA, apply what you proved in part 2 of this
question to simplify the problem.
Marking Scheme: (14 marks)
(4 marks) Initial factor definition
(6 marks) Correct VEA operations
(2 marks) Correct final answer
(2 marks) Correct formatting of factors and VEA operations
2 Robot Localization as a Hidden Markov Model
(100 marks)
You will implement the forward-backward algorithm to perform inference in a complex grid
environment. We will model the robot’s interactions with the environment as a hidden
Markov model (HMM). Given a description of the environment, you will implement the
transition and sensor probability distributions. You will also implement forward recursion,
backward recursion, and the Forward-Backward Algorithm (FBA). The goal is to apply FBA
to infer the robot’s location at some timestep, given a series of actions and observations.
2.1 The Robot Environment
Let’s consider a complex robot localization problem. This problem is similar to an example
in one of our course textbooks: “Artificial Intelligence: a Modern Approach” by Russell and
Norvig (see Figure 1). In the 3rd edition, check out section 15.3.2, page 582, Figure 15.7. In
the 4th edition, check out section 14.3.2, page 477, Figure 14.7.
Figure 1: The robot grid environment from “Artificial Intelligence: a Modern Approach”. The
robot is in state 31.
A robot is in a grid environment and it has a correct map of the environment. We will use
(y, x) or (row, column) to refer to each cell. y is the row value starting with y = 0 at the
top. x is the column value starting with x = 0 at the left.
The robot has four available actions: Up, Right, Down, and Left. Each action is intended
to move the robot by 1 cell in the corresponding direction. Unfortunately, the robot’s
navigational system is broken. When it executes an action, it will move in the intended
direction with probability γ. The robot moves in each of the other three directions with probability
(1 − γ)/3. For example, if γ = 0.85 and the robot executes the Right action, it will move
right with probability 0.85, left with probability 0.05, up with probability 0.05, and down
with probability 0.05. If the robot is in a cell adjacent to a wall and it moves in the direction
of that wall, it remains in the same location. As an example, consider the grid environment
in Figure 2: if the robot is in state 0 and tries to move right, then it transitions to state 1
with probability 0.85, state 7 with probability 0.05, and stays in state 0 with probability 0.1
(because moving left or up results in the robot remaining stationary).
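As an illustration only (this is not the required create_trans_probs() interface), the wall-bumping rule above could be computed per cell roughly as follows, assuming a hypothetical is_wall(y, x) helper that reports blocked cells:

GAMMA = 0.85
MOVES = {"Up": (-1, 0), "Right": (0, 1), "Down": (1, 0), "Left": (0, -1)}

def next_cell_probs(cell, action, is_wall):
    # Map each resulting (y, x) cell to its probability when `action` is taken
    # from `cell`. `is_wall(y, x)` is a hypothetical helper returning True for
    # outer boundaries and inner walls.
    probs = {}
    for move, (dy, dx) in MOVES.items():
        p = GAMMA if move == action else (1 - GAMMA) / 3
        target = (cell[0] + dy, cell[1] + dx)
        if is_wall(*target):
            target = cell  # bumping into a wall leaves the robot in place
        probs[target] = probs.get(target, 0.0) + p
    return probs

For the state-0 example above, the right neighbour receives 0.85, the cell below receives 0.05, and the two blocked moves (left and up) each add 0.05 to staying put, giving 0.1.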
(a) The current state (b) The robot’s belief of its current state
Figure 2: Visualization of a grid environment. Each empty cell is a possible location that the
robot can be in. In (a), the red cell denotes the current location of the robot. In (b), the robot’s
belief of its location is overlaid onto each empty cell: the darker the green, the higher the
probability that the robot assigns to that cell.
The robot’s task is to determine its current location. To do this, the robot will make use
of four noisy sensors that report whether there is an inner or outer wall in each of the four
directions (up, right, down, left). At each time step, the robot receives a sensor reading,
which is an integer in {0, 1, . . . , 15}. If we convert the integer to a 4-bit binary number, then each
bit gives the presence (bit 1) or absence (bit 0) of a wall in the up, right, down, and left
directions respectively. For example, the integer 9 corresponds to the binary number 1001.
The leftmost digit 1 indicates that there is a wall above the agent. The second digit 0 from
the left indicates that there is no wall to the right of the agent. The third digit 0 from the
left indicates that there is no wall below the agent. The rightmost digit 1 indicates that
there is a wall to the left of the agent.
Each sensor is noisy and flips its bit with probability ε, where 0 ≤ ε ≤ 1. The errors occur
independently for the four sensor directions. For instance, suppose that the correct sensor
reading is 1001 and the actual sensor reading is 1110. The sensors were correct in one
direction and incorrect in three directions. The probability of observing this sensor reading
is (1 − ε)ε³.
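The sensor model above can be summarized with a small sketch (illustrative only, not the required create_obs_probs() interface), in which each of the four bits is compared independently:

EPSILON = 0.2  # example sensor noise; the assignment varies this value

def to_bits(reading):
    # Integer in {0, ..., 15} -> (up, right, down, left) bits, most significant first.
    return tuple((reading >> shift) & 1 for shift in (3, 2, 1, 0))

def obs_prob(observed, correct, eps=EPSILON):
    # Each bit is reported correctly with probability 1 - eps and flipped with
    # probability eps, independently across the four directions.
    p = 1.0
    for o, c in zip(to_bits(observed), to_bits(correct)):
        p *= (1 - eps) if o == c else eps
    return p

print(to_bits(9))                      # (1, 0, 0, 1): walls above and to the left
print(obs_prob(0b1110, 0b1001, 0.1))   # (1 - 0.1) * 0.1**3 = 0.0009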
The robot starts at an unknown location. At time step 0, we will assume a uniform distri-
bution over all the empty cells.
We can model the robot’s situation as an HMM. Figure 3 gives a Bayesian network representing
the scenario. The states correspond to the location of the robot and the observations
correspond to the 4-bit sensor reading. Since the robot has the choice of selecting different
actions, the robot’s next state depends on the current state and the action taken during
the current timestep. So the transition probability distribution is P (Sk+1|Sk ∧ Ak). The
sensor probability distribution is P (Ok|Sk). Both can be specified given the above problem
description.
[Figure 3 diagram: a chain of state nodes S0, S1, . . . , St−1, with action nodes A0, A1, . . . , At−2 influencing each transition and observation nodes O0, O1, . . . , Ot−1 attached to each state.]
Figure 3: A Bayesian network representing the HMM for the robot localization problem with t
timesteps
2.2 Locating the Robot with FBA
You will implement the forward-backward algorithm (FBA) to perform inference in the robot
environment introduced above.
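As a rough reference only (array shapes and names here are assumptions and do not match the required fba.py signatures), a forward-backward pass for an HMM whose transitions depend on the chosen action can be written as:

import numpy as np

def forward_backward(prior, trans, obs_lik, actions, observations):
    # prior: (S,) initial belief; trans[a]: (S, S) row-stochastic P(s' | s, a);
    # obs_lik[o]: (S,) likelihood P(o | s). Returns one smoothed (S,) belief per step.
    prior = np.asarray(prior, dtype=float)
    T = len(observations)
    forward = []
    f = prior * obs_lik[observations[0]]
    forward.append(f / f.sum())
    for k in range(1, T):
        f = (forward[-1] @ trans[actions[k - 1]]) * obs_lik[observations[k]]
        forward.append(f / f.sum())
    backward = np.ones(len(prior))
    smoothed = [None] * T
    for k in range(T - 1, -1, -1):
        s = forward[k] * backward
        smoothed[k] = s / s.sum()
        if k > 0:
            backward = trans[actions[k - 1]] @ (obs_lik[observations[k]] * backward)
    return smoothed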
We have provided four Python files. Please read the detailed comments in the provided files
carefully.
1. env.py: Contains a base Environment class. Do not change this file.
2. grid_env.py: Contains a GridEnvironment class that extends the base Environment, implementing the dynamics from the robot environment. You must implement the create_trans_probs() and create_obs_probs() methods in the GridEnvironment class. Do not change the function signatures. Do not change anything else in this file!
3. fba.py: Contains empty functions for FBA operations. You will implement all of the functions in this file. Do not change the function signatures.
4. robot.py: Provides a demonstration of how to define a GridEnvironment object and visualize the robot’s state and belief. Feel free to change this file as you desire.
Please complete the following tasks.
1. Implement the empty functions in grid_env.py and fba.py. Zip and submit these
two files on Marmoset.
Marking Scheme: (90 marks)
Unit tests:
create_trans_probs
(1 public test + 4 secret tests) * 3 marks = 15 marks
create_obs_probs
(1 public test + 4 secret tests) * 3 marks = 15 marks
forward_recursion
(1 public test + 4 secret tests) * 4 marks = 20 marks
backward_recursion
(1 public test + 4 secret tests) * 5 marks = 25 marks
fba
(1 public test + 4 secret tests) * 3 marks = 15 marks
2. Once you have implemented the functions, you can play with the robot environment.
In robot.py, we have provided some code to create the robot environment. Add the
code for running FBA, and then run the code to complete the following activities.
We highly recommend using the provided visualize_state() and visualize_belief()
functions to visualize the robot’s location and the robot’s beliefs.
(a) For each value of ε in {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}, execute at least
10 trials of FBA, using a different random seed for each trial. So in total, you will
execute at least 11 × 10 = 110 trials. In each trial, the robot executes the following 10
actions: Right, Right, Up, Up, Up, Up, Left, Right, Down, Right. Compute the robot’s
belief distribution for its location at the final timestep, k = 9. Use the environment
from the textbook (Figure 1), setting the initial state to 31 and γ = 0.85.
Plot the maximum probability in the robot’s belief distribution, averaged across the
trials, against the value of ε. Your x-axis should be ε and your y-axis should be the
average of the maximum probability across the trials for each value of ε. (A sketch of
this sweep-and-plot bookkeeping appears at the end of this question.)
In no more than 4 sentences, explain what your experiment indicates about the
effect of the sensor noise (ε) on the robot’s confidence in its location.
Marking Scheme: (5 marks)
(3 marks) Reasonably correct plot.
(2 marks) Reasonable conclusion based on the plot.
(b) For each value of γ in {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}, execute at least
10 trials of FBA, using a different random seed for each trial. So in total,
you will execute at least 11 × 10 = 110 trials. In each trial, the robot executes
the following 10 actions: Right, Right, Up, Up, Up, Up, Left, Right, Down, Right.
Compute the robot’s belief distribution for its location at timestep k = 4.
Use the environment from the textbook (Figure 1), setting the initial state to 31
and ε = 0.2.
Plot the maximum probability in the robot’s belief distribution, averaged across
the trials, against the value of γ. Your x-axis should be γ and your y-axis should
be the average of the maximum probability across the trials for each value of γ.
In no more than 4 sentences, explain what your experiment indicates about the
effect of the action noise (γ) on the robot’s confidence in its location at timestep
k = 4.
Marking Scheme: (5 marks)
(3 marks) Reasonably correct plot.
(2 marks) Reasonable conclusion based on the plot.
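For parts (a) and (b), the noise sweep can be organized along the lines of the sketch below. Here run_trial and its arguments are hypothetical placeholders for whatever code you add to robot.py (building the Figure 1 environment, running FBA over the 10 actions, and returning the belief at the requested timestep), so treat this purely as an illustration of the bookkeeping and plotting, not as provided functionality.

import numpy as np
import matplotlib.pyplot as plt

def sweep(noise_values, run_trial, n_trials=10):
    # run_trial(noise, seed) is a hypothetical helper you would write: it returns
    # the robot's belief distribution (a numpy array) for one seeded trial.
    averages = []
    for noise in noise_values:
        maxima = [run_trial(noise, seed).max() for seed in range(n_trials)]
        averages.append(np.mean(maxima))
    return averages

# Example usage for part (a): sweep epsilon with gamma fixed at 0.85.
# eps_values = [i / 10 for i in range(11)]
# avg = sweep(eps_values, my_run_trial)   # my_run_trial is your own helper
# plt.plot(eps_values, avg, marker="o")
# plt.xlabel("Sensor noise (epsilon)")
# plt.ylabel("Average maximum belief probability")
# plt.show()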