1 Written [50 Points]
1.1 Designing Agents [5.5 Points]
Tic-tac-toe is a game for two players who take turns marking the spaces in a three-by-three grid with X or O. The game's objective is to be the first player to place three of their markers in a horizontal, vertical, or diagonal row (see Figure 1).
Figure 1: A game of Tic-Tac-Toe where the O player has won, as it has three markers in a diagonal row.
You are now tasked with developing an AI algorithm for a tic-tac-toe agent. The agent is a robot that can play against another agent (human or robot) on a piece of paper using a pen. The agent must be capable of:
• detecting whether a marker is an X or an O
• drawing a marker in one of the nine grid cells
• detecting the empty cells and the markers already present on the paper
• using this information to determine if it has won, lost, or tied the game
Note that the agent does not play just a single game; rather, it is a robot that is expected to be able to play multiple games, as many times as needed. Now answer the following questions. Note that some of these cases might be ambiguous, so justify your answers and explicitly state any assumptions you make.
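To make the last two capabilities concrete, here is a minimal, hypothetical sketch of a win/tie check, assuming the perceived board is stored as a 3x3 list of lists containing 'X', 'O', or None; the representation and function names are illustrative assumptions, not part of the assignment.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, otherwise None."""
    lines = list(board)                                                 # the three rows
    lines += [[board[r][c] for r in range(3)] for c in range(3)]        # the three columns
    lines += [[board[i][i] for i in range(3)],                          # main diagonal
              [board[i][2 - i] for i in range(3)]]                      # anti-diagonal
    for line in lines:
        if line[0] is not None and line.count(line[0]) == 3:
            return line[0]
    return None

def is_tie(board):
    """A tie: the board is full and neither player has three in a row."""
    return winner(board) is None and all(cell is not None for row in board for cell in row)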
(a) Create the PEAS description for the task environment, where the task environment includes all the games the agent plays.
(b) Identify the characteristics of the environment and provide a brief justification for why you characterized the environment in that way.
As a reminder, the characteristics of the environments are as follows:
• Fully observable vs. partially observable
• Single-agent vs. multi-agent
• Deterministic vs. nondeterministic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Known vs. unknown
1.2 Hands-On Search [18 Points]
Consider the search tree shown in Figure 2, including nodes A through K. The state of node A is our initial state (shown in orange). Nodes F and K share the same state and are valid goals (shown in blue). The label by an edge is the cost associated with the path between the edge’s vertices.
Assume that when a node is expanded, its children are added to the frontier in an order such that, when nodes are popped, they are visited from left to right. That is, for breadth-first search, the child from the leftmost branch is added first. For depth-first search, the child from the rightmost branch is added first (see Figure 3.11 in the R&N textbook).
For frontiers that use a priority queue, nodes with the same priority (path cost, heuristic value, or the sum of path cost and heuristic value) will be popped in alphabetical order. For instance, if node B and node C have the same priority, node B is popped first, followed by node C.
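As a minimal illustration of this tie-breaking convention (the priorities and node names below are made up), a priority-queue frontier can break ties alphabetically by storing (priority, name) tuples:

import heapq

frontier = []
heapq.heappush(frontier, (3, "C"))   # (priority, node name)
heapq.heappush(frontier, (3, "B"))
heapq.heappush(frontier, (1, "A"))
print(heapq.heappop(frontier))       # (1, 'A')  -- lowest priority first
print(heapq.heappop(frontier))       # (3, 'B')  -- tie on priority 3 broken alphabetically
print(heapq.heappop(frontier))       # (3, 'C')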
Figure 2: The Search Tree, with Path Costs
Figure 3: Heuristic Values for the States
An admissible heuristic function is provided to you in Figure 3, which gives the value of the heuristic function for each node's state. For the following search methods, list the order in which nodes are expanded from the frontier (i.e., all nodes on which you call EXPAND(node), as seen in the pseudocode from lecture), from the start until the goal is reached. Please show your work for partial credit.
(a) Breadth-First Search
(b) Depth-First Search
(c) Iterative Deepening Search
(d) Uniform Cost Search
(e) Greedy Best-First Search using the heuristic function provided
(f) A-Star Search using the heuristic function provided
1.3 Fruit Sorting Machine [8 Points]
A farm produces n different types of fruit, which they want to ship out in a single batch of n crates, each containing only one type of fruit; right now, however, each crate is loaded randomly. The farm just installed a robot arm mounted on a linear track over the crate storage area, which they want to use to sort the crates. The crates are all laid out in a line, and the arm can take three actions:
1. move itself left or right by one crate,
2. grab a fruit of a specific type from the crate below it, or
3. place the fruit it’s holding onto the crate below it.
Note that the arm can only hold one piece of fruit at a time!
Your job is to design a search algorithm that finds the shortest sequence of actions resulting in each crate containing only one kind of fruit. Thankfully, you know the initial contents of each crate ahead of time. Provide clear and detailed descriptions for each of the elements below. You don't need to provide implementation details unless you want to (e.g., you don't need to say "I would use a hashmap for x"), but your descriptions should include enough detail that it's clear you considered how the algorithm might be implemented.
(a) State space: The discrete pieces of information which define the state space. In other words, when considering a particular state, what are all the things you need to know?
(b) Actions: How each of the three actions generates a successor state from an explored state, in terms of your state representation.
(c) Goal test: What must be true for a state to be a goal state, in terms of your state representation.
(d) Search algorithm: Which (non-heuristic) search algorithm is optimal for this task, and the data structure which defines its frontier. Justify your choice. Consider both space and time complexity. Hint: think about the structure of your state space and whether a reached set would be beneficial or not.
1.4 Heuristics [4.5 Points]
1. Consider a grid world with a single goal, where the allowed moves are LEFT, RIGHT, UP, and DOWN (you are unable to move diagonally). The cost of each move LEFT, RIGHT, UP, and DOWN is 10. Let M(s) be the Manhattan distance from state s to the goal, and let E(s) be the Euclidean distance from s to the goal; M(s) is also the minimum number of allowed moves needed to get from s to the goal. Formally, the Manhattan distance between two states s = (x1, y1) and t = (x2, y2) is defined as
M(s, t) = |x1 - x2| + |y1 - y2|.
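For example, if s = (0, 0) and the goal is at (3, 4), then M(s) = |0 - 3| + |0 - 4| = 7 (so the cheapest path costs 7 × 10 = 70), while E(s) = sqrt(3^2 + 4^2) = 5.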
You have the following function h:
Note that the same random number r is generated any time h is given input s. Is h a consistent heuristic? Justify your answer.
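Recall the standard definition: a heuristic h is consistent if, for every state s and every successor s' of s reached by an action a with step cost c(s, a, s'), h(s) <= c(s, a, s') + h(s').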
1.5 Hill Climbing [9.5 Points]
Your friend guards each of their files with a 2-digit PIN code, but since they keep forgetting it, they decide to implement an AI agent that uses hill climbing to find it automatically. Each PIN comprises two integers, P1 and P2, each of which can take values from 0 to 9. They implement the following agent: the agent automatically submits a candidate PIN, and if it's correct, the file unlocks. If the guessed PIN is incorrect, the agent receives a score that measures how good the guess was. If the two digits entered are x and y, then the score returned is
The agent then uses a hill-climbing algorithm to attempt to find the PIN by maximizing the heuristic function. The algorithm generates neighbors by creating all guesses that differ by one in a single digit. In other words, if the current state is (x, y), the neighbors generated are (x - 1, y), (x + 1, y), (x, y - 1), and (x, y + 1). The algorithm doesn't generate any states outside the valid (0,0) to (9,9) range. In the event of a tie in the h value, the state with the lowest x value is selected.
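A minimal sketch of the neighbor generation and tie-breaking described above is given below; the score function h is passed in as a parameter (its formula is the one stated earlier), and the stopping rule shown (stop when no neighbor is strictly better) is one common hill-climbing convention.

def neighbors(x, y):
    """All guesses that differ from (x, y) by one in a single digit, kept inside 0-9."""
    candidates = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(a, b) for a, b in candidates if 0 <= a <= 9 and 0 <= b <= 9]

def hill_climb(h, start):
    """Steepest-ascent hill climbing; ties in h are broken by the lowest x value."""
    current = start
    while True:
        best = max(neighbors(*current), key=lambda s: (h(*s), -s[0]))
        if h(*best) <= h(*current):
            return current            # no strictly better neighbor: local maximum (or the PIN)
        current = best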
(a) Assume that the initial guess for the PIN is (5,5) and that the actual PIN is (1,2). List the steps taken by the hill-climbing algorithm until it terminates. If it does not terminate, explain why.
(b) Is this algorithm guaranteed to find the correct PIN code? Explain your answer. Feel free to use a graphing tool like GeoGebra to help you solve this.
(c) Now assume that the algorithm is hard-coded to terminate after n steps. Does your answer to part (b) change if
(i) n = 10
(ii) n = 20
(d) Suppose we modify the heuristic a bit so that it’s now:
Is this version of hill-climbing search guaranteed to find a solution? Explain your answer.
1.6 Adversarial Search [4.5 points]
Figure 4: Game tree for Section 1.6
(a) Apply minimax to this tree to find the value of the root node. Please provide the values for each node in the table below.
| Node  | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O |
|-------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Value |   |   |   |   |   |   |   |   |   |   |   |   |   |   |   |
(b) Apply αβ pruning to this tree (left to right) and list all the nodes that would be pruned, along with the alpha and beta values of each pruned node's parent. For example, if you decide to prune node Q1, report the alpha and beta values for node L. If you decide to prune M, Q3, and Q4, report the values for node F for M and leave the entries blank for Q3 and Q4. Make sure to list all the nodes that can be pruned, including all of their children.
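For reference, here is a minimal sketch of minimax with αβ pruning over a generic game tree; the tree encoding (a leaf is a number, an internal node is a list of children) is an assumption for illustration only, and the actual tree is the one in Figure 4.

import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Return the minimax value of `node`, skipping branches that cannot affect the result."""
    if isinstance(node, (int, float)):   # leaf: its utility value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # remaining children of this MAX node are pruned
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                        # remaining children of this MIN node are pruned
    return value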
2 Programming [50 Points]
2.1 Setup
All programming assignments for this course use Python. If you're new to Python or not very experienced with it, please come to office hours; we're happy to help! We also recommend checking out the Python tutorials shared on Canvas.
We highly recommend creating a new virtual environment for the assignment under the Coding folder. There is also information on setting up a virtual environment on Canvas. You can then simply install the requirements by running the command
python -m pip install -r requirements.txt
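If you have not created a virtual environment before, one common workflow (assuming Python 3 on a Unix-like shell; on Windows, replace the second line with venv\Scripts\activate) is
python -m venv venv
source venv/bin/activate
python -m pip install -r requirements.txt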
2.2 Problem Statement
In this programming section, you'll be coding an agent that searches a maze. More specifically, you'll be implementing the following search algorithms (graph search):
• BFS (NOT best-first search with a FIFO queue)
• DFS
• UCS
• A-Star
Unlike lectures and previous examples, these mazes may (or may not) have multiple goals. As such, we’ll be modifying our heuristic to be the Euclidean distance to the closest goal.
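As a minimal sketch of that modified heuristic (the names below are illustrative and may not match the scaffolding in Agent.py):

import math

def min_euclidean_distance(coord, goal_coords):
    """Straight-line distance from `coord` to the closest goal in `goal_coords`."""
    x, y = coord
    return min(math.hypot(x - gx, y - gy) for gx, gy in goal_coords)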
Complete the Agent.py file and submit it to Gradescope. You will be evaluated by the autograder on 10 test mazes for every algorithm. In Agent.py, you must complete the following functions:
i. heuristic_function: returning the minimum Euclidean (straight-line) distance between the AStar Node's coordinate and the end coordinates
ii. the three comparator functions for A-star nodes: __eq__, __lt__, __gt__
iii. goal_test
iv. expand_node
v. best_first
vi. bfs
vii. dfs
viii. ucs
ix. astar
For simplicity, please add nodes to the frontier directly instead of updating an existing node.
Additionally, in order to apply DFS effectively to the maze domain of Homework 1, and to be consistent with the other searches, we've asked you to implement DFS as a graph search. More specifically, you'll need to use best-first search, with a late goal test, an if statement that checks reached and looks for a better path cost, etc.
The best-first search pseudocode from lecture may be helpful. When the frontier is a stack, that best-first search is the DFS we're asking you to write for this assignment. The "path cost" for DFS is a step count. (That is, like BFS, DFS does not consider the actual path cost but only the number of steps.) Thus, the frontier and reached set would need to be updated if we arrive at a state a second time via a lower step count. If you attempt to implement a tree-like search, you will have an infinite loop for any graph with cycles.
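For reference, a minimal sketch of that best-first graph search with a late goal test and a reached table is shown below; the node attributes and frontier interface (push, pop, is_empty) are assumptions here, so adapt them to the scaffolding in Agent.py.

def best_first_search(start, goal_test, expand, frontier):
    """Generic best-first graph search. The frontier determines the strategy:
    FIFO queue -> BFS, stack -> DFS, priority queue on g or g + h -> UCS / A-Star."""
    frontier.push(start)
    reached = {start.state: start}            # states must be hashable
    while not frontier.is_empty():
        node = frontier.pop()
        if goal_test(node.state):             # late goal test: check when popped
            return node
        for child in expand(node):
            s = child.state
            if s not in reached or child.path_cost < reached[s].path_cost:
                reached[s] = child            # first visit, or a cheaper path to s
                frontier.push(child)
    return None                               # no solution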
Please go through Maze.py and Agent.py before you begin! The files are annotated with TODO in every location where you need to add code, and they contain multiple comments, tips, and recommendations for how to proceed. util.py is there to help with type checking in IDEs; feel free to ignore it. In addition, you will never need to modify Maze.py or util.py.
Maze structure
Each test case is a text file (saved as a .test file, but you can open it in any text editor). Here’s what Test case 1 looks like:
The first line of input lists the width (w) and the height (h) of the maze.
The second input line lists the costs of going left, right, up, and down, respectively. The next three lines delineate the elements of the grid: 'A' denotes the starting point of the agent, 'B' denotes a goal state, '.' denotes a spot where the agent can freely land, and '*' denotes a wall.
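Purely as an illustration of this layout (a made-up example, not Test case 1; the exact formatting of the real .test files may differ slightly), a 4-wide, 3-tall maze with unit move costs could look like
4 3
1 1 1 1
A..*
.*.B
....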
2.3 Testing locally
We've provided four test cases that you can use to test your implementation. You can run them with python LocalTest.py {TESTNUMBER} {SEARCH ALGORITHM}.
That being said, we have not provided solutions for them. The LocalTest.py file will show you what the maze looks like and the path your agent found. You can solve the maze yourself and determine whether your algorithm works correctly.
There are no hidden test cases; your score on Gradescope is the score you can expect to receive for this section.
An example of what you should see after running LocalTest.py for Test1 on BFS and DFS can be seen in Figure 5. The red dot is the start position, the yellow star is the goal, and the green arrows indicate the path found by the algorithm. The grey cells indicate walls, while the remaining cells are shaded based on how early in the algorithm's execution they were expanded. The darker the shade of blue, the earlier the cell was expanded. White cells have not been expanded.
Figure 5: Expected paths found by BFS and DFS on test case 1