
Last Modified: May 24, 2022

CS 179: Introduction to Graphical Models: Spring 2022

Homework 7

Due Date: Friday, June 3, 2022

The submission for this homework should be a single PDF file containing all of the relevant code, figures, and any text explaining your results. When coding your answers, try to write functions to encapsulate and reuse code, instead of copying and pasting the same code multiple times. This will not only reduce your programming effort, but also make it easier for us to understand and give credit for your work. Show and explain the reasoning behind your work!

In this homework, we will run a simple variational auto-encoder (VAE) model and explore its resulting representation.

To simplify the effort for this homework, a template containing much of the required code for the assignment is provided in a Jupyter ipynb file.

Part 1: Build the VAE & load data (30 points)

First, download the template Jupyter notebook and look over the provided code. You can run the code locally or on Google Colab, as you prefer. The VAE is defined by two quantities. First, an encoder defines the variational distribution q(Z|X = x), which we express as a Gaussian distribution,

    Z ~ N( Z ; µ(x), ν(x) I )
    (µ, log ν) = W2 α(W1 x)

where α(·) is a ReLU activation function and W x denotes the usual matrix-vector product. In other words, (µ, log ν) are expressed by a two-layer neural network with ReLU activation on the hidden layer and linear activation on the output.
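As a concrete sketch of such an encoder (the layer names, hidden size, and image dimension here are placeholders; the template notebook defines its own versions):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Sketch of q(Z|X): a two-layer network outputting (mu, log nu)."""
    def __init__(self, x_dim=400, h_dim=64, z_dim=2):
        super().__init__()
        self.W1 = nn.Linear(x_dim, h_dim)      # hidden layer
        self.W2 = nn.Linear(h_dim, 2 * z_dim)  # stacked outputs: mu and log nu
        self.z_dim = z_dim

    def forward(self, x):
        h = torch.relu(self.W1(x))             # alpha(.) = ReLU
        out = self.W2(h)                       # linear output activation
        return out[..., :self.z_dim], out[..., self.z_dim:]   # mu, log nu
```

A sample from q can then be drawn with the reparameterization trick, z = µ + exp(log ν / 2) · ε with ε ~ N(0, I).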

The decoder defines p(X|Z = z), which we express as a Gaussian distribution,

    X ~ N( X ; µ̄(z), ω I )
    µ̄(z) = σ( V2 α(V1 z) )

where σ(·) is the logistic function, so that X is modeled as a two-layer neural network transformation of z, plus a fixed amount of Gaussian noise (variance ω) on the pixel values.
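A matching decoder sketch (again with placeholder names and sizes):

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Sketch of p(X|Z): mu_bar(z) = sigma(V2 alpha(V1 z))."""
    def __init__(self, z_dim=2, h_dim=64, x_dim=400):
        super().__init__()
        self.V1 = nn.Linear(z_dim, h_dim)
        self.V2 = nn.Linear(h_dim, x_dim)

    def forward(self, z):
        h = torch.relu(self.V1(z))        # alpha(.) = ReLU
        return torch.sigmoid(self.V2(h))  # sigma(.) keeps pixel means in (0, 1)
```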

The loss is given by the divergence between p̂(X) q(Z|X) and p(Z) p(X|Z), where we assume p(Z) is a basic unit Gaussian, and which we estimate by sampling from q(Z|X). Given samples {(x⁽ⁱ⁾, z⁽ⁱ⁾)}, our estimated loss is

    (1/m) Σᵢ [ ‖µ̄(z⁽ⁱ⁾) − x⁽ⁱ⁾‖² + ½ ( tr ν(x⁽ⁱ⁾) + ‖µ(x⁽ⁱ⁾)‖² − log det ν(x⁽ⁱ⁾) − 1 ) ]

i.e., the average squared reconstruction error plus the KL divergence from q(Z|X = x⁽ⁱ⁾) to the unit Gaussian p(Z).
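For a diagonal covariance, the trace and log-determinant reduce to sums over the per-dimension variances, so the loss estimate might be computed as follows (a sketch; the template's version may organize this differently):

```python
import torch

def vae_loss(x, mu_bar, mu, log_nu):
    """Squared reconstruction error plus the KL divergence from
    q(Z|X) = N(mu, diag(nu)) to the unit Gaussian p(Z), averaged over the batch."""
    recon = ((mu_bar - x) ** 2).sum(dim=1)             # ||mu_bar(z) - x||^2
    nu = torch.exp(log_nu)                             # variances from log-variances
    kl = 0.5 * (nu + mu ** 2 - log_nu - 1).sum(dim=1)  # per-dimension KL terms
    return (recon + kl).mean()
```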

Data: Our data consist of a small sample of hand images (mine, but inspired by Josh Tenenbaum's IsoMap experiment), located at

https://sli.ics.uci.edu/extras/cs179/data/frames.txt

Load the data, then shift and scale the values to be between zero and one:

data -= data.min(1, keepdims=True)   # shift each image (row) so its minimum is 0
data /= data.max(1, keepdims=True)   # scale so its maximum is 1
data = torch.tensor(data).float()    # convert the numpy array to a float tensor

Part 2: Train the model (30 points)

You have also been provided with a function train which computes the gradient on a mini-batch of data and updates the parameter values. Use this function to train your model:

from torch.optim import Adam

vae = VAE()                                   # model class provided in the template
optim = Adam(vae.parameters(), lr=0.0001)
train(vae, data, optim, batch=16, epochs=500)
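The train function is supplied by the template; for reference, a minimal loop of the same shape might look like the sketch below (illustrative only — the model here is assumed to compute its own loss via a loss method, which is not necessarily the template's interface):

```python
import torch

def train_sketch(model, data, optim, batch=16, epochs=500):
    """Illustrative mini-batch gradient loop; the template's train()
    additionally calls plot_scatter to visualize the latent space."""
    m = data.shape[0]
    for epoch in range(epochs):
        perm = torch.randperm(m)               # reshuffle each epoch
        for i in range(0, m, batch):
            x = data[perm[i:i + batch]]        # one mini-batch
            optim.zero_grad()
            loss = model.loss(x)               # assumed interface
            loss.backward()
            optim.step()
```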

Homework 7 UC Irvine 1/ 2


This may be a bit slow. The train function also calls another provided function, plot_scatter, which plots a scatter of images (preventing them from overlapping), so you can visualize the two-dimensional latent space Z being used to capture the variability in the images.

Part 3: Visualize reconstructions (30 points)

Note: if you are short on time, or want to compare your results, you can obtain my trained VAE at

https://sli.ics.uci.edu/extras/cs179/data/vae.pkl

and load it using pickle:

with open('vae.pkl', 'rb') as fh: vae = pickle.load(fh)

However, if you have trained your own in part (2), please use yours for this question as well.

(a) Select 6 random images from the data set. Encode and then decode each image, and show (imshow) the original image and the reconstructed image, µ̄(µ(x)). (Note: these will not look that great; the reconstruction network may be too simple to do a good job, or perhaps we just don't have enough data in this small data set. But they should be recognizable.)
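A possible sketch for part (a), assuming the VAE exposes encode and decode methods and that the images are square (both the method names and the image shape are guesses; substitute whatever the template defines):

```python
import matplotlib.pyplot as plt
import numpy as np
import torch

def show_reconstructions(vae, data, n=6, img_shape=(20, 20)):
    """Plot n random images (top row) and their reconstructions
    mu_bar(mu(x)) (bottom row)."""
    idx = np.random.choice(len(data), n, replace=False)
    fig, ax = plt.subplots(2, n, figsize=(2 * n, 4))
    with torch.no_grad():
        for j, i in enumerate(idx):
            x = data[i]
            mu, log_nu = vae.encode(x[None, :])   # assumed method name
            xr = vae.decode(mu)                   # assumed method name
            ax[0, j].imshow(x.reshape(img_shape), cmap='gray')
            ax[1, j].imshow(xr[0].reshape(img_shape), cmap='gray')
    plt.show()
    return fig
```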

(b) Now select 10 points along a linear path across the distribution (say, from z = (−3, 0) to z = (3, 0)). For each latent location, decode µ̄(z). Interpret the resulting sequence of images.
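Part (b) might be sketched like this (vae.decode and the image shape are again assumptions):

```python
import matplotlib.pyplot as plt
import torch

def show_traversal(vae, n=10, img_shape=(20, 20)):
    """Decode n evenly spaced latent points from z = (-3, 0) to z = (3, 0)."""
    zs = torch.stack([torch.linspace(-3, 3, n),
                      torch.zeros(n)], dim=1)   # path along the first latent axis
    fig, ax = plt.subplots(1, n, figsize=(2 * n, 2))
    with torch.no_grad():
        for j in range(n):
            x = vae.decode(zs[j:j + 1])         # assumed method name
            ax[j].imshow(x[0].reshape(img_shape), cmap='gray')
    plt.show()
    return zs
```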

Part 4: Work on your projects! (10 points)

Good luck with your projects, and your finals for any other classes!

