FS19 STT481: Homework 5

(Due: Wednesday, Dec. 4th, beginning of the class.)

100 points total

1. (20 pts) We now fit a GAM to predict Salary in the Hitters dataset.

First, we remove the observations for which the salary information is unknown, and then we split the data set into a training set and a test set using the following commands.

library(ISLR)
data("Hitters")
Hitters <- Hitters[!is.na(Hitters$Salary), ]
set.seed(10)
train <- sample(nrow(Hitters), 200)
Hitters.train <- Hitters[train, ]
Hitters.test <- Hitters[-train, ]

(a) Using log(Salary) (the log-transformation of Salary) as the response and the other variables as the predictors, perform forward stepwise selection on the training set in order to identify a satisfactory model that uses just a subset of the predictors.

(b) Fit a GAM on the training data, using log(Salary) as the response and the features selected in the previous step as the predictors. Plot the results, and explain your findings.

(c) Evaluate the model obtained on the test set. Try different tuning parameters (if you are using smoothing splines s(), try different values of df; if you are using local regression lo(), try different values of span) and explain the results obtained.

(d) For which variables, if any, is there evidence of a non-linear relationship with the response?
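One possible workflow for parts (a)–(c) is sketched below, using leaps::regsubsets() for forward selection and the gam package for the additive fit. The package choices, the BIC criterion, the df value, and the selected variables (CRuns, Hits, Walks) are all illustrative assumptions, not prescribed by the assignment.

```r
library(leaps)   # regsubsets() for forward stepwise selection
library(gam)     # gam() and s() for smoothing splines

# (a) Forward stepwise selection on the training set
fwd <- regsubsets(log(Salary) ~ ., data = Hitters.train,
                  nvmax = 19, method = "forward")
fwd.summary <- summary(fwd)
best.size <- which.min(fwd.summary$bic)  # one reasonable criterion
coef(fwd, best.size)                     # the selected predictors

# (b) Fit a GAM with smoothing splines on the selected features
# (suppose CRuns, Hits, and Walks were selected -- substitute yours)
gam.fit <- gam(log(Salary) ~ s(CRuns, df = 4) + s(Hits, df = 4) +
                 s(Walks, df = 4), data = Hitters.train)
par(mfrow = c(1, 3))
plot(gam.fit, se = TRUE)

# (c) Test MSE for a given choice of df; refit with other df values
pred <- predict(gam.fit, newdata = Hitters.test)
mean((log(Hitters.test$Salary) - pred)^2)
```

Repeating the last step with, say, df = 2 through 8 shows how the test MSE responds to the smoothness of each term.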

2. (40 pts) This question relates to the Credit data set. (Regression problem).

First, we split the data set into a training set and a test set using the following commands.

library(ISLR)
data("Credit")
set.seed(15)
Credit <- Credit[, -1]  # remove the ID column
train <- sample(nrow(Credit), 300)
Credit.train <- Credit[train, ]
Credit.test <- Credit[-train, ]

(a) Fit a tree to the training data, with Balance as the response and the other variables as predictors. Use the summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training MSE? How many terminal nodes does the tree have?

(b) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.

(c) Create a plot of the tree, and interpret the results.

(d) Predict the response on the test data. What is the test MSE?

(e) Apply the cv.tree() function to the training set in order to determine the optimal tree size.

(f) Produce a plot with tree size on the x-axis and cross-validated error on the y-axis.

(g) Which tree size corresponds to the lowest cross-validated error?

(h) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.

(i) Compare the training MSEs between the pruned and unpruned trees. Which is higher?

(j) Compare the test MSEs between the pruned and unpruned trees. Which is higher?
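Parts (a)–(j) can be sketched with the tree package; the calls below are one illustrative route (for a regression tree, cv.tree() uses deviance by default, which is proportional to the residual sum of squares).

```r
library(tree)

# (a)-(c) Fit and inspect a regression tree
tree.credit <- tree(Balance ~ ., data = Credit.train)
summary(tree.credit)  # residual mean deviance, number of terminal nodes
tree.credit           # detailed text output, one line per node
plot(tree.credit); text(tree.credit, pretty = 0)

# (d) Test MSE of the unpruned tree
pred <- predict(tree.credit, newdata = Credit.test)
mean((Credit.test$Balance - pred)^2)

# (e)-(g) Cross-validation to choose the tree size
cv.credit <- cv.tree(tree.credit)
plot(cv.credit$size, cv.credit$dev, type = "b",
     xlab = "Tree size", ylab = "CV error")
best <- cv.credit$size[which.min(cv.credit$dev)]

# (h)-(j) Prune to the chosen size and recompute both MSEs
pruned <- prune.tree(tree.credit, best = best)
pred.pruned <- predict(pruned, newdata = Credit.test)
mean((Credit.test$Balance - pred.pruned)^2)
```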

(k) Fit a bagging model to the training set with Balance as the response and the other variables as predictors. Use 1,000 trees (ntree = 1000). Use the importance() function to determine which variables are most important.

(l) Use the bagging model to predict the response on the test data. Compute the test MSE.

(m) Fit a random forest model to the training set with Balance as the response and the other variables as predictors. Use 1,000 trees (ntree = 1000). Use the importance() function to determine which variables are most important.

(n) Use the random forest to predict the response on the test data. Compute the test MSE.

(o) Fit a boosting model to the training set with Balance as the response and the other variables as predictors. Use 1,000 trees, and a shrinkage value of 0.01 (λ = 0.01). Which predictors appear to be the most important?

(p) Use the boosting model to predict the response on the test data. Compute the test MSE.

(q) Fit a GAM to the training set with Balance as the response and the other variables as predictors, and use the GAM to predict the response on the test data. Compute the test MSE.

(r) Compare the test MSEs between the unpruned trees, pruned trees, bagging, random forest, boosting, and GAM. Which performs the best?
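Parts (k)–(p) are commonly done with the randomForest and gbm packages; the sketch below treats bagging as a random forest with mtry equal to the number of predictors. The distribution choice and the use of the default mtry for the random forest are assumptions, and the GAM of part (q) would follow the pattern of problem 1.

```r
library(randomForest)
library(gbm)

p <- ncol(Credit.train) - 1   # number of predictors

# (k)-(l) Bagging is a random forest with mtry = p
bag.fit <- randomForest(Balance ~ ., data = Credit.train,
                        mtry = p, ntree = 1000, importance = TRUE)
importance(bag.fit)
mean((Credit.test$Balance - predict(bag.fit, Credit.test))^2)

# (m)-(n) Random forest with the default mtry (p/3 for regression)
rf.fit <- randomForest(Balance ~ ., data = Credit.train,
                       ntree = 1000, importance = TRUE)
importance(rf.fit)
mean((Credit.test$Balance - predict(rf.fit, Credit.test))^2)

# (o)-(p) Boosting with shrinkage lambda = 0.01
boost.fit <- gbm(Balance ~ ., data = Credit.train,
                 distribution = "gaussian", n.trees = 1000,
                 shrinkage = 0.01)
summary(boost.fit)   # relative influence of each predictor
pred.boost <- predict(boost.fit, Credit.test, n.trees = 1000)
mean((Credit.test$Balance - pred.boost)^2)
```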

3. (40 pts) This question relates to the OJ data set. (Classification problem).

First, we split the data set into a training set and a test set using the following commands.

library(ISLR)
data("OJ")
set.seed(10)
train <- sample(nrow(OJ), 800)
OJ.train <- OJ[train, ]
OJ.test <- OJ[-train, ]

(a) Fit a tree to the training data, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have?

(b) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.

(c) Create a plot of the tree, and interpret the results.

(d) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?

(e) Apply the cv.tree() function to the training set in order to determine the optimal tree size.

(f) Produce a plot with tree size on the x-axis and cross-validated classification error rate on the y-axis.

(g) Which tree size corresponds to the lowest cross-validated classification error rate?

(h) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.

(i) Compare the training error rates between the pruned and unpruned trees. Which is higher?

(j) Compare the test error rates between the pruned and unpruned trees. Which is higher?
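For the classification analogue of parts (a)–(j), cv.tree() should be given FUN = prune.misclass so that the classification error rate, rather than the deviance, guides the pruning; a sketch:

```r
library(tree)

# (a)-(c) Classification tree for Purchase
tree.oj <- tree(Purchase ~ ., data = OJ.train)
summary(tree.oj)    # misclassification error rate, terminal nodes
plot(tree.oj); text(tree.oj, pretty = 0)

# (d) Confusion matrix and test error rate
pred <- predict(tree.oj, OJ.test, type = "class")
table(OJ.test$Purchase, pred)
mean(pred != OJ.test$Purchase)

# (e)-(g) Cross-validation with misclassification as the loss
cv.oj <- cv.tree(tree.oj, FUN = prune.misclass)
plot(cv.oj$size, cv.oj$dev, type = "b",
     xlab = "Tree size", ylab = "CV classification errors")
best <- cv.oj$size[which.min(cv.oj$dev)]

# (h)-(j) Prune and compare the error rates
pruned <- prune.misclass(tree.oj, best = best)
mean(predict(pruned, OJ.test, type = "class") != OJ.test$Purchase)
```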

(k) Fit a bagging model to the training set with Purchase as the response and the other variables as predictors. Use 1,000 trees (ntree = 1000). Use the importance() function to determine which variables are most important.

(l) Use the bagging model to predict the response on the test data. Compute the test error rate.

(m) Fit a random forest model to the training set with Purchase as the response and the other variables as predictors. Use 1,000 trees (ntree = 1000). Use the importance() function to determine which variables are most important.

(n) Use the random forest to predict the response on the test data. Compute the test error rate.

(o) Fit a boosting model to the training set with Purchase as the response and the other variables as predictors. Use 1,000 trees, and a shrinkage value of 0.01 (λ = 0.01). Which predictors appear to be the most important?

(p) Use the boosting model to predict the response on the test data. Compute the test error rate.

(q) Fit a logistic regression to the training set with Purchase as the response and the other variables as predictors, and predict on the test data. Compute the test error rate.

(r) Rank the significance of the coefficients of the logistic regression. Is the result consistent with (k)?

(s) Compare the test error rates between the unpruned trees, pruned trees, bagging, random forest, boosting, and logistic regression. Which performs the best?
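Parts (k)–(r) can be sketched as below. Two details are assumptions worth flagging: gbm with the Bernoulli loss expects a 0/1 numeric response, so Purchase is recoded, and the 0.5 probability cutoff maps predictions back to the "MM" level (glm and gbm model the probability of the second factor level of Purchase).

```r
library(randomForest)
library(gbm)

p <- ncol(OJ.train) - 1   # number of predictors

# (k)-(l) Bagging: a random forest with mtry = p
bag.oj <- randomForest(Purchase ~ ., data = OJ.train,
                       mtry = p, ntree = 1000, importance = TRUE)
importance(bag.oj)
mean(predict(bag.oj, OJ.test) != OJ.test$Purchase)

# (m)-(n) Random forest with the default mtry (sqrt(p) for classification)
rf.oj <- randomForest(Purchase ~ ., data = OJ.train,
                      ntree = 1000, importance = TRUE)
importance(rf.oj)
mean(predict(rf.oj, OJ.test) != OJ.test$Purchase)

# (o)-(p) Boosting: recode the response to 0/1 for the Bernoulli loss
OJ.train.b <- transform(OJ.train, Purchase = as.numeric(Purchase == "MM"))
boost.oj <- gbm(Purchase ~ ., data = OJ.train.b,
                distribution = "bernoulli", n.trees = 1000,
                shrinkage = 0.01)
summary(boost.oj)   # relative influence of each predictor
prob <- predict(boost.oj, OJ.test, n.trees = 1000, type = "response")
pred.boost <- ifelse(prob > 0.5, "MM", "CH")
mean(pred.boost != OJ.test$Purchase)

# (q)-(r) Logistic regression; summary() gives p-values for ranking
glm.oj <- glm(Purchase ~ ., data = OJ.train, family = binomial)
prob.glm <- predict(glm.oj, OJ.test, type = "response")
pred.glm <- ifelse(prob.glm > 0.5, "MM", "CH")
mean(pred.glm != OJ.test$Purchase)
summary(glm.oj)
```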
