
Problem 1: Building a convolutional neural network (ConvNet) to classify images of fruits and vegetables into their respective classes. (5 points)

You are encouraged to use PyTorch, TensorFlow, or the Keras API, but any other deep learning library (in Python, Julia, or MATLAB) would be acceptable for this homework assignment.

Data: The Fruits-360 dataset contains 100×100 images for 131 different varieties of fruits and vegetables. This dataset, with 90,483 images, can be downloaded from Kaggle datasets (https://www.kaggle.com/moltean/fruits). The training and testing images can be extracted from the downloaded file. Please follow the steps below for data processing.

· TensorFlow and Keras: The tensorflow.keras.utils.image_dataset_from_directory utility from Keras generates batches of tensor image data and is capable of real-time data transformation. An image generator should be created for both the training and testing data, followed by an iterator that reads and processes the image data batch by batch. The generators should read all classes of fruit images located in the Training and Testing directories.

· PyTorch: Use torchvision.datasets.ImageFolder to load images from the Training and Testing directories.

· tensorflow.keras.preprocessing.image.ImageDataGenerator or torchvision.transforms.Compose can be employed for image normalization and data augmentation (optional).

· The original 100×100 images should be scaled down to 75×75 resolution with these generators.

· Divide the training set into training and validation sets, with 85% and 15% of the training images in each, respectively.

· The entire training and testing datasets should also be divided into mini-batches of size 1000 and shuffled using a seed value of 42.

· The training and testing data generators are used to train and evaluate the ConvNet (a minimal data-loading sketch follows this list).
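
As a reference for the TensorFlow/Keras route, here is a minimal data-loading sketch. It assumes the Kaggle archive has been extracted to local fruits-360/Training and fruits-360/Test directories (hypothetical paths) and uses tf.keras.utils.image_dataset_from_directory with the 75×75 resolution, 85/15 split, batch size 1000, and seed 42 stated above; one-hot labels are chosen here to pair with categorical cross-entropy later.

```python
import tensorflow as tf

# Hypothetical local paths to the extracted Fruits-360 archive (assumption).
TRAIN_DIR = "fruits-360/Training"
TEST_DIR = "fruits-360/Test"

# 85% training / 15% validation split of the Training folder,
# images resized from 100x100 to 75x75, shuffled with seed 42, batches of 1000.
train_ds = tf.keras.utils.image_dataset_from_directory(
    TRAIN_DIR,
    validation_split=0.15,
    subset="training",
    seed=42,
    image_size=(75, 75),
    batch_size=1000,
    label_mode="categorical",
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    TRAIN_DIR,
    validation_split=0.15,
    subset="validation",
    seed=42,
    image_size=(75, 75),
    batch_size=1000,
    label_mode="categorical",
)
test_ds = tf.keras.utils.image_dataset_from_directory(
    TEST_DIR,
    seed=42,
    image_size=(75, 75),
    batch_size=1000,
    label_mode="categorical",
)

# Optional normalization of pixel values to [0, 1].
normalize = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (normalize(x), y))
val_ds = val_ds.map(lambda x, y: (normalize(x), y))
test_ds = test_ds.map(lambda x, y: (normalize(x), y))
```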

Architecture: Define a Sequential model, wherein the layers are stacked sequentially and each layer has exactly one input tensor and one output tensor. Please build a ConvNet by adding the layers to the Sequential model using the configuration below. For each of the layers, initialize the kernel weights from a Glorot uniform distribution with the random seed set to 99, and initialize the bias vector as a zero vector. In this architecture, you may use different dropout values [0.1, 0.3, 0.5] and report the impact of the dropout value on model performance. (A minimal Keras sketch of this stack is given after the layer list.)

· Conv2D: Filters: 64, Kernel size: (3, 3), Strides: (1, 1), Padding: no padding, Activation: ReLU

· MaxPooling2D: Pool size: (2, 2), Strides: None, Padding: no padding

· Conv2D: Filters: 128, Kernel size: (3, 3), Strides: (1, 1), Padding: no padding, Activation: ReLU

· BatchNormalization: Momentum: 0.99, Epsilon: 0.001

· Dropout: Rate: [0.1, 0.3, 0.5]

· MaxPooling2D: Pool size: (2, 2), Strides: None, Padding: no padding

· Flatten

· Dense: Units: 256, Activation: ReLU

· Dense: Units: 131, Activation: Softmax
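
The following is a minimal Keras sketch of the stack above, assuming 75×75 RGB inputs and Glorot uniform initialization seeded with 99; the function name build_convnet and the default dropout rate are illustrative, not prescribed.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_convnet(dropout_rate=0.3, num_classes=131):
    """Two-convolution Sequential ConvNet for 75x75 RGB inputs (a sketch)."""
    init = keras.initializers.GlorotUniform(seed=99)  # kernel init, seed 99
    return keras.Sequential([
        keras.Input(shape=(75, 75, 3)),
        # Conv2D defaults: strides=(1, 1), padding="valid" (no padding).
        layers.Conv2D(64, (3, 3), activation="relu",
                      kernel_initializer=init, bias_initializer="zeros"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu",
                      kernel_initializer=init, bias_initializer="zeros"),
        layers.BatchNormalization(momentum=0.99, epsilon=0.001),
        layers.Dropout(dropout_rate),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dense(256, activation="relu",
                     kernel_initializer=init, bias_initializer="zeros"),
        layers.Dense(num_classes, activation="softmax",
                     kernel_initializer=init, bias_initializer="zeros"),
    ])
```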

The performance of the CNN model is notably impacted by the number of convolutional layers it employs. In the preceding design, two convolutional layers were integrated. Kindly introduce an additional convolutional layer (as depicted in the updated architecture below) and elaborate on the roles of convolutional layers (a sketch of the deeper stack follows the layer list).

· Conv2D: Filters: 64, Kernel size: (3, 3), Strides: (1, 1), Padding: no padding, Activation: ReLU

· MaxPooling2D: Pool size: (2, 2), Strides: None, Padding: no padding

· Conv2D: Filters: 128, Kernel size: (3, 3), Strides: (1, 1), Padding: no padding, Activation: ReLU

· MaxPooling2D: Pool size: (2, 2), Strides: None, Padding: no padding

· Conv2D: Filters: 256, Kernel size: (3, 3), Strides: (1, 1), Padding: no padding, Activation: ReLU

· BatchNormalization: Momentum: 0.99, Epsilon: 0.001

· Dropout: Rate: 0.3

· MaxPooling2D: Pool size: (2, 2), Strides: None, Padding: no padding

· Flatten

· Dense: Units: 512, Activation: ReLU

· Dense: Units: 131, Activation: Softmax
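
For reference, a sketch of the deeper, three-convolution variant under the same assumptions as the earlier sketch (75×75 RGB inputs, seed 99 Glorot initialization); build_deeper_convnet is an illustrative name.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_deeper_convnet(dropout_rate=0.3, num_classes=131):
    """Three-convolution Sequential ConvNet variant (a sketch)."""
    init = keras.initializers.GlorotUniform(seed=99)
    return keras.Sequential([
        keras.Input(shape=(75, 75, 3)),
        layers.Conv2D(64, (3, 3), activation="relu",
                      kernel_initializer=init, bias_initializer="zeros"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu",
                      kernel_initializer=init, bias_initializer="zeros"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(256, (3, 3), activation="relu",
                      kernel_initializer=init, bias_initializer="zeros"),
        layers.BatchNormalization(momentum=0.99, epsilon=0.001),
        layers.Dropout(dropout_rate),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(512, activation="relu",
                     kernel_initializer=init, bias_initializer="zeros"),
        layers.Dense(num_classes, activation="softmax",
                     kernel_initializer=init, bias_initializer="zeros"),
    ])
```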

Training: The model is compiled by specifying the optimizer, the loss function, and the metrics to be recorded at each step of the training process. The Adam optimizer should minimize the categorical cross-entropy loss. The ConvNet model can be trained and evaluated with the previously created data generators. The training step size can be calculated by dividing the number of images in the generator by the batch size, for the training and testing data respectively.
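
A minimal compile-and-fit sketch, assuming the datasets and build_convnet from the earlier sketches; with tf.data datasets the step counts are inferred automatically, otherwise steps_per_epoch can be passed as the number of images divided by the batch size.

```python
from tensorflow import keras

model = build_convnet(dropout_rate=0.3)  # from the architecture sketch above
model.compile(
    optimizer=keras.optimizers.Adam(),    # Adam optimizer
    loss="categorical_crossentropy",      # categorical cross-entropy
    metrics=["accuracy"],
)
history = model.fit(
    train_ds,                  # 85% training split
    validation_data=val_ds,    # 15% validation split
    epochs=50,                 # or 20 if training is time consuming
)
test_loss, test_acc = model.evaluate(test_ds)
print(f"Test accuracy: {test_acc:.4f}")
```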

Deliverables: Please report the training and validation accuracy after the training process has been carried out for 50 epochs (you can train for 20 epochs if training is time consuming), in addition to the accuracy achieved on the test dataset. Also, plot the loss curves for both the training and validation datasets. Discuss the effects of the dropout value and the number of convolutional layers on the CNN model's performance. Please make sure to submit your working code files along with the final results and the plots.
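
One way to produce the requested loss curves, assuming matplotlib is available and using the history object returned by fit() in the training sketch above:

```python
import matplotlib.pyplot as plt

# Plot training vs. validation loss from the History object returned by fit().
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("Categorical cross-entropy loss")
plt.legend()
plt.savefig("loss_curves.png")
```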

Bonus (+1): A skip connection in a neural network is a connection that skips one or more layers and connects to a later layer. Residual Networks (ResNets) have popularized the use of skip connections to address the vanishing gradient problem, thereby enabling the training of deeper networks. Your task for this bonus part is to integrate such a skip connection; any type of skip connection is acceptable, for instance, linking the output of the first convolutional layer directly to the input of the last convolutional layer in your model architecture. Based on your results, analyze and discuss any improvements or effects this change has on the model's performance.
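
As one possible realization of the bonus, the sketch below rebuilds the three-convolution model with the Keras Functional API and concatenates a resized copy of the first convolution's output onto the input of the last convolutional layer. The Resizing-plus-Concatenate design and the name build_skip_convnet are illustrative choices, not prescribed by the assignment; an additive (ResNet-style) skip with a 1×1 projection would also qualify.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_skip_convnet(dropout_rate=0.3, num_classes=131):
    """Three-convolution ConvNet with one skip connection (a sketch)."""
    init = keras.initializers.GlorotUniform(seed=99)
    inputs = keras.Input(shape=(75, 75, 3))

    x1 = layers.Conv2D(64, (3, 3), activation="relu",
                       kernel_initializer=init, bias_initializer="zeros")(inputs)
    p1 = layers.MaxPooling2D((2, 2))(x1)

    x2 = layers.Conv2D(128, (3, 3), activation="relu",
                       kernel_initializer=init, bias_initializer="zeros")(p1)
    p2 = layers.MaxPooling2D((2, 2))(x2)

    # Skip connection: downsample the first conv output to match p2's
    # spatial size, then concatenate along the channel axis.
    skip = layers.Resizing(p2.shape[1], p2.shape[2])(x1)
    merged = layers.Concatenate()([p2, skip])

    x3 = layers.Conv2D(256, (3, 3), activation="relu",
                       kernel_initializer=init, bias_initializer="zeros")(merged)
    x3 = layers.BatchNormalization(momentum=0.99, epsilon=0.001)(x3)
    x3 = layers.Dropout(dropout_rate)(x3)
    p3 = layers.MaxPooling2D((2, 2))(x3)

    out = layers.Flatten()(p3)
    out = layers.Dense(512, activation="relu",
                       kernel_initializer=init, bias_initializer="zeros")(out)
    outputs = layers.Dense(num_classes, activation="softmax",
                           kernel_initializer=init, bias_initializer="zeros")(out)
    return keras.Model(inputs, outputs)
```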

