Problem 2: Sentiment Analysis using ConvNets (5 points)
ConvNets, while renowned for their prowess in image processing, have also demonstrated strong capabilities in handling sequential data such as text. In this problem, you will apply these CNN principles to a classic problem in natural language processing: sentiment analysis.
Data:
Tensorflow and Keras: The IMDB dataset, provided by Keras, contains movie reviews that are labeled as positive or negative. The dataset comprises 50,000 reviews split evenly into 25,000 for training and 25,000 for testing. The data can be loaded by importing imdb from tensorflow.keras.datasets and calling the imdb.load_data() method with a vocabulary size of 2000.
PyTorch: Use torchtext.datasets.IMDB() from the TorchText library to load the IMDB dataset (which contains movie reviews labeled as positive or negative), and then use the appropriate TorchText processing functions for tokenization and numericalization with a vocabulary size of 2000.
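For the Keras route, loading the data is a single call; a minimal sketch (the 2000-word vocabulary cap is the num_words argument, which maps rarer words to an out-of-vocabulary index):

```python
from tensorflow.keras.datasets import imdb

# Keep only the 2000 most frequent words; each review becomes a list of
# word indexes, and each label is 0 (negative) or 1 (positive).
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=2000)

print(len(x_train), len(x_test))  # 25000 25000
```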
Processing:
Each review in the dataset is already pre-processed and encoded as a sequence of word indexes. A mapping between words and their corresponding indexes is provided by the imdb.get_word_index() method.
For consistent input to the model, your task is to pad or truncate the reviews to a uniform length of 300 words. This can be achieved with the pad_sequences method from Keras, using its maxlen argument.
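A short sketch of the padding step, shown here on toy sequences so the behavior is easy to inspect (in the assignment you would pass x_train and x_test with maxlen=300 instead):

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Toy "reviews" of unequal length, encoded as word indexes.
reviews = [[5, 25, 100], [7, 2, 9, 14, 3]]

# Short sequences are padded with zeros (at the front by default);
# long sequences are truncated (also from the front by default).
padded = pad_sequences(reviews, maxlen=4)
print(padded.shape)  # (2, 4)
```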
Architecture:
The architecture of the convolutional neural network model for this problem is as follows:
1. Embedding Layer:
Input Vocabulary Size: 2000 words
Embedding Dimension: 16
Input Length: 300 words
2. Conv1D Layer:
Filters: 128
Kernel Size: 3
Activation: ReLU
Stride: 1
Padding: Valid
3. GlobalMaxPooling1D Layer
4. Dense Layer:
Units: 1
Activation: Sigmoid
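The four layers above translate almost line for line into a Keras Sequential model; a sketch (all layer arguments follow the specification above):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    # 2000-word vocabulary mapped to 16-dimensional embeddings.
    layers.Embedding(input_dim=2000, output_dim=16),
    # 128 filters of width 3, stride 1, no padding ('valid'), ReLU.
    layers.Conv1D(128, kernel_size=3, strides=1, padding="valid",
                  activation="relu"),
    # Collapse the time axis: keep each filter's maximum activation.
    layers.GlobalMaxPooling1D(),
    # Single sigmoid unit for the binary positive/negative label.
    layers.Dense(1, activation="sigmoid"),
])

# Build for 300-word inputs and print the layer/parameter summary.
model.build(input_shape=(None, 300))
model.summary()
```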
Training:
The model should be compiled with 'binary_crossentropy' as the loss function and the 'adam' optimizer. Additionally, 'accuracy' should be assigned as the main metric. A subset of the training data (1,000 samples) should be set aside as a validation set, while the rest should be used for training. The model should be trained for a total of 30 (or 10) epochs, with a batch size of 32. After training, the model should be evaluated on the test data to obtain the final accuracy score. This gives a measure of how well the model generalizes to unseen reviews.
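The compile/fit step can be sketched as follows. To keep the snippet self-contained and fast, it uses synthetic random data of the same shape as the padded IMDB reviews and runs only 2 epochs; in your submission, substitute the real padded x_train/y_train and the epoch count from the assignment:

```python
import numpy as np
from tensorflow.keras import layers, models

# Synthetic stand-in for the padded IMDB data (real shapes: (N, 300)).
rng = np.random.default_rng(0)
x = rng.integers(0, 2000, size=(2000, 300))
y = rng.integers(0, 2, size=(2000,)).astype("float32")

model = models.Sequential([
    layers.Embedding(2000, 16),
    layers.Conv1D(128, 3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])

# Hold out the last 1000 samples for validation; fit() shuffles the
# training portion each epoch by default.
history = model.fit(x[:-1000], y[:-1000],
                    validation_data=(x[-1000:], y[-1000:]),
                    epochs=2, batch_size=32)  # 30 (or 10) in the assignment
```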
Visualization:
Plot the accuracy and loss for both training and validation datasets across epochs to analyze the performance of the model over epochs.
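A plotting sketch for the curves. The history dict below is a hypothetical stand-in for model.fit(...).history, so the snippet stands alone; with a real run, pass history.history instead:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so this runs without a display
import matplotlib.pyplot as plt

# Stand-in for model.fit(...).history; real values come from training.
history = {"accuracy": [0.60, 0.80, 0.90], "val_accuracy": [0.58, 0.74, 0.80],
           "loss": [0.65, 0.45, 0.30], "val_loss": [0.66, 0.52, 0.46]}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
for key in ("accuracy", "val_accuracy"):
    ax1.plot(history[key], label=key)
ax1.set(xlabel="epoch", ylabel="accuracy", title="Accuracy")
ax1.legend()
for key in ("loss", "val_loss"):
    ax2.plot(history[key], label=key)
ax2.set(xlabel="epoch", ylabel="loss", title="Loss")
ax2.legend()
fig.savefig("curves.png")
```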
Deliverables:
1. Model Accuracy and Loss Curves: A detailed report of the performance of the model, focusing on the accuracy and loss curves.
2. Analysis of Model Performance: A thorough analysis of the results obtained from the model. This analysis should include (1) whether the model overfits or underfits the training data, and (2) an examination of the loss and accuracy curves to identify potential indicators of the model's behavior (such as plateaus or sharp changes).
3. Code and Resources: Please make sure to submit your working code files along with the final results and the plots.
4. Bonus (+1) Model Optimization: Consider experimenting with other architectures or hyperparameters to further optimize the model's performance. Discuss the outcomes of your experiments and the effect of different parameters on the accuracy and loss.
Note: Ensure a proper split between the training and validation sets, and make sure to shuffle the data before training to ensure a random distribution.