Assignment 3 Description
In this assignment you will implement recurrent networks, and apply them to image captioning on Microsoft COCO. We will also introduce the TinyImageNet dataset, and use a pretrained model on this dataset to explore different applications of image gradients.
The goals of this assignment are as follows:
- Understand the architecture of recurrent neural networks (RNNs) and how they operate on sequences by sharing weights over time (see the sketch after this list)
- Understand the difference between vanilla RNNs and Long Short-Term Memory (LSTM) RNNs
- Understand how to sample from an RNN at test-time
- Understand how to combine convolutional neural nets and recurrent nets to implement an image captioning system
- Understand how a trained convolutional network can be used to compute gradients with respect to the input image
- Implement different applications of image gradients, including saliency maps, fooling images, class visualizations, feature inversion, and DeepDream
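To make the first goal concrete, here is a minimal sketch of a vanilla RNN step and of a forward pass over a sequence. All names and shapes here are illustrative assumptions, not the assignment's actual API; the notebooks define their own function signatures.

import numpy as np

def rnn_step(x, h_prev, Wx, Wh, b):
    # One timestep: the next hidden state is a function of the current
    # input x (N, D) and the previous hidden state h_prev (N, H).
    return np.tanh(x.dot(Wx) + h_prev.dot(Wh) + b)

def rnn_forward(xs, h0, Wx, Wh, b):
    # The same weights Wx, Wh, b are reused at every timestep; this
    # weight sharing is what lets an RNN handle sequences of any length.
    h, hs = h0, []
    for x in xs:                      # xs: one (N, D) input per timestep
        h = rnn_step(x, h, Wx, Wh, b)
        hs.append(h)
    return hs                         # list of (N, H) hidden states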
Setup
Get the code as a zip file here. As for the dependencies:
[Option 1] Use Anaconda: The preferred approach for installing all the assignment dependencies is to use Anaconda, a Python distribution that includes many of the most popular Python packages for science, math, engineering, and data analysis. Once you have installed it you can skip all mentions of requirements and go directly to working on the assignment.
[Option 2] Manual install, virtual environment: If you do not want to use Anaconda and prefer a more manual (and riskier) installation route, you will likely want to create a virtual environment for the project. If you choose not to use a virtual environment, it is up to you to make sure that all dependencies for the code are installed globally on your machine. To set up a virtual environment, run the following:
cd assignment3
sudo pip install virtualenv # This may already be installed
virtualenv .env # Create a virtual environment
source .env/bin/activate # Activate the virtual environment
pip install -r requirements.txt # Install dependencies
# Work on the assignment for a while ...
deactivate # Exit the virtual environment
Download data: Once you have the starter code, you will need to download the processed MS-COCO dataset, the TinyImageNet dataset, and the pretrained TinyImageNet model. Run the following from the assignment3 directory:
cd cs294_129/datasets
./get_coco_captioning.sh
./get_tiny_imagenet_a.sh
./get_pretrained_model.sh
Compile the Cython extension: Convolutional neural networks require a very efficient implementation. We have implemented some of the functionality using Cython; you will need to compile the Cython extension before you can run the code. From the cs294_129 directory, run the following command:
python setup.py build_ext --inplace
Start IPython: After you have the data, you should start the IPython notebook server from the assignment3 directory. If you are unfamiliar with IPython, you should read our IPython tutorial.
NOTE: If you are working in a virtual environment on OSX, you may encounter errors with matplotlib due to the issues described here. You can work around this issue by starting the IPython server using the start_ipython_osx.sh script from the assignment3 directory; the script assumes that your virtual environment is named .env.
Q1: Image Captioning with Vanilla RNNs (40 points)
The IPython notebook RNN_Captioning.ipynb will walk you through the implementation of an image captioning system on MS-COCO using vanilla recurrent networks.
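Part of the notebook is test-time sampling: since there is no ground-truth caption at test time, the network feeds each predicted word back in as the next input. The sketch below shows the idea with greedy decoding; the parameter names, token indices, and helper structure are assumptions for illustration, not the notebook's actual API.

START_TOKEN, END_TOKEN = 1, 2   # hypothetical vocabulary indices

def sample_caption(features, params, max_length=30):
    # Project image features (D,) to the initial hidden state (H,).
    h = features.dot(params['W_proj']) + params['b_proj']
    word, caption = START_TOKEN, []
    for _ in range(max_length):
        x = params['W_embed'][word]    # embed the previous word
        h = np.tanh(x.dot(params['Wx']) + h.dot(params['Wh']) + params['b'])
        scores = h.dot(params['W_vocab']) + params['b_vocab']
        word = int(np.argmax(scores))  # greedy: pick the most likely word
        caption.append(word)
        if word == END_TOKEN:
            break
    return caption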
Q2: Image Captioning with LSTMs (35 points)
The IPython notebook LSTM_Captioning.ipynb will walk you through the implementation of Long Short-Term Memory (LSTM) RNNs and apply them to image captioning on MS-COCO.
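The key difference from the vanilla RNN is that an LSTM carries an extra cell state and uses multiplicative gates to control how information flows through time, which mitigates vanishing gradients. A sketch of a single step, assuming the four gates are computed from one (N, 4H) pre-activation as in many common implementations:

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, Wx, Wh, b):
    H = h_prev.shape[1]
    a = x.dot(Wx) + h_prev.dot(Wh) + b   # (N, 4H): all gate pre-activations
    i = sigmoid(a[:, :H])                # input gate
    f = sigmoid(a[:, H:2*H])             # forget gate
    o = sigmoid(a[:, 2*H:3*H])           # output gate
    g = np.tanh(a[:, 3*H:])              # candidate cell values
    c = f * c_prev + i * g               # new cell state
    h = o * np.tanh(c)                   # new hidden state
    return h, c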
Q3: Image Gradients: Saliency maps and Fooling Images (10 points)
The IPython notebook ImageGradients.ipynb will introduce the TinyImageNet dataset. You will use a pretrained model on this dataset to compute gradients with respect to the image, and use them to produce saliency maps and fooling images.
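Once you can compute the gradient dX of a class score with respect to the input image, both applications are short. For instance, a saliency map is just the per-pixel magnitude of that gradient; a sketch assuming a (3, H, W) image layout:

def saliency_map(dX):
    # dX: (3, H, W) gradient of the correct class score w.r.t. the image.
    # Take absolute values and the max over color channels to get one
    # importance value per pixel.
    return np.abs(dX).max(axis=0)        # (H, W)

A fooling image follows the same recipe in reverse: repeatedly nudge the image along the gradient of a wrong class score until the model misclassifies it.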
Q4: Image Generation: Classes, Inversion, DeepDream (15 points)
In the IPython notebook ImageGeneration.ipynb you will use the pretrained TinyImageNet model to generate images. In particular, you will generate class visualizations and implement feature inversion and DeepDream.
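All three techniques are variants of gradient ascent on the image itself. As one example, class visualization starts from noise and repeatedly steps the image in the direction that increases a target class score, with L2 regularization so the image stays bounded. This is only a sketch: compute_image_gradient is a hypothetical helper standing in for whatever the notebook provides, and the hyperparameter values are made up.

num_iterations, learning_rate, l2_reg, target_y = 100, 10.0, 1e-3, 5
X = np.random.randn(1, 3, 64, 64)        # start from random noise
for t in range(num_iterations):
    # dscore[target_y] / dX via a backward pass through the pretrained
    # model (hypothetical helper; use the notebook's own gradient code).
    dX = compute_image_gradient(model, X, target_y)
    dX -= 2 * l2_reg * X                 # gradient of -l2_reg * ||X||^2
    X += learning_rate * dX              # ascent step on the image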
Q5: Do something extra! (up to +10 points)
Given the components of the assignment, try to do something cool. Maybe there is some way to generate images that we did not implement in the assignment?
Check your Submission
You can check your submission on a standard Python installation in VirtualBox. First download and install VirtualBox from here. Then grab this zip file, save it, and unzip it.
- Open VirtualBox and click the 'New' button.
- Select the following options in the VM creation wizard that appears:
- Name and operating system
  - Type: Linux
  - Version: Ubuntu (64-bit)
  - Memory size: at least 1024 MB, preferably half the physical memory on your machine.
- Hard drive
  - Use an existing virtual drive file; select the disk image (.vdi file) you unzipped.
- CPU cores
  - Under Settings → System → Processor, allocate half the machine's cores to the virtual machine.
Then you can start your virtual machine and test the assignment inside it. You can mount directories from your host machine or use the network to copy the assignment into the VM. The account name is "deep" and the password is "deep". The account has sudo access. Sorry, only Python 2.7 is supported for now.