{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "AeF9mG-COZNE"
},
"source": [
"## Loading DNN model\n",
"In this notebook we are going to use a [GoogLeNet](https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet) model trained on [ImageNet](http://www.image-net.org/) dataset, which is set in the ```model_path``` variable:\n"
]
},
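{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next cell is a minimal sketch of how the Python-side dependencies might be installed with `pip` (package names vary by environment; `Pillow` stands in for PIL here). Caffe itself is not installed this way and must be built following the installation instructions linked above.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hedged sketch: install only the Python-side dependencies.\n",
"# Caffe must be built separately (see the installation instructions above).\n",
"!pip install numpy scipy Pillow ipython protobuf"
]
},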
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "RMhGdYHuOZM8"
},
"source": [
"# Deep Dreams (with Caffe) adapted by marina von steinkirch\n",
"\n",
"This notebook demonstrates how to use the [Caffe](http://caffe.berkeleyvision.org/) neural network framework to \n",
"produce \"dream\" visuals shown in the \n",
"[Google Research blog post](http://googleresearch.blogspot.ch/2015/06/inceptionism-going-deeper-into-neural.html). **#deepdream**\n",
"\n",
"## Dependencies\n",
"\n",
"* Standard Python scientific stack: [NumPy](http://www.numpy.org/), [SciPy](http://www.scipy.org/), [PIL](http://www.pythonware.com/products/pil/), [IPython](http://ipython.org/). \n",
"* [Caffe](http://caffe.berkeleyvision.org/) deep learning framework ([installation instructions](http://caffe.berkeleyvision.org/installation.html)).\n",
"* Google [protobuf](https://developers.google.com/protocol-buffers/) library that is used for Caffe model manipulation.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab_type": "code",
"collapsed": false,
"id": "Pqz5k4syOZNA"
},
"outputs": [],
"source": [
"import os\n",
"import numpy as np\n",
"import scipy.ndimage as nd\n",
"import PIL.Image\n",
"from cStringIO import StringIO\n",
"from IPython.display import clear_output, Image, display\n",
"from google.protobuf import text_format\n",
"\n",
"import caffe\n",
"\n",
"# GPU support for CUDA and Caffe.\n",
"caffe.set_mode_gpu()\n",
"# Select GPU device if multiple devices exist.\n",
"caffe.set_device(0)\n",
"\n",
"# Set the background image to build the dream on top of it.\n",
"BACKGROUND_IMG = 'd2.jpg'\n",
"\n",
"# Set the image to control the dream on.\n",
"CONTROL_IMAGE = 'd3.jpg'\n",
"\n",
"def showarray(a, fmt='jpeg'):\n",
" \"\"\" Display the image in the notebook. \"\"\"\n",
" a = np.uint8(np.clip(a, 0, 255))\n",
" f = StringIO()\n",
" PIL.Image.fromarray(a).save(f, fmt)\n",
" display(Image(data=f.getvalue()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab_type": "code",
"collapsed": false,
"id": "i9hkSm1IOZNR"
},
"outputs": [],
"source": [
"model_path = '../models/bvlc_googlenet/'\n",
"net_fn = os.path.join(model_path, 'deploy.prototxt')\n",
"param_fn = os.path.join(model_path, 'bvlc_googlenet.caffemodel')\n",
"\n",
"# Patching model to be able to compute gradients.\n",
"# Note that you can also manually add \"force_backward: true\" line to \"deploy.prototxt\".\n",
"model = caffe.io.caffe_pb2.NetParameter()\n",
"text_format.Merge(open(net_fn).read(), model)\n",
"model.force_backward = True\n",
"open('tmp.prototxt', 'w').write(str(model))\n",
"\n",
"net = caffe.Classifier('tmp.prototxt', param_fn,\n",
" mean = np.float32([104.0, 116.0, 122.0]), # ImageNet mean, training set dependent\n",
" channel_swap = (2,1,0)) # the reference model has channels in BGR order instead of RGB\n",
"\n",
"# A couple of utility functions for converting to and from Caffe's input image layout.\n",
"def preprocess(net, img):\n",
" return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data']\n",
"\n",
"def deprocess(net, img):\n",
" return np.dstack((img + net.transformer.mean['data'])[::-1])"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "UeV_fJ4QOZNb"
},
"source": [
"## Producing dreams"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "9udrp3efOZNd"
},
"source": [
"The \"dream\" images is just a gradient ascent process that tries to maximize the **L2** norm of activations of a particular DNN layer. \n",
"\n",
"Here are a few simple tricks that we found useful for getting good images:\n",
"\n",
"* offset image by a random jitter,\n",
"* normalize the magnitude of gradient ascent steps,\n",
"* apply ascent across multiple scales (octaves),\n",
"\n",
"First, we implement a basic gradient ascent step function, applying the first two tricks:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab_type": "code",
"collapsed": false,
"id": "pN43nMsHOZNg"
},
"outputs": [],
"source": [
"def objective_L2(dst):\n",
" dst.diff[:] = dst.data \n",
"\n",
"def make_step(net, step_size=1.5, end='inception_4c/output', \n",
" jitter=32, clip=True, objective=objective_L2):\n",
" '''Basic gradient ascent step.'''\n",
"\n",
" src = net.blobs['data'] # input image is stored in Net's 'data' blob\n",
" dst = net.blobs[end]\n",
"\n",
" ox, oy = np.random.randint(-jitter, jitter+1, 2)\n",
" src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift\n",
" \n",
" net.forward(end=end)\n",
" objective(dst) # specify the optimization objective\n",
" net.backward(start=end)\n",
" g = src.diff[0]\n",
" # apply normalized ascent step to the input image\n",
" src.data[:] += step_size/np.abs(g).mean() * g\n",
"\n",
" src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image\n",
" \n",
" if clip:\n",
" bias = net.transformer.mean['data']\n",
" src.data[:] = np.clip(src.data, -bias, 255-bias) "
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "nphEdlBgOZNk"
},
"source": [
"Next we implement an ascent through different scales. We call these scales \"octaves\"."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab_type": "code",
"collapsed": false,
"id": "ZpFIn8l0OZNq"
},
"outputs": [],
"source": [
"def deepdream(net, base_img, iter_n=10, octave_n=4, octave_scale=1.4, \n",
" end='inception_4c/output', clip=True, **step_params):\n",
" \n",
" # prepare base images for all octaves\n",
" octaves = [preprocess(net, base_img)]\n",
" for i in xrange(octave_n-1):\n",
" octaves.append(nd.zoom(octaves[-1], (1, 1.0/octave_scale,1.0/octave_scale), order=1))\n",
" \n",
" src = net.blobs['data']\n",
" detail = np.zeros_like(octaves[-1]) # allocate image for network-produced details\n",
" for octave, octave_base in enumerate(octaves[::-1]):\n",
" h, w = octave_base.shape[-2:]\n",
" if octave > 0:\n",
" # upscale details from the previous octave\n",
" h1, w1 = detail.shape[-2:]\n",
" detail = nd.zoom(detail, (1, 1.0*h/h1,1.0*w/w1), order=1)\n",
"\n",
" src.reshape(1,3,h,w) # resize the network's input image size\n",
" src.data[0] = octave_base+detail\n",
" for i in xrange(iter_n):\n",
" make_step(net, end=end, clip=clip, **step_params)\n",
" \n",
" # visualization\n",
" vis = deprocess(net, src.data[0])\n",
" if not clip: # adjust image contrast if clipping is disabled\n",
" vis = vis*(255.0/np.percentile(vis, 99.98))\n",
" showarray(vis)\n",
" print octave, i, end, vis.shape\n",
" clear_output(wait=True)\n",
" \n",
" # extract details produced on the current octave\n",
" detail = src.data[0]-octave_base\n",
" # returning the resulting image\n",
" return deprocess(net, src.data[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "QrcdU-lmOZNx"
},
"source": [
"Now we are ready to let the neural network reveal its dreams! Let's take a Dali image as a starting point:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab_type": "code",
"collapsed": false,
"executionInfo": null,
"id": "40p5AqqwOZN5",
"outputId": "f62cde37-79e8-420a-e448-3b9b48ee1730",
"pinned": false
},
"outputs": [],
"source": [
"img = np.float32(PIL.Image.open(BACKGROUND_IMG))\n",
"showarray(img)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "Z9_215_GOZOL"
},
"source": [
"Running the next code cell starts the detail generation process. You may see how new patterns start to form, iteration by iteration, octave by octave."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab_type": "code",
"collapsed": false,
"executionInfo": null,
"id": "HlnVnDTlOZOL",
"outputId": "425dfc83-b474-4a69-8386-30d86361bbf6",
"pinned": false
},
"outputs": [],
"source": [
"_=deepdream(net, img)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "Rp9kOCQTOZOQ"
},
"source": [
"The complexity of the details generated depends on which layer's activations we try to maximize. Higher layers produce complex features, while lower ones enhance edges and textures, giving the image an impressionist feeling:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab_type": "code",
"collapsed": false,
"executionInfo": null,
"id": "eHOX0t93OZOR",
"outputId": "0de0381c-4681-4619-912f-9b6a2cdec0c6",
"pinned": false
},
"outputs": [],
"source": [
"_=deepdream(net, img, end='inception_3b/5x5_reduce')"
]
},
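{
"cell_type": "markdown",
"metadata": {},
"source": [
"For comparison, the sketch below targets a deeper layer. `inception_4d/output` is one of the later inception outputs in GoogLeNet (run `net.blobs.keys()` below to see the exact names available in your model); deeper layers tend to produce more complex, object-like patterns.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hedged example: a deeper layer than the default 'inception_4c/output'.\n",
"_=deepdream(net, img, end='inception_4d/output')"
]
},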
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "rkzHz9E8OZOb"
},
"source": [
"We encourage readers to experiment with layer selection to see how it affects the results. Execute the next code cell to see the list of different layers. You can modify the `make_step` function to make it follow some different objective, say to select a subset of activations to maximize, or to maximize multiple layers at once. There is a huge design space to explore!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab_type": "code",
"collapsed": false,
"id": "OIepVN6POZOc"
},
"outputs": [],
"source": [
"net.blobs.keys()"
]
},
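{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hedged sketch of such a custom objective, the next cell defines a hypothetical `objective_channels` function that back-propagates only a few hand-picked feature channels of the target layer instead of all of them. The channel indices are arbitrary and purely illustrative; `deepdream` forwards the `objective` keyword to `make_step` through `**step_params`.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def objective_channels(dst, channels=(65, 130)):\n",
"    '''Hypothetical objective: maximize only a subset of the layer's channels.'''\n",
"    dst.diff[:] = 0                      # zero the gradient for all channels\n",
"    for c in channels:                   # keep the activations of the chosen channels only\n",
"        dst.diff[0, c] = dst.data[0, c]\n",
"\n",
"_=deepdream(net, img, end='inception_4c/output', objective=objective_channels)"
]
},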
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "vs2uUpMCOZOe"
},
"source": [
"What if we feed the `deepdream` function its own output, after applying a little zoom to it? It turns out that this leads to an endless stream of impressions of the things that the network saw during training. Some patterns fire more often than others, suggestive of basins of attraction.\n",
"\n",
"We will start the process from the same sky image as above, but after some iteration the original image becomes irrelevant; even random noise can be used as the starting point."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab_type": "code",
"collapsed": false,
"id": "IB48CnUfOZOe"
},
"outputs": [],
"source": [
"!mkdir frames\n",
"frame = img\n",
"frame_i = 0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab_type": "code",
"collapsed": false,
"id": "fj0E-fKDOZOi"
},
"outputs": [],
"source": [
"h, w = frame.shape[:2]\n",
"s = 0.05 # scale coefficient\n",
"\n",
"for i in xrange(1):\n",
" frame = deepdream(net, frame)\n",
" PIL.Image.fromarray(np.uint8(frame)).save(\"frames/%04d.jpg\"%frame_i)\n",
" frame = nd.affine_transform(frame, [1-s,1-s,1], [h*s/2,w*s/2,0], order=1)\n",
" frame_i += 1"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "XzZGGME_OZOk"
},
"source": [
"Be careful running the code above, it can bring you into very strange realms!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab_type": "code",
"collapsed": false,
"executionInfo": null,
"id": "ZCZcz2p1OZOt",
"outputId": "d3773436-2b5d-4e79-be9d-0f12ab839fff",
"pinned": false
},
"outputs": [],
"source": [
"Image(filename='frames/0029.jpg')"
]
},
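{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you increase the frame count in the loop above, the saved frames can optionally be stitched into a short video. The cell below is only a sketch: it assumes `ffmpeg` is installed on the system, and the output name `dream.mp4` is arbitrary.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Optional sketch: stitch the saved frames into a video (assumes ffmpeg is installed).\n",
"!ffmpeg -y -framerate 10 -i frames/%04d.jpg -pix_fmt yuv420p dream.mp4"
]
},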
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Controlling dreams\n",
"\n",
"The image detail generation method described above tends to produce some patterns more often the others. One easy way to improve the generated image diversity is to tweak the optimization objective. Here we show just one of many ways to do that. Let's use one more input image. We'd call it a \"*guide*\"."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"guide = np.float32(PIL.Image.open(CONTROL_IMAGE))\n",
"showarray(guide)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the neural network we use was trained on images downscaled to 224x224 size. So high resolution images might have to be downscaled, so that the network could pick up their features. The image we use here is already small enough.\n",
"\n",
"Now we pick some target layer and extract guide image features."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"end = 'inception_3b/output'\n",
"h, w = guide.shape[:2]\n",
"src, dst = net.blobs['data'], net.blobs[end]\n",
"src.reshape(1,3,h,w)\n",
"src.data[0] = preprocess(net, guide)\n",
"net.forward(end=end)\n",
"guide_features = dst.data[0].copy()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Instead of maximizing the L2-norm of current image activations, we try to maximize the dot-products between activations of current image, and their best matching correspondences from the guide image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def objective_guide(dst):\n",
" x = dst.data[0].copy()\n",
" y = guide_features\n",
" ch = x.shape[0]\n",
" x = x.reshape(ch,-1)\n",
" y = y.reshape(ch,-1)\n",
" A = x.T.dot(y) # compute the matrix of dot-products with guide features\n",
" dst.diff[0].reshape(ch,-1)[:] = y[:,A.argmax(1)] # select ones that match best\n",
"\n",
"_=deepdream(net, img, end=end, objective=objective_guide)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This way we can affect the style of generated images without using a different training set."
]
},
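{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make guided dreaming easier to reuse, the sketch below wraps the two steps above (extracting guide features at a layer, then running the guided ascent) into a hypothetical `guided_dream` helper. It simply sets the `guide_features` global that `objective_guide` reads, so the layer used for the guide and for the dream always match.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def guided_dream(net, base_img, guide_img, end='inception_3b/output', **kwargs):\n",
"    '''Hypothetical helper: extract guide features at `end`, then run a guided dream.'''\n",
"    global guide_features\n",
"    h, w = guide_img.shape[:2]\n",
"    src, dst = net.blobs['data'], net.blobs[end]\n",
"    src.reshape(1, 3, h, w)              # resize the network input to the guide image\n",
"    src.data[0] = preprocess(net, guide_img)\n",
"    net.forward(end=end)\n",
"    guide_features = dst.data[0].copy()  # objective_guide reads this global\n",
"    return deepdream(net, base_img, end=end, objective=objective_guide, **kwargs)\n",
"\n",
"# Example usage (same images as above):\n",
"_=guided_dream(net, img, guide)"
]
},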
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"colabVersion": "0.3.1",
"default_view": {},
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.11"
},
"views": {}
},
"nbformat": 4,
"nbformat_minor": 0
}