
吴恩达深度学习课程deeplearning.ai课程作业:Class 4 Week 4 Art Generation with Neural Style Transfer


These are my own answers to Andrew Ng's deeplearning.ai course assignments.

Supplementary notes:
1. People keep asking in the comments why these notebooks fail when copied directly. Please don't copy-paste; it cannot run that way. This is only the part of the notebook that we are asked to write ourselves; to run it you also need the other .py files, so please download the complete assignment from GitHub. The code here is for reference only. I suggest writing it yourself step by step following the hints, and only consulting the answers when you are truly stuck. I believe that is the right way to learn, and the assignment is not especially hard.
2. To those in the comments accusing me of plagiarism, saying my comments are less detailed than others' and that the copied code doesn't run: before demanding ready-made code, please understand what this assignment is. Everyone downloads the same original assignment from GitHub and writes code following the hints before each cell (which usually specify the functions and formulas to use), and there is an expected output to compare against; if the program is correct, the results will generally be identical. The only thing we write ourselves is that small portion of code, so please don't mindlessly accuse me of copying someone else's answers.
3. Since I am thoroughly tired of mindless trolls, I have disabled the comments below; my apologies. If you have questions, please message me privately and I will help where I can.

Note: the assignment requires the pretrained VGG model imagenet-vgg-verydeep-19.mat; a Baidu Cloud link is given below:
Link: https://pan.baidu.com/s/1slYrvqt Password: 6cy1

Deep Learning & Art: Neural Style Transfer

Welcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015) (https://arxiv.org/abs/1508.06576).

In this assignment, you will:
- Implement the neural style transfer algorithm
- Generate novel artistic images using your algorithm

Most of the algorithms you’ve studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you’ll optimize a cost function to get pixel values!

import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf

%matplotlib inline

1 - Problem Statement

Neural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a “content” image (C) and a “style” image (S), to create a “generated” image (G). The generated image G combines the “content” of the image C with the “style” of image S.

In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S).

Let’s see how you can do this.

2 - Transfer Learning

Neural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning.

Following the original NST paper (https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we’ll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the earlier layers) and high level features (at the deeper layers).

Run the following code to load parameters from the VGG model. This may take a few seconds.

model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)

{'conv1_2': <tf.Tensor 'Relu_17:0' shape=(1, 300, 400, 64) dtype=float32>,
 'conv5_4': <tf.Tensor 'Relu_31:0' shape=(1, 19, 25, 512) dtype=float32>,
 'conv3_2': <tf.Tensor 'Relu_21:0' shape=(1, 75, 100, 256) dtype=float32>,
 'conv4_4': <tf.Tensor 'Relu_27:0' shape=(1, 38, 50, 512) dtype=float32>,
 'conv5_1': <tf.Tensor 'Relu_28:0' shape=(1, 19, 25, 512) dtype=float32>,
 'conv1_1': <tf.Tensor 'Relu_16:0' shape=(1, 300, 400, 64) dtype=float32>,
 'conv5_2': <tf.Tensor 'Relu_29:0' shape=(1, 19, 25, 512) dtype=float32>,
 'conv2_1': <tf.Tensor 'Relu_18:0' shape=(1, 150, 200, 128) dtype=float32>,
 'conv5_3': <tf.Tensor 'Relu_30:0' shape=(1, 19, 25, 512) dtype=float32>,
 'input': <tf.Variable 'Variable_1:0' shape=(1, 300, 400, 3) dtype=float32_ref>,
 'conv4_1': <tf.Tensor 'Relu_24:0' shape=(1, 38, 50, 512) dtype=float32>,
 'conv3_4': <tf.Tensor 'Relu_23:0' shape=(1, 75, 100, 256) dtype=float32>,
 'avgpool2': <tf.Tensor 'AvgPool_6:0' shape=(1, 75, 100, 128) dtype=float32>,
 'conv3_1': <tf.Tensor 'Relu_20:0' shape=(1, 75, 100, 256) dtype=float32>,
 'conv4_2': <tf.Tensor 'Relu_25:0' shape=(1, 38, 50, 512) dtype=float32>,
 'conv3_3': <tf.Tensor 'Relu_22:0' shape=(1, 75, 100, 256) dtype=float32>,
 'avgpool5': <tf.Tensor 'AvgPool_9:0' shape=(1, 10, 13, 512) dtype=float32>,
 'avgpool1': <tf.Tensor 'AvgPool_5:0' shape=(1, 150, 200, 64) dtype=float32>,
 'conv4_3': <tf.Tensor 'Relu_26:0' shape=(1, 38, 50, 512) dtype=float32>,
 'avgpool4': <tf.Tensor 'AvgPool_8:0' shape=(1, 19, 25, 512) dtype=float32>,
 'conv2_2': <tf.Tensor 'Relu_19:0' shape=(1, 150, 200, 128) dtype=float32>,
 'avgpool3': <tf.Tensor 'AvgPool_7:0' shape=(1, 38, 50, 256) dtype=float32>}

The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable’s value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the tf.assign function. In particular, you will use the assign function like this:

model["input"].assign(image)

This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say layer 4_2 when the network is run on this image, you would run a TensorFlow session on the correct tensor conv4_2, as follows:

sess.run(model["conv4_2"])
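Putting these two calls together, here is a minimal sketch of the feed-and-read pattern (it assumes a running session sess and an image tensor of the right shape already exist; the printed shape follows from the model dump above):

# Feed an image into the network and read one layer's activations.
# Assumes `model`, a session `sess`, and `image` of shape (1, 300, 400, 3).
sess.run(model["input"].assign(image))  # make `image` the network input
a = sess.run(model["conv4_2"])          # forward-propagate and fetch conv4_2
print(a.shape)                          # (1, 38, 50, 512), per the model dump above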

3 - Neural Style Transfer

We will build the NST algorithm in three steps:

  • Build the content cost function $J_{content}(C,G)$
  • Build the style cost function $J_{style}(S,G)$
  • Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$.

3.1 - Computing the content cost

In our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre.

content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)

<matplotlib.image.AxesImage at 0x7fc36924e0f0>

The content image (C) shows the Louvre museum’s pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.

3.1.1 - How do you ensure the generated image G matches the content of the image C?

As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes.

We would like the “generated” image G to have similar content as the input image C. Suppose you have chosen some layer’s activations to represent the content of an image. In practice, you’ll get the most visually pleasing results if you choose a layer in the middle of the network–neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.)

So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be an $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: set G as the input, and run forward propagation. Let $a^{(G)}$ be the corresponding hidden layer activation. We will define the content cost function as:

$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C} \sum_{\text{all entries}} \left(a^{(C)} - a^{(G)}\right)^2 \tag{1}$$

Here, $n_H$, $n_W$ and $n_C$ are the height, width and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost. For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer's activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below. (Technically this unrolling step isn't needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style cost $J_{style}$.)
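To make the unrolling concrete, here is a minimal NumPy sketch (the shapes and random tensors are hypothetical; the graded function below performs the equivalent operation in TensorFlow):

import numpy as np

# Hypothetical activation volumes of shape (1, n_H, n_W, n_C).
a_C = np.random.randn(1, 4, 4, 3)
a_G = np.random.randn(1, 4, 4, 3)
n_H, n_W, n_C = a_C.shape[1:]

# Unroll each volume into a 2D matrix of shape (n_C, n_H*n_W):
# one row per channel, one column per spatial position.
a_C_unrolled = a_C.reshape(n_H * n_W, n_C).T
a_G_unrolled = a_G.reshape(n_H * n_W, n_C).T

# The sum of squared differences is unchanged by the unrolling,
# so J_content can be computed from either layout.
assert np.isclose(((a_C - a_G) ** 2).sum(),
                  ((a_C_unrolled - a_G_unrolled) ** 2).sum())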

Exercise: Compute the “content cost” using TensorFlow.

Instructions: The 3 steps to implement this function are:
1. Retrieve dimensions from a_G:
- To retrieve dimensions from a tensor X, use: X.get_shape().as_list()
2. Unroll a_C and a_G as explained in the picture above
- If you are stuck, take a look at Hint1 and Hint2.
3. Compute the content cost:
- If you are stuck, take a look at Hint3, Hint4 and Hint5.

# GRADED FUNCTION: compute_content_cost

def compute_content_cost(a_C, a_G):
    """
    Computes the content cost

    Arguments:
    a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G

    Returns:
    J_content -- scalar that you compute using equation 1 above.
    """

    ### START CODE HERE ###
    # Retrieve dimensions from a_G (≈1 line)
    m, n_H, n_W, n_C = a_G.get_shape().as_list()

    # Reshape a_C and a_G (≈2 lines)
    a_C_unrolled = tf.reshape(tf.transpose(a_C, perm=[3, 2, 1, 0]), [n_C, n_H*n_W, -1])
    print("a_C_unrolled: {0}".format(a_C_unrolled.get_shape().as_list()))
    a_G_unrolled = tf.reshape(tf.transpose(a_G, perm=[3, 2, 1, 0]), [n_C, n_H*n_W, -1])
    print("a_G_unrolled: {0}".format(a_G_unrolled.get_shape().as_list()))

    # Compute the cost with tensorflow (≈1 line)
    J_content = tf.reduce_sum(tf.square(tf.subtract(a_C_unrolled, a_G_unrolled))) / (4 * n_H * n_W * n_C)
    ### END CODE HERE ###

    return J_content

tf.reset_default_graph()

with tf.Session() as test:
    tf.set_random_seed(1)
    a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
    a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
    J_content = compute_content_cost(a_C, a_G)
    print("J_content = " + str(J_content.eval()))

a_C_unrolled: [3, 16, 1]
a_G_unrolled: [3, 16, 1]
J_content = 6.76559

Expected Output:

J_content 6.76559


What you should remember:
- The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are.
- When we minimize the content cost later, this will help make sure $G$ has similar content as $C$.

3.2 - Computing the style cost

For our running example, we will use the following style image:

style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image)

<matplotlib.image.AxesImage at 0x7fc369506da0>

This painting was painted in the style of impressionism.

Let's see how you can now define a "style" cost function $J_{style}(S,G)$.

3.2.1 - Style matrix

The style matrix is also called a "Gram matrix." In linear algebra, the Gram matrix G of a set of vectors $(v_1, \dots, v_n)$ is the matrix of dot products, whose entries are $G_{ij} = v_i^T v_j = \text{np.dot}(v_i, v_j)$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: if they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large.

Note that there is an unfortunate collision in the variable names used here. We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. We will try to make sure which $G$ we are referring to is always clear from the context.

In NST, you can compute the Style matrix by multiplying the “unrolled” filter matrix with their transpose:

The result is a matrix of dimension $(n_C, n_C)$ where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$.

One important property of the Gram matrix is that the diagonal elements $G_{ii}$ also measure how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: if $G_{ii}$ is large, this means that the image has a lot of vertical texture.

By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image.
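As a quick numeric illustration of these two roles (a toy example, not part of the assignment), consider an unrolled activation matrix with one frequently active filter and one rarely active filter:

import numpy as np

# Toy unrolled activations A of shape (n_C, n_H*n_W) = (2, 4).
A = np.array([[2.0, 2.0, 0.0, 2.0],    # filter 0: strongly, frequently active
              [0.1, 0.0, 0.1, 0.0]])   # filter 1: barely active

G = A @ A.T  # Gram matrix, shape (2, 2)

print(G[0, 0])  # 12.0 -- large diagonal: the filter-0 feature is prevalent
print(G[1, 1])  # 0.02 -- small diagonal: the filter-1 feature is rare
print(G[0, 1])  # 0.2  -- off-diagonal: how much the two features co-occur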

Exercise:
Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The formula is: the Gram matrix of A is $G_A = AA^T$. If you are stuck, take a look at Hint 1 and Hint 2.

# GRADED FUNCTION: gram_matrix

def gram_matrix(A):
    """
    Argument:
    A -- matrix of shape (n_C, n_H*n_W)

    Returns:
    GA -- Gram matrix of A, of shape (n_C, n_C)
    """

    ### START CODE HERE ### (≈1 line)
    GA = tf.matmul(A, tf.transpose(A))
    ### END CODE HERE ###

    return GA

tf.reset_default_graph()

with tf.Session() as test:
    tf.set_random_seed(1)
    A = tf.random_normal([3, 2*1], mean=1, stddev=4)
    GA = gram_matrix(A)
    print("GA = " + str(GA.eval()))

GA = [[  6.42230511  -4.42912197  -2.09668207]
 [ -4.42912197  19.46583748  19.56387138]
 [ -2.09668207  19.56387138  20.6864624 ]]

Expected Output:

GA [[ 6.42230511 -4.42912197 -2.09668207]
[ -4.42912197 19.46583748 19.56387138]
[ -2.09668207 19.56387138 20.6864624 ]]

3.2.2 - Style cost

After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the "style" image S and that of the "generated" image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as:

$$J_{style}^{[l]}(S,G) = \frac{1}{4 \times n_C^2 \times (n_H \times n_W)^2} \sum_{i=1}^{n_C} \sum_{j=1}^{n_C} \left(G^{(S)}_{ij} - G^{(G)}_{ij}\right)^2 \tag{2}$$

where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the "style" image and the "generated" image, computed using the hidden layer activations for a particular hidden layer in the network.

Exercise: Compute the style cost for a single layer.

Instructions: The 4 steps to implement this function are:
1. Retrieve dimensions from the hidden layer activations a_G:
- To retrieve dimensions from a tensor X, use: X.get_shape().as_list()
2. Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above.
- You may find Hint1 and Hint2 useful.
3. Compute the Style matrix of the images S and G. (Use the function you had previously written.)
4. Compute the Style cost:
- You may find Hint3, Hint4 and Hint5 useful.

# GRADED FUNCTION: compute_layer_style_cost

def compute_layer_style_cost(a_S, a_G):
    """
    Arguments:
    a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G

    Returns:
    J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
    """

    ### START CODE HERE ###
    # Retrieve dimensions from a_G (≈1 line)
    m, n_H, n_W, n_C = a_G.get_shape().as_list()

    # Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines)
    a_S = tf.reshape(tf.transpose(a_S, perm=[3, 1, 2, 0]), [n_C, n_W*n_H])
    a_G = tf.reshape(tf.transpose(a_G, perm=[3, 1, 2, 0]), [n_C, n_W*n_H])

    # Computing gram_matrices for both images S and G (≈2 lines)
    GS = gram_matrix(a_S)
    GG = gram_matrix(a_G)

    # Computing the loss (≈1 line)
    J_style_layer = tf.reduce_sum(tf.square(tf.subtract(GS, GG))) / (4 * n_C**2 * (n_W * n_H)**2)
    ### END CODE HERE ###

    return J_style_layer

tf.reset_default_graph()

with tf.Session() as test:
    tf.set_random_seed(1)
    a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
    a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
    J_style_layer = compute_layer_style_cost(a_S, a_G)
    print("J_style_layer = " + str(J_style_layer.eval()))

J_style_layer = 9.19028

Expected Output:

J_style_layer 9.19028

3.2.3 Style Weights

So far you have captured the style from only one layer. We'll get better results if we "merge" style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. But for now, this is a pretty reasonable default:

STYLE_LAYERS = [
    ('conv1_1', 0.2),
    ('conv2_1', 0.2),
    ('conv3_1', 0.2),
    ('conv4_1', 0.2),
    ('conv5_1', 0.2)]

You can combine the style costs for different layers as follows:

$$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J_{style}^{[l]}(S,G)$$

where the values for $\lambda^{[l]}$ are given in STYLE_LAYERS.

We’ve implemented a compute_style_cost(…) function. It simply calls your compute_layer_style_cost(...) several times, and weights their results using the values in STYLE_LAYERS. Read over it to make sure you understand what it’s doing.

def compute_style_cost(model, STYLE_LAYERS):
    """
    Computes the overall style cost from several chosen layers

    Arguments:
    model -- our tensorflow model
    STYLE_LAYERS -- A python list containing:
                        - the names of the layers we would like to extract style from
                        - a coefficient for each of them

    Returns:
    J_style -- tensor representing a scalar value, style cost defined above by equation (2)
    """

    # initialize the overall style cost
    J_style = 0

    for layer_name, coeff in STYLE_LAYERS:

        # Select the output tensor of the currently selected layer
        out = model[layer_name]

        # Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out
        a_S = sess.run(out)

        # Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name]
        # and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
        # when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
        a_G = out

        # Compute style_cost for the current layer
        J_style_layer = compute_layer_style_cost(a_S, a_G)

        # Add coeff * J_style_layer of this layer to overall style cost
        J_style += coeff * J_style_layer

    return J_style

Note: In the inner-loop of the for-loop above, a_G is a tensor and hasn’t been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.


What you should remember:
- The style of an image can be represented using the Gram matrix of a hidden layer's activations. However, we get even better results combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.
- Minimizing the style cost will cause the image $G$ to follow the style of the image $S$.

3.3 - Defining the total cost to optimize

Finally, let’s create a cost function that minimizes both the style and the content cost. The formula is:

$$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$

Exercise: Implement the total cost function which includes both the content cost and the style cost.

# GRADED FUNCTION: total_cost

def total_cost(J_content, J_style, alpha = 10, beta = 40):
    """
    Computes the total cost function

    Arguments:
    J_content -- content cost coded above
    J_style -- style cost coded above
    alpha -- hyperparameter weighting the importance of the content cost
    beta -- hyperparameter weighting the importance of the style cost

    Returns:
    J -- total cost as defined by the formula above.
    """

    ### START CODE HERE ### (≈1 line)
    J = J_content * alpha + J_style * beta
    ### END CODE HERE ###

    return J

tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(3)
    J_content = np.random.randn()
    J_style = np.random.randn()
    J = total_cost(J_content, J_style)
    print("J = " + str(J))

J = 35.34667875478276

Expected Output:

J 35.34667875478276


What you should remember:
- The total cost is a linear combination of the content cost $J_{content}(C,G)$ and the style cost $J_{style}(S,G)$
- $\alpha$ and $\beta$ are hyperparameters that control the relative weighting between content and style

4 - Solving the optimization problem

Finally, let’s put everything together to implement Neural Style Transfer!

Here’s what the program will have to do:

  • Create an Interactive Session
  • Load the content image
  • Load the style image
  • Randomly initialize the image to be generated
  • Load the VGG-19 model
  • Build the TensorFlow graph:
    • Run the content image through the VGG-19 model and compute the content cost
    • Run the style image through the VGG-19 model and compute the style cost
    • Compute the total cost
    • Define the optimizer and the learning rate
  • Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step.

Let's go through the individual steps in detail.

You've previously implemented the overall cost $J(G)$. We'll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an "Interactive Session". Unlike a regular session, the "Interactive Session" installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code.

Let's start the interactive session.

# Reset the graph
tf.reset_default_graph()

# Start interactive session
sess = tf.InteractiveSession()

    Let’s load, reshape, and normalize our “content” image (the Louvre museum picture):

content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)

    Let’s load, reshape and normalize our “style” image (Claude Monet’s painting):

style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)

Now, we initialize the "generated" image as a noisy image created from the content_image. Initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image will help the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in nst_utils.py to see the details of generate_noise_image(...); to do so, click "File–>Open…" at the upper-left corner of this Jupyter notebook.)
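If you don't have nst_utils.py handy, here is a plausible sketch of what such a helper does (the noise range and noise_ratio below are illustrative assumptions, not necessarily the values used in nst_utils.py):

import numpy as np

def generate_noise_image_sketch(content_image, noise_ratio=0.6):
    # Uniform noise with the same (1, n_H, n_W, 3) shape as the content image.
    # The [-20, 20] range and noise_ratio=0.6 are assumed for illustration.
    noise_image = np.random.uniform(-20, 20, content_image.shape).astype('float32')
    # Mostly noise, but still slightly correlated with the content image.
    return noise_image * noise_ratio + content_image * (1 - noise_ratio)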

generated_image = generate_noise_image(content_image)
imshow(generated_image[0])

<matplotlib.image.AxesImage at 0x7fc286ba1c50>

Next, as explained in part (2), let's load the VGG-19 model.

    model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")

To get the program to compute the content cost, we will now assign a_C and a_G to be the appropriate hidden layer activations. We will use layer conv4_2 to compute the content cost. The code below does the following:

  • Assign the content image to be the input to the VGG model.
  • Set a_C to be the tensor giving the hidden layer activation for layer "conv4_2".
  • Set a_G to be the tensor giving the hidden layer activation for the same layer.
  • Compute the content cost using a_C and a_G.

# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))

# Select the output tensor of layer conv4_2
out = model['conv4_2']

# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)

# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out

# Compute the content cost
J_content = compute_content_cost(a_C, a_G)

a_C_unrolled: [512, 1900, 1]
a_G_unrolled: [512, 1900, 1]

Note: At this point, a_G is a tensor and hasn't been evaluated. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.

# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))

# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)

    Exercise: Now that you have J_content and J_style, compute the total cost J by calling total_cost(). Use alpha = 10 and beta = 40.

### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, alpha = 10, beta = 40)
### END CODE HERE ###

You'd previously learned how to set up the Adam optimizer in TensorFlow. Let's do that here, using a learning rate of 2.0 (see the tf.train.AdamOptimizer reference).

# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)

# define train_step (1 line)
train_step = optimizer.minimize(J)

Exercise: Implement the model_nn() function which initializes the variables of the TensorFlow graph, assigns the input image (initial generated image) as the input of the VGG-19 model and runs the train_step for a large number of steps.

def model_nn(sess, input_image, num_iterations = 200):

    # Initialize global variables (you need to run the session on the initializer)
    ### START CODE HERE ### (1 line)
    sess.run(tf.global_variables_initializer())
    ### END CODE HERE ###

    # Run the noisy input image (initial generated image) through the model. Use assign().
    ### START CODE HERE ### (1 line)
    sess.run(model['input'].assign(input_image))
    ### END CODE HERE ###

    for i in range(num_iterations):

        # Run the session on the train_step to minimize the total cost
        ### START CODE HERE ### (1 line)
        sess.run(train_step)
        ### END CODE HERE ###

        # Compute the generated image by running the session on the current model['input']
        ### START CODE HERE ### (1 line)
        generated_image = sess.run(model['input'])
        ### END CODE HERE ###

        # Print every 20 iteration.
        if i%20 == 0:
            Jt, Jc, Js = sess.run([J, J_content, J_style])
            print("Iteration " + str(i) + " :")
            print("total cost = " + str(Jt))
            print("content cost = " + str(Jc))
            print("style cost = " + str(Js))

            # save current generated image in the "/output" directory
            save_image("output/" + str(i) + ".png", generated_image)

    # save last generated image
    save_image('output/generated_image.jpg', generated_image)

    return generated_image

    Run the following cell to generate an artistic image. It should take about 3min on CPU for every 20 iterations but you start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.

model_nn(sess, generated_image)

Iteration 0 :
total cost = 5.0224e+09
content cost = 7886.73
style cost = 1.25558e+08

    Expected Output:

Iteration 0 :
total cost = 5.05035e+09
content cost = 7877.67
style cost = 1.26257e+08

    You’re done! After running this, in the upper bar of the notebook click on “File” and then “Open”. Go to the “/output” directory to see all the saved images. Open “generated_image” to see the generated image! :)

You should see something like the image presented below on the right:

    We didn’t want you to wait too long to see an initial result, and so had set the hyperparameters accordingly. To get the best looking results, running the optimization algorithm longer (and perhaps with a smaller learning rate) might work better. After completing and submitting this assignment, we encourage you to come back and play more with this notebook, and see if you can generate even better looking images.
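For example, one way to trade runtime for quality (hypothetical settings, not the assignment's defaults) is to lower the learning rate and raise the iteration count before rerunning:

# Hypothetical tuning: smaller learning rate, more iterations.
optimizer = tf.train.AdamOptimizer(0.5)   # instead of 2.0
train_step = optimizer.minimize(J)
generated_image = model_nn(sess, generated_image, num_iterations = 1000)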

Here are a few other examples:

    • The beautiful ruins of the ancient city of Persepolis (Iran) with the style of Van Gogh (The Starry Night)

• The tomb of Cyrus the Great in Pasargadae with the style of a Ceramic Kashi from Ispahan.

• A scientific study of a turbulent fluid with the style of an abstract blue fluid painting.

    5 - Test with your own image (Optional/Ungraded)

    Finally, you can also rerun the algorithm on your own images!

    To do so, go back to part 4 and change the content image and style image with your own pictures. In detail, here’s what you should do:

  • Click on “File -> Open” in the upper tab of the notebook
  • Go to "/images" and upload your images (requirement: WIDTH = 300, HEIGHT = 225), renaming them "my_content.png" and "my_style.png", for example.
  • Change the code in part (3.4) from:

    content_image = scipy.misc.imread("images/louvre.jpg")
    style_image = scipy.misc.imread("images/claude-monet.jpg")

    to:

    content_image = scipy.misc.imread("images/my_content.jpg")
    style_image = scipy.misc.imread("images/my_style.jpg")
  • Rerun the cells (you may need to restart the Kernel in the upper tab of the notebook).
  • You can also tune your hyperparameters:
    - Which layers are responsible for representing the style? STYLE_LAYERS
    - How many iterations do you want to run the algorithm? num_iterations
    - What is the relative weighting between content and style? alpha/beta

    6 - Conclusion

    Great job on completing this assignment! You are now able to use Neural Style Transfer to generate artistic images. This is also your first time building a model in which the optimization algorithm updates the pixel values rather than the neural network’s parameters. Deep learning has many different types of models and this is only one of them!


    What you should remember:
- Neural Style Transfer is an algorithm that, given a content image C and a style image S, can generate an artistic image
    - It uses representations (hidden layer activations) based on a pretrained ConvNet.
    - The content cost function is computed using one hidden layer’s activations.
    - The style cost function for one layer is computed using the Gram matrix of that layer’s activations. The overall style cost function is obtained using several hidden layers.
    - Optimizing the total cost function results in synthesizing new images.

    This was the final programming exercise of this course. Congratulations–you’ve finished all the programming exercises of this course on Convolutional Networks! We hope to also see you in Course 5, on Sequence models!

    References:

The Neural Style Transfer algorithm was due to Gatys et al. (2015). Harish Narayanan and GitHub user "log0" also have highly readable write-ups from which we drew inspiration. The pre-trained network used in this implementation is a VGG network, which is due to Simonyan and Zisserman (2015). Pre-trained weights were from the work of the MatConvNet team.

    • Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, (2015). A Neural Algorithm of Artistic Style (https://arxiv.org/abs/1508.06576)
    • Harish Narayanan, Convolutional neural networks for artistic style transfer. https://harishnarayanan.org/writing/artistic-style-transfer/
    • Log0, TensorFlow Implementation of “A Neural Algorithm of Artistic Style”. http://www.chioka.in/tensorflow-implementation-neural-algorithm-of-artistic-style
    • Karen Simonyan and Andrew Zisserman (2015). Very deep convolutional networks for large-scale image recognition (https://arxiv.org/pdf/1409.1556.pdf)
    • MatConvNet. http://www.vlfeat.org/matconvnet/pretrained/
