
Andrew Ng deeplearning.ai Course Assignment: Class 4 Week 2 Residual Networks

Published: 2025/3/21

Andrew Ng's deeplearning.ai course assignment, with my own answers.

Additional notes:
1. Commenters keep asking why copying these notebooks directly doesn't run. Please don't copy-paste: this is only the part of the notebook that we are asked to write ourselves, and running it also requires other .py files, so please download the complete assignment from GitHub. The code here is for reference only. I recommend writing it yourself step by step following the hints, and looking at the answers only if you get truly stuck. In my view that is the right way to learn, and the assignment isn't particularly hard anyway.
2. Some commenters accuse me of plagiarism, saying my comments are sparser than others' and the code won't run when copied. My reply: before demanding ready-made answers, first understand what this assignment is. Everyone downloads the same skeleton from GitHub and fills in code following the hints before each cell (which usually specify the function and the formula), and an expected output is provided for comparison; if the program is correct, the results are generally identical, so correct solutions naturally look alike. All we write ourselves is a small portion of the code.
3. Because I really dislike mindless trolling, I have disabled the comments below; my apologies. If you have questions, please message me privately and I will help where I can.

The later part of this assignment trains a ResNet-50, which takes a long time: roughly 100 s per epoch in CPU mode, but only about 5 s per epoch on my GPU server. Training without a GPU is painfully slow, so if you don't have one, consider using an already-trained model. Baidu Cloud links to the models I trained are given below.
resnet50_20_epochs.h5 link: https://pan.baidu.com/s/1eROf3BO password: qed2
resnet50_30_epochs.h5 link: https://pan.baidu.com/s/1o8kPNUM password: tqio
resnet50_44_epochs.h5 link: https://pan.baidu.com/s/1c1N3AzI password: 2xwu
resnet50_55_epochs.h5 link: https://pan.baidu.com/s/1bpfMA0v password: cxcv
Model file provided on Coursera:
ResNet50.h5 link: https://pan.baidu.com/s/1boCG2Iz password: sefq

Residual Networks

Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by He et al., allow you to train much deeper networks than were previously practically feasible.

In this assignment, you will:
- Implement the basic building blocks of ResNets.
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification.

This assignment will be done in Keras.

Before jumping into the problem, let’s run the cell below to load the required packages.

import numpy as np
import tensorflow as tf  # used by the test cells below
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline

import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)

Using TensorFlow backend.

1 - The problem of very deep neural networks

Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.

The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers). However, using a deeper network doesn’t always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent unbearably slow. More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and “explode” to take very large values).
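To make the exponential decay concrete, here is a small numeric sketch (not part of the assignment): a single shrinking matrix stands in for the per-layer Jacobians, and repeated multiplication drives the gradient norm toward zero.

```python
import numpy as np

# Toy illustration: backprop through L layers multiplies the gradient by a
# weight matrix at each step. If those matrices tend to shrink vectors,
# the gradient norm decays exponentially with depth.
rng = np.random.RandomState(0)
L = 50
W = 0.9 * np.eye(4)           # a matrix that shrinks every vector by 10%
grad = rng.randn(4)

norms = [np.linalg.norm(grad)]
for _ in range(L):
    grad = W @ grad           # one backprop step through one layer
    norms.append(np.linalg.norm(grad))

ratio = norms[-1] / norms[0]  # roughly 0.9**50, about 0.005
print("gradient norm shrank by a factor of %.4f" % ratio)
```

With a factor of 0.9 per layer, 50 layers leave only about half a percent of the original gradient magnitude, which is why the early layers of a deep plain network learn so slowly.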

During training, you might therefore see the magnitude (or norm) of the gradient for the earlier layers decrease to zero very rapidly as training proceeds:


Figure 1 : Vanishing gradient
The speed of learning decreases very rapidly for the early layers as the network trains

You are now going to solve this problem by building a Residual Network!

2 - Building a Residual Network

In ResNets, a “shortcut” or a “skip connection” allows the gradient to be directly backpropagated to earlier layers:


Figure 2 : A ResNet block showing a skip-connection

The image on the left shows the “main path” through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network.

We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function–even more than skip connections helping with vanishing gradients–accounts for ResNets’ remarkable performance.)
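As a toy illustration of that point, here is a NumPy sketch (a hypothetical two-layer main path, not the Keras block this assignment builds): when the main path's weights are zero, the block output reduces to the (non-negative) input itself.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

# Hypothetical minimal residual block: out = relu(F(x) + x), where F is a
# small two-layer "main path". If F's weights are driven to zero, the block
# computes relu(x), i.e. the identity on non-negative activations.
def residual_block(x, W1, W2):
    main = relu(x @ W1) @ W2   # main path F(x)
    return relu(main + x)      # skip connection adds x back

x = relu(np.random.RandomState(1).randn(3, 5))  # non-negative input
zero = np.zeros((5, 5))
out = residual_block(x, zero, zero)
print(np.allclose(out, x))  # with zero weights the block is the identity
```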

Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them.

2.1 - The identity block

The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say a[l]) has the same dimension as the output activation (say a[l+2]). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:


Figure 3 : Identity block. Skip connection “skips over” 2 layers.

The upper path is the “shortcut path.” The lower path is the “main path.” In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don’t worry about this being complicated to implement–you’ll see that BatchNorm is just one line of code in Keras!

In this exercise, you’ll actually implement a slightly more powerful version of this identity block, in which the skip connection “skips over” 3 hidden layers rather than 2 layers. It looks like this:


Figure 4 : Identity block. Skip connection “skips over” 3 layers.

Here’re the individual steps.

First component of main path:
- The first CONV2D has F1 filters of shape (1,1) and a stride of (1,1). Its padding is “valid” and its name should be conv_name_base + '2a'. Use 0 as the seed for the random initialization.
- The first BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2a'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Second component of main path:
- The second CONV2D has F2 filters of shape (f,f) and a stride of (1,1). Its padding is “same” and its name should be conv_name_base + '2b'. Use 0 as the seed for the random initialization.
- The second BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2b'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Third component of main path:
- The third CONV2D has F3 filters of shape (1,1) and a stride of (1,1). Its padding is “valid” and its name should be conv_name_base + '2c'. Use 0 as the seed for the random initialization.
- The third BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2c'. Note that there is no ReLU activation function in this component.

Final step:
- The shortcut and the input are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
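One detail worth checking before coding: the final addition is elementwise, so the main-path output and the shortcut must have identical shapes, which is why F3 must equal the input channel count n_C_prev in the identity block. A small NumPy sketch of that constraint (the array shapes here are illustrative, not from the notebook):

```python
import numpy as np

# The final Add() requires the main-path output and the shortcut to have
# the same shape, so in the identity block F3 must equal n_C_prev.
X_shortcut = np.zeros((1, 4, 4, 6))   # input with n_C_prev = 6 channels
main_path = np.zeros((1, 4, 4, 6))    # F3 = 6 -> shapes match
added = main_path + X_shortcut        # elementwise add works
print(added.shape)                    # (1, 4, 4, 6)

try:
    _ = np.zeros((1, 4, 4, 8)) + X_shortcut  # F3 = 8 would not broadcast
except ValueError as e:
    print("mismatched channels:", e)
```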

Exercise: Implement the ResNet identity block. We have implemented the first component of the main path. Please read over this carefully to make sure you understand what it is doing. You should implement the rest.
- To implement the Conv2D step: See reference
- To implement BatchNorm: See reference (axis: Integer, the axis that should be normalized (typically the channels axis))
- For the activation, use: Activation('relu')(X)
- To add the value passed forward by the shortcut: See reference

# GRADED FUNCTION: identity_block

def identity_block(X, f, filters, stage, block):
    """
    Implementation of the identity block as defined in Figure 3

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network

    Returns:
    X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value. You'll need this later to add back to the main path.
    X_shortcut = X

    # First component of main path
    X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1, 1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    ### START CODE HERE ###

    # Second component of main path (≈3 lines)
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (≈2 lines)
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    ### END CODE HERE ###

    return X

tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float", [3, 4, 4, 6])
    X = np.random.randn(3, 4, 4, 6)
    A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
    test.run(tf.global_variables_initializer())
    out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
    print("out = " + str(out[0][1][1][0]))

out = [ 0.94822997  0.          1.16101444  2.747859    0.          1.36677003]

Expected Output:

out [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]

2.2 - The convolutional block

You’ve implemented the ResNet identity block. Next, the ResNet “convolutional block” is the other type of block. You can use this type of block when the input and output dimensions don’t match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path:


Figure 4 : Convolutional block

The CONV2D layer in the shortcut path is used to resize the input x to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix Ws discussed in lecture.) For example, to reduce the activation's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.
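As a quick check of that claim, the standard output-size formula for “valid” padding, floor((n - f)/s) + 1, shows a 1x1 convolution with stride 2 halving each spatial dimension (a small arithmetic sketch, not Keras code):

```python
# Conv output size for "valid" padding: floor((n - f) / s) + 1.
# A 1x1 convolution (f = 1) with stride s = 2 halves an even
# spatial dimension, as described above.
def conv_output_size(n, f, s):
    return (n - f) // s + 1

print(conv_output_size(64, 1, 2))  # 32: a 64x64 activation becomes 32x32
```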

The details of the convolutional block are as follows.

First component of main path:
- The first CONV2D has F1 filters of shape (1,1) and a stride of (s,s). Its padding is “valid” and its name should be conv_name_base + '2a'.
- The first BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2a'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Second component of main path:
- The second CONV2D has F2 filters of shape (f,f) and a stride of (1,1). Its padding is “same” and its name should be conv_name_base + '2b'.
- The second BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2b'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Third component of main path:
- The third CONV2D has F3 filters of shape (1,1) and a stride of (1,1). Its padding is “valid” and its name should be conv_name_base + '2c'.
- The third BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2c'. Note that there is no ReLU activation function in this component.

Shortcut path:
- The CONV2D has F3 filters of shape (1,1) and a stride of (s,s). Its padding is “valid” and its name should be conv_name_base + '1'.
- The BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '1'.

Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Exercise: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.
- Conv Hint
- BatchNorm Hint (axis: Integer, the axis that should be normalized (typically the features axis))
- For the activation, use: Activation('relu')(X)
- Addition Hint

# GRADED FUNCTION: convolutional_block

def convolutional_block(X, f, filters, stage, block, s = 2):
    """
    Implementation of the convolutional block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network
    s -- Integer, specifying the stride to be used

    Returns:
    X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value
    X_shortcut = X

    ##### MAIN PATH #####
    # First component of main path
    X = Conv2D(F1, (1, 1), strides = (s, s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    ### START CODE HERE ###

    # Second component of main path (≈3 lines)
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (≈2 lines)
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    ##### SHORTCUT PATH #### (≈2 lines)
    X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    ### END CODE HERE ###

    return X

tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float", [3, 4, 4, 6])
    X = np.random.randn(3, 4, 4, 6)
    A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
    test.run(tf.global_variables_initializer())
    out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
    # print(len(out[0]))
    # print(out)
    print("out = " + str(out[0][1][1][0]))

out = [ 0.09018461  1.23489773  0.46822017  0.0367176   0.          0.65516603]

Expected Output:

out [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]

3 - Building your first ResNet model (50 layers)

You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. “ID BLOCK” in the diagram stands for “Identity block,” and “ID BLOCK x3” means you should stack 3 identity blocks together.


Figure 5 : ResNet-50 model

The details of this ResNet-50 model are:
- Zero-padding pads the input with a pad of (3,3)
- Stage 1:
- The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is “conv1”.
- BatchNorm is applied to the channels axis of the input.
- MaxPooling uses a (3,3) window and a (2,2) stride.
- Stage 2:
- The convolutional block uses three sets of filters of size [64,64,256], “f” is 3, “s” is 1 and the block is “a”.
- The 2 identity blocks use three sets of filters of size [64,64,256], “f” is 3 and the blocks are “b” and “c”.
- Stage 3:
- The convolutional block uses three sets of filters of size [128,128,512], “f” is 3, “s” is 2 and the block is “a”.
- The 3 identity blocks use three sets of filters of size [128,128,512], “f” is 3 and the blocks are “b”, “c” and “d”.
- Stage 4:
- The convolutional block uses three sets of filters of size [256, 256, 1024], “f” is 3, “s” is 2 and the block is “a”.
- The 5 identity blocks use three sets of filters of size [256, 256, 1024], “f” is 3 and the blocks are “b”, “c”, “d”, “e” and “f”.
- Stage 5:
- The convolutional block uses three sets of filters of size [512, 512, 2048], “f” is 3, “s” is 2 and the block is “a”.
- The 2 identity blocks use three sets of filters of size [512, 512, 2048], “f” is 3 and the blocks are “b” and “c”.
- The 2D Average Pooling uses a window of shape (2,2) and its name is “avg_pool”.
- The flatten doesn’t have any hyperparameters or name.
- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be 'fc' + str(classes).
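As a sanity check on the name, one common way to count the “50” is: the stage-1 conv, plus three convs in each of the 16 blocks listed above, plus the final Dense layer (the shortcut convs in the convolutional blocks are conventionally left out of this count):

```python
# Count the weighted layers of the architecture described above.
# Stages 2-5 contain 1 convolutional block + (2, 3, 5, 2) identity blocks,
# i.e. 3, 4, 6, 3 blocks with 3 main-path convs each.
blocks_per_stage = {2: 3, 3: 4, 4: 6, 5: 3}
conv_layers = 1 + 3 * sum(blocks_per_stage.values())  # stage-1 conv + 3*16 = 49
total = conv_layers + 1                               # + final Dense layer
print(total)  # 50
```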

Exercise: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.

You’ll need to use this function:
- Average pooling see reference

Here’re some other functions we used in the code below:
- Conv2D: See reference
- BatchNorm: See reference (axis: Integer, the axis that should be normalized (typically the features axis))
- Zero padding: See reference
- Max pooling: See reference
- Fully connected layer: See reference
- Addition: See reference

# GRADED FUNCTION: ResNet50

def ResNet50(input_shape = (64, 64, 3), classes = 6):
    """
    Implementation of the popular ResNet50 with the following architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER

    Arguments:
    input_shape -- shape of the images of the dataset
    classes -- integer, number of classes

    Returns:
    model -- a Model() instance in Keras
    """

    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)

    # Stage 1
    X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2
    X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')

    ### START CODE HERE ###

    # helper functions
    # convolutional_block(X, f, filters, stage, block, s = 2)
    # identity_block(X, f, filters, stage, block)

    # Stage 3 (≈4 lines)
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block='b')
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block='c')
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block='d')

    # Stage 4 (≈6 lines)
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='b')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='c')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='d')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='e')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='f')

    # Stage 5 (≈3 lines)
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block='b')
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block='c')

    # AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
    X = AveragePooling2D((2, 2), name='avg_pool')(X)

    ### END CODE HERE ###

    # output layer
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)

    # Create model
    model = Model(inputs = X_input, outputs = X, name='ResNet50')

    return model

Run the following code to build the model’s graph. If your implementation is not correct you will know it by checking your accuracy when running model.fit(...) below.

model = ResNet50(input_shape = (64, 64, 3), classes = 6)

As seen in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model.

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

The model is now ready to be trained. The only thing you need is a dataset.

Let’s load the SIGNS Dataset.


Figure 6 : SIGNS dataset

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.

# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T

print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))

number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
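For reference, convert_to_one_hot comes from resnets_utils, which this page does not reproduce. A minimal stand-in with the behavior assumed above (a (C, m) one-hot matrix, which the .T then flips to (m, C)) could look like:

```python
import numpy as np

# A minimal stand-in for the convert_to_one_hot helper from resnets_utils
# (an assumption about its behavior, inferred from how it is used above):
# map integer labels to one-hot columns, shape (C, m).
def convert_to_one_hot(Y, C):
    return np.eye(C)[Y.reshape(-1)].T

Y = np.array([[0, 2, 5]])               # three labels out of 6 classes
one_hot = convert_to_one_hot(Y, 6).T    # transpose to (m, C) for model.fit
print(one_hot.shape)                    # (3, 6)
```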

Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.

model.fit(X_train, Y_train, epochs = 2, batch_size = 32)

Epoch 1/2
1080/1080 [==============================] - 107s 99ms/step - loss: 3.0556 - acc: 0.2481
Epoch 2/2
1080/1080 [==============================] - 103s 95ms/step - loss: 2.4399 - acc: 0.3278

<keras.callbacks.History at 0x7ff1c149af60>

Expected Output:

Epoch 1/2 loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours.
Epoch 2/2 loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing.

Let’s see how this model (trained on only two epochs) performs on the test set.

preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))

120/120 [==============================] - 5s 38ms/step
Loss = 2.24901695251
Test Accuracy = 0.166666666667

Expected Output:

Test Accuracy between 0.16 and 0.25

For the purpose of this assignment, we’ve asked you to train the model for only two epochs. You can see that it achieves poor performance. Please go ahead and submit your assignment; to check correctness, the online grader will also run your code for only a small number of epochs.

After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get a lot better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU.

Using a GPU, we’ve trained our own ResNet50 model’s weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.

Note: here I load the model I trained on my own GPU server, so the filename has been changed accordingly.

# model = load_model('ResNet50.h5')
model = load_model('resnet50_44_epochs.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))

120/120 [==============================] - 9s 78ms/step
Loss = 0.0914498666922
Test Accuracy = 0.958333337307

ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you’ve learnt and apply it to your own classification problem to achieve state-of-the-art accuracy.

Congratulations on finishing this assignment! You’ve now implemented a state-of-the-art image classification system!

4 - Test on your own image (Optional/Ungraded)

If you wish, you can also take a picture of your own hand and see the output of the model. To do this:
1. Click on “File” in the upper bar of this notebook, then click “Open” to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook’s directory, in the “images” folder
3. Write your image’s name in the following code
4. Run the code and check if the algorithm is right!

img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))

Input image shape: (1, 64, 64, 3)
class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] =
[[ 1.  0.  0.  0.  0.  0.]]

You can also print a summary of your model by running the following code.

model.summary()

(Prints the ResNet network structure; omitted.)

Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to “File -> Open…-> model.png”.

plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))

(Draws a diagram of the ResNet; omitted.)


What you should remember:
- Very deep “plain” networks don’t work in practice because they are hard to train due to vanishing gradients.
- The skip-connections help to address the vanishing gradient problem. They also make it easy for a ResNet block to learn an identity function.
- There are two main types of blocks: the identity block and the convolutional block.
- Very deep Residual Networks are built by stacking these blocks together.

References

This notebook presents the ResNet algorithm due to He et al. (2015). The implementation here also took significant inspiration and follows the structure given in the github repository of Francois Chollet:

  • Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - Deep Residual Learning for Image Recognition (2015)
  • Francois Chollet’s github repository: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py

Code used during training

epochs = 20

model1 = ResNet50(input_shape = (64, 64, 3), classes = 6)
model1.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model1.fit(X_train, Y_train, epochs = 20, batch_size = 32)
model1.save('resnet50_20_epochs.h5')
preds = model1.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))

Epoch 1/20 - 15s 14ms/step - loss: 2.5141 - acc: 0.4241
Epoch 2/20 - 5s 5ms/step - loss: 1.7727 - acc: 0.6194
Epoch 3/20 - 6s 5ms/step - loss: 1.4935 - acc: 0.6769
Epoch 4/20 - 5s 5ms/step - loss: 1.5494 - acc: 0.5833
Epoch 5/20 - 5s 5ms/step - loss: 0.6902 - acc: 0.7889
Epoch 6/20 - 5s 5ms/step - loss: 0.4155 - acc: 0.8593
Epoch 7/20 - 5s 5ms/step - loss: 0.2782 - acc: 0.9139
Epoch 8/20 - 5s 5ms/step - loss: 0.1665 - acc: 0.9500
Epoch 9/20 - 5s 5ms/step - loss: 0.2578 - acc: 0.9185
Epoch 10/20 - 5s 5ms/step - loss: 0.1690 - acc: 0.9435
Epoch 11/20 - 5s 5ms/step - loss: 0.0913 - acc: 0.9694
Epoch 12/20 - 5s 5ms/step - loss: 0.1389 - acc: 0.9602
Epoch 13/20 - 5s 5ms/step - loss: 0.1490 - acc: 0.9444
Epoch 14/20 - 5s 5ms/step - loss: 0.1044 - acc: 0.9694
Epoch 15/20 - 5s 5ms/step - loss: 0.0435 - acc: 0.9861
Epoch 16/20 - 5s 5ms/step - loss: 0.0324 - acc: 0.9926
Epoch 17/20 - 5s 5ms/step - loss: 0.0190 - acc: 0.9926
Epoch 18/20 - 5s 5ms/step - loss: 0.0577 - acc: 0.9824
Epoch 19/20 - 5s 5ms/step - loss: 0.0268 - acc: 0.9907
Epoch 20/20 - 5s 5ms/step - loss: 0.0662 - acc: 0.9787
120/120 [==============================] - 2s 17ms/step
Loss = 0.825686124961
Test Accuracy = 0.833333333333

epochs = 50

model2 = ResNet50(input_shape = (64, 64, 3), classes = 6)
model2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model2.fit(X_train, Y_train, epochs = 50, batch_size = 32)
model2.save('resnet50_50_epochs.h5')
preds = model2.evaluate(X_test, Y_test)
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))

Epoch 1/50 - 18s 17ms/step - loss: 2.1776 - acc: 0.4556
Epoch 2/50 - 5s 5ms/step - loss: 1.8498 - acc: 0.5370
Epoch 3/50 - 5s 5ms/step - loss: 0.9010 - acc: 0.6852
Epoch 4/50 - 5s 5ms/step - loss: 0.4735 - acc: 0.8407
Epoch 5/50 - 5s 5ms/step - loss: 0.2245 - acc: 0.9222
Epoch 6/50 - 5s 5ms/step - loss: 0.1450 - acc: 0.9611
Epoch 7/50 - 5s 5ms/step - loss: 0.7002 - acc: 0.7759
Epoch 8/50 - 5s 5ms/step - loss: 0.2651 - acc: 0.9102
Epoch 9/50 - 5s 5ms/step - loss: 0.1757 - acc: 0.9481
Epoch 10/50 - 5s 5ms/step - loss: 0.1131 - acc: 0.9602
Epoch 11/50 - 5s 5ms/step - loss: 0.0816 - acc: 0.9759
Epoch 12/50 - 5s 5ms/step - loss: 0.0332 - acc: 0.9907
Epoch 13/50 - 5s 5ms/step - loss: 0.0397 - acc: 0.9861
Epoch 14/50 - 5s 5ms/step - loss: 0.0305 - acc: 0.9907
Epoch 15/50 - 5s 5ms/step - loss: 0.0318 - acc: 0.9889
Epoch 16/50 - 5s 5ms/step - loss: 0.0125 - acc: 0.9972
Epoch 17/50 - 5s 5ms/step - loss: 0.0279 - acc: 0.9907
Epoch 18/50 - 5s 5ms/step - loss: 0.0888 - acc: 0.9657
Epoch 19/50 - 5s 5ms/step - loss: 0.0460 - acc: 0.9843
Epoch 20/50 - 5s 5ms/step - loss: 0.0512 - acc: 0.9787
Epoch 21/50 - 5s 5ms/step - loss: 0.0423 - acc: 0.9843
Epoch 22/50 - 6s 5ms/step - loss: 0.0473 - acc: 0.9870
Epoch 23/50 - 5s 5ms/step - loss: 0.1245 - acc: 0.9750
Epoch 24/50 - 5s 5ms/step - loss: 0.0739 - acc: 0.9741
Epoch 25/50 - 5s 5ms/step - loss: 0.0663 - acc: 0.9815
Epoch 26/50 - 5s 5ms/step - loss: 0.0175 - acc: 0.9926
Epoch 27/50 - 5s 5ms/step - loss: 0.0103 - acc: 0.9981
Epoch 28/50 - 5s 5ms/step - loss: 0.0496 - acc: 0.9963
Epoch 29/50 - 5s 5ms/step - loss: 0.0023 - acc: 0.9991
Epoch 30/50 - 5s 5ms/step - loss: 0.0085 - acc: 0.9972
Epoch 31/50 - 5s 5ms/step - loss: 0.0329 - acc: 0.9981
Epoch 32/50 - 5s 5ms/step - loss: 0.0039 - acc: 0.9981
Epoch 33/50 - 5s 5ms/step - loss: 0.0071 - acc: 0.9981
Epoch 34/50 - 5s 5ms/step - loss: 0.0237 - acc: 0.9898
Epoch 35/50 - 5s 5ms/step - loss: 0.0667 - acc: 0.9769
Epoch 36/50 - 5s 5ms/step - loss: 0.1863 - acc: 0.9500
Epoch 37/50 - 5s 5ms/step - loss: 0.0612 - acc: 0.9787
Epoch 38/50 - 5s 5ms/step - loss: 0.0362 - acc: 0.9880
Epoch 39/50 - 5s 5ms/step - loss: 0.0251 - acc: 0.9935
Epoch 40/50 - 5s 5ms/step - loss: 0.0210 - acc: 0.9898
Epoch 41/50 - 5s 5ms/step - loss: 0.0090 - acc: 0.9981
Epoch 42/50 - 5s 5ms/step - loss: 0.0534 - acc: 0.9870
Epoch 43/50 - 5s 5ms/step - loss: 0.0138 - acc: 0.9954
Epoch 44/50 - 5s 5ms/step - loss: 0.0055 - acc: 0.9991
Epoch 45/50 - 5s 5ms/step - loss: 4.9548e-04 - acc: 1.0000
Epoch 46/50 - 5s 5ms/step - loss: 0.0015 - acc: 1.0000
Epoch 47/50 - 5s 5ms/step - loss: 0.0086 - acc: 0.9972
Epoch 48/50 - 5s 5ms/step - loss: 0.0213 - acc: 0.9944
Epoch 49/50 - 5s 5ms/step - loss: 0.0286 - acc: 0.9898
Epoch 50/50 - 6s 5ms/step - loss: 0.0228 - acc: 0.9880
120/120 - 4s 31ms/step
Loss = 1.81704656283
Test Accuracy = 0.69166667064
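The 50-epoch run above ends with near-perfect training accuracy (0.988) yet only 0.69 test accuracy, a sign of overfitting. A minimal sketch of one common remedy in Keras is to checkpoint the best weights and stop training once the monitored metric stalls; `monitor='loss'` is used here only because the notebook's `fit` call has no validation split (a validation metric would be preferable), and the filename `resnet50_best.h5` is an arbitrary choice for illustration:

```python
from keras.callbacks import EarlyStopping, ModelCheckpoint

# Stop training if the monitored loss has not improved for 5 consecutive epochs.
early_stop = EarlyStopping(monitor='loss', patience=5)

# Keep on disk only the weights from the best epoch seen so far.
checkpoint = ModelCheckpoint('resnet50_best.h5', monitor='loss',
                             save_best_only=True)

# Sketch only -- X_train / Y_train come from the notebook's data-loading cells:
# model2.fit(X_train, Y_train, epochs=50, batch_size=32,
#            callbacks=[early_stop, checkpoint])
```

With these callbacks, a long run like the 50-epoch one would have halted (or at least saved its best snapshot) well before the later epochs where training accuracy keeps rising but generalization has already degraded.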

epochs = 30

model3 = ResNet50(input_shape = (64, 64, 3), classes = 6)
model3.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model3.fit(X_train, Y_train, epochs = 30, batch_size = 32)
model3.save('resnet50_30_epochs.h5')
preds = model3.evaluate(X_test, Y_test)
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))

Epoch 1/30 - 13s 12ms/step - loss: 1.9815 - acc: 0.4713
Epoch 2/30 - 5s 5ms/step - loss: 0.5419 - acc: 0.8120
Epoch 3/30 - 5s 5ms/step - loss: 0.4135 - acc: 0.8713
Epoch 4/30 - 5s 5ms/step - loss: 0.4284 - acc: 0.8713
Epoch 5/30 - 5s 5ms/step - loss: 0.4162 - acc: 0.8722
Epoch 6/30 - 5s 5ms/step - loss: 0.1329 - acc: 0.9546
Epoch 7/30 - 5s 5ms/step - loss: 0.1342 - acc: 0.9602
Epoch 8/30 - 5s 5ms/step - loss: 0.1336 - acc: 0.9630
Epoch 9/30 - 5s 5ms/step - loss: 0.1799 - acc: 0.9611
Epoch 10/30 - 5s 5ms/step - loss: 0.4936 - acc: 0.8731
Epoch 11/30 - 5s 5ms/step - loss: 0.5063 - acc: 0.8398
Epoch 12/30 - 5s 5ms/step - loss: 0.1922 - acc: 0.9426
Epoch 13/30 - 5s 5ms/step - loss: 0.2066 - acc: 0.9435
Epoch 14/30 - 5s 5ms/step - loss: 0.2782 - acc: 0.9139
Epoch 15/30 - 5s 5ms/step - loss: 0.2053 - acc: 0.9324
Epoch 16/30 - 5s 5ms/step - loss: 0.1632 - acc: 0.9602
Epoch 17/30 - 5s 5ms/step - loss: 0.0824 - acc: 0.9787
Epoch 18/30 - 5s 5ms/step - loss: 0.0485 - acc: 0.9880
Epoch 19/30 - 5s 5ms/step - loss: 0.0145 - acc: 0.9981
Epoch 20/30 - 5s 5ms/step - loss: 0.0145 - acc: 0.9944
Epoch 21/30 - 5s 5ms/step - loss: 0.0119 - acc: 0.9963
Epoch 22/30 - 5s 5ms/step - loss: 0.0280 - acc: 0.9954
Epoch 23/30 - 5s 5ms/step - loss: 0.0187 - acc: 0.9917
Epoch 24/30 - 5s 5ms/step - loss: 0.0483 - acc: 0.9824
Epoch 25/30 - 5s 5ms/step - loss: 0.0939 - acc: 0.9685
Epoch 26/30 - 5s 5ms/step - loss: 0.0390 - acc: 0.9907
Epoch 27/30 - 5s 5ms/step - loss: 0.0197 - acc: 0.9917
Epoch 28/30 - 5s 5ms/step - loss: 0.0270 - acc: 0.9972
Epoch 29/30 - 5s 5ms/step - loss: 0.0021 - acc: 1.0000
Epoch 30/30 - 5s 5ms/step - loss: 0.0012 - acc: 1.0000
120/120 - 1s 11ms/step
Loss = 0.0633144895857
Test Accuracy = 0.983333333333

epochs = 44

model4 = ResNet50(input_shape = (64, 64, 3), classes = 6)
model4.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model4.fit(X_train, Y_train, epochs = 44, batch_size = 32)
model4.save('resnet50_44_epochs.h5')
preds = model3.evaluate(X_test, Y_test)  # bug in the original cell: this should be model4.evaluate,
                                         # so the metrics printed below repeat the 30-epoch model's results
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))

Epoch 1/44 - 30s 28ms/step - loss: 2.1430 - acc: 0.3972
Epoch 2/44 - 5s 5ms/step - loss: 0.9240 - acc: 0.6972
Epoch 3/44 - 5s 5ms/step - loss: 0.3779 - acc: 0.8583
Epoch 4/44 - 5s 5ms/step - loss: 0.2152 - acc: 0.9352
Epoch 5/44 - 5s 5ms/step - loss: 0.1680 - acc: 0.9463
Epoch 6/44 - 5s 5ms/step - loss: 0.3060 - acc: 0.9065
Epoch 7/44 - 5s 5ms/step - loss: 0.3636 - acc: 0.8713
Epoch 8/44 - 5s 5ms/step - loss: 0.4806 - acc: 0.8704
Epoch 9/44 - 6s 5ms/step - loss: 0.5736 - acc: 0.8222
Epoch 10/44 - 5s 5ms/step - loss: 0.9006 - acc: 0.8065
Epoch 11/44 - 5s 5ms/step - loss: 1.2664 - acc: 0.7065
Epoch 12/44 - 5s 5ms/step - loss: 1.0772 - acc: 0.7426
Epoch 13/44 - 5s 5ms/step - loss: 0.6108 - acc: 0.8093
Epoch 14/44 - 5s 5ms/step - loss: 0.5305 - acc: 0.8537
Epoch 15/44 - 5s 5ms/step - loss: 0.3256 - acc: 0.9102
Epoch 16/44 - 5s 5ms/step - loss: 0.1295 - acc: 0.9491
Epoch 17/44 - 5s 5ms/step - loss: 0.0994 - acc: 0.9676
Epoch 18/44 - 5s 5ms/step - loss: 0.1509 - acc: 0.9639
Epoch 19/44 - 5s 5ms/step - loss: 0.2855 - acc: 0.8889
Epoch 20/44 - 5s 5ms/step - loss: 0.1235 - acc: 0.9620
Epoch 21/44 - 5s 5ms/step - loss: 0.0788 - acc: 0.9759
Epoch 22/44 - 5s 5ms/step - loss: 0.0639 - acc: 0.9769
Epoch 23/44 - 6s 5ms/step - loss: 0.0678 - acc: 0.9824
Epoch 24/44 - 5s 5ms/step - loss: 0.0255 - acc: 0.9917
Epoch 25/44 - 5s 5ms/step - loss: 0.0320 - acc: 0.9917
Epoch 26/44 - 5s 5ms/step - loss: 0.0136 - acc: 0.9954
Epoch 27/44 - 5s 5ms/step - loss: 0.0370 - acc: 0.9963
Epoch 28/44 - 5s 5ms/step - loss: 0.6671 - acc: 0.8111
Epoch 29/44 - 5s 5ms/step - loss: 0.5925 - acc: 0.8611
Epoch 30/44 - 5s 5ms/step - loss: 0.9083 - acc: 0.8028
Epoch 31/44 - 5s 5ms/step - loss: 0.7969 - acc: 0.7306
Epoch 32/44 - 5s 5ms/step - loss: 0.3669 - acc: 0.8676
Epoch 33/44 - 5s 5ms/step - loss: 0.2057 - acc: 0.9352
Epoch 34/44 - 5s 5ms/step - loss: 0.1457 - acc: 0.9528
Epoch 35/44 - 6s 5ms/step - loss: 0.1085 - acc: 0.9657
Epoch 36/44 - 5s 5ms/step - loss: 0.0829 - acc: 0.9694
Epoch 37/44 - 5s 5ms/step - loss: 0.1326 - acc: 0.9593
Epoch 38/44 - 5s 5ms/step - loss: 0.0626 - acc: 0.9750
Epoch 39/44 - 5s 5ms/step - loss: 0.0381 - acc: 0.9880
Epoch 40/44 - 5s 5ms/step - loss: 0.0153 - acc: 0.9963
Epoch 41/44 - 5s 5ms/step - loss: 0.1065 - acc: 0.9657
Epoch 42/44 - 5s 5ms/step - loss: 0.1160 - acc: 0.9713
Epoch 43/44 - 5s 5ms/step - loss: 0.0581 - acc: 0.9880
Epoch 44/44 - 5s 5ms/step - loss: 0.0293 - acc: 0.9926
120/120 - 0s 1ms/step
Loss = 0.0633144895857
Test Accuracy = 0.983333333333

Summary

The above is the full content of "吴恩达深度学习课程deeplearning.ai课程作业:Class 4 Week 2 Residual Networks" as collected and organized by 生活随笔; we hope this article helps you solve the problems you ran into.

If you find the content on 生活随笔 useful, feel free to recommend it to your friends.
