
cs231n:assignment2——Q4: ConvNet on CIFAR-10


In the lecture video, Andrej Karpathy says this assignment is "meaty but educational" — and it really is meaty. The assignments generally consist of .ipynb files and .py files. Because each .ipynb file in this assignment touches several .py files that overlap with one another, each post presents just one .ipynb or one .py file. (Earlier assignments paired one .ipynb file with one .py file, so those were combined into a single post.)
As always, if you spot any mistakes, please point them out — much appreciated.
Contents of ConvolutionalNetworks.ipynb:

  • Convolutional Networks
  • Convolution: Naive forward pass
  • Aside: Image processing via convolutions
  • Convolution: Naive backward pass
  • Max pooling: Naive forward
  • Max pooling: Naive backward
  • Fast layers
  • Convolutional sandwich layers
  • Three-layer ConvNet
    • Sanity check loss
    • Gradient check
    • Overfit small data
    • Train the net
    • Visualize Filters
  • Spatial Batch Normalization
    • Spatial batch normalization: forward
    • Spatial batch normalization: backward
  • Experiment
    • Things you should try
    • Tips for training
    • Going above and beyond
    • What we expect
  • Extra Credit Description

Convolutional Networks

So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.

First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.

# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))

# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
    print '%s: ' % k, v.shape

X_val:  (1000, 3, 32, 32)
X_train:  (49000, 3, 32, 32)
X_test:  (1000, 3, 32, 32)
y_val:  (1000,)
y_train:  (49000,)
y_test:  (1000,)

Convolution: Naive forward pass

The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.

You don’t have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
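Before running the test, it may help to see the overall shape of a straightforward implementation. Below is a minimal sketch (my own conv_forward_naive_sketch, a hypothetical stand-in for the assignment's conv_forward_naive) that zero-pads the input, slides each filter with four explicit loops, and uses the standard output-size formula H' = 1 + (H + 2*pad - HH) / stride:

import numpy as np

def conv_forward_naive_sketch(x, w, b, conv_param):
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    stride, pad = conv_param['stride'], conv_param['pad']
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    # Zero-pad only the two spatial dimensions.
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    cache = (x, w, b, conv_param)
    return out, cache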

You can test your implementation by running the following:

x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)

conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[-0.08759809, -0.10987781],
                          [-0.18387192, -0.2109216 ]],
                         [[ 0.21027089,  0.21661097],
                          [ 0.22847626,  0.23004637]],
                         [[ 0.50813986,  0.54309974],
                          [ 0.64082444,  0.67101435]]],
                        [[[-0.98053589, -1.03143541],
                          [-1.19128892, -1.24695841]],
                         [[ 0.69108355,  0.66880383],
                          [ 0.59480972,  0.56776003]],
                         [[ 2.36270298,  2.36904306],
                          [ 2.38090835,  2.38247847]]]])

# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)

Testing conv_forward_naive
difference:  2.21214764175e-08

Aside: Image processing via convolutions

As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.

from scipy.misc import imread, imresize

kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d/2:-d/2, :]

img_size = 200   # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))

# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))

# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]

# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]

# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])

# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})

def imshow_noax(img, normalize=True):
    """ Tiny helper to show images as uint8 and remove axis labels """
    if normalize:
        img_max, img_min = np.max(img), np.min(img)
        img = 255.0 * (img - img_min) / (img_max - img_min)
    plt.imshow(img.astype('uint8'))
    plt.gca().axis('off')

# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()

Convolution: Naive backward pass

Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don’t need to worry too much about computational efficiency.
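For orientation, here is a hedged sketch of the matching backward pass, assuming the (x, w, b, conv_param) cache layout from the forward sketch above (the _sketch name is mine, not the assignment's function): each output element routes its upstream gradient back to the input window and filter slice that produced it.

import numpy as np

def conv_backward_naive_sketch(dout, cache):
    x, w, b, conv_param = cache
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    stride, pad = conv_param['stride'], conv_param['pad']
    _, _, H_out, W_out = dout.shape
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    dx_pad = np.zeros_like(x_pad)
    dw = np.zeros_like(w)
    db = dout.sum(axis=(0, 2, 3))  # each bias sees every spatial position
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    dw[f] += window * dout[n, f, i, j]
                    dx_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW] += w[f] * dout[n, f, i, j]
    dx = dx_pad[:, :, pad:H+pad, pad:W+pad]  # strip the padding
    return dx, dw, db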

When you are done, run the following to check your backward pass with a numeric gradient check.

x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}

dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)

out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)

# Your errors should be around 1e-9
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)

Testing conv_backward_naive function
dx error:  1.14027414431e-09
dw error:  2.30256641538e-10
db error:  3.20966816447e-12

Max pooling: Naive forward

Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don’t worry too much about computational efficiency.
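A minimal sketch of the naive version, assuming the assignment's pool_param keys ('pool_height', 'pool_width', 'stride'); since pooling is per-channel, the batch and channel loops can be left to NumPy:

import numpy as np

def max_pool_forward_naive_sketch(x, pool_param):
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i*stride:i*stride+ph, j*stride:j*stride+pw]
            out[:, :, i, j] = window.max(axis=(2, 3))  # max over each window
    cache = (x, pool_param)
    return out, cache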

Check your implementation by running the following:

x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}

out, _ = max_pool_forward_naive(x, pool_param)

correct_out = np.array([[[[-0.26315789, -0.24842105],
                          [-0.20421053, -0.18947368]],
                         [[-0.14526316, -0.13052632],
                          [-0.08631579, -0.07157895]],
                         [[-0.02736842, -0.01263158],
                          [ 0.03157895,  0.04631579]]],
                        [[[ 0.09052632,  0.10526316],
                          [ 0.14947368,  0.16421053]],
                         [[ 0.20842105,  0.22315789],
                          [ 0.26736842,  0.28210526]],
                         [[ 0.32631579,  0.34105263],
                          [ 0.38526316,  0.4       ]]]])
print correct_out.shape

# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)

(2, 3, 2, 2)
Testing max_pool_forward_naive function:
difference:  4.16666651573e-08

Max pooling: Naive backward

Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don’t need to worry about computational efficiency.
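A hedged sketch of the idea, assuming the (x, pool_param) cache from the forward sketch above: the upstream gradient flows only to the element that attained the max in each window, selected here with a boolean mask (on an exact tie, this simplification sends the gradient to every tied element).

import numpy as np

def max_pool_backward_naive_sketch(dout, cache):
    x, pool_param = cache
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    _, _, H_out, W_out = dout.shape
    dx = np.zeros_like(x)
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i*stride:i*stride+ph, j*stride:j*stride+pw]
            # True exactly where the window's max lives.
            mask = (window == window.max(axis=(2, 3), keepdims=True))
            dx[:, :, i*stride:i*stride+ph, j*stride:j*stride+pw] += \
                mask * dout[:, :, i, j][:, :, None, None]
    return dx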

Check your implementation with numeric gradient checking by running the following:

x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}

dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)

out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)

# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)

Testing max_pool_backward_naive function:
dx error:  3.27563382511e-12

Fast layers

Making convolution and pooling layers fast can be challenging. To spare you the pain, we’ve provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.

The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:

python setup.py build_ext --inplace

The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.

NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
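To see why those conditions matter: when the stride equals the pool size and the pooling regions tile the input exactly, pooling collapses to a reshape followed by a max, with no loops at all. The helper below is a hypothetical illustration of that trick, not the actual fast_layers implementation:

import numpy as np

def max_pool_forward_reshape_sketch(x, pool_param):
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    # Only valid for non-overlapping pools that tile the input exactly.
    assert pool_param['stride'] == ph == pw and H % ph == 0 and W % pw == 0
    x_r = x.reshape(N, C, H // ph, ph, W // pw, pw)
    return x_r.max(axis=3).max(axis=4)  # max within each ph x pw tile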

You can compare the performance of the naive and fast versions of these layers by running the following:

from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time

x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}

t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()

print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)

t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()

print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)

Testing conv_forward_fast:
Naive: 23.932088s
Fast: 0.015995s
Speedup: 1496.220665x
Difference:  2.1149165916e-11

Testing conv_backward_fast:
Naive: 23.537247s
Fast: 0.010192s
Speedup: 2309.349201x
dx difference:  5.65990987256e-12
dw difference:  1.43212406176e-12
db difference:  0.0

from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast

x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}

t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()

print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)

t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()

print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)

Testing pool_forward_fast:
Naive: 1.115254s
fast: 0.002008s
speedup: 555.416172x
difference:  0.0

Testing pool_backward_fast:
Naive: 0.476396s
speedup: 39.999800x
dx difference:  0.0

Convolutional “sandwich” layers

Previously we introduced the concept of “sandwich” layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
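As an illustration of the pattern, here is a minimal sketch in the style of cs231n/layer_utils.py: the forward pass chains the primitive forwards and bundles their caches, and the backward pass unwinds them in reverse order (the _sketch names are mine; the imported primitives are the ones exercised in this notebook):

from cs231n.layers import relu_forward, relu_backward
from cs231n.fast_layers import (conv_forward_fast, conv_backward_fast,
                                max_pool_forward_fast, max_pool_backward_fast)

def conv_relu_pool_forward_sketch(x, w, b, conv_param, pool_param):
    a, conv_cache = conv_forward_fast(x, w, b, conv_param)
    s, relu_cache = relu_forward(a)
    out, pool_cache = max_pool_forward_fast(s, pool_param)
    return out, (conv_cache, relu_cache, pool_cache)

def conv_relu_pool_backward_sketch(dout, cache):
    conv_cache, relu_cache, pool_cache = cache
    ds = max_pool_backward_fast(dout, pool_cache)   # undo pooling
    da = relu_backward(ds, relu_cache)              # undo ReLU
    return conv_backward_fast(da, conv_cache)       # dx, dw, db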

from cs231n.layer_utils import *

x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
bn_param = {'mode': 'train'}
gamma = np.ones(w.shape[0])
beta = np.zeros(w.shape[0])

### check conv_relu_pool_forward
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)

dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)

print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)

## check conv_bn_relu_pool_forward
## After adding spatial batch normalization, the db error is surprisingly large. Perhaps because b is
## low-dimensional, much of the computation involving it is summation, so small per-element errors accumulate;
## or perhaps because this is the first layer, the error accumulated by the time backpropagation reaches it is larger.
out, cache = conv_bn_relu_pool_forward(x, w, b, gamma, beta, conv_param, pool_param, bn_param)
dx, dw, db, dgamma, dbeta = conv_bn_relu_pool_backward(dout, cache)

dx_num = eval_numerical_gradient_array(lambda x: conv_bn_relu_pool_forward(x, w, b, gamma, beta, conv_param, pool_param, bn_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_bn_relu_pool_forward(x, w, b, gamma, beta, conv_param, pool_param, bn_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_bn_relu_pool_forward(x, w, b, gamma, beta, conv_param, pool_param, bn_param)[0], b, dout)
dgamma_num = eval_numerical_gradient_array(lambda gamma: conv_bn_relu_pool_forward(x, w, b, gamma, beta, conv_param, pool_param, bn_param)[0], gamma, dout)
dbeta_num = eval_numerical_gradient_array(lambda beta: conv_bn_relu_pool_forward(x, w, b, gamma, beta, conv_param, pool_param, bn_param)[0], beta, dout)

print
print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
print 'dgamma error: ', rel_error(dgamma_num, dgamma)
print 'dbeta error: ', rel_error(dbeta_num, dbeta)

Testing conv_relu_pool
dx error:  1.07784659804e-08
dw error:  3.73875622936e-09
db error:  8.74414919539e-11

Testing conv_relu_pool
dx error:  1.54664016081e-06
dw error:  1.96618403155e-09
db error:  0.0185469286087
dgamma error:  9.44011367278e-12
dbeta error:  1.32364005526e-11

from cs231n.layer_utils import conv_relu_forward, conv_relu_backward

x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}

out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)

dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)

print 'Testing conv_relu:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)

Testing conv_relu:
dx error:  2.54502598987e-09
dw error:  4.53460011947e-10
db error:  6.7945865355e-09

Three-layer ConvNet

Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
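As a roadmap, the class scores inputs with a conv - relu - 2x2 max pool - affine - relu - affine stack. The sketch below is my own summary of that forward scoring path, not the file's exact code; it assumes the sandwich and affine layers from cs231n/layer_utils.py and cs231n/layers.py and the W1/b1, W2/b2, W3/b3 parameter names seen in the gradient check later in this notebook:

from cs231n.layers import affine_forward
from cs231n.layer_utils import conv_relu_pool_forward, affine_relu_forward

def three_layer_scores_sketch(model, X):
    W1, b1 = model.params['W1'], model.params['b1']
    W2, b2 = model.params['W2'], model.params['b2']
    W3, b3 = model.params['W3'], model.params['b3']
    filter_size = W1.shape[2]
    conv_param = {'stride': 1, 'pad': (filter_size - 1) // 2}  # preserves H, W
    pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
    a1, _ = conv_relu_pool_forward(X, W1, b1, conv_param, pool_param)
    a2, _ = affine_relu_forward(a1, W2, b2)
    scores, _ = affine_forward(a2, W3, b3)
    return scores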

Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:

Sanity check loss

After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
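Concretely, with random weights each of the C = 10 CIFAR-10 classes gets roughly uniform softmax probability, so the expected initial loss is simply:

import numpy as np
print(np.log(10))  # ~2.3026, matching the no-regularization value printed below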

model = ThreeLayerConvNet()

N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)

loss, grads = model.loss(X, y)
print 'Initial loss (no regularization): ', loss

model.reg = 0.5
loss, grads = model.loss(X, y)
print 'Initial loss (with regularization): ', loss

Initial loss (no regularization):  2.30258381546
Initial loss (with regularization):  2.50837260664

Gradient check

After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer.

num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)

model = ThreeLayerConvNet(num_filters=3, filter_size=3,
                          input_dim=input_dim, hidden_dim=7,
                          dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
    f = lambda _: model.loss(X, y)[0]
    param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
    e = rel_error(param_grad_num, grads[param_name])
    print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))

W1 max relative error: 9.833916e-04
W2 max relative error: 6.500223e-03
W3 max relative error: 2.111341e-04
b1 max relative error: 4.609401e-05
b2 max relative error: 4.915309e-08
b3 max relative error: 6.948804e-10

Overfit small data

A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.

num_train = 100
small_data = {
    'X_train': data['X_train'][:num_train],
    'y_train': data['y_train'][:num_train],
    'X_val': data['X_val'],
    'y_val': data['y_val'],
}

model = ThreeLayerConvNet(weight_scale=1e-2)

solver = Solver(model, small_data,
                num_epochs=10, batch_size=50,
                update_rule='adam',
                optim_config={
                    'learning_rate': 1e-3,
                },
                verbose=True, print_every=1)
solver.train()

(Iteration 1 / 20) loss: 2.327333
(Epoch 0 / 10) train acc: 0.210000; val_acc: 0.129000
(Iteration 2 / 20) loss: 2.920699
(Epoch 1 / 10) train acc: 0.180000; val_acc: 0.136000
(Iteration 3 / 20) loss: 2.843339
(Iteration 4 / 20) loss: 2.256272
(Epoch 2 / 10) train acc: 0.140000; val_acc: 0.080000
(Iteration 5 / 20) loss: 3.195308
(Iteration 6 / 20) loss: 2.364024
(Epoch 3 / 10) train acc: 0.360000; val_acc: 0.199000
(Iteration 7 / 20) loss: 2.414444
(Iteration 8 / 20) loss: 2.564107
(Epoch 4 / 10) train acc: 0.390000; val_acc: 0.199000
(Iteration 9 / 20) loss: 2.238707
(Iteration 10 / 20) loss: 1.846348
(Epoch 5 / 10) train acc: 0.290000; val_acc: 0.156000
(Iteration 11 / 20) loss: 1.713543
(Iteration 12 / 20) loss: 1.578357
(Epoch 6 / 10) train acc: 0.580000; val_acc: 0.207000
(Iteration 13 / 20) loss: 1.487418
(Iteration 14 / 20) loss: 1.152261
(Epoch 7 / 10) train acc: 0.660000; val_acc: 0.207000
(Iteration 15 / 20) loss: 1.103686
(Iteration 16 / 20) loss: 1.153359
(Epoch 8 / 10) train acc: 0.680000; val_acc: 0.173000
(Iteration 17 / 20) loss: 1.311634
(Iteration 18 / 20) loss: 0.984277
(Epoch 9 / 10) train acc: 0.780000; val_acc: 0.231000
(Iteration 19 / 20) loss: 0.987850
(Iteration 20 / 20) loss: 0.655680
(Epoch 10 / 10) train acc: 0.840000; val_acc: 0.264000

Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:

plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')

plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()

Train the net

By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:

model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)

solver = Solver(model, data,
                num_epochs=1, batch_size=50,
                update_rule='adam',
                optim_config={
                    'learning_rate': 1e-3,
                },
                verbose=True, print_every=20)
solver.train()

(Iteration 1 / 980) loss: 2.304576
(Epoch 0 / 1) train acc: 0.108000; val_acc: 0.098000
(Iteration 21 / 980) loss: 1.965750
(Iteration 41 / 980) loss: 1.979698
(Iteration 61 / 980) loss: 2.101320
(Iteration 81 / 980) loss: 1.901987
(Iteration 101 / 980) loss: 1.823573
(Iteration 121 / 980) loss: 1.559670
(Iteration 141 / 980) loss: 1.521758
(Iteration 161 / 980) loss: 1.614254
(Iteration 181 / 980) loss: 1.525828
(Iteration 201 / 980) loss: 1.801237
(Iteration 221 / 980) loss: 1.771171
(Iteration 241 / 980) loss: 1.935747
(Iteration 261 / 980) loss: 1.706197
(Iteration 281 / 980) loss: 1.771841
(Iteration 301 / 980) loss: 1.730827
(Iteration 321 / 980) loss: 1.766924
(Iteration 341 / 980) loss: 1.604705
(Iteration 361 / 980) loss: 1.689329
(Iteration 381 / 980) loss: 1.487211
(Iteration 401 / 980) loss: 1.652397
(Iteration 421 / 980) loss: 1.624637
(Iteration 441 / 980) loss: 1.774464
(Iteration 461 / 980) loss: 1.728469
(Iteration 481 / 980) loss: 1.990141
(Iteration 501 / 980) loss: 1.571801
(Iteration 521 / 980) loss: 1.592427
(Iteration 541 / 980) loss: 1.860452
(Iteration 561 / 980) loss: 1.967219
(Iteration 581 / 980) loss: 1.513192
(Iteration 601 / 980) loss: 1.872284
(Iteration 621 / 980) loss: 1.673944
(Iteration 641 / 980) loss: 1.810775
(Iteration 661 / 980) loss: 1.636547
(Iteration 681 / 980) loss: 1.489698
(Iteration 701 / 980) loss: 1.718354
(Iteration 721 / 980) loss: 1.916079
(Iteration 741 / 980) loss: 1.666237
(Iteration 761 / 980) loss: 1.716002
(Iteration 781 / 980) loss: 1.543222
(Iteration 801 / 980) loss: 1.491887
(Iteration 821 / 980) loss: 1.967372
(Iteration 841 / 980) loss: 1.685699
(Iteration 861 / 980) loss: 1.239976
(Iteration 881 / 980) loss: 1.609454
(Iteration 901 / 980) loss: 1.513272
(Iteration 921 / 980) loss: 1.752893
(Iteration 941 / 980) loss: 1.586221
(Iteration 961 / 980) loss: 1.616744
(Epoch 1 / 1) train acc: 0.476000; val_acc: 0.490000

Visualize Filters

You can visualize the first-layer convolutional filters from the trained network by running the following:

from cs231n.vis_utils import visualize_grid

grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()

Spatial Batch Normalization

We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called “spatial batch normalization.”

Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.

If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
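One consequence of this observation is that spatial batch normalization can reuse the vanilla implementation: move the channel axis last, flatten (N, H, W) into a single batch axis, and normalize the resulting (N*H*W, C) matrix. A minimal sketch, assuming the batchnorm_forward you wrote earlier in cs231n/layers.py (the _sketch name is mine):

from cs231n.layers import batchnorm_forward

def spatial_batchnorm_forward_sketch(x, gamma, beta, bn_param):
    N, C, H, W = x.shape
    # (N, C, H, W) -> (N, H, W, C) -> (N*H*W, C): one row per spatial position
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return out, cache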

Spatial batch normalization: forward

In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:

# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization

N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10

print 'Before spatial batch normalization:'
print '  Shape: ', x.shape
print '  Means: ', x.mean(axis=(0, 2, 3))
print '  Stds: ', x.std(axis=(0, 2, 3))

# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization:'
print '  Shape: ', out.shape
print '  Means: ', out.mean(axis=(0, 2, 3))
print '  Stds: ', out.std(axis=(0, 2, 3))

# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization (nontrivial gamma, beta):'
print '  Shape: ', out.shape
print '  Means: ', out.mean(axis=(0, 2, 3))
print '  Stds: ', out.std(axis=(0, 2, 3))

Before spatial batch normalization:
  Shape:  (2, 3, 4, 5)
  Means:  [ 11.09151825  10.38671938   9.79044576]
  Stds:  [ 4.23766998  4.62486377  4.55046613]
After spatial batch normalization:
  Shape:  (2, 3, 4, 5)
  Means:  [  1.16573418e-16   8.54871729e-16   1.99840144e-16]
  Stds:  [ 0.99999972  0.99999977  0.99999976]
After spatial batch normalization (nontrivial gamma, beta):
  Shape:  (2, 3, 4, 5)
  Means:  [ 6.  7.  8.]
  Stds:  [ 2.99999916  3.99999906  4.99999879]

# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.

N, C, H, W = 10, 4, 11, 12

bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in xrange(50):
    x = 2.3 * np.random.randn(N, C, H, W) + 13
    spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)

# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After spatial batch normalization (test-time):'
print '  means: ', a_norm.mean(axis=(0, 2, 3))
print '  stds: ', a_norm.std(axis=(0, 2, 3))

After spatial batch normalization (test-time):
  means:  [ 0.06975173  0.06009512  0.02887493  0.02397713]
  stds:  [ 1.00763471  0.99021634  0.99863325  0.97411123]

Spatial batch normalization: backward

In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
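The backward pass can mirror the same reshape trick as the forward sketch above; a hedged sketch reusing batchnorm_backward from cs231n/layers.py (again, the _sketch name is mine):

from cs231n.layers import batchnorm_backward

def spatial_batchnorm_backward_sketch(dout, cache):
    N, C, H, W = dout.shape
    # Flatten dout exactly as the forward pass flattened x.
    dout_flat = dout.transpose(0, 2, 3, 1).reshape(-1, C)
    dx_flat, dgamma, dbeta = batchnorm_backward(dout_flat, cache)
    dx = dx_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return dx, dgamma, dbeta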

N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)

bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]

dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)

_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)

dx error:  4.3762056051e-08
dgamma error:  4.25829344038e-11
dbeta error:  5.07865400281e-12

Experiment!

Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:

Things you should try:

  • Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
  • Number of filters: Above we used 32 filters. Do more or fewer do better?
  • Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
  • Network architecture: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:
    • [conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
    • [conv-relu-pool]xN - [affine]xM - [softmax or SVM]
    • [conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]

Tips for training

For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:

  • If the parameters are working well, you should see improvement within a few hundred iterations
  • Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all (a sketch of this search follows this list).
  • Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
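A hedged sketch of that coarse stage, assuming the ThreeLayerConvNet, Solver, and data objects from the cells above; the sampling ranges and run counts are illustrative, not tuned values:

import numpy as np
from cs231n.classifiers.cnn import ThreeLayerConvNet
from cs231n.solver import Solver

best_val, best_hp = -1, None
for _ in range(10):                          # coarse stage: a few short runs
    lr = 10 ** np.random.uniform(-4, -2)     # sample on a log scale
    reg = 10 ** np.random.uniform(-4, -1)
    model = ThreeLayerConvNet(weight_scale=1e-2, reg=reg)
    solver = Solver(model, data, num_epochs=1, batch_size=50,
                    update_rule='adam',
                    optim_config={'learning_rate': lr}, verbose=False)
    solver.train()
    if solver.best_val_acc > best_val:
        best_val, best_hp = solver.best_val_acc, (lr, reg)
print(best_val, best_hp)  # then repeat with a narrower range around best_hp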

Going above and beyond

If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.

  • Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
  • Alternative activation functions such as leaky ReLU, parametric ReLU, or MaxOut.
  • Model ensembles
  • Data augmentation

If you do decide to implement something extra, clearly describe it in the “Extra Credit Description” cell below.

What we expect

At the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.

You should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.

Have fun and happy training!

model = ConvNetArch_1([(2,7),(4,5)], [5,5], connect_conv=(4,3), use_batchnorm=True,
                      loss_fuction = 'softmax', weight_scale=5e-2, reg=0, dtype=np.float64)

###################### check shape of initial parameters: ######################
# for k, v in model.params.iteritems():
#     print "%s" % k, model.params[k].shape
#################################################################################

########################## Sanity check loss: ###################################
#### After adding batch norm, the loss comes out around 2.7-2.9 when it should be around 2.3;
#### not sure where the bug is — possibly too much batch norm, with one after every layer.
N = 10
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)

loss, _ = model.loss(X, y)
print 'Initial loss (no regularization): ', loss

model.reg = 0.1
loss, _ = model.loss(X, y)
print 'Initial loss (with regularization): ', loss
#################################################################################

########################## Sanity gradient check: ###############################
N = 2
### Set X's width and height to 16 to speed things up; computing numerical gradients is very slow.
X = np.random.randn(N, 3, 16, 16)
y = np.random.randint(10, size=N)

model = ConvNetArch_1([(2,3)], [5,5], input_dim=(3, 16, 16), connect_conv=(2,3), use_batchnorm=True,
                      loss_fuction = 'softmax', weight_scale=5e-2, reg=0, dtype=np.float64)

loss, grads = model.loss(X, y)

for param_name in sorted(grads):
    f = lambda _: model.loss(X, y)[0]
    param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
    #print param_grad_num
    e = rel_error(param_grad_num, grads[param_name])
    print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
########################################################

Initial loss (no regularization):  2.8666209542
Initial loss (with regularization):  3.12124148165
CCW max relative error: 3.688953e-06
CW1 max relative error: 2.849567e-06
FW1 max relative error: 8.171128e-03
FW2 max relative error: 6.072839e-06
cb1 max relative error: 6.661434e-02
cbeta1 max relative error: 1.112161e-08
ccb max relative error: 4.440936e-01
ccbeta max relative error: 5.932733e-09
ccgamma max relative error: 7.900637e-09
cgamma1 max relative error: 3.363437e-09
fb1 max relative error: 3.552714e-07
fb2 max relative error: 7.993606e-07
fbeta1 max relative error: 6.106227e-08
fbeta2 max relative error: 5.441469e-10
fgamma1 max relative error: 3.206630e-09
fgamma2 max relative error: 9.478054e-10

### The program runs very slowly; these parameters are the first set I tried and they work fairly well,
### reaching 50+% by the second epoch.
### After running overnight I found it starts to overfit after about epoch 10, so I changed to epoch=10;
### best_val_acc = 0.683. Previously, with epoch=20, best_val_acc could reach about 70%.
### On my machine the program is too slow, so I didn't tune the parameters much.
### Given this speed, the following architecture was not implemented:
### [conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]
### The following two architectures are currently supported:
### [conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
### [conv-relu-pool]xN - [affine]xM - [softmax or SVM]

model = ConvNetArch_1([(32,3),(64,3),(128,3)], [100], connect_conv=0, use_batchnorm=False,
                      loss_fuction = 'softmax', weight_scale=5e-2, reg=0, dtype=np.float64)

solver = Solver(model, data,
                num_epochs=10, batch_size=100,
                update_rule='adam',
                optim_config={'learning_rate': 1e-3},
                print_every=100,
                lr_decay=0.95,
                verbose=True)
solver.train()
print solver.best_val_acc

(Iteration 1 / 4900) loss: 47.549210
(Epoch 0 / 10) train acc: 0.122000; val_acc: 0.124000
(Iteration 101 / 4900) loss: 1.889701
(Iteration 201 / 4900) loss: 1.628195
(Iteration 301 / 4900) loss: 1.503709
(Iteration 401 / 4900) loss: 1.284585
(Epoch 1 / 10) train acc: 0.497000; val_acc: 0.531000
(Iteration 501 / 4900) loss: 1.277860
(Iteration 601 / 4900) loss: 1.215424
(Iteration 701 / 4900) loss: 1.209591
(Iteration 801 / 4900) loss: 1.173904
(Iteration 901 / 4900) loss: 1.068132
(Epoch 2 / 10) train acc: 0.594000; val_acc: 0.580000
(Iteration 1001 / 4900) loss: 1.062385
(Iteration 1101 / 4900) loss: 1.080026
(Iteration 1201 / 4900) loss: 0.758866
(Iteration 1301 / 4900) loss: 0.818387
(Iteration 1401 / 4900) loss: 1.086589
(Epoch 3 / 10) train acc: 0.674000; val_acc: 0.616000
(Iteration 1501 / 4900) loss: 0.980248
(Iteration 1601 / 4900) loss: 0.964738
(Iteration 1701 / 4900) loss: 1.003369
(Iteration 1801 / 4900) loss: 0.974146
(Iteration 1901 / 4900) loss: 1.043093
(Epoch 4 / 10) train acc: 0.727000; val_acc: 0.645000
(Iteration 2001 / 4900) loss: 0.684164
(Iteration 2101 / 4900) loss: 0.899941
(Iteration 2201 / 4900) loss: 0.670452
(Iteration 2301 / 4900) loss: 0.715849
(Iteration 2401 / 4900) loss: 0.798524
(Epoch 5 / 10) train acc: 0.759000; val_acc: 0.661000
(Iteration 2501 / 4900) loss: 0.725618
(Iteration 2601 / 4900) loss: 0.702008
(Iteration 2701 / 4900) loss: 0.591424
(Iteration 2801 / 4900) loss: 0.943362
(Iteration 2901 / 4900) loss: 0.450685
(Epoch 6 / 10) train acc: 0.773000; val_acc: 0.674000
(Iteration 3001 / 4900) loss: 0.898413
(Iteration 3101 / 4900) loss: 0.627382
(Iteration 3201 / 4900) loss: 0.454569
(Iteration 3301 / 4900) loss: 0.446561
(Iteration 3401 / 4900) loss: 0.499366
(Epoch 7 / 10) train acc: 0.795000; val_acc: 0.667000
(Iteration 3501 / 4900) loss: 0.503052
(Iteration 3601 / 4900) loss: 0.408205
(Iteration 3701 / 4900) loss: 0.437030
(Iteration 3801 / 4900) loss: 0.510435
(Iteration 3901 / 4900) loss: 0.735819
(Epoch 8 / 10) train acc: 0.835000; val_acc: 0.678000
(Iteration 4001 / 4900) loss: 0.559391
(Iteration 4101 / 4900) loss: 0.451097
(Iteration 4201 / 4900) loss: 0.609639
(Iteration 4301 / 4900) loss: 0.549392
(Iteration 4401 / 4900) loss: 0.704371
(Epoch 9 / 10) train acc: 0.821000; val_acc: 0.682000
(Iteration 4501 / 4900) loss: 0.642858
(Iteration 4601 / 4900) loss: 0.502988
(Iteration 4701 / 4900) loss: 0.418752
(Iteration 4801 / 4900) loss: 0.306134
(Epoch 10 / 10) train acc: 0.842000; val_acc: 0.683000
0.683

plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')

plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()

Extra Credit Description

If you implement any additional features for extra credit, clearly describe them here with pointers to any code in this or other files if applicable.

