
cs231n:assignment2——Q4: ConvNet on CIFAR-10


In the lecture videos Andrej Karpathy describes this assignment as "meaty but educational," and it really is meaty. Assignments normally consist of .ipynb files and .py files. Because each .ipynb file in this assignment involves several .py files that overlap with one another, each post presents a single .ipynb or .py file. (Earlier assignments paired one .ipynb file with one .py file, so those fit into one post each.)
As always, if you find any mistakes, please point them out. Thanks.
Contents of ConvolutionalNetworks.ipynb:

  • Convolutional Networks
  • Convolution Naive forward pass
  • Aside Image processing via convolutions
  • Convolution Naive backward pass
  • Max pooling Naive forward
  • Max pooling Naive backward
  • Fast layers
  • Convolutional sandwich layers
  • Three-layer ConvNet
    • Sanity check loss
    • Gradient check
    • Overfit small data
    • Train the net
    • Visualize Filters
  • Spatial Batch Normalization
    • Spatial batch normalization forward
    • Spatial batch normalization backward
  • Experiment
      • Things you should try
      • Tips for training
      • Going above and beyond
      • What we expect
  • Extra Credit Description

Convolutional Networks

So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.

First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.

# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))

# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
    print '%s: ' % k, v.shape

Output:
X_val:  (1000, 3, 32, 32)
X_train:  (49000, 3, 32, 32)
X_test:  (1000, 3, 32, 32)
y_val:  (1000,)
y_train:  (49000,)
y_test:  (1000,)

Convolution: Naive forward pass

The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.

You don’t have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
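Since these posts share my solutions, here is a minimal sketch of how conv_forward_naive can be written with explicit loops (the course skeleton defines the exact signature; the body below is my own and assumes the standard zero-padding convention):

def conv_forward_naive(x, w, b, conv_param):
    # x: (N, C, H, W), w: (F, C, HH, WW), b: (F,)
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    stride, pad = conv_param['stride'], conv_param['pad']
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in xrange(N):
        for f in xrange(F):
            for i in xrange(H_out):
                for j in xrange(W_out):
                    # Dot the filter against one receptive field at a time
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    cache = (x, w, b, conv_param)
    return out, cache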

You can test your implementation by running the following:

x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)

conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[-0.08759809, -0.10987781],
                          [-0.18387192, -0.2109216 ]],
                         [[ 0.21027089,  0.21661097],
                          [ 0.22847626,  0.23004637]],
                         [[ 0.50813986,  0.54309974],
                          [ 0.64082444,  0.67101435]]],
                        [[[-0.98053589, -1.03143541],
                          [-1.19128892, -1.24695841]],
                         [[ 0.69108355,  0.66880383],
                          [ 0.59480972,  0.56776003]],
                         [[ 2.36270298,  2.36904306],
                          [ 2.38090835,  2.38247847]]]])

# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)

Output:
Testing conv_forward_naive
difference:  2.21214764175e-08

Aside: Image processing via convolutions

As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.

from scipy.misc import imread, imresize

kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d/2:-d/2, :]

img_size = 200  # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))

# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))

# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]

# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]

# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])

# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})

def imshow_noax(img, normalize=True):
    """ Tiny helper to show images as uint8 and remove axis labels """
    if normalize:
        img_max, img_min = np.max(img), np.min(img)
        img = 255.0 * (img - img_min) / (img_max - img_min)
    plt.imshow(img.astype('uint8'))
    plt.gca().axis('off')

# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()

Convolution: Naive backward pass

Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don’t need to worry too much about computational efficiency.
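For reference, a matching naive backward pass, sketched under the same conventions as the forward pass above (it assumes pad > 0 when stripping the padding at the end):

def conv_backward_naive(dout, cache):
    x, w, b, conv_param = cache
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    _, _, H_out, W_out = dout.shape
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    dx_pad = np.zeros_like(x_pad)
    dw = np.zeros_like(w)
    # Each filter's bias touches every output position, so db is a plain sum
    db = dout.sum(axis=(0, 2, 3))
    for n in xrange(N):
        for f in xrange(F):
            for i in xrange(H_out):
                for j in xrange(W_out):
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    dw[f] += window * dout[n, f, i, j]
                    dx_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW] += w[f] * dout[n, f, i, j]
    dx = dx_pad[:, :, pad:-pad, pad:-pad]  # strip the zero padding (assumes pad > 0)
    return dx, dw, db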

When you are done, run the following to check your backward pass with a numeric gradient check.

x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}

dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)

out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)

# Your errors should be around 1e-9
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)

Output:
Testing conv_backward_naive function
dx error:  1.14027414431e-09
dw error:  2.30256641538e-10
db error:  3.20966816447e-12

Max pooling: Naive forward

Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don’t worry too much about computational efficiency.
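Max pooling has no learned parameters, so the forward pass is just a windowed maximum. A minimal sketch (my own variable names; the skeleton fixes the signature):

def max_pool_forward_naive(x, pool_param):
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in xrange(H_out):
        for j in xrange(W_out):
            # Take the max over each pooling window, for all images/channels at once
            window = x[:, :, i*stride:i*stride+ph, j*stride:j*stride+pw]
            out[:, :, i, j] = window.max(axis=(2, 3))
    cache = (x, pool_param)
    return out, cache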

Check your implementation by running the following:

x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}

out, _ = max_pool_forward_naive(x, pool_param)

correct_out = np.array([[[[-0.26315789, -0.24842105],
                          [-0.20421053, -0.18947368]],
                         [[-0.14526316, -0.13052632],
                          [-0.08631579, -0.07157895]],
                         [[-0.02736842, -0.01263158],
                          [ 0.03157895,  0.04631579]]],
                        [[[ 0.09052632,  0.10526316],
                          [ 0.14947368,  0.16421053]],
                         [[ 0.20842105,  0.22315789],
                          [ 0.26736842,  0.28210526]],
                         [[ 0.32631579,  0.34105263],
                          [ 0.38526316,  0.4       ]]]])
print correct_out.shape

# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)

Output:
(2, 3, 2, 2)
Testing max_pool_forward_naive function:
difference:  4.16666651573e-08

Max pooling: Naive backward

Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don’t need to worry about computational efficiency.
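A sketch of the corresponding backward pass: the upstream gradient is routed only to the input element that won the max in each pooling window (a boolean mask makes this easy):

def max_pool_backward_naive(dout, cache):
    x, pool_param = cache
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    _, _, H_out, W_out = dout.shape
    dx = np.zeros_like(x)
    for n in xrange(N):
        for c in xrange(C):
            for i in xrange(H_out):
                for j in xrange(W_out):
                    window = x[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw]
                    mask = (window == window.max())  # gradient flows only to the max element
                    dx[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw] += mask * dout[n, c, i, j]
    return dx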

Check your implementation with numeric gradient checking by running the following:

x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}

dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)

out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)

# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)

Output:
Testing max_pool_backward_naive function:
dx error:  3.27563382511e-12

Fast layers

Making convolution and pooling layers fast can be challenging. To spare you the pain, we’ve provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.

The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:

python setup.py build_ext --inplace

The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.

NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
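The speed of the fast convolution comes from the im2col trick: every receptive field is copied into the column of a matrix, so the whole convolution collapses into one large matrix multiply. The provided fast_layers.py does this in Cython; the rough NumPy illustration below is my own (the names are not the library's) and only shows the idea:

def conv_forward_im2col_sketch(x, w, b, stride, pad):
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    # Gather every receptive field into a column: (C*HH*WW, N*H_out*W_out)
    cols = np.zeros((C * HH * WW, N * H_out * W_out))
    idx = 0
    for n in xrange(N):
        for i in xrange(H_out):
            for j in xrange(W_out):
                cols[:, idx] = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW].ravel()
                idx += 1
    # The whole convolution is now a single matrix multiply
    res = w.reshape(F, -1).dot(cols) + b.reshape(-1, 1)
    return res.reshape(F, N, H_out, W_out).transpose(1, 0, 2, 3)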

You can compare the performance of the naive and fast versions of these layers by running the following:

from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time

x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}

t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()

print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)

t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()

print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)

Output:
Testing conv_forward_fast:
Naive: 23.932088s
Fast: 0.015995s
Speedup: 1496.220665x
Difference:  2.1149165916e-11

Testing conv_backward_fast:
Naive: 23.537247s
Fast: 0.010192s
Speedup: 2309.349201x
dx difference:  5.65990987256e-12
dw difference:  1.43212406176e-12
db difference:  0.0

from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast

x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}

t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()

print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)

t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()

print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)

Output:
Testing pool_forward_fast:
Naive: 1.115254s
fast: 0.002008s
speedup: 555.416172x
difference:  0.0

Testing pool_backward_fast:
Naive: 0.476396s
speedup: 39.999800x
dx difference:  0.0

Convolutional “sandwich” layers

Previously we introduced the concept of “sandwich” layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
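The conv-relu-pool sandwich, for example, is essentially just chaining the three layers and bundling their caches. A sketch of that pattern, assuming the fast layers above and the relu_forward/relu_backward functions from earlier in the assignment:

def conv_relu_pool_forward(x, w, b, conv_param, pool_param):
    a, conv_cache = conv_forward_fast(x, w, b, conv_param)
    s, relu_cache = relu_forward(a)
    out, pool_cache = max_pool_forward_fast(s, pool_param)
    cache = (conv_cache, relu_cache, pool_cache)
    return out, cache

def conv_relu_pool_backward(dout, cache):
    # Backward passes run in reverse order, each consuming its own cache
    conv_cache, relu_cache, pool_cache = cache
    ds = max_pool_backward_fast(dout, pool_cache)
    da = relu_backward(ds, relu_cache)
    dx, dw, db = conv_backward_fast(da, conv_cache)
    return dx, dw, db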

from cs231n.layer_utils import *

x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
bn_param = {'mode': 'train'}
gamma = np.ones(w.shape[0])
beta = np.zeros(w.shape[0])

### check conv_relu_pool_forward
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)

dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)

print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)

## check conv_bn_relu_pool_forward
## After adding spatial batch normalization, I don't know why the db error is so large. It may be because b has few
## dimensions, so much of the computation is summation, which accumulates the small error on each element; or it may
## be that this is the first layer, so the error has accumulated by the time backpropagation reaches it.
out, cache = conv_bn_relu_pool_forward(x, w, b, gamma, beta, conv_param, pool_param, bn_param)
dx, dw, db, dgamma, dbeta = conv_bn_relu_pool_backward(dout, cache)

dx_num = eval_numerical_gradient_array(lambda x: conv_bn_relu_pool_forward(x, w, b, gamma, beta, conv_param, pool_param, bn_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_bn_relu_pool_forward(x, w, b, gamma, beta, conv_param, pool_param, bn_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_bn_relu_pool_forward(x, w, b, gamma, beta, conv_param, pool_param, bn_param)[0], b, dout)
dgamma_num = eval_numerical_gradient_array(lambda gamma: conv_bn_relu_pool_forward(x, w, b, gamma, beta, conv_param, pool_param, bn_param)[0], gamma, dout)
dbeta_num = eval_numerical_gradient_array(lambda beta: conv_bn_relu_pool_forward(x, w, b, gamma, beta, conv_param, pool_param, bn_param)[0], beta, dout)

print
print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
print 'dgamma error: ', rel_error(dgamma_num, dgamma)
print 'dbeta error: ', rel_error(dbeta_num, dbeta)

Output:
Testing conv_relu_pool
dx error:  1.07784659804e-08
dw error:  3.73875622936e-09
db error:  8.74414919539e-11

Testing conv_relu_pool
dx error:  1.54664016081e-06
dw error:  1.96618403155e-09
db error:  0.0185469286087
dgamma error:  9.44011367278e-12
dbeta error:  1.32364005526e-11

from cs231n.layer_utils import conv_relu_forward, conv_relu_backward

x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}

out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)

dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)

print 'Testing conv_relu:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)

Output:
Testing conv_relu:
dx error:  2.54502598987e-09
dw error:  4.53460011947e-10
db error:  6.7945865355e-09

Three-layer ConvNet

Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.

Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:

Sanity check loss

After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
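As a quick arithmetic check of that claim: with C = 10 classes and roughly uniform softmax probabilities at initialization, the expected loss is

print -np.log(1.0 / 10)  # 2.302585..., matching the initial loss printed below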

model = ThreeLayerConvNet()

N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)

loss, grads = model.loss(X, y)
print 'Initial loss (no regularization): ', loss

model.reg = 0.5
loss, grads = model.loss(X, y)
print 'Initial loss (with regularization): ', loss

Output:
Initial loss (no regularization):  2.30258381546
Initial loss (with regularization):  2.50837260664

Gradient check

After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer.

num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)

model = ThreeLayerConvNet(num_filters=3, filter_size=3,
                          input_dim=input_dim, hidden_dim=7,
                          dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
    f = lambda _: model.loss(X, y)[0]
    param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
    e = rel_error(param_grad_num, grads[param_name])
    print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))

Output:
W1 max relative error: 9.833916e-04
W2 max relative error: 6.500223e-03
W3 max relative error: 2.111341e-04
b1 max relative error: 4.609401e-05
b2 max relative error: 4.915309e-08
b3 max relative error: 6.948804e-10

Overfit small data

A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.

num_train = 100
small_data = {
    'X_train': data['X_train'][:num_train],
    'y_train': data['y_train'][:num_train],
    'X_val': data['X_val'],
    'y_val': data['y_val'],
}

model = ThreeLayerConvNet(weight_scale=1e-2)

solver = Solver(model, small_data,
                num_epochs=10, batch_size=50,
                update_rule='adam',
                optim_config={
                    'learning_rate': 1e-3,
                },
                verbose=True, print_every=1)
solver.train()

Output:
(Iteration 1 / 20) loss: 2.327333
(Epoch 0 / 10) train acc: 0.210000; val_acc: 0.129000
(Iteration 2 / 20) loss: 2.920699
(Epoch 1 / 10) train acc: 0.180000; val_acc: 0.136000
(Iteration 3 / 20) loss: 2.843339
(Iteration 4 / 20) loss: 2.256272
(Epoch 2 / 10) train acc: 0.140000; val_acc: 0.080000
(Iteration 5 / 20) loss: 3.195308
(Iteration 6 / 20) loss: 2.364024
(Epoch 3 / 10) train acc: 0.360000; val_acc: 0.199000
(Iteration 7 / 20) loss: 2.414444
(Iteration 8 / 20) loss: 2.564107
(Epoch 4 / 10) train acc: 0.390000; val_acc: 0.199000
(Iteration 9 / 20) loss: 2.238707
(Iteration 10 / 20) loss: 1.846348
(Epoch 5 / 10) train acc: 0.290000; val_acc: 0.156000
(Iteration 11 / 20) loss: 1.713543
(Iteration 12 / 20) loss: 1.578357
(Epoch 6 / 10) train acc: 0.580000; val_acc: 0.207000
(Iteration 13 / 20) loss: 1.487418
(Iteration 14 / 20) loss: 1.152261
(Epoch 7 / 10) train acc: 0.660000; val_acc: 0.207000
(Iteration 15 / 20) loss: 1.103686
(Iteration 16 / 20) loss: 1.153359
(Epoch 8 / 10) train acc: 0.680000; val_acc: 0.173000
(Iteration 17 / 20) loss: 1.311634
(Iteration 18 / 20) loss: 0.984277
(Epoch 9 / 10) train acc: 0.780000; val_acc: 0.231000
(Iteration 19 / 20) loss: 0.987850
(Iteration 20 / 20) loss: 0.655680
(Epoch 10 / 10) train acc: 0.840000; val_acc: 0.264000

Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:

plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')

plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()

Train the net

By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:

model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)

solver = Solver(model, data,
                num_epochs=1, batch_size=50,
                update_rule='adam',
                optim_config={
                    'learning_rate': 1e-3,
                },
                verbose=True, print_every=20)
solver.train()

Output:
(Iteration 1 / 980) loss: 2.304576
(Epoch 0 / 1) train acc: 0.108000; val_acc: 0.098000
(Iteration 21 / 980) loss: 1.965750
(Iteration 41 / 980) loss: 1.979698
(Iteration 61 / 980) loss: 2.101320
(Iteration 81 / 980) loss: 1.901987
(Iteration 101 / 980) loss: 1.823573
(Iteration 121 / 980) loss: 1.559670
(Iteration 141 / 980) loss: 1.521758
(Iteration 161 / 980) loss: 1.614254
(Iteration 181 / 980) loss: 1.525828
(Iteration 201 / 980) loss: 1.801237
(Iteration 221 / 980) loss: 1.771171
(Iteration 241 / 980) loss: 1.935747
(Iteration 261 / 980) loss: 1.706197
(Iteration 281 / 980) loss: 1.771841
(Iteration 301 / 980) loss: 1.730827
(Iteration 321 / 980) loss: 1.766924
(Iteration 341 / 980) loss: 1.604705
(Iteration 361 / 980) loss: 1.689329
(Iteration 381 / 980) loss: 1.487211
(Iteration 401 / 980) loss: 1.652397
(Iteration 421 / 980) loss: 1.624637
(Iteration 441 / 980) loss: 1.774464
(Iteration 461 / 980) loss: 1.728469
(Iteration 481 / 980) loss: 1.990141
(Iteration 501 / 980) loss: 1.571801
(Iteration 521 / 980) loss: 1.592427
(Iteration 541 / 980) loss: 1.860452
(Iteration 561 / 980) loss: 1.967219
(Iteration 581 / 980) loss: 1.513192
(Iteration 601 / 980) loss: 1.872284
(Iteration 621 / 980) loss: 1.673944
(Iteration 641 / 980) loss: 1.810775
(Iteration 661 / 980) loss: 1.636547
(Iteration 681 / 980) loss: 1.489698
(Iteration 701 / 980) loss: 1.718354
(Iteration 721 / 980) loss: 1.916079
(Iteration 741 / 980) loss: 1.666237
(Iteration 761 / 980) loss: 1.716002
(Iteration 781 / 980) loss: 1.543222
(Iteration 801 / 980) loss: 1.491887
(Iteration 821 / 980) loss: 1.967372
(Iteration 841 / 980) loss: 1.685699
(Iteration 861 / 980) loss: 1.239976
(Iteration 881 / 980) loss: 1.609454
(Iteration 901 / 980) loss: 1.513272
(Iteration 921 / 980) loss: 1.752893
(Iteration 941 / 980) loss: 1.586221
(Iteration 961 / 980) loss: 1.616744
(Epoch 1 / 1) train acc: 0.476000; val_acc: 0.490000

Visualize Filters

You can visualize the first-layer convolutional filters from the trained network by running the following:

from cs231n.vis_utils import visualize_grid

grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()

Spatial Batch Normalization

We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called “spatial batch normalization.”

Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.

If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
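One common way to implement this is to fold the N, H, W dimensions together so that each channel becomes one feature column, then reuse the vanilla batchnorm_forward/batchnorm_backward you wrote earlier in the assignment. A sketch of that reshape trick:

def spatial_batchnorm_forward(x, gamma, beta, bn_param):
    N, C, H, W = x.shape
    # Move channels last and flatten: each channel is one feature over N*H*W samples
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return out, cache

def spatial_batchnorm_backward(dout, cache):
    N, C, H, W = dout.shape
    dout_flat = dout.transpose(0, 2, 3, 1).reshape(-1, C)
    dx_flat, dgamma, dbeta = batchnorm_backward(dout_flat, cache)
    dx = dx_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return dx, dgamma, dbeta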

Spatial batch normalization: forward

In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:

# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization

N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10

print 'Before spatial batch normalization:'
print '  Shape: ', x.shape
print '  Means: ', x.mean(axis=(0, 2, 3))
print '  Stds: ', x.std(axis=(0, 2, 3))

# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization:'
print '  Shape: ', out.shape
print '  Means: ', out.mean(axis=(0, 2, 3))
print '  Stds: ', out.std(axis=(0, 2, 3))

# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization (nontrivial gamma, beta):'
print '  Shape: ', out.shape
print '  Means: ', out.mean(axis=(0, 2, 3))
print '  Stds: ', out.std(axis=(0, 2, 3))

Output:
Before spatial batch normalization:
  Shape:  (2, 3, 4, 5)
  Means:  [ 11.09151825  10.38671938   9.79044576]
  Stds:  [ 4.23766998  4.62486377  4.55046613]
After spatial batch normalization:
  Shape:  (2, 3, 4, 5)
  Means:  [  1.16573418e-16   8.54871729e-16   1.99840144e-16]
  Stds:  [ 0.99999972  0.99999977  0.99999976]
After spatial batch normalization (nontrivial gamma, beta):
  Shape:  (2, 3, 4, 5)
  Means:  [ 6.  7.  8.]
  Stds:  [ 2.99999916  3.99999906  4.99999879]

# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.

N, C, H, W = 10, 4, 11, 12

bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in xrange(50):
    x = 2.3 * np.random.randn(N, C, H, W) + 13
    spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)

# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After spatial batch normalization (test-time):'
print '  means: ', a_norm.mean(axis=(0, 2, 3))
print '  stds: ', a_norm.std(axis=(0, 2, 3))

Output:
After spatial batch normalization (test-time):
  means:  [ 0.06975173  0.06009512  0.02887493  0.02397713]
  stds:  [ 1.00763471  0.99021634  0.99863325  0.97411123]

Spatial batch normalization: backward

In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:

N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)

bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]

dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)

_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)

Output:
dx error:  4.3762056051e-08
dgamma error:  4.25829344038e-11
dbeta error:  5.07865400281e-12

Experiment!

Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:

Things you should try:

  • Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
  • Number of filters: Above we used 32 filters. Do more or fewer do better?
  • Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
  • Network architecture: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:
    • [conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
    • [conv-relu-pool]xN - [affine]xM - [softmax or SVM]
    • [conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]

Tips for training

For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:

  • If the parameters are working well, you should see improvement within a few hundred iterations
  • Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all (see the sketch after this list).
  • Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
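To make the coarse stage concrete, here is a hedged sketch of a log-scale random search; the ranges and the 5x3 grid are illustrative choices of mine, not recommendations, and it assumes the Solver and ThreeLayerConvNet used above:

best_val, best_model = -1, None
for lr in 10 ** np.random.uniform(-4, -2, 5):        # coarse learning-rate range
    for reg in 10 ** np.random.uniform(-5, -2, 3):   # coarse regularization range
        model = ThreeLayerConvNet(weight_scale=1e-2, reg=reg)
        solver = Solver(model, data,
                        num_epochs=1, batch_size=50,  # just a quick look at each setting
                        update_rule='adam',
                        optim_config={'learning_rate': lr},
                        verbose=False)
        solver.train()
        print 'lr %.2e reg %.2e -> val acc %.3f' % (lr, reg, solver.best_val_acc)
        if solver.best_val_acc > best_val:
            best_val, best_model = solver.best_val_acc, model
# Then narrow the ranges around the best settings and train for more epochs.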

Going above and beyond

If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.

  • Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
  • Alternative activation functions such as leaky ReLU, parametric ReLU, or MaxOut.
  • Model ensembles
  • Data augmentation

If you do decide to implement something extra, clearly describe it in the “Extra Credit Description” cell below.

What we expect

At the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.

You should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.

Have fun and happy training!

model = ConvNetArch_1([(2,7),(4,5)], [5,5], connect_conv=(4,3), use_batchnorm=True,
                      loss_fuction='softmax', weight_scale=5e-2, reg=0, dtype=np.float64)

###################### check shape of initial parameters: #######################
# for k, v in model.params.iteritems():
#     print "%s" % k, model.params[k].shape
##################################################################################

########################### Sanity check loss: ##################################
#### After adding batch norm the loss comes out around 2.7-2.9 when it should be
#### around 2.3; I'm not sure where it goes wrong. Possibly too much batch norm
#### was added -- there is one after every layer.
N = 10
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)

loss, _ = model.loss(X, y)
print 'Initial loss (no regularization): ', loss

model.reg = 0.1
loss, _ = model.loss(X, y)
print 'Initial loss (with regularization): ', loss
##################################################################################

########################### Sanity gradient check: ##############################
N = 2
### Set X's width and height to 16 to speed things up; computing numerical
### gradients is extremely slow.
X = np.random.randn(N, 3, 16, 16)
y = np.random.randint(10, size=N)

model = ConvNetArch_1([(2,3)], [5,5], input_dim=(3, 16, 16), connect_conv=(2,3), use_batchnorm=True,
                      loss_fuction='softmax', weight_scale=5e-2, reg=0, dtype=np.float64)

loss, grads = model.loss(X, y)

for param_name in sorted(grads):
    f = lambda _: model.loss(X, y)[0]
    param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
    # print param_grad_num
    e = rel_error(param_grad_num, grads[param_name])
    print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
##################################################################################

Output:
Initial loss (no regularization):  2.8666209542
Initial loss (with regularization):  3.12124148165
CCW max relative error: 3.688953e-06
CW1 max relative error: 2.849567e-06
FW1 max relative error: 8.171128e-03
cb1 max relative error: 6.661434e-02
ccb max relative error: 4.440936e-01
ccbeta max relative error: 5.932733e-09
ccgamma max relative error: 7.900637e-09
cgamma1 max relative error: 3.363437e-09
FW2 max relative error: 6.072839e-06
fb1 max relative error: 3.552714e-07
fb2 max relative error: 7.993606e-07
fbeta1 max relative error: 6.106227e-08
fbeta2 max relative error: 5.441469e-10
fgamma1 max relative error: 3.206630e-09
fgamma2 max relative error: 9.478054e-10

### Training is very slow; these parameters are the first set I tried and they
### work fairly well -- 50+% by the second epoch.
### After running overnight I found it starts overfitting after about epoch 10,
### so I changed to num_epochs=10, giving best_val_acc = 0.683;
### earlier, with num_epochs=20, best_val_acc could reach about 70%.
### The program runs too slowly on my machine, so I didn't tune parameters much.
### Given this speed, the following architecture was not implemented:
### [conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]
### The following two architectures are currently supported:
### [conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
### [conv-relu-pool]xN - [affine]xM - [softmax or SVM]

model = ConvNetArch_1([(32,3),(64,3),(128,3)], [100], connect_conv=0, use_batchnorm=False,
                      loss_fuction='softmax', weight_scale=5e-2, reg=0, dtype=np.float64)

solver = Solver(model, data,
                num_epochs=10, batch_size=100,
                update_rule='adam',
                optim_config={'learning_rate': 1e-3},
                print_every=100,
                lr_decay=0.95,
                verbose=True)
solver.train()
print solver.best_val_acc

Output:
(Iteration 1 / 4900) loss: 47.549210
(Epoch 0 / 10) train acc: 0.122000; val_acc: 0.124000
(Iteration 101 / 4900) loss: 1.889701
(Iteration 201 / 4900) loss: 1.628195
(Iteration 301 / 4900) loss: 1.503709
(Iteration 401 / 4900) loss: 1.284585
(Epoch 1 / 10) train acc: 0.497000; val_acc: 0.531000
(Iteration 501 / 4900) loss: 1.277860
(Iteration 601 / 4900) loss: 1.215424
(Iteration 701 / 4900) loss: 1.209591
(Iteration 801 / 4900) loss: 1.173904
(Iteration 901 / 4900) loss: 1.068132
(Epoch 2 / 10) train acc: 0.594000; val_acc: 0.580000
(Iteration 1001 / 4900) loss: 1.062385
(Iteration 1101 / 4900) loss: 1.080026
(Iteration 1201 / 4900) loss: 0.758866
(Iteration 1301 / 4900) loss: 0.818387
(Iteration 1401 / 4900) loss: 1.086589
(Epoch 3 / 10) train acc: 0.674000; val_acc: 0.616000
(Iteration 1501 / 4900) loss: 0.980248
(Iteration 1601 / 4900) loss: 0.964738
(Iteration 1701 / 4900) loss: 1.003369
(Iteration 1801 / 4900) loss: 0.974146
(Iteration 1901 / 4900) loss: 1.043093
(Epoch 4 / 10) train acc: 0.727000; val_acc: 0.645000
(Iteration 2001 / 4900) loss: 0.684164
(Iteration 2101 / 4900) loss: 0.899941
(Iteration 2201 / 4900) loss: 0.670452
(Iteration 2301 / 4900) loss: 0.715849
(Iteration 2401 / 4900) loss: 0.798524
(Epoch 5 / 10) train acc: 0.759000; val_acc: 0.661000
(Iteration 2501 / 4900) loss: 0.725618
(Iteration 2601 / 4900) loss: 0.702008
(Iteration 2701 / 4900) loss: 0.591424
(Iteration 2801 / 4900) loss: 0.943362
(Iteration 2901 / 4900) loss: 0.450685
(Epoch 6 / 10) train acc: 0.773000; val_acc: 0.674000
(Iteration 3001 / 4900) loss: 0.898413
(Iteration 3101 / 4900) loss: 0.627382
(Iteration 3201 / 4900) loss: 0.454569
(Iteration 3301 / 4900) loss: 0.446561
(Iteration 3401 / 4900) loss: 0.499366
(Epoch 7 / 10) train acc: 0.795000; val_acc: 0.667000
(Iteration 3501 / 4900) loss: 0.503052
(Iteration 3601 / 4900) loss: 0.408205
(Iteration 3701 / 4900) loss: 0.437030
(Iteration 3801 / 4900) loss: 0.510435
(Iteration 3901 / 4900) loss: 0.735819
(Epoch 8 / 10) train acc: 0.835000; val_acc: 0.678000
(Iteration 4001 / 4900) loss: 0.559391
(Iteration 4101 / 4900) loss: 0.451097
(Iteration 4201 / 4900) loss: 0.609639
(Iteration 4301 / 4900) loss: 0.549392
(Iteration 4401 / 4900) loss: 0.704371
(Epoch 9 / 10) train acc: 0.821000; val_acc: 0.682000
(Iteration 4501 / 4900) loss: 0.642858
(Iteration 4601 / 4900) loss: 0.502988
(Iteration 4701 / 4900) loss: 0.418752
(Iteration 4801 / 4900) loss: 0.306134
(Epoch 10 / 10) train acc: 0.842000; val_acc: 0.683000
0.683

plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')

plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()

Extra Credit Description

If you implement any additional features for extra credit, clearly describe them here with pointers to any code in this or other files if applicable.
