
cs231n:assignment2——Q1: Fully-connected Neural Network


In the lecture videos Andrej Karpathy says this assignment is "meaty but educational", and it really is meaty. Assignments usually consist of .ipynb files plus .py files; this time each .ipynb file touches several .py files that also overlap with one another, so each post covers only one .ipynb or one .py file. (Previous assignments paired one .ipynb file with one .py file, so those could be combined into a single post.)
As always, if you spot any mistakes, please point them out. Thanks.
Contents of FullyConnectedNets.ipynb:

  • Fully-Connected Neural Nets
  • Affine layer forward
  • Affine layer backward
  • ReLU layer forward
  • ReLU layer backward
  • Sandwich layers
  • Loss layers: Softmax and SVM
  • Two-layer network
  • Solver
  • Multilayer network
    • Initial loss and gradient check
  • Inline question
  • Answer
  • Update rules
  • SGD+Momentum
  • RMSProp and Adam
  • Train a good model
  • Test your model

Fully-Connected Neural Nets

In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.

In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:

def layer_forward(x, w):
    """ Receive inputs x and weights w """
    # Do some computations ...
    z = # ... some intermediate value
    # Do some more computations ...
    out = # the output

    cache = (x, w, z, out) # Values we need to compute gradients
    return out, cache

The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:

def layer_backward(dout, cache):
    """
    Receive derivative of loss with respect to outputs and cache,
    and compute derivative with respect to inputs.
    """
    # Unpack cache values
    x, w, z, out = cache

    # Use values in cache to compute derivatives
    dx = # Derivative of loss with respect to x
    dw = # Derivative of loss with respect to w

    return dx, dw

After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.

In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.

# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))

# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
    print '%s: ' % k, v.shape

X_val: (1000, 3, 32, 32)
X_train: (49000, 3, 32, 32)
X_test: (1000, 3, 32, 32)
y_val: (1000,)
y_train: (49000,)
y_test: (1000,)

Affine layer: forward

Open the file cs231n/layers.py and implement the affine_forward function.
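
A minimal sketch of one way to implement it (a sketch, not necessarily the only valid solution): flatten each example to a row vector, then apply a single matrix multiply plus bias.

def affine_forward(x, w, b):
    """ x: shape (N, d_1, ..., d_k); w: shape (D, M); b: shape (M,) """
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b  # (N, D) x (D, M) + (M,) -> (N, M)
    cache = (x, w, b)                  # keep inputs around for the backward pass
    return out, cache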

Once you are done you can test your implementation by running the following:

# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3

input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)

x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)

out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
                        [ 3.25553199, 3.5141327,  3.77273342]])

# Compare your output with ours. The error should be around 1e-9.
print 'Testing affine_forward function:'
print 'difference: ', rel_error(out, correct_out)

Testing affine_forward function:
difference: 9.76985004799e-10

Affine layer: backward

Now implement the affine_backward function and test your implementation using numeric gradient checking.
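
Assuming the (x, w, b) cache from the forward sketch above, the backward pass routes the upstream derivative through each factor of out = x.reshape(N, -1).dot(w) + b; a sketch:

def affine_backward(dout, cache):
    """ dout: upstream derivative, shape (N, M); cache: (x, w, b) """
    x, w, b = cache
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)  # back through the multiply, then unflatten
    dw = x.reshape(N, -1).T.dot(dout)    # (D, N) x (N, M) -> (D, M)
    db = np.sum(dout, axis=0)            # b is broadcast over rows, so sum over them
    return dx, dw, db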

# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)

dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)

_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)

# The error should be around 1e-10
print 'Testing affine_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)

Testing affine_backward function:
dx error: 5.82176848644e-11
dw error: 1.69054721917e-10
db error: 1.40577633097e-11

ReLU layer: forward

Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:
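
The forward pass is essentially a one-liner; a sketch:

def relu_forward(x):
    out = np.maximum(0, x)  # elementwise threshold at zero
    cache = x               # the backward pass only needs the sign pattern of x
    return out, cache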

# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)

out, _ = relu_forward(x)
correct_out = np.array([[ 0.,         0.,         0.,         0.,       ],
                        [ 0.,         0.,         0.04545455, 0.13636364,],
                        [ 0.22727273, 0.31818182, 0.40909091, 0.5,       ]])

# Compare your output with ours. The error should be around 1e-8
print 'Testing relu_forward function:'
print 'difference: ', rel_error(out, correct_out)

Testing relu_forward function:
difference: 4.99999979802e-08

ReLU layer: backward

Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:
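
A sketch of the backward pass: the gradient flows through unchanged wherever the input was positive and is zeroed elsewhere.

def relu_backward(dout, cache):
    x = cache
    dx = dout * (x > 0)  # boolean mask broadcasts to 0/1
    return dx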

x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)

dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)

_, cache = relu_forward(x)
dx = relu_backward(dout, cache)

# The error should be around 1e-12
print 'Testing relu_backward function:'
print 'dx error: ', rel_error(dx_num, dx)

Testing relu_backward function:
dx error: 3.27562740606e-12

“Sandwich” layers

There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.

For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:
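
These convenience layers are just compositions of the primitives implemented above, roughly:

def affine_relu_forward(x, w, b):
    a, fc_cache = affine_forward(x, w, b)  # affine ...
    out, relu_cache = relu_forward(a)      # ... followed by ReLU
    return out, (fc_cache, relu_cache)

def affine_relu_backward(dout, cache):
    fc_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)   # undo the layers in reverse order
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db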

from cs231n.layer_utils import affine_relu_forward, affine_relu_backward

x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)

out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)

dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)

print 'Testing affine_relu_forward:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)

Testing affine_relu_forward:
dx error: 3.60036208641e-10
dw error: 2.61229361266e-09
db error: 4.99397627854e-12

Loss layers: Softmax and SVM

You implemented these loss functions in the last assignment, so we’ll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.
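
For reference, the softmax loss amounts to a numerically stabilized cross-entropy; a sketch of the idea (the provided implementation may differ in details):

def softmax_loss(x, y):
    """ x: scores, shape (N, C); y: labels, shape (N,) """
    shifted = x - np.max(x, axis=1, keepdims=True)  # subtract row max for stability
    log_probs = shifted - np.log(np.sum(np.exp(shifted), axis=1, keepdims=True))
    N = x.shape[0]
    loss = -np.sum(log_probs[np.arange(N), y]) / N  # mean negative log-likelihood
    dx = np.exp(log_probs)                          # softmax probabilities
    dx[np.arange(N), y] -= 1                        # gradient: p - 1 at the true class
    dx /= N
    return loss, dx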

You can make sure that the implementations are correct by running the following:

num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)

dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)

# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print 'Testing svm_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)

dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)

# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print '\nTesting softmax_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)

Testing svm_loss:
loss: 9.00052703662
dx error: 1.40215660067e-09

Testing softmax_loss:
loss: 2.30263822083
dx error: 1.0369484028e-08

Two-layer network

In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.

Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
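
The heart of TwoLayerNet.loss is the modular layers snapped together. A minimal sketch as a standalone function (two_layer_loss is a hypothetical helper mirroring the class method; params holds W1/b1/W2/b2, and the sandwich layers from above are assumed):

def two_layer_loss(params, X, y=None, reg=0.0):
    # forward: affine - relu - affine - softmax
    h, cache1 = affine_relu_forward(X, params['W1'], params['b1'])
    scores, cache2 = affine_forward(h, params['W2'], params['b2'])
    if y is None:
        return scores  # test time: just report scores

    loss, dscores = softmax_loss(scores, y)
    loss += 0.5 * reg * (np.sum(params['W1'] ** 2) + np.sum(params['W2'] ** 2))

    # backward: walk the layers in reverse, then add the regularization gradient
    grads = {}
    dh, grads['W2'], grads['b2'] = affine_backward(dscores, cache2)
    dx, grads['W1'], grads['b1'] = affine_relu_backward(dh, cache1)
    grads['W1'] += reg * params['W1']
    grads['W2'] += reg * params['W2']
    return loss, grads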

N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)

std = 1e-2
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)

print 'Testing initialization ... '
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'

print 'Testing test-time forward pass ... '
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
  [[11.53165108, 12.2917344,  13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
   [12.05769098, 12.74614105, 13.43459113, 14.1230412,  14.81149128, 15.49994135, 16.18839143],
   [12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'

print 'Testing training loss (no regularization)'
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'

model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'

for reg in [0.0, 0.7]:
    print 'Running numeric gradient check with reg = ', reg
    model.reg = reg
    loss, grads = model.loss(X, y)
    for name in sorted(grads):
        f = lambda _: model.loss(X, y)[0]
        grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
        print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))

Testing initialization ...
Testing test-time forward pass ...
Testing training loss (no regularization)
Running numeric gradient check with reg = 0.0
W1 relative error: 1.22e-08
W2 relative error: 3.34e-10
b1 relative error: 4.73e-09
b2 relative error: 4.33e-10
Running numeric gradient check with reg = 0.7
W1 relative error: 2.53e-07
W2 relative error: 1.37e-07
b1 relative error: 1.56e-08
b2 relative error: 9.09e-10

Solver

In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.

Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.

model = TwoLayerNet()
solver = None

##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set.                                        #
##############################################################################
solver = Solver(model, data,
                update_rule='sgd',
                optim_config={
                    'learning_rate': 1e-3,
                },
                lr_decay=0.95,
                num_epochs=10, batch_size=100,
                print_every=100)
solver.train()
solver.best_val_acc
##############################################################################
#                             END OF YOUR CODE                               #
##############################################################################

(Iteration 1 / 4900) loss: 2.309509
(Epoch 0 / 10) train acc: 0.111000; val_acc: 0.124000
(Iteration 101 / 4900) loss: 2.031418
(Iteration 201 / 4900) loss: 1.712236
(Iteration 301 / 4900) loss: 1.747420
(Iteration 401 / 4900) loss: 1.549451
(Epoch 1 / 10) train acc: 0.450000; val_acc: 0.414000
(Iteration 501 / 4900) loss: 1.630659
(Iteration 601 / 4900) loss: 1.491387
(Iteration 701 / 4900) loss: 1.442918
(Iteration 801 / 4900) loss: 1.351634
(Iteration 901 / 4900) loss: 1.453418
(Epoch 2 / 10) train acc: 0.491000; val_acc: 0.484000
(Iteration 1001 / 4900) loss: 1.485202
(Iteration 1101 / 4900) loss: 1.383021
(Iteration 1201 / 4900) loss: 1.346942
(Iteration 1301 / 4900) loss: 1.252413
(Iteration 1401 / 4900) loss: 1.537722
(Epoch 3 / 10) train acc: 0.521000; val_acc: 0.480000
(Iteration 1501 / 4900) loss: 1.365271
(Iteration 1601 / 4900) loss: 1.123946
(Iteration 1701 / 4900) loss: 1.315114
(Iteration 1801 / 4900) loss: 1.597782
(Iteration 1901 / 4900) loss: 1.416204
(Epoch 4 / 10) train acc: 0.546000; val_acc: 0.494000
(Iteration 2001 / 4900) loss: 1.114552
(Iteration 2101 / 4900) loss: 1.377966
(Iteration 2201 / 4900) loss: 1.121448
(Iteration 2301 / 4900) loss: 1.306290
(Iteration 2401 / 4900) loss: 1.404830
(Epoch 5 / 10) train acc: 0.559000; val_acc: 0.500000
(Iteration 2501 / 4900) loss: 1.123347
(Iteration 2601 / 4900) loss: 1.449507
(Iteration 2701 / 4900) loss: 1.308397
(Iteration 2801 / 4900) loss: 1.375048
(Iteration 2901 / 4900) loss: 1.259040
(Epoch 6 / 10) train acc: 0.572000; val_acc: 0.491000
(Iteration 3001 / 4900) loss: 1.119232
(Iteration 3101 / 4900) loss: 1.270312
(Iteration 3201 / 4900) loss: 1.204007
(Iteration 3301 / 4900) loss: 1.214074
(Iteration 3401 / 4900) loss: 1.110863
(Epoch 7 / 10) train acc: 0.566000; val_acc: 0.514000
(Iteration 3501 / 4900) loss: 1.253669
(Iteration 3601 / 4900) loss: 1.354838
(Iteration 3701 / 4900) loss: 1.299770
(Iteration 3801 / 4900) loss: 1.184324
(Iteration 3901 / 4900) loss: 1.154244
(Epoch 8 / 10) train acc: 0.594000; val_acc: 0.498000
(Iteration 4001 / 4900) loss: 0.911092
(Iteration 4101 / 4900) loss: 1.154072
(Iteration 4201 / 4900) loss: 1.106225
(Iteration 4301 / 4900) loss: 1.279295
(Iteration 4401 / 4900) loss: 1.046316
(Epoch 9 / 10) train acc: 0.611000; val_acc: 0.503000
(Iteration 4501 / 4900) loss: 1.172954
(Iteration 4601 / 4900) loss: 1.040094
(Iteration 4701 / 4900) loss: 1.369539
(Iteration 4801 / 4900) loss: 1.106506
(Epoch 10 / 10) train acc: 0.588000; val_acc: 0.515000
0.51500000000000001

# Run this cell to visualize training loss and train / val accuracy

plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')

plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()

Multilayer network

Next you will implement a fully-connected network with an arbitrary number of hidden layers.

Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.

Implement the initialization, the forward pass, and the backward pass. For the moment don’t worry about implementing dropout or batch normalization; we will add those features soon.
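
A sketch of the initialization (from inside __init__, so it writes to self.params): chain the layer sizes and create one weight matrix and bias vector per layer, where hidden_dims is the list of hidden-layer sizes and the network has len(hidden_dims) + 1 layers.

dims = [input_dim] + hidden_dims + [num_classes]
for i in range(len(dims) - 1):
    self.params['W%d' % (i + 1)] = weight_scale * np.random.randn(dims[i], dims[i + 1])
    self.params['b%d' % (i + 1)] = np.zeros(dims[i + 1])

The forward pass is then a loop of affine_relu_forward calls feeding one final affine_forward, and the backward pass walks the cached layers in reverse, exactly as in TwoLayerNet.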

Initial loss and gradient check

As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?

For gradient checking, you should expect to see errors around 1e-6 or less.

# Sometimes the relative error can be fairly large (on the order of 1e-2), but if you
# rerun a few times, all parameters' relative errors come out small; this is probably
# an effect of the random parameter initialization.
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))

for reg in [0, 3.14, 0.02]:
    print 'Running check with reg = ', reg
    model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
                              reg=reg, weight_scale=5e-2, dtype=np.float64)
    loss, grads = model.loss(X, y)
    print 'Initial loss: ', loss
    for name in sorted(grads):
        f = lambda _: model.loss(X, y)[0]
        grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
        print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))

Running check with reg = 0
Initial loss: 2.29966459663
W1 relative error: 2.92e-07
W2 relative error: 2.17e-05
W3 relative error: 4.38e-08
b1 relative error: 3.54e-08
b2 relative error: 1.45e-08
b3 relative error: 1.31e-10
Running check with reg = 3.14
Initial loss: 6.71836699258
W1 relative error: 2.65e-07
W2 relative error: 2.28e-07
W3 relative error: 3.79e-06
b1 relative error: 7.94e-09
b2 relative error: 1.73e-08
b3 relative error: 2.05e-10
Running check with reg = 0.02
Initial loss: 2.32843212504
W1 relative error: 1.19e-07
W2 relative error: 1.47e-06
W3 relative error: 8.67e-06
b1 relative error: 2.08e-08
b2 relative error: 1.21e-02
b3 relative error: 1.39e-10

As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.

# TODO: Use a three-layer Net to overfit 50 training examples.

num_train = 50
small_data = {
    'X_train': data['X_train'][:num_train],
    'y_train': data['y_train'][:num_train],
    'X_val': data['X_val'],
    'y_val': data['y_val'],
}

#weight_scale = 1e-2
#learning_rate = 1e-4
weight_scale = 4e-2
learning_rate = 1e-3
model = FullyConnectedNet([100, 100],
                          weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
                print_every=10, num_epochs=20, batch_size=25,
                update_rule='sgd',
                optim_config={
                    'learning_rate': learning_rate,
                })
solver.train()

plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()

(Iteration 1 / 40) loss: 10.016980
(Epoch 0 / 20) train acc: 0.260000; val_acc: 0.110000
(Epoch 1 / 20) train acc: 0.280000; val_acc: 0.131000
(Epoch 2 / 20) train acc: 0.380000; val_acc: 0.130000
(Epoch 3 / 20) train acc: 0.540000; val_acc: 0.114000
(Epoch 4 / 20) train acc: 0.800000; val_acc: 0.110000
(Epoch 5 / 20) train acc: 0.880000; val_acc: 0.121000
(Iteration 11 / 40) loss: 0.474159
(Epoch 6 / 20) train acc: 0.940000; val_acc: 0.136000
(Epoch 7 / 20) train acc: 0.920000; val_acc: 0.143000
(Epoch 8 / 20) train acc: 1.000000; val_acc: 0.141000
(Epoch 9 / 20) train acc: 1.000000; val_acc: 0.140000
(Epoch 10 / 20) train acc: 1.000000; val_acc: 0.138000
(Iteration 21 / 40) loss: 0.049274
(Epoch 11 / 20) train acc: 1.000000; val_acc: 0.139000
(Epoch 12 / 20) train acc: 1.000000; val_acc: 0.141000
(Epoch 13 / 20) train acc: 1.000000; val_acc: 0.142000
(Epoch 14 / 20) train acc: 1.000000; val_acc: 0.141000
(Epoch 15 / 20) train acc: 1.000000; val_acc: 0.141000
(Iteration 31 / 40) loss: 0.011080
(Epoch 16 / 20) train acc: 1.000000; val_acc: 0.139000
(Epoch 17 / 20) train acc: 1.000000; val_acc: 0.138000
(Epoch 18 / 20) train acc: 1.000000; val_acc: 0.138000
(Epoch 19 / 20) train acc: 1.000000; val_acc: 0.134000
(Epoch 20 / 20) train acc: 1.000000; val_acc: 0.133000

Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.

# TODO: Use a five-layer Net to overfit 50 training examples.

num_train = 50
small_data = {
    'X_train': data['X_train'][:num_train],
    'y_train': data['y_train'][:num_train],
    'X_val': data['X_val'],
    'y_val': data['y_val'],
}

# learning_rate = 1e-3
# weight_scale = 1e-5
learning_rate = 1e-3
weight_scale = 6e-2
model = FullyConnectedNet([100, 100, 100, 100],
                          weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
                print_every=10, num_epochs=20, batch_size=25,
                update_rule='sgd',
                optim_config={
                    'learning_rate': learning_rate,
                })
solver.train()

plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()

(Iteration 1 / 40) loss: 8.242625
(Epoch 0 / 20) train acc: 0.040000; val_acc: 0.108000
(Epoch 1 / 20) train acc: 0.180000; val_acc: 0.119000
(Epoch 2 / 20) train acc: 0.260000; val_acc: 0.126000
(Epoch 3 / 20) train acc: 0.480000; val_acc: 0.116000
(Epoch 4 / 20) train acc: 0.500000; val_acc: 0.110000
(Epoch 5 / 20) train acc: 0.600000; val_acc: 0.114000
(Iteration 11 / 40) loss: 1.805009
(Epoch 6 / 20) train acc: 0.800000; val_acc: 0.113000
(Epoch 7 / 20) train acc: 0.860000; val_acc: 0.108000
(Epoch 8 / 20) train acc: 0.920000; val_acc: 0.116000
(Epoch 9 / 20) train acc: 0.960000; val_acc: 0.113000
(Epoch 10 / 20) train acc: 0.960000; val_acc: 0.116000
(Iteration 21 / 40) loss: 0.137192
(Epoch 11 / 20) train acc: 0.980000; val_acc: 0.113000
(Epoch 12 / 20) train acc: 0.980000; val_acc: 0.118000
(Epoch 13 / 20) train acc: 0.980000; val_acc: 0.118000
(Epoch 14 / 20) train acc: 0.980000; val_acc: 0.118000
(Epoch 15 / 20) train acc: 0.980000; val_acc: 0.118000
(Iteration 31 / 40) loss: 0.084054
(Epoch 16 / 20) train acc: 1.000000; val_acc: 0.118000
(Epoch 17 / 20) train acc: 1.000000; val_acc: 0.113000
(Epoch 18 / 20) train acc: 1.000000; val_acc: 0.115000
(Epoch 19 / 20) train acc: 1.000000; val_acc: 0.118000
(Epoch 20 / 20) train acc: 1.000000; val_acc: 0.119000

Inline question:

Did you notice anything about the comparative difficulty of training the three-layer net vs training the five-layer net?

Answer:

The five-layer net needs a larger weight_scale: since it is deeper, the values computed in the forward pass shrink layer by layer, so with a small initialization the activations (and hence the gradients) are more likely to vanish toward zero.
Because the five-layer net is initialized with a larger weight scale, it also needs a larger learning rate to train well within the same number of steps.
Overall the three-layer net is more robust; the five-layer net is more sensitive and fragile with respect to initialization and learning rate.
(To be honest I am not sure exactly what the question is after; it felt quite easy to tune either net to 100% training accuracy.)

Update rules

So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.

SGD+Momentum

Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.

Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.
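
A sketch of the update itself, following the config-dict API of optim.py: keep a velocity buffer, decay it by the momentum coefficient, and take a step along it.

def sgd_momentum(w, dw, config=None):
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))

    v = config['momentum'] * v - config['learning_rate'] * dw  # decay old velocity, step downhill
    next_w = w + v
    config['velocity'] = v
    return next_w, config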

from cs231n.optim import sgd_momentum

N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)

config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)

expected_next_w = np.asarray([
    [ 0.1406,     0.20738947, 0.27417895, 0.34096842, 0.40775789],
    [ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
    [ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
    [ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096    ]])
expected_velocity = np.asarray([
    [ 0.5406,     0.55475789, 0.56891579, 0.58307368, 0.59723158],
    [ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
    [ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
    [ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096    ]])

print 'next_w error: ', rel_error(next_w, expected_next_w)
print 'velocity error: ', rel_error(expected_velocity, config['velocity'])

next_w error: 8.88234703351e-09
velocity error: 4.26928774328e-09

Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.

num_train = 4000
small_data = {
    'X_train': data['X_train'][:num_train],
    'y_train': data['y_train'][:num_train],
    'X_val': data['X_val'],
    'y_val': data['y_val'],
}

solvers = {}

for update_rule in ['sgd', 'sgd_momentum']:
    print 'running with ', update_rule
    model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
    solver = Solver(model, small_data,
                    num_epochs=5, batch_size=100,
                    update_rule=update_rule,
                    optim_config={
                        'learning_rate': 1e-2,
                    },
                    verbose=True)
    solvers[update_rule] = solver
    solver.train()
    print

plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')

plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')

plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')

for update_rule, solver in solvers.iteritems():
    plt.subplot(3, 1, 1)
    plt.plot(solver.loss_history, 'o', label=update_rule)
    plt.subplot(3, 1, 2)
    plt.plot(solver.train_acc_history, '-o', label=update_rule)
    plt.subplot(3, 1, 3)
    plt.plot(solver.val_acc_history, '-o', label=update_rule)

for i in [1, 2, 3]:
    plt.subplot(3, 1, i)
    plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()

running with sgd
(Iteration 1 / 200) loss: 2.482962
(Epoch 0 / 5) train acc: 0.103000; val_acc: 0.116000
(Iteration 11 / 200) loss: 2.189759
(Iteration 21 / 200) loss: 2.118428
(Iteration 31 / 200) loss: 2.146263
(Epoch 1 / 5) train acc: 0.234000; val_acc: 0.231000
(Iteration 41 / 200) loss: 2.136812
(Iteration 51 / 200) loss: 2.058494
(Iteration 61 / 200) loss: 2.010344
(Iteration 71 / 200) loss: 1.935777
(Epoch 2 / 5) train acc: 0.341000; val_acc: 0.278000
(Iteration 81 / 200) loss: 1.848450
(Iteration 91 / 200) loss: 1.890258
(Iteration 101 / 200) loss: 1.851392
(Iteration 111 / 200) loss: 1.890978
(Epoch 3 / 5) train acc: 0.364000; val_acc: 0.316000
(Iteration 121 / 200) loss: 1.674997
(Iteration 131 / 200) loss: 1.753746
(Iteration 141 / 200) loss: 1.677929
(Iteration 151 / 200) loss: 1.651327
(Epoch 4 / 5) train acc: 0.378000; val_acc: 0.296000
(Iteration 161 / 200) loss: 1.707673
(Iteration 171 / 200) loss: 1.771841
(Iteration 181 / 200) loss: 1.650195
(Iteration 191 / 200) loss: 1.671102
(Epoch 5 / 5) train acc: 0.414000; val_acc: 0.347000

running with sgd_momentum
(Iteration 1 / 200) loss: 2.779826
(Epoch 0 / 5) train acc: 0.080000; val_acc: 0.090000
(Iteration 11 / 200) loss: 2.151418
(Iteration 21 / 200) loss: 2.005661
(Iteration 31 / 200) loss: 2.018002
(Epoch 1 / 5) train acc: 0.334000; val_acc: 0.327000
(Iteration 41 / 200) loss: 1.914837
(Iteration 51 / 200) loss: 1.745527
(Iteration 61 / 200) loss: 1.829091
(Iteration 71 / 200) loss: 1.646542
(Epoch 2 / 5) train acc: 0.354000; val_acc: 0.311000
(Iteration 81 / 200) loss: 1.561354
(Iteration 91 / 200) loss: 1.687099
(Iteration 101 / 200) loss: 1.644848
(Iteration 111 / 200) loss: 1.604384
(Epoch 3 / 5) train acc: 0.447000; val_acc: 0.334000
(Iteration 121 / 200) loss: 1.727682
(Iteration 131 / 200) loss: 1.569907
(Iteration 141 / 200) loss: 1.565606
(Iteration 151 / 200) loss: 1.674119
(Epoch 4 / 5) train acc: 0.466000; val_acc: 0.390000
(Iteration 161 / 200) loss: 1.364019
(Iteration 171 / 200) loss: 1.449550
(Iteration 181 / 200) loss: 1.510401
(Iteration 191 / 200) loss: 1.353840
(Epoch 5 / 5) train acc: 0.514000; val_acc: 0.368000

RMSProp and Adam

RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.

In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.

[1] Tijmen Tieleman and Geoffrey Hinton. “Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude.” COURSERA: Neural Networks for Machine Learning 4 (2012).

[2] Diederik Kingma and Jimmy Ba, “Adam: A Method for Stochastic Optimization”, ICLR 2015.
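
Sketches of both rules under the same config-dict API (the default hyperparameters below, decay_rate=0.99, beta1=0.9, beta2=0.999, epsilon=1e-8, are common choices; note that this version of Adam increments its time step t before applying bias correction):

def rmsprop(w, dw, config=None):
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('decay_rate', 0.99)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('cache', np.zeros_like(w))

    # leaky running average of squared gradients scales each parameter's step
    config['cache'] = config['decay_rate'] * config['cache'] + \
                      (1 - config['decay_rate']) * dw**2
    next_w = w - config['learning_rate'] * dw / (np.sqrt(config['cache']) + config['epsilon'])
    return next_w, config

def adam(w, dw, config=None):
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-3)
    config.setdefault('beta1', 0.9)
    config.setdefault('beta2', 0.999)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('m', np.zeros_like(w))
    config.setdefault('v', np.zeros_like(w))
    config.setdefault('t', 0)

    config['t'] += 1
    config['m'] = config['beta1'] * config['m'] + (1 - config['beta1']) * dw     # first moment
    config['v'] = config['beta2'] * config['v'] + (1 - config['beta2']) * dw**2  # second moment
    mt = config['m'] / (1 - config['beta1'] ** config['t'])                      # bias correction
    vt = config['v'] / (1 - config['beta2'] ** config['t'])
    next_w = w - config['learning_rate'] * mt / (np.sqrt(vt) + config['epsilon'])
    return next_w, config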

# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop

N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)

config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)

expected_next_w = np.asarray([
    [-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
    [-0.132737,   -0.08078555, -0.02881884,  0.02316247,  0.07515774],
    [ 0.12716641,  0.17918792,  0.23122175,  0.28326742,  0.33532447],
    [ 0.38739248,  0.43947102,  0.49155973,  0.54365823,  0.59576619]])
expected_cache = np.asarray([
    [ 0.5976,      0.6126277,   0.6277108,   0.64284931,  0.65804321],
    [ 0.67329252,  0.68859723,  0.70395734,  0.71937285,  0.73484377],
    [ 0.75037008,  0.7659518,   0.78158892,  0.79728144,  0.81302936],
    [ 0.82883269,  0.84469141,  0.86060554,  0.87657507,  0.8926    ]])

print 'next_w error: ', rel_error(expected_next_w, next_w)
print 'cache error: ', rel_error(expected_cache, config['cache'])

next_w error: 9.50264522989e-08
cache error: 2.64779558072e-09

# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam

N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)

config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)

expected_next_w = np.asarray([
    [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
    [-0.1380274,  -0.08544591, -0.03286534,  0.01971428,  0.0722929 ],
    [ 0.1248705,   0.17744702,  0.23002243,  0.28259667,  0.33516969],
    [ 0.38774145,  0.44031188,  0.49288093,  0.54544852,  0.59801459]])
expected_v = np.asarray([
    [ 0.69966,     0.68908382,  0.67851319,  0.66794809,  0.65738853],
    [ 0.64683452,  0.63628604,  0.6257431,   0.61520571,  0.60467385],
    [ 0.59414753,  0.58362676,  0.57311152,  0.56260183,  0.55209767],
    [ 0.54159906,  0.53110598,  0.52061845,  0.51013645,  0.49966   ]])
expected_m = np.asarray([
    [ 0.48,        0.49947368,  0.51894737,  0.53842105,  0.55789474],
    [ 0.57736842,  0.59684211,  0.61631579,  0.63578947,  0.65526316],
    [ 0.67473684,  0.69421053,  0.71368421,  0.73315789,  0.75263158],
    [ 0.77210526,  0.79157895,  0.81105263,  0.83052632,  0.85      ]])

print 'next_w error: ', rel_error(expected_next_w, next_w)
print 'v error: ', rel_error(expected_v, config['v'])
print 'm error: ', rel_error(expected_m, config['m'])

next_w error: 1.13956917985e-07
v error: 4.20831403811e-09
m error: 4.21496319311e-09

Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:

learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
    print 'running with ', update_rule
    model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
    solver = Solver(model, small_data,
                    num_epochs=5, batch_size=100,
                    update_rule=update_rule,
                    optim_config={
                        'learning_rate': learning_rates[update_rule]
                    },
                    verbose=True)
    solvers[update_rule] = solver
    solver.train()
    print

plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')

plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')

plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')

for update_rule, solver in solvers.iteritems():
    plt.subplot(3, 1, 1)
    plt.plot(solver.loss_history, 'o', label=update_rule)
    plt.subplot(3, 1, 2)
    plt.plot(solver.train_acc_history, '-o', label=update_rule)
    plt.subplot(3, 1, 3)
    plt.plot(solver.val_acc_history, '-o', label=update_rule)

for i in [1, 2, 3]:
    plt.subplot(3, 1, i)
    plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()

running with adam
(Iteration 1 / 200) loss: 2.764716
(Epoch 0 / 5) train acc: 0.128000; val_acc: 0.124000
(Iteration 11 / 200) loss: 2.040898
(Iteration 21 / 200) loss: 1.774376
(Iteration 31 / 200) loss: 1.847699
(Epoch 1 / 5) train acc: 0.376000; val_acc: 0.306000
(Iteration 41 / 200) loss: 1.926563
(Iteration 51 / 200) loss: 1.720461
(Iteration 61 / 200) loss: 1.537673
(Iteration 71 / 200) loss: 1.603966
(Epoch 2 / 5) train acc: 0.422000; val_acc: 0.313000
(Iteration 81 / 200) loss: 1.602464
(Iteration 91 / 200) loss: 1.514707
(Iteration 101 / 200) loss: 1.341900
(Iteration 111 / 200) loss: 1.671358
(Epoch 3 / 5) train acc: 0.467000; val_acc: 0.353000
(Iteration 121 / 200) loss: 1.638983
(Iteration 131 / 200) loss: 1.433005
(Iteration 141 / 200) loss: 1.259506
(Iteration 151 / 200) loss: 1.510506
(Epoch 4 / 5) train acc: 0.552000; val_acc: 0.360000
(Iteration 161 / 200) loss: 1.234063
(Iteration 171 / 200) loss: 1.344069
(Iteration 181 / 200) loss: 1.179093
(Iteration 191 / 200) loss: 1.477335
(Epoch 5 / 5) train acc: 0.590000; val_acc: 0.375000

running with rmsprop
(Iteration 1 / 200) loss: 2.498833
(Epoch 0 / 5) train acc: 0.164000; val_acc: 0.133000
(Iteration 11 / 200) loss: 2.099360
(Iteration 21 / 200) loss: 1.879164
(Iteration 31 / 200) loss: 1.856589
(Epoch 1 / 5) train acc: 0.367000; val_acc: 0.285000
(Iteration 41 / 200) loss: 1.831192
(Iteration 51 / 200) loss: 1.911368
(Iteration 61 / 200) loss: 1.693546
(Iteration 71 / 200) loss: 1.745929
(Epoch 2 / 5) train acc: 0.447000; val_acc: 0.339000
(Iteration 81 / 200) loss: 1.748112
(Iteration 91 / 200) loss: 1.613525
(Iteration 101 / 200) loss: 1.635379
(Iteration 111 / 200) loss: 1.491368
(Epoch 3 / 5) train acc: 0.467000; val_acc: 0.361000
(Iteration 121 / 200) loss: 1.372249
(Iteration 131 / 200) loss: 1.678353
(Iteration 141 / 200) loss: 1.521616
(Iteration 151 / 200) loss: 1.441466
(Epoch 4 / 5) train acc: 0.514000; val_acc: 0.339000
(Iteration 161 / 200) loss: 1.619760
(Iteration 171 / 200) loss: 1.432048
(Iteration 181 / 200) loss: 1.684465
(Iteration 191 / 200) loss: 1.532590
(Epoch 5 / 5) train acc: 0.528000; val_acc: 0.370000

Train a good model!

Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.

If you are careful it should be possible to get accuracies above 55%, but we don’t require it for this part and won’t assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.

You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.

best_model = None
best_val_acc = 0
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might  #
# find batch normalization and dropout useful. Store your best model in the   #
# best_model variable.                                                         #
################################################################################
reg_choice = [0, 0.02, 0.05]
#dropout_choice = [0.25, 0.5]
#netstructure_choice = [
#    [100, 100],
#    [100, 100, 100],
#    [50, 50, 50, 50, 50, 50, 50]]
dropout_choice = [0]
netstructure_choice = [[100, 100]]
for hidden_dim in netstructure_choice:
    for dropout in dropout_choice:
        model = FullyConnectedNet(hidden_dim, reg=0, weight_scale=5e-2, dtype=np.float64,
                                  use_batchnorm=True, dropout=dropout)
        solver = Solver(model, data,
                        num_epochs=20, batch_size=100,
                        update_rule='adam',
                        optim_config={
                            'learning_rate': 5e-3
                        },
                        print_every=100,
                        lr_decay=0.95,
                        verbose=True)
        solver.train()
        if solver.best_val_acc > best_val_acc:
            best_model = model
        print
        plt.subplot(2, 1, 1)
        plt.title('Training loss')
        plt.plot(solver.loss_history, 'o')
        plt.xlabel('Iteration')
        plt.subplot(2, 1, 2)
        plt.title('Accuracy')
        plt.plot(solver.train_acc_history, '-o', label='train')
        plt.plot(solver.val_acc_history, '-o', label='val')
        plt.plot([0.5] * len(solver.val_acc_history), 'k--')
        plt.xlabel('Epoch')
        plt.legend(loc='lower right')
        plt.gcf().set_size_inches(15, 12)
        plt.show()
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

(Iteration 1 / 9800) loss: 2.263781
(Epoch 0 / 20) train acc: 0.179000; val_acc: 0.180000
(Iteration 101 / 9800) loss: 1.624115
(Iteration 201 / 9800) loss: 1.467661
(Iteration 301 / 9800) loss: 1.591997
(Iteration 401 / 9800) loss: 1.432411
(Epoch 1 / 20) train acc: 0.487000; val_acc: 0.491000
(Iteration 501 / 9800) loss: 1.241822
(Iteration 601 / 9800) loss: 1.546403
(Iteration 701 / 9800) loss: 1.411293
(Iteration 801 / 9800) loss: 1.375881
(Iteration 901 / 9800) loss: 1.242919
(Epoch 2 / 20) train acc: 0.520000; val_acc: 0.495000
(Iteration 1001 / 9800) loss: 1.316806
(Iteration 1101 / 9800) loss: 1.340302
(Iteration 1201 / 9800) loss: 1.335680
(Iteration 1301 / 9800) loss: 1.346994
(Iteration 1401 / 9800) loss: 1.156202
(Epoch 3 / 20) train acc: 0.553000; val_acc: 0.509000
(Iteration 1501 / 9800) loss: 1.111737
(Iteration 1601 / 9800) loss: 1.339837
(Iteration 1701 / 9800) loss: 1.218292
(Iteration 1801 / 9800) loss: 1.344992
(Iteration 1901 / 9800) loss: 1.198010
(Epoch 4 / 20) train acc: 0.574000; val_acc: 0.515000
(Iteration 2001 / 9800) loss: 1.185471
(Iteration 2101 / 9800) loss: 1.245266
(Iteration 2201 / 9800) loss: 1.046663
(Iteration 2301 / 9800) loss: 1.128248
(Iteration 2401 / 9800) loss: 1.100717
(Epoch 5 / 20) train acc: 0.626000; val_acc: 0.529000
(Iteration 2501 / 9800) loss: 1.076717
(Iteration 2601 / 9800) loss: 1.154111
(Iteration 2701 / 9800) loss: 1.077080
(Iteration 2801 / 9800) loss: 0.998500
(Iteration 2901 / 9800) loss: 1.051188
(Epoch 6 / 20) train acc: 0.590000; val_acc: 0.536000
(Iteration 3001 / 9800) loss: 1.004974
(Iteration 3101 / 9800) loss: 1.124638
(Iteration 3201 / 9800) loss: 1.073654
(Iteration 3301 / 9800) loss: 0.970181
(Iteration 3401 / 9800) loss: 1.115142
(Epoch 7 / 20) train acc: 0.630000; val_acc: 0.537000
(Iteration 3501 / 9800) loss: 0.869317
(Iteration 3601 / 9800) loss: 1.109377
(Iteration 3701 / 9800) loss: 1.037178
(Iteration 3801 / 9800) loss: 0.947001
(Iteration 3901 / 9800) loss: 0.989016
(Epoch 8 / 20) train acc: 0.633000; val_acc: 0.529000
(Iteration 4001 / 9800) loss: 0.949825
(Iteration 4101 / 9800) loss: 1.007835
(Iteration 4201 / 9800) loss: 0.894922
(Iteration 4301 / 9800) loss: 1.134644
(Iteration 4401 / 9800) loss: 0.932514
(Epoch 9 / 20) train acc: 0.609000; val_acc: 0.541000
(Iteration 4501 / 9800) loss: 1.117945
(Iteration 4601 / 9800) loss: 1.066002
(Iteration 4701 / 9800) loss: 0.858422
(Iteration 4801 / 9800) loss: 0.799150
(Epoch 10 / 20) train acc: 0.692000; val_acc: 0.527000
(Iteration 4901 / 9800) loss: 1.027588
(Iteration 5001 / 9800) loss: 0.903380
(Iteration 5101 / 9800) loss: 0.950514
(Iteration 5201 / 9800) loss: 0.891470
(Iteration 5301 / 9800) loss: 0.947976
(Epoch 11 / 20) train acc: 0.685000; val_acc: 0.523000
(Iteration 5401 / 9800) loss: 1.161916
(Iteration 5501 / 9800) loss: 1.039629
(Iteration 5601 / 9800) loss: 0.895261
(Iteration 5701 / 9800) loss: 0.855530
(Iteration 5801 / 9800) loss: 0.723047
(Epoch 12 / 20) train acc: 0.730000; val_acc: 0.516000
(Iteration 5901 / 9800) loss: 1.015861
(Iteration 6001 / 9800) loss: 0.921310
(Iteration 6101 / 9800) loss: 1.055507
(Iteration 6201 / 9800) loss: 0.917648
(Iteration 6301 / 9800) loss: 0.767686
(Epoch 13 / 20) train acc: 0.699000; val_acc: 0.543000
(Iteration 6401 / 9800) loss: 1.170058
(Iteration 6501 / 9800) loss: 0.810596
(Iteration 6601 / 9800) loss: 0.920641
(Iteration 6701 / 9800) loss: 0.725889
(Iteration 6801 / 9800) loss: 0.931281
(Epoch 14 / 20) train acc: 0.667000; val_acc: 0.531000
(Iteration 6901 / 9800) loss: 0.701817
(Iteration 7001 / 9800) loss: 0.788107
(Iteration 7101 / 9800) loss: 0.818656
(Iteration 7201 / 9800) loss: 0.888433
(Iteration 7301 / 9800) loss: 0.728136
(Epoch 15 / 20) train acc: 0.695000; val_acc: 0.530000
(Iteration 7401 / 9800) loss: 0.857501
(Iteration 7501 / 9800) loss: 0.867369
(Iteration 7601 / 9800) loss: 0.814501
(Iteration 7701 / 9800) loss: 0.763123
(Iteration 7801 / 9800) loss: 0.835519
(Epoch 16 / 20) train acc: 0.711000; val_acc: 0.528000
(Iteration 7901 / 9800) loss: 0.861891
(Iteration 8001 / 9800) loss: 0.667957
(Iteration 8101 / 9800) loss: 0.678417
(Iteration 8201 / 9800) loss: 0.776296
(Iteration 8301 / 9800) loss: 0.846255
(Epoch 17 / 20) train acc: 0.718000; val_acc: 0.532000
(Iteration 8401 / 9800) loss: 0.821841
(Iteration 8501 / 9800) loss: 0.737560
(Iteration 8601 / 9800) loss: 0.734345
(Iteration 8701 / 9800) loss: 0.789014
(Iteration 8801 / 9800) loss: 0.829744
(Epoch 18 / 20) train acc: 0.745000; val_acc: 0.530000
(Iteration 8901 / 9800) loss: 0.688820
(Iteration 9001 / 9800) loss: 0.726195
(Iteration 9101 / 9800) loss: 0.922960
(Iteration 9201 / 9800) loss: 0.791910
(Iteration 9301 / 9800) loss: 0.891499
(Epoch 19 / 20) train acc: 0.728000; val_acc: 0.522000
(Iteration 9401 / 9800) loss: 0.731820
(Iteration 9501 / 9800) loss: 0.721811
(Iteration 9601 / 9800) loss: 0.600602
(Iteration 9701 / 9800) loss: 0.689157
(Epoch 20 / 20) train acc: 0.739000; val_acc: 0.521000

Test your model

Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.

y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print 'Validation set accuracy: ', (y_val_pred == data['y_val']).mean()
print 'Test set accuracy: ', (y_test_pred == data['y_test']).mean()

Validation set accuracy: 0.554
Test set accuracy: 0.545
