软培

Published: 2025/3/14

Machine Learning Concepts

Supervised Learning:

\[x, y\]
The input is a high-dimensional vector together with its label. Through modeling, the machine automatically computes the most suitable parameter values for the model. The result is a model with determined parameters: when a new high-dimensional vector arrives in the future, the model can predict its label, with the prediction error kept within an acceptable range.
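As a minimal sketch of this idea in plain Python (the data points and the closed-form least-squares fit below are illustrative, not from the text): given labeled pairs \[x, y\], "learning" means computing the model parameters, here a slope and an intercept.

```python
# Supervised learning in miniature: fit y = w*x + b from labeled pairs (x, y)
# using the closed-form least-squares solution. The data are made up.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3.0, 4.1, 4.9, 6.0, 7.1]  # roughly y = x + 3

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# parameters computed from the data -- this is the "learning" step
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(round(w, 2), round(b, 2))  # -> 1.01 3.0

# predict the label of a new, unseen input
x_new = 5.0
print(round(w * x_new + b, 2))   # -> 8.05
```

Once `w` and `b` are fixed, any new input `x_new` gets a predicted label, which is exactly the workflow described above.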

Unsupervised Learning:

\[x\]
The input is a high-dimensional vector without any label. Through modeling, the machine computes the corresponding unknown parameter values of the model.

Typical Supervised Learning

Bayesian Probability

Probability is a description of how likely an event is to occur.

  • Understanding based on statistics
  • 1.1 Key principles:
    - Conclusions come from observation
    - Quantitative description is better than qualitative description
    - Conclusions have explicit conditions, are stated concisely, and generalize well
    - High self-consistency, high verifiability

    1.2 Classical probability model (static model)
    - The experiment has only finitely many elementary outcomes
    - Every elementary outcome of the experiment is equally likely

    1.3 Probability based on statistics

    • Under fixed conditions, repeat an experiment \(n\) times (observe an event \(n\) times); the probability \(P(A)\) of an event \(A\) is

    \[P(A) = \frac{\text{number of times } A \text{ occurs}}{\text{total number of observations}}\]

    • If \(A\) and \(B\) are positively correlated, \(P(A)<P(A|B)\)
    • If \(A\) and \(B\) are negatively correlated, \(P(A)>P(A|B)\)
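These definitions can be sketched with a made-up observation log (each record marks whether events \(A\) and \(B\) occurred in that trial):

```python
# Estimate P(A) and P(A|B) by counting, from an illustrative log of trials.
# Each record is a pair (a, b): 1 if the event occurred in that trial, else 0.
observations = [(1, 1), (1, 1), (0, 1), (1, 0), (0, 0),
                (1, 1), (0, 0), (1, 1), (0, 0), (1, 0)]

n = len(observations)
p_a = sum(a for a, b in observations) / n                 # P(A): A occurred in 6 of 10 trials
n_b = sum(b for a, b in observations)
p_a_given_b = sum(a for a, b in observations if b) / n_b  # P(A|B): restrict to trials where B occurred

print(p_a)          # -> 0.6
print(p_a_given_b)  # -> 0.8
```

Here \(P(A|B) > P(A)\), so in this (made-up) sample \(A\) and \(B\) are positively correlated, matching the rule above.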

    Naive Bayes Classifier

    Wikipedia: every naive Bayes classifier assumes that each feature of a sample is independent of all the other features.

    A common rule is simply to pick the most probable class; this is the well-known maximum a posteriori (MAP) decision rule.
    \[classify\{f_1,\dots,f_n\} = \underset{c}{\operatorname{argmax}} \ p(C=c) \displaystyle\prod_{i=1}^n p(F_i=f_i\vert C=c)\]
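The MAP rule above can be implemented directly by counting; a small sketch with a made-up two-feature weather data set (the samples and class names are illustrative):

```python
from collections import Counter

# Toy training set (made up): each sample is (features, class).
# Features: (outlook, windy); classes: 'play' or 'stay'.
data = [(('sunny', 'no'), 'play'), (('sunny', 'yes'), 'play'),
        (('rain', 'yes'), 'stay'), (('rain', 'no'), 'play'),
        (('rain', 'yes'), 'stay'), (('sunny', 'no'), 'play')]

classes = Counter(c for _, c in data)
n = len(data)

def likelihood(i, value, c):
    """p(F_i = value | C = c), estimated by counting within class c."""
    in_class = [f for f, cls in data if cls == c]
    return sum(1 for f in in_class if f[i] == value) / len(in_class)

def classify(features):
    """MAP rule: argmax_c p(C=c) * prod_i p(F_i = f_i | C = c)."""
    def score(c):
        p = classes[c] / n  # prior p(C=c)
        for i, v in enumerate(features):
            p *= likelihood(i, v, c)  # naive independence assumption
        return p
    return max(classes, key=score)

print(classify(('sunny', 'no')))  # -> play
print(classify(('rain', 'yes')))  # -> stay
```

Each feature contributes an independent factor to the product, which is exactly the "naive" independence assumption quoted from Wikipedia above.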

    Gaussian Naive Bayes

    When the data to be handled are continuous, a common assumption is that the continuous values follow a Gaussian distribution.

    For example, suppose the training set contains a continuous attribute \(x\). We first split the data by class and then compute the mean and variance of \(x\) within each class. Let \(\mu_c\) be the mean of \(x\) over class \(c\) and \(\sigma^2_c\) the variance of \(x\) over class \(c\). The probability of a value given a class, \(P(x=v|c)\), can then be computed by plugging \(v\) into the normal distribution with mean \(\mu_c\) and variance \(\sigma^2_c\):
    \[ P(x=v|c)=\tfrac{1}{\sqrt{2\pi\sigma^2_c}}\,e^{ -\frac{(v-\mu_c)^2}{2\sigma^2_c} } \]
    Another common technique for handling continuous values is to discretize them. In general, estimating the probability distribution is the better choice when the number of training samples is small or the exact distribution is known; with a large number of samples, discretization performs better, because plentiful data let it learn the distribution directly. Since naive Bayes is typically used when large amounts of data are available (models with higher computational cost can achieve higher classification accuracy), naive Bayes methods generally use discretization rather than distribution estimation.
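The density formula above is easy to evaluate directly; a quick numerical check (the per-class mean and variance below are made up for illustration):

```python
import math

def gaussian_likelihood(v, mu_c, var_c):
    """P(x = v | c) for a class whose attribute x has mean mu_c and variance var_c."""
    return math.exp(-(v - mu_c) ** 2 / (2 * var_c)) / math.sqrt(2 * math.pi * var_c)

# made-up statistics for one continuous attribute in class c
mu_c, var_c = 5.0, 4.0
print(gaussian_likelihood(5.0, mu_c, var_c))  # density at the mean: 1/sqrt(2*pi*var_c)
print(gaussian_likelihood(9.0, mu_c, var_c))  # two standard deviations away: much smaller
```

In a full classifier these per-class densities take the place of the counted likelihoods \(p(F_i=f_i|C=c)\) in the MAP product.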

from sklearn.naive_bayes import GaussianNB

# 0: sunny  1: overcast  2: rain  3: cloudy
data_table = [["date", "weather"],
              [1, 0], [2, 1], [3, 2], [4, 1], [5, 2],
              [6, 0], [7, 0], [8, 3], [9, 1], [10, 1]]

# weather on each day
X = [[0], [1], [2], [1], [2], [0], [0], [3], [1]]
# weather on the following day
y = [1, 2, 1, 2, 0, 0, 3, 1, 1]

# put the training data and its labels into the classifier and train it
clf = GaussianNB().fit(X, y)
p = [[1]]
print(clf.predict(p))

Output: [2]

    hello.py

import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

a = tf.constant(10)
b = tf.constant(32)
print(sess.run(a + b))

Output:
b'Hello, TensorFlow!'
42

    Lab instructions:

    Implement a linear regression example

# ex-01.py
import tensorflow as tf
import numpy as np

# create some training data
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 1 + 3
print("x_data:")
print(x_data)
print("y_data:")
print(y_data)

# create the weights and bias variables
weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
print("weights before initializing:")
print(weights)
biases = tf.Variable(tf.zeros([1]))
print("bias before initializing:")
print(biases)

# predicted (fitted) value
y = weights * x_data + biases

# loss function
loss = tf.reduce_mean(tf.square(y - y_data))

# optimizer definition
optimizer = tf.train.GradientDescentOptimizer(0.1)

# train definition
train = optimizer.minimize(loss)

# initialize the variables
# (initialize_all_variables is deprecated; use global_variables_initializer)
init = tf.global_variables_initializer()

# session definition and activation
sess = tf.Session()
sess.run(init)

# train the model
for step in range(501):
    sess.run(train)
    if step % 10 == 0:
        print(step, sess.run(weights), sess.run(biases))

Abbreviated output (x_data is 100 random points in [0, 1); y_data = x_data + 3; the fitted weight converges toward 1 and the bias toward 3):

0 [ 0.90101707] [ 0.64157164]
10 [ 1.73624611] [ 2.48117089]
100 [ 1.266271] [ 2.86034036]
200 [ 1.07944047] [ 2.95833325]
300 [ 1.02370059] [ 2.98756886]
400 [ 1.00707078] [ 2.9962914]
500 [ 1.00210965] [ 2.9988935]

    Implement a matrix multiplication

# ex-02.py
import tensorflow as tf

matrix1 = tf.constant([[3, 3]])
matrix2 = tf.constant([[1], [2]])

# matrix multiply
product = tf.matmul(matrix1, matrix2)

# method 1
# sess = tf.Session()
# result = sess.run(product)
# print("result:", result)
# sess.close()

# method 2
with tf.Session() as sess:
    result2 = sess.run(product)
    print("result2:", result2)

Output: result2: [[9]]

    Constants / Variables / Assignment

# ex-03.py
import tensorflow as tf

state = tf.Variable(10, name='counter')
print(state.name)

one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)

# this initialization is very important
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for x in range(10):
        sess.run(update)
        # must call sess.run to operate on the variable
        print(sess.run(state))

Output:
counter_1:0
11 12 13 14 15 16 17 18 19 20

    Placeholder usage

# ex-04.py
import tensorflow as tf

input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = input1 * input2

with tf.Session() as sess:
    print(sess.run(output, feed_dict={input1: 4, input2: 2}))
    print(sess.run(output, feed_dict={input1: [4, 2], input2: [2, 7]}))

Output:
8.0
[ 8. 14.]

# ex-05.py
import tensorflow as tf
import numpy as np

def add_layer(inputs, in_size, out_size, activation_function=None):
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]))
    wx_b = tf.matmul(inputs, weights) + biases
    if activation_function is None:
        outputs = wx_b
    else:
        outputs = activation_function(wx_b)
    return outputs

# make some input values
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) + noise

xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

layer1 = add_layer(xs, 1, 10, activation_function=tf.nn.sigmoid)
prediction = add_layer(layer1, 10, 1, activation_function=None)

loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
train = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

for x in range(1000):
    sess.run(train, feed_dict={xs: x_data, ys: y_data})
    if x % 10 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))

Abbreviated output (the loss decreases steadily over training):
1.84078
0.633829
0.382142
...
0.0776079
0.0774875

    Visualization

    tensorboard --logdir='logs/'

    Note: this cannot be run directly inside a notebook.

    ex-06-vis.py

import time
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# add_layer is the same helper as in ex-05
def add_layer(inputs, in_size, out_size, activation_function=None):
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]))
    wx_b = tf.matmul(inputs, weights) + biases
    if activation_function is None:
        outputs = wx_b
    else:
        outputs = activation_function(wx_b)
    return outputs

# make some input values
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) + noise

xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

layer1 = add_layer(xs, 1, 10, activation_function=tf.nn.sigmoid)
prediction = add_layer(layer1, 10, 1, activation_function=None)

loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

# added code: live plot of the fitted curve
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_data, y_data)
plt.ion()
plt.show()

for x in range(1000):
    sess.run(train, feed_dict={xs: x_data, ys: y_data})
    if x % 10 == 0:
        # print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
        try:
            ax.lines.remove(lines[0])
        except Exception:
            pass
        prediction_value = sess.run(prediction, feed_dict={xs: x_data})
        lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
        plt.pause(0.1)

# a small timing demo
import time
for i in [1, 2, 34, 5]:
    print(i)
    time.sleep(0.1)

Output:
1
2
34
5

    ex-09-tb.py

# from ex-05
import tensorflow as tf
import numpy as np

def add_layer(inputs, in_size, out_size, n_layer, activation_function=None):
    layer_name = 'layer%s' % n_layer
    with tf.name_scope('layer'):
        with tf.name_scope('weights'):
            weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
            tf.summary.histogram(layer_name + '/weights', weights)
        with tf.name_scope('bias'):
            biases = tf.Variable(tf.zeros([1, out_size]), name='b')
            tf.summary.histogram(layer_name + '/biases', biases)
        with tf.name_scope('wx_plus_b'):
            wx_b = tf.add(tf.matmul(inputs, weights), biases)
        if activation_function is None:
            outputs = wx_b
        else:
            outputs = activation_function(wx_b, name='output')
        tf.summary.histogram(layer_name + '/outputs', outputs)
        return outputs

# make some input values
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) + noise

with tf.name_scope('inputs'):
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_input')

layer1 = add_layer(xs, 1, 10, n_layer=1, activation_function=tf.nn.sigmoid)
prediction = add_layer(layer1, 10, 1, n_layer=2, activation_function=None)

with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
    tf.summary.scalar('loss', loss)

with tf.name_scope('train'):
    train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("logs/", sess.graph)

for x in range(50000):
    sess.run(train, feed_dict={xs: x_data, ys: y_data})
    if x % 50 == 0:
        result = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
        writer.add_summary(result, x)

# a second, smaller graph example
import tensorflow as tf

a = tf.constant(5, name="input_a")
b = tf.constant(3, name="input_b")
c = tf.multiply(a, b, name="mul_c")
d = tf.add(a, b, name="add_d")
e = tf.add(c, d, name="add_e")

sess = tf.Session()
sess.run(e)
writer = tf.summary.FileWriter("E:/tensorflow/graph", tf.get_default_graph())
writer.close()

    ex-10.py

    import tensorflow as tf import gzip import numpy import collections from tensorflow.python.framework import random_seed from tensorflow.python.framework import dtypes #from tensorflow.examples.tutorials.mnist import input_dataDatasets = collections.namedtuple('Datasets', ['train', 'validation', 'test'])def _read32(bytestream):dt = numpy.dtype(numpy.uint32).newbyteorder('>')return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]def dense_to_one_hot(labels_dense, num_classes):"""Convert class labels from scalars to one-hot vectors."""num_labels = labels_dense.shape[0]index_offset = numpy.arange(num_labels) * num_classeslabels_one_hot = numpy.zeros((num_labels, num_classes))labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1return labels_one_hotdef extract_images(f):"""Extract the images into a 4D uint8 numpy array [index, y, x, depth].Args:f: A file object that can be passed into a gzip reader.Returns:data: A 4D uint8 numpy array [index, y, x, depth].Raises:ValueError: If the bytestream does not start with 2051."""print('Extracting', f.name)with gzip.GzipFile(fileobj=f) as bytestream:magic = _read32(bytestream)if magic != 2051:raise ValueError('Invalid magic number %d in MNIST image file: %s' %(magic, f.name))num_images = _read32(bytestream)rows = _read32(bytestream)cols = _read32(bytestream)buf = bytestream.read(rows * cols * num_images)data = numpy.frombuffer(buf, dtype=numpy.uint8)data = data.reshape(num_images, rows, cols, 1)return datadef extract_labels(f, one_hot=False, num_classes=10):"""Extract the labels into a 1D uint8 numpy array [index].Args:f: A file object that can be passed into a gzip reader.one_hot: Does one hot encoding for the result.num_classes: Number of classes for the one hot encoding.Returns:labels: a 1D uint8 numpy array.Raises:ValueError: If the bystream doesn't start with 2049."""print('Extracting', f.name)with gzip.GzipFile(fileobj=f) as bytestream:magic = _read32(bytestream)if magic != 2049:raise ValueError('Invalid magic 
number %d in MNIST label file: %s' %(magic, f.name))num_items = _read32(bytestream)buf = bytestream.read(num_items)labels = numpy.frombuffer(buf, dtype=numpy.uint8)if one_hot:return dense_to_one_hot(labels, num_classes)return labelsdef read_data_sets(train_dir,fake_data=False,one_hot=False,dtype=tf.float32,reshape=True,validation_size=5000,seed=None):'''if fake_data:def fake():return DataSet([], [], fake_data=True, one_hot=one_hot, dtype=dtype, seed=seed)train = fake()validation = fake()test = fake()return base.Datasets(train=train, validation=validation, test=test)'''TRAIN_IMAGES = 'train-images-idx3-ubyte.gz'TRAIN_LABELS = 'train-labels-idx1-ubyte.gz'TEST_IMAGES = 't10k-images-idx3-ubyte.gz'TEST_LABELS = 't10k-labels-idx1-ubyte.gz'local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TRAIN_IMAGESwith open(local_file, 'rb') as f:train_images = extract_images(f)local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TRAIN_LABELSwith open(local_file, 'rb') as f:train_labels = extract_labels(f, one_hot=one_hot)local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TEST_IMAGESwith open(local_file, 'rb') as f:test_images = extract_images(f)local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TEST_LABELSwith open(local_file, 'rb') as f:test_labels = extract_labels(f, one_hot=one_hot)if not 0 <= validation_size <= len(train_images):raise ValueError('Validation size should be between 0 and {}. 
Received: {}.'.format(len(train_images), validation_size))validation_images = train_images[:validation_size]validation_labels = train_labels[:validation_size]train_images = train_images[validation_size:]train_labels = train_labels[validation_size:]options = dict(dtype=dtype, reshape=reshape, seed=seed)train = DataSet(train_images, train_labels, **options)validation = DataSet(validation_images, validation_labels, **options)test = DataSet(test_images, test_labels, **options)return Datasets(train=train, validation=validation, test=test)class DataSet(object):def __init__(self,images,labels,fake_data=False,one_hot=False,dtype=tf.float32,reshape=True,seed=None):"""Construct a DataSet.one_hot arg is used only if fake_data is true. `dtype` can be either`uint8` to leave the input as `[0, 255]`, or `float32` to rescale into`[0, 1]`. Seed arg provides for convenient deterministic testing."""seed1, seed2 = random_seed.get_seed(seed)# If op level seed is not set, use whatever graph level seed is returnednumpy.random.seed(seed1 if seed is None else seed2)dtype = dtypes.as_dtype(dtype).base_dtypeif dtype not in (dtypes.uint8, dtypes.float32):raise TypeError('Invalid image dtype %r, expected uint8 or float32' %dtype)if fake_data:self._num_examples = 10000self.one_hot = one_hotelse:assert images.shape[0] == labels.shape[0], ('images.shape: %s labels.shape: %s' % (images.shape, labels.shape))self._num_examples = images.shape[0]# Convert shape from [num examples, rows, columns, depth]# to [num examples, rows*columns] (assuming depth == 1)if reshape:assert images.shape[3] == 1images = images.reshape(images.shape[0],images.shape[1] * images.shape[2])if dtype == dtypes.float32:# Convert from [0, 255] -> [0.0, 1.0].images = images.astype(numpy.float32)images = numpy.multiply(images, 1.0 / 255.0)self._images = imagesself._labels = labelsself._epochs_completed = 0self._index_in_epoch = 0@propertydef images(self):return self._images@propertydef labels(self):return self._labels@propertydef 
num_examples(self):return self._num_examples@propertydef epochs_completed(self):return self._epochs_completeddef next_batch(self, batch_size, fake_data=False, shuffle=True):"""Return the next `batch_size` examples from this data set."""if fake_data:fake_image = [1] * 784if self.one_hot:fake_label = [1] + [0] * 9else:fake_label = 0return [fake_image for _ in xrange(batch_size)], [fake_label for _ in xrange(batch_size)]start = self._index_in_epoch# Shuffle for the first epochif self._epochs_completed == 0 and start == 0 and shuffle:perm0 = numpy.arange(self._num_examples)numpy.random.shuffle(perm0)self._images = self.images[perm0]self._labels = self.labels[perm0]# Go to the next epochif start + batch_size > self._num_examples:# Finished epochself._epochs_completed += 1# Get the rest examples in this epochrest_num_examples = self._num_examples - startimages_rest_part = self._images[start:self._num_examples]labels_rest_part = self._labels[start:self._num_examples]# Shuffle the dataif shuffle:perm = numpy.arange(self._num_examples)numpy.random.shuffle(perm)self._images = self.images[perm]self._labels = self.labels[perm]# Start next epochstart = 0self._index_in_epoch = batch_size - rest_num_examplesend = self._index_in_epochimages_new_part = self._images[start:end]labels_new_part = self._labels[start:end]return numpy.concatenate((images_rest_part, images_new_part), axis=0) , numpy.concatenate((labels_rest_part, labels_new_part), axis=0)else:self._index_in_epoch += batch_sizeend = self._index_in_epochreturn self._images[start:end], self._labels[start:end]################################################################ #from here to operate the networkmnist = read_data_sets('MNIST_data', one_hot=True)def add_layer(inputs, in_size, out_size, activation_function=None):weights = tf.Variable(tf.random_normal([in_size, out_size]))biases = tf.Variable(tf.zeros([1, out_size]) + 0.1,)wx_b = tf.matmul(inputs, weights) + biasesif activation_function is None:outputs = 
wx_belse:outputs = activation_function(wx_b,)return outputsxs = tf.placeholder(tf.float32, [None, 28*28]) ys = tf.placeholder(tf.float32, [None, 10])def compute_accuracy(v_xs, v_ys):global predictiony_pre = sess.run(prediction, feed_dict={xs: v_xs})correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(v_ys,1))accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))result = sess.run(accuracy, feed_dict={xs:v_xs, ys:v_ys})return resultprediction = add_layer(xs, 784, 10, activation_function = tf.nn.softmax)cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)sess = tf.Session()sess.run(tf.initialize_all_variables())for i in range(10000):batch_xs, batch_ys = mnist.train.next_batch(100)sess.run(train_step, feed_dict = {xs: batch_xs, ys: batch_ys})if i % 50 ==0:print "step:",i,", ",compute_accuracy(mnist.test.images, mnist.test.labels) File "<ipython-input-12-d56c6b11bc6b>", line 279print "step:",i,", ",compute_accuracy(mnist.test.images, mnist.test.labels)^ SyntaxError: invalid syntax # ex-10-ml.py# import tensorflow as tf import gzip import numpy import collections from tensorflow.python.framework import random_seed from tensorflow.python.framework import dtypes #from tensorflow.examples.tutorials.mnist import input_dataDatasets = collections.namedtuple('Datasets', ['train', 'validation', 'test' ])def _read32(bytestream):dt = numpy.dtype(numpy.uint32).newbyteorder('>')return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]def dense_to_one_hot(labels_dense, num_classes):"""Convert class labels from scalars to one-hot vectors."""num_labels = labels_dense.shape[0]index_offset = numpy.arange(num_labels) * num_classeslabels_one_hot = numpy.zeros((num_labels, num_classes))labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1return labels_one_hotdef extract_images(f):"""Extract the images into a 4D uint8 numpy array [index, y, x, 
depth].Args:f: A file object that can be passed into a gzip reader.Returns:data: A 4D uint8 numpy array [index, y, x, depth].Raises:ValueError: If the bytestream does not start with 2051."""print('Extracting', f.name)with gzip.GzipFile(fileobj=f) as bytestream:magic = _read32(bytestream)if magic != 2051:raise ValueError('Invalid magic number %d in MNIST image file: %s' %(magic, f.name))num_images = _read32(bytestream)rows = _read32(bytestream)cols = _read32(bytestream)buf = bytestream.read(rows * cols * num_images)data = numpy.frombuffer(buf, dtype=numpy.uint8)data = data.reshape(num_images, rows, cols, 1)return datadef extract_labels(f, one_hot=False, num_classes=10):"""Extract the labels into a 1D uint8 numpy array [index].Args:f: A file object that can be passed into a gzip reader.one_hot: Does one hot encoding for the result.num_classes: Number of classes for the one hot encoding.Returns:labels: a 1D uint8 numpy array.Raises:ValueError: If the bystream doesn't start with 2049."""print('Extracting', f.name)with gzip.GzipFile(fileobj=f) as bytestream:magic = _read32(bytestream)if magic != 2049:raise ValueError('Invalid magic number %d in MNIST label file: %s' %(magic, f.name))num_items = _read32(bytestream)buf = bytestream.read(num_items)labels = numpy.frombuffer(buf, dtype=numpy.uint8)if one_hot:return dense_to_one_hot(labels, num_classes)return labelsdef read_data_sets(train_dir,fake_data=False,one_hot=False,dtype=tf.float32,reshape=True,validation_size=5000,seed=None):'''if fake_data:def fake():return DataSet([], [], fake_data=True, one_hot=one_hot, dtype=dtype, seed=seed)train = fake()validation = fake()test = fake()return base.Datasets(train=train, validation=validation, test=test)'''TRAIN_IMAGES = 'train-images-idx3-ubyte.gz'TRAIN_LABELS = 'train-labels-idx1-ubyte.gz'TEST_IMAGES = 't10k-images-idx3-ubyte.gz'TEST_LABELS = 't10k-labels-idx1-ubyte.gz'local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TRAIN_IMAGESwith open(local_file, 'rb') as 
f:train_images = extract_images(f)local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TRAIN_LABELSwith open(local_file, 'rb') as f:train_labels = extract_labels(f, one_hot=one_hot)local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TEST_IMAGESwith open(local_file, 'rb') as f:test_images = extract_images(f)local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TEST_LABELSwith open(local_file, 'rb') as f:test_labels = extract_labels(f, one_hot=one_hot)if not 0 <= validation_size <= len(train_images):raise ValueError('Validation size should be between 0 and {}. Received: {}.'.format(len(train_images), validation_size))validation_images = train_images[:validation_size]validation_labels = train_labels[:validation_size]train_images = train_images[validation_size:]train_labels = train_labels[validation_size:]options = dict(dtype=dtype, reshape=reshape, seed=seed)train = DataSet(train_images, train_labels, **options)validation = DataSet(validation_images, validation_labels, **options)test = DataSet(test_images, test_labels, **options)return Datasets(train=train, validation=validation, test=test)class DataSet(object):def __init__(self,images,labels,fake_data=False,one_hot=False,dtype=tf.float32,reshape=True,seed=None):"""Construct a DataSet.one_hot arg is used only if fake_data is true. `dtype` can be either`uint8` to leave the input as `[0, 255]`, or `float32` to rescale into`[0, 1]`. 
Seed arg provides for convenient deterministic testing."""seed1, seed2 = random_seed.get_seed(seed)# If op level seed is not set, use whatever graph level seed is returnednumpy.random.seed(seed1 if seed is None else seed2)dtype = dtypes.as_dtype(dtype).base_dtypeif dtype not in (dtypes.uint8, dtypes.float32):raise TypeError('Invalid image dtype %r, expected uint8 or float32' %dtype)if fake_data:self._num_examples = 10000self.one_hot = one_hotelse:assert images.shape[0] == labels.shape[0], ('images.shape: %s labels.shape: %s' % (images.shape, labels.shape))self._num_examples = images.shape[0]# Convert shape from [num examples, rows, columns, depth]# to [num examples, rows*columns] (assuming depth == 1)if reshape:assert images.shape[3] == 1images = images.reshape(images.shape[0],images.shape[1] * images.shape[2])if dtype == dtypes.float32:# Convert from [0, 255] -> [0.0, 1.0].images = images.astype(numpy.float32)images = numpy.multiply(images, 1.0 / 255.0)self._images = imagesself._labels = labelsself._epochs_completed = 0self._index_in_epoch = 0@propertydef images(self):return self._images@propertydef labels(self):return self._labels@propertydef num_examples(self):return self._num_examples@propertydef epochs_completed(self):return self._epochs_completeddef next_batch(self, batch_size, fake_data=False, shuffle=True):"""Return the next `batch_size` examples from this data set."""if fake_data:fake_image = [1] * 784if self.one_hot:fake_label = [1] + [0] * 9else:fake_label = 0return [fake_image for _ in xrange(batch_size)], [fake_label for _ in xrange(batch_size)]start = self._index_in_epoch# Shuffle for the first epochif self._epochs_completed == 0 and start == 0 and shuffle:perm0 = numpy.arange(self._num_examples)numpy.random.shuffle(perm0)self._images = self.images[perm0]self._labels = self.labels[perm0]# Go to the next epochif start + batch_size > self._num_examples:# Finished epochself._epochs_completed += 1# Get the rest examples in this epochrest_num_examples = 
      self._num_examples - start
      images_rest_part = self._images[start:self._num_examples]
      labels_rest_part = self._labels[start:self._num_examples]
      # Shuffle the data
      if shuffle:
        perm = numpy.arange(self._num_examples)
        numpy.random.shuffle(perm)
        self._images = self.images[perm]
        self._labels = self.labels[perm]
      # Start next epoch
      start = 0
      self._index_in_epoch = batch_size - rest_num_examples
      end = self._index_in_epoch
      images_new_part = self._images[start:end]
      labels_new_part = self._labels[start:end]
      return numpy.concatenate(
          (images_rest_part, images_new_part), axis=0), numpy.concatenate(
              (labels_rest_part, labels_new_part), axis=0)
    else:
      self._index_in_epoch += batch_size
      end = self._index_in_epoch
      return self._images[start:end], self._labels[start:end]

################################################################

mnist = read_data_sets('MNIST_data', one_hot=True)

def add_layer(inputs, in_size, out_size, activation_function=None):
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    # biases = tf.Variable(tf.zeros([1, out_size]) + 100)
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    wx_b = tf.matmul(inputs, weights) + biases
    if activation_function is None:
        outputs = wx_b
    else:
        outputs = activation_function(wx_b)
    return outputs

xs = tf.placeholder(tf.float32, [None, 28*28])
ys = tf.placeholder(tf.float32, [None, 10])

def compute_accuracy(v_xs, v_ys):
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs: v_xs})
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(v_ys, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys})
    return result

# layer1 = add_layer(xs, 784, 50, activation_function=tf.nn.tanh)
# layer1 = add_layer(xs, 784, 30, activation_function=tf.nn.sigmoid)
layer1 = add_layer(xs, 784, 50, activation_function=tf.nn.relu)
layer2 = add_layer(layer1, 50, 50, activation_function=tf.nn.tanh)
layer3 = add_layer(layer2, 50, 50, activation_function=tf.nn.tanh)
layer4 = add_layer(layer3, 50, 50, activation_function=tf.nn.tanh)
prediction = add_layer(layer4, 50, 10, activation_function=tf.nn.softmax)

cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
                                              reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)

sess = tf.Session()
# tf.initialize_all_variables() is deprecated; use the current name.
sess.run(tf.global_variables_initializer())

for i in range(100000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(layer1, feed_dict={xs: batch_xs, ys: batch_ys})  # extra forward pass; not needed for training
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys})
    if i % 100 == 0:
        # Python 3 print function; the original Python 2 statement
        # `print "step:", i, ...` raised a SyntaxError here.
        print("step:", i, ", ", compute_accuracy(mnist.test.images, mnist.test.labels))

    ex-11.py

# ex-11.py
import tensorflow as tf
from sklearn.datasets import load_digits
# sklearn.cross_validation was deprecated in 0.18 and removed in 0.20;
# train_test_split now lives in sklearn.model_selection.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer

digits = load_digits()
x = digits.data
y = digits.target
# one-hot transform
y = LabelBinarizer().fit_transform(y)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)

def add_layer(inputs, in_size, out_size, layer_name, activation_function=None):
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    wx_b = tf.matmul(inputs, weights) + biases
    if activation_function is None:
        outputs = wx_b
    else:
        outputs = activation_function(wx_b)
    tf.summary.histogram(layer_name + '/outputs', outputs)
    return outputs

xs = tf.placeholder(tf.float32, [None, 8*8])
ys = tf.placeholder(tf.float32, [None, 10])

layer1 = add_layer(xs, 64, 100, 'layer1', activation_function=tf.nn.tanh)
prediction = add_layer(layer1, 100, 10, 'layer2', activation_function=tf.nn.softmax)

cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
                                              reduction_indices=[1]))
tf.summary.scalar('loss', cross_entropy)
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)

sess = tf.Session()
merged = tf.summary.merge_all()

# tf.train.SummaryWriter was removed; its replacement is tf.summary.FileWriter.
# (The original call raised AttributeError: module ... has no attribute 'SummaryWriter'.)
train_writer = tf.summary.FileWriter("logs/train", sess.graph)
test_writer = tf.summary.FileWriter("logs/test", sess.graph)

sess.run(tf.global_variables_initializer())

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_train, ys: y_train})
    if i % 100 == 0:
        train_result = sess.run(merged, feed_dict={xs: x_train, ys: y_train})
        test_result = sess.run(merged, feed_dict={xs: x_test, ys: y_test})
        train_writer.add_summary(train_result, i)
        test_writer.add_summary(test_result, i)

    ex-11-dropout.py

# ex-11-dropout.py
import tensorflow as tf
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer

digits = load_digits()
x = digits.data
y = digits.target
# one-hot transform
y = LabelBinarizer().fit_transform(y)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)

def add_layer(inputs, in_size, out_size, layer_name, activation_function=None):
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    wx_b = tf.matmul(inputs, weights) + biases
    # drop out and keep a proportion
    wx_b = tf.nn.dropout(wx_b, keep_prob)
    if activation_function is None:
        outputs = wx_b
    else:
        outputs = activation_function(wx_b)
    # tf.histogram_summary was removed; the current name is tf.summary.histogram.
    tf.summary.histogram(layer_name + '/outputs', outputs)
    return outputs

keep_prob = tf.placeholder(tf.float32)
xs = tf.placeholder(tf.float32, [None, 8*8])
ys = tf.placeholder(tf.float32, [None, 10])

layer1 = add_layer(xs, 64, 100, 'layer1', activation_function=tf.nn.tanh)
prediction = add_layer(layer1, 100, 10, 'layer2', activation_function=tf.nn.softmax)

cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
                                              reduction_indices=[1]))
tf.summary.scalar('loss', cross_entropy)  # was tf.scalar_summary
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)

sess = tf.Session()
merged = tf.summary.merge_all()  # was tf.merge_all_summaries

train_writer = tf.summary.FileWriter("logs/train", sess.graph)  # was tf.train.SummaryWriter
test_writer = tf.summary.FileWriter("logs/test", sess.graph)

sess.run(tf.global_variables_initializer())

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_train, ys: y_train, keep_prob: 0.7})
    if i % 100 == 0:
        # Dropout should be disabled (keep_prob = 1) when recording summaries
        # for evaluation; the original fed 0.7 here as well.
        train_result = sess.run(merged, feed_dict={xs: x_train, ys: y_train, keep_prob: 1.0})
        test_result = sess.run(merged, feed_dict={xs: x_test, ys: y_test, keep_prob: 1.0})
        train_writer.add_summary(train_result, i)
        test_writer.add_summary(test_result, i)
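Both digit scripts minimize the same quantity, `tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))`, which is softmax cross-entropy over the batch. A minimal numpy sketch of that computation (the function names here are mine for illustration, not TensorFlow's):

```python
import numpy as np

def softmax(z):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, one_hot_labels):
    # Mean over the batch of -sum(y * log(p)), matching the TF expression above.
    return -np.mean(np.sum(one_hot_labels * np.log(probs), axis=1))

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
labels = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
p = softmax(logits)
loss = cross_entropy(p, labels)
print(p.sum(axis=1))  # each row sums to 1
```

Note that the scripts compute `log(softmax(...))` in two separate ops; TF also offers a fused `softmax_cross_entropy_with_logits` that avoids `log(0)` issues.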

    input_data.py

# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Functions for downloading and reading MNIST data."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import gzip
import os
import tempfile

import numpy
from six.moves import urllib
from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets

    mnist.py

# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Functions for downloading and reading MNIST data."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import gzip

import numpy
from six.moves import xrange  # pylint: disable=redefined-builtin

from tensorflow.contrib.learn.python.learn.datasets import base
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import random_seed

# CVDF mirror of http://yann.lecun.com/exdb/mnist/
SOURCE_URL = 'https://storage.googleapis.com/cvdf-datasets/mnist/'


def _read32(bytestream):
  dt = numpy.dtype(numpy.uint32).newbyteorder('>')
  return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]


def extract_images(f):
  """Extract the images into a 4D uint8 numpy array [index, y, x, depth].

  Args:
    f: A file object that can be passed into a gzip reader.

  Returns:
    data: A 4D uint8 numpy array [index, y, x, depth].

  Raises:
    ValueError: If the bytestream does not start with 2051.
  """
  print('Extracting', f.name)
  with gzip.GzipFile(fileobj=f) as bytestream:
    magic = _read32(bytestream)
    if magic != 2051:
      raise ValueError('Invalid magic number %d in MNIST image file: %s' %
                       (magic, f.name))
    num_images = _read32(bytestream)
    rows = _read32(bytestream)
    cols = _read32(bytestream)
    buf = bytestream.read(rows * cols * num_images)
    data = numpy.frombuffer(buf, dtype=numpy.uint8)
    data = data.reshape(num_images, rows, cols, 1)
    return data


def dense_to_one_hot(labels_dense, num_classes):
  """Convert class labels from scalars to one-hot vectors."""
  num_labels = labels_dense.shape[0]
  index_offset = numpy.arange(num_labels) * num_classes
  labels_one_hot = numpy.zeros((num_labels, num_classes))
  labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
  return labels_one_hot


def extract_labels(f, one_hot=False, num_classes=10):
  """Extract the labels into a 1D uint8 numpy array [index].

  Args:
    f: A file object that can be passed into a gzip reader.
    one_hot: Does one hot encoding for the result.
    num_classes: Number of classes for the one hot encoding.

  Returns:
    labels: a 1D uint8 numpy array.

  Raises:
    ValueError: If the bytestream doesn't start with 2049.
  """
  print('Extracting', f.name)
  with gzip.GzipFile(fileobj=f) as bytestream:
    magic = _read32(bytestream)
    if magic != 2049:
      raise ValueError('Invalid magic number %d in MNIST label file: %s' %
                       (magic, f.name))
    num_items = _read32(bytestream)
    buf = bytestream.read(num_items)
    labels = numpy.frombuffer(buf, dtype=numpy.uint8)
    if one_hot:
      return dense_to_one_hot(labels, num_classes)
    return labels


class DataSet(object):

  def __init__(self,
               images,
               labels,
               fake_data=False,
               one_hot=False,
               dtype=dtypes.float32,
               reshape=True,
               seed=None):
    """Construct a DataSet.

    one_hot arg is used only if fake_data is true.  `dtype` can be either
    `uint8` to leave the input as `[0, 255]`, or `float32` to rescale into
    `[0, 1]`.  Seed arg provides for convenient deterministic testing.
    """
    seed1, seed2 = random_seed.get_seed(seed)
    # If op level seed is not set, use whatever graph level seed is returned
    numpy.random.seed(seed1 if seed is None else seed2)
    dtype = dtypes.as_dtype(dtype).base_dtype
    if dtype not in (dtypes.uint8, dtypes.float32):
      raise TypeError('Invalid image dtype %r, expected uint8 or float32' %
                      dtype)
    if fake_data:
      self._num_examples = 10000
      self.one_hot = one_hot
    else:
      assert images.shape[0] == labels.shape[0], (
          'images.shape: %s labels.shape: %s' % (images.shape, labels.shape))
      self._num_examples = images.shape[0]

      # Convert shape from [num examples, rows, columns, depth]
      # to [num examples, rows*columns] (assuming depth == 1)
      if reshape:
        assert images.shape[3] == 1
        images = images.reshape(images.shape[0],
                                images.shape[1] * images.shape[2])
      if dtype == dtypes.float32:
        # Convert from [0, 255] -> [0.0, 1.0].
        images = images.astype(numpy.float32)
        images = numpy.multiply(images, 1.0 / 255.0)
    self._images = images
    self._labels = labels
    self._epochs_completed = 0
    self._index_in_epoch = 0

  @property
  def images(self):
    return self._images

  @property
  def labels(self):
    return self._labels

  @property
  def num_examples(self):
    return self._num_examples

  @property
  def epochs_completed(self):
    return self._epochs_completed

  def next_batch(self, batch_size, fake_data=False, shuffle=True):
    """Return the next `batch_size` examples from this data set."""
    if fake_data:
      fake_image = [1] * 784
      if self.one_hot:
        fake_label = [1] + [0] * 9
      else:
        fake_label = 0
      return [fake_image for _ in xrange(batch_size)], [
          fake_label for _ in xrange(batch_size)
      ]
    start = self._index_in_epoch
    # Shuffle for the first epoch
    if self._epochs_completed == 0 and start == 0 and shuffle:
      perm0 = numpy.arange(self._num_examples)
      numpy.random.shuffle(perm0)
      self._images = self.images[perm0]
      self._labels = self.labels[perm0]
    # Go to the next epoch
    if start + batch_size > self._num_examples:
      # Finished epoch
      self._epochs_completed += 1
      # Get the rest examples in this epoch
      rest_num_examples = self._num_examples - start
      images_rest_part = self._images[start:self._num_examples]
      labels_rest_part = self._labels[start:self._num_examples]
      # Shuffle the data
      if shuffle:
        perm = numpy.arange(self._num_examples)
        numpy.random.shuffle(perm)
        self._images = self.images[perm]
        self._labels = self.labels[perm]
      # Start next epoch
      start = 0
      self._index_in_epoch = batch_size - rest_num_examples
      end = self._index_in_epoch
      images_new_part = self._images[start:end]
      labels_new_part = self._labels[start:end]
      return numpy.concatenate(
          (images_rest_part, images_new_part), axis=0), numpy.concatenate(
              (labels_rest_part, labels_new_part), axis=0)
    else:
      self._index_in_epoch += batch_size
      end = self._index_in_epoch
      return self._images[start:end], self._labels[start:end]


def read_data_sets(train_dir,
                   fake_data=False,
                   one_hot=False,
                   dtype=dtypes.float32,
                   reshape=True,
                   validation_size=5000,
                   seed=None):
  if fake_data:

    def fake():
      return DataSet(
          [], [], fake_data=True, one_hot=one_hot, dtype=dtype, seed=seed)

    train = fake()
    validation = fake()
    test = fake()
    return base.Datasets(train=train, validation=validation, test=test)

  TRAIN_IMAGES = 'train-images-idx3-ubyte.gz'
  TRAIN_LABELS = 'train-labels-idx1-ubyte.gz'
  TEST_IMAGES = 't10k-images-idx3-ubyte.gz'
  TEST_LABELS = 't10k-labels-idx1-ubyte.gz'

  local_file = base.maybe_download(TRAIN_IMAGES, train_dir,
                                   SOURCE_URL + TRAIN_IMAGES)
  with open(local_file, 'rb') as f:
    train_images = extract_images(f)

  local_file = base.maybe_download(TRAIN_LABELS, train_dir,
                                   SOURCE_URL + TRAIN_LABELS)
  with open(local_file, 'rb') as f:
    train_labels = extract_labels(f, one_hot=one_hot)

  local_file = base.maybe_download(TEST_IMAGES, train_dir,
                                   SOURCE_URL + TEST_IMAGES)
  with open(local_file, 'rb') as f:
    test_images = extract_images(f)

  local_file = base.maybe_download(TEST_LABELS, train_dir,
                                   SOURCE_URL + TEST_LABELS)
  with open(local_file, 'rb') as f:
    test_labels = extract_labels(f, one_hot=one_hot)

  if not 0 <= validation_size <= len(train_images):
    raise ValueError('Validation size should be between 0 and {}. Received: {}.'
                     .format(len(train_images), validation_size))

  validation_images = train_images[:validation_size]
  validation_labels = train_labels[:validation_size]
  train_images = train_images[validation_size:]
  train_labels = train_labels[validation_size:]

  options = dict(dtype=dtype, reshape=reshape, seed=seed)

  train = DataSet(train_images, train_labels, **options)
  validation = DataSet(validation_images, validation_labels, **options)
  test = DataSet(test_images, test_labels, **options)

  return base.Datasets(train=train, validation=validation, test=test)


def load_mnist(train_dir='MNIST-data'):
  return read_data_sets(train_dir)
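The `dense_to_one_hot` helper in mnist.py needs nothing from TensorFlow, so its flat-index trick can be exercised standalone (a small self-contained copy for illustration):

```python
import numpy

def dense_to_one_hot(labels_dense, num_classes):
    # Same logic as in mnist.py: in the flattened zero matrix, write a 1
    # at position row * num_classes + label for each row.
    num_labels = labels_dense.shape[0]
    index_offset = numpy.arange(num_labels) * num_classes
    labels_one_hot = numpy.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

one_hot = dense_to_one_hot(numpy.array([2, 0, 9]), 10)
print(one_hot.shape)  # (3, 10)
```

Each row ends up with exactly one 1, at the column given by the label, which is the format `read_data_sets(..., one_hot=True)` feeds to the softmax networks above.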

Reprinted from: https://www.cnblogs.com/q735613050/p/7745962.html

