
Dialogue Systems (2): A Vanilla Neural Network

發(fā)布時(shí)間:2024/10/8 windows 39 豆豆
生活随笔 收集整理的這篇文章主要介紹了 对话系统(二)-普通神经网络 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

Principle

Workflow
  • Generate the data
  • Initialize the weights
  • Layer 1
  • Layer 2
  • Shapes of the tensors involved:
    $x$: input data, $(20, 5)$
    $w_1$: first-layer weights, $(5, 3)$
    $w_2$: second-layer weights, $(3, 2)$
    $a_1$: matrix product, $(20, 3)$
    $h_1$: after the activation, $(20, 3)$
    $a_2$: matrix product, $(20, 2)$
    $h_2$: after the activation, $(20, 2)$
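The shapes above can be verified with a quick NumPy sketch (the random data here is a placeholder, only the shapes matter):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.random.rand(20, 5)    # input data
w1 = np.random.rand(5, 3)    # first-layer weights
w2 = np.random.rand(3, 2)    # second-layer weights

a1 = x @ w1          # (20, 3)
h1 = sigmoid(a1)     # (20, 3)
a2 = h1 @ w2         # (20, 2)
h2 = sigmoid(a2)     # (20, 2)

print(a1.shape, h1.shape, a2.shape, h2.shape)
```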

    Forward propagation

    $x$
    $a_1 = x \cdot w_1$
    $h_1 = \mathrm{sigmoid}(a_1)$
    $a_2 = h_1 \cdot w_2$
    $h_2 = \mathrm{sigmoid}(a_2)$

    Derivation

    Loss function (log loss), with $\hat{y} = h_2$: $\displaystyle J = -\frac{1}{m}\sum\left(y\log\hat{y} + (1-y)\log(1-\hat{y})\right)$

    $\displaystyle\frac{\partial J}{\partial w_2} = \frac{\partial J}{\partial h_2}\cdot\frac{\partial h_2}{\partial a_2}\cdot\frac{\partial a_2}{\partial w_2}$

    $\displaystyle\frac{\partial J}{\partial w_1} = \frac{\partial J}{\partial h_2}\cdot\frac{\partial h_2}{\partial a_2}\cdot\frac{\partial a_2}{\partial h_1}\cdot\frac{\partial h_1}{\partial a_1}\cdot\frac{\partial a_1}{\partial w_1}$

    The factor shared by both gradients (the first two partials) is $\displaystyle\frac{\partial J}{\partial h_2}\cdot\frac{\partial h_2}{\partial a_2}$:

    $\displaystyle\frac{\partial J}{\partial h_2} = -\frac{1}{m}\cdot\frac{y - h_2}{h_2(1-h_2)}$

    $\displaystyle\frac{\partial h_2}{\partial a_2} = h_2(1-h_2)$

    $\displaystyle\frac{\partial a_2}{\partial w_2} = h_1$

    $\displaystyle\frac{\partial a_2}{\partial h_1} = w_2$

    $\displaystyle\frac{\partial h_1}{\partial a_1} = h_1(1-h_1)$

    $\displaystyle\frac{\partial a_1}{\partial w_1} = x$
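    Multiplying these factors gives the full gradients. A minimal NumPy sketch (the data and weights are random placeholders with the shapes from the list above; note that the shared factor simplifies to $(h_2 - y)/m$, which a finite-difference check confirms):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
m = 20
x = rng.random((m, 5))
y = (rng.random((m, 2)) > 0.5).astype(float)  # illustrative labels
w1 = rng.random((5, 3))
w2 = rng.random((3, 2))

# forward pass
a1 = x @ w1
h1 = sigmoid(a1)
a2 = h1 @ w2
h2 = sigmoid(a2)

# backward pass: dJ/dh2 * dh2/da2 simplifies to (h2 - y) / m
d_a2 = (h2 - y) / m                 # (20, 2)
d_w2 = h1.T @ d_a2                  # (3, 2)
d_h1 = d_a2 @ w2.T                  # (20, 3)
d_a1 = d_h1 * h1 * (1 - h1)         # (20, 3)
d_w1 = x.T @ d_a1                   # (5, 3)

# finite-difference check on one entry of w2 (h1 does not depend on w2)
def loss(w2_):
    h2_ = sigmoid(h1 @ w2_)
    return -(y * np.log(h2_) + (1 - y) * np.log(1 - h2_)).sum() / m

eps = 1e-6
w2p = w2.copy(); w2p[0, 0] += eps
w2m = w2.copy(); w2m[0, 0] -= eps
numeric = (loss(w2p) - loss(w2m)) / (2 * eps)
print(abs(numeric - d_w2[0, 0]))    # should be near zero
```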

  • Code
    NumPy implementation:

```python
import numpy as np

# data: 10 positive and 10 negative samples, 5 features each
train_x_dim = 5
sample_1_num = 10
sample_0_num = 10
weight1_dim = 3
weight2_dim = 2

train_x_1 = np.random.rand(sample_1_num, train_x_dim)
train_x_0 = np.random.rand(sample_0_num, train_x_dim) * 10
train_y_1 = np.ones(sample_1_num)
train_y_0 = np.zeros(sample_0_num)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derv(x):
    return sigmoid(x) * (1 - sigmoid(x))

# forward pass through both layers
weight1 = np.random.rand(train_x_dim, weight1_dim)
a1 = np.dot(train_x_1, weight1)
h1 = sigmoid(a1)

weight2 = np.random.rand(weight1_dim, weight2_dim)
a2 = np.dot(h1, weight2)
h2 = sigmoid(a2)
```
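    The snippet above stops at the forward pass. A sketch of how the pieces combine into a training loop (illustrative, not from the original post: the output layer here uses a single unit so it matches the 0/1 labels, and `lr = 0.1` is an arbitrary learning rate):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def log_loss(pred, y):
    return -(y * np.log(pred) + (1 - y) * np.log(1 - pred)).mean()

rng = np.random.default_rng(0)
# same data scheme as above: positives in [0, 1), negatives scaled by 10
x = np.vstack([rng.random((10, 5)), rng.random((10, 5)) * 10])
y = np.concatenate([np.ones(10), np.zeros(10)]).reshape(-1, 1)

w1 = rng.random((5, 3))
w2 = rng.random((3, 1))   # one output unit to match the binary labels
lr = 0.1
m = len(x)

losses = []
for step in range(1000):
    # forward
    h1 = sigmoid(x @ w1)
    h2 = sigmoid(h1 @ w2)
    losses.append(log_loss(h2, y))
    # backward, using the gradients from the derivation section
    d_a2 = (h2 - y) / m
    d_w2 = h1.T @ d_a2
    d_a1 = (d_a2 @ w2.T) * h1 * (1 - h1)
    d_w1 = x.T @ d_a1
    # plain gradient-descent update
    w2 -= lr * d_w2
    w1 -= lr * d_w1
```

    With no bias terms the model is limited, but the loss still drops over the run, which is enough to confirm the gradient formulas.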
    用tf實(shí)現(xiàn)
    from tensorflow import keras # load data fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() # build model model = keras.Sequential([keras.layers.Flatten(input_shape=(28, 28)),keras.layers.Dense(128, activation=tf.nn.relu),keras.layers.Dense(10, activation=tf.nn.softmax) ]) # compile model model.compile(optimizer=tf.train.AdamOptimizer(),loss='sparse_categorical_crossentropy',metrics=['accuracy']) # train model model.fit(train_images, train_labels, epochs=5) # evaluate test_loss, test_acc = model.evaluate(test_images, test_labels)
