
Implementing a Three-Layer Neural Network from Scratch

Published: 2023/12/10 · 生活随笔

Goals of This Post

  • Share Mu Li's (李沐) views on deep learning: 1) approaching deep learning from the practical side is likely better than studying the algorithms in isolation; 2) to really learn deep learning, implement an algorithm from scratch using only simple data structures such as numpy or NDArray; only then do you run into, and solve, many of its core problems, and you also come to understand today's popular frameworks much better; 3) for pure application work, just pick an off-the-shelf framework and practice on real data until you can tune it well.
  • Combine this with the viewpoint in Hang Li's (李航) "Statistical Learning Methods" to summarize a general code framework for machine learning (and deep learning); see the code for details.
  • A general framework for machine learning

    從0實現(xiàn)版

    # -*- coding: utf-8 -*-
    import d2lzh as d2l
    from mxnet import nd
    from mxnet import autograd

    # data
    batch_size = 256
    train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)

    # model
    num_inputs, num_hiddens1, num_hiddens2, num_outputs = 784, 256, 256, 10
    W1 = nd.random.normal(scale=0.01, shape=(num_inputs, num_hiddens1))
    b1 = nd.zeros(num_hiddens1)
    W2 = nd.random.normal(scale=0.01, shape=(num_hiddens1, num_hiddens2))
    b2 = nd.zeros(num_hiddens2)
    W3 = nd.random.normal(scale=0.01, shape=(num_hiddens2, num_outputs))
    b3 = nd.zeros(num_outputs)
    params = [W1, b1, W2, b2, W3, b3]

    for param in params:
        param.attach_grad()

    def relu(X):
        return nd.maximum(X, 0)

    def softmax(X):
        X_exp = X.exp()
        partition = X_exp.sum(axis=1, keepdims=True)
        return X_exp / partition

    def net(X):
        X = X.reshape((-1, num_inputs))
        H1 = relu(nd.dot(X, W1) + b1)
        H2 = relu(nd.dot(H1, W2) + b2)
        # output layer must use W3/b3, otherwise they are never trained
        return softmax(nd.dot(H2, W3) + b3)

    # strategy
    def cross_entropy(y_hat, y):
        return -nd.pick(y_hat, y).log()

    loss = cross_entropy

    # algorithm
    def sgd(params, lr, batch_size):
        for param in params:
            param[:] = param - lr * param.grad / batch_size

    # training
    def evaluate_accuracy(data_iter, net):
        acc_sum, n = 0.0, 0
        for X, y in data_iter:
            y = y.astype('float32')
            acc_sum += (net(X).argmax(axis=1) == y).sum().asscalar()
            n += y.size
        return acc_sum / n

    def train(net, train_iter, test_iter, loss, num_epochs, batch_size, params, lr):
        for epoch in range(num_epochs):
            train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
            for X, y in train_iter:
                with autograd.record():
                    y_hat = net(X)
                    l = loss(y_hat, y).sum()
                l.backward()
                sgd(params, lr, batch_size)
                y = y.astype('float32')
                train_l_sum += l.asscalar()
                train_acc_sum += (y_hat.argmax(axis=1) == y).sum().asscalar()
                n += y.size
            test_acc = evaluate_accuracy(test_iter, net)
            print('epoch: %d, loss %.4f, train_acc %.3f, test_acc %.3f'
                  % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))

    num_epochs, lr = 10, 0.3
    train(net, train_iter, test_iter, loss, num_epochs, batch_size, params, lr)
    # predict

    if __name__ == '__main__':
        print('------ok-------')
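
    The hand-written softmax and cross_entropy above can be sanity-checked without MXNet at all. A NumPy re-implementation (names mirror the NDArray version; nd.pick is replaced by fancy indexing, and the example logits are made up):

    ```python
    import numpy as np

    def softmax(X):
        # same naive formula as above; for large logits you would
        # normally subtract X.max(axis=1, keepdims=True) first for stability
        X_exp = np.exp(X)
        return X_exp / X_exp.sum(axis=1, keepdims=True)

    def cross_entropy(y_hat, y):
        # pick the predicted probability of the true class, like nd.pick
        return -np.log(y_hat[np.arange(len(y)), y])

    logits = np.array([[1.0, 2.0, 3.0],
                       [0.0, 0.0, 0.0]])
    probs = softmax(logits)
    print(probs.sum(axis=1))                       # each row sums to 1
    print(cross_entropy(probs, np.array([2, 0])))  # uniform row gives log(3)
    ```

    Checks like these catch shape and axis mistakes long before a full training run does.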
    • Note: the code still uses d2l.load_data_fashion_mnist to load the image data; when time permits, that should be replaced as well with a pure NDArray implementation.
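
    As the note suggests, the d2l loader is just a shuffled minibatch iterator. A minimal sketch, assuming the dataset is already in memory as arrays (shown with NumPy; the zero-filled features and labels are stand-ins with Fashion-MNIST shapes):

    ```python
    import numpy as np

    def data_iter(features, labels, batch_size, shuffle=True):
        """Yield (X, y) minibatches, reshuffling the indices each pass."""
        n = len(features)
        idx = np.arange(n)
        if shuffle:
            np.random.shuffle(idx)
        for i in range(0, n, batch_size):
            j = idx[i:i + batch_size]
            yield features[j], labels[j]

    # stand-in data: 1000 flattened 28x28 images and their labels
    features = np.zeros((1000, 784), dtype=np.float32)
    labels = np.zeros(1000, dtype=np.int64)
    batches = list(data_iter(features, labels, batch_size=256))
    print(len(batches), batches[0][0].shape)  # 4 batches; the first is (256, 784)
    ```

    Note that the last batch is smaller (232 examples here), which is why the training loop above divides by the running count n rather than by a fixed batch size when reporting metrics.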

    MXNet framework version

    # -*- coding: utf-8 -*-
    import d2lzh as d2l
    from mxnet import gluon, init
    from mxnet.gluon import loss as gloss, nn

    # data
    batch_size = 256
    train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)

    # model
    net = nn.Sequential()
    net.add(nn.Dense(256, activation='relu'),
            nn.Dense(256, activation='relu'),
            nn.Dense(10))
    net.initialize(init.Normal(sigma=0.01))

    # strategy
    loss = gloss.SoftmaxCrossEntropyLoss()

    # algorithm
    lr = 0.3
    trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})

    # training
    num_epochs = 10
    d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,
                  None, None, trainer)
    # predict

    if __name__ == '__main__':
        print('-----ok------')
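
    Both versions define the same 784 → 256 → 256 → 10 architecture, so it is easy to verify by hand that they hold the same number of parameters: each fully connected layer contributes in_dim × out_dim weights plus out_dim biases.

    ```python
    # layer widths of the three-layer network above
    dims = [784, 256, 256, 10]
    # weights (i * o) plus biases (o) for each consecutive pair of layers
    total = sum(i * o + o for i, o in zip(dims[:-1], dims[1:]))
    print(total)  # 269322
    ```

    That is 200960 + 65792 + 2570 = 269322 parameters, a useful sanity check when comparing a from-scratch model against its framework counterpart.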

