
Implementing an Elman Neural Network

Published: 2023/12/9
生活随笔 collected and organized this article on implementing an Elman neural network; I hope it serves as a useful reference.

While reading papers, I came across one that used an Elman neural network to distinguish the EEG signals of epilepsy patients from those of healthy subjects, with good classification results. So I decided to write my own Elman network demo to see how it performs.

The difference between an Elman network and a plain perceptron is easy to see in the figure below: the Elman network adds a set of "context units" alongside the hidden layer, which store the hidden layer's output and feed it back into the next hidden-layer computation. For the details of how Elman networks work, please consult the references; I won't repeat them here. (Image from Wikipedia: https://en.wikipedia.org/wiki/Recurrent_neural_network)
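The recurrence described above can be sketched in a few lines of NumPy: the hidden state is computed from the current input plus the context units (last step's hidden output), then passed through a softmax output layer. This is a minimal illustration with hypothetical names (`elman_step`, `W_in`, `W_ctx`, `W_out`) and random weights, not the implementation below.

```python
import numpy as np

def elman_step(x_t, h_prev, W_in, W_ctx, W_out, b_h, b_o):
    """One Elman step: the context units carry h_prev into the new hidden state."""
    h_t = 1.0 / (1.0 + np.exp(-(x_t @ W_in + h_prev @ W_ctx + b_h)))  # sigmoid hidden layer
    z = h_t @ W_out + b_o
    y_t = np.exp(z) / np.sum(np.exp(z))                               # softmax output layer
    return h_t, y_t

rng = np.random.default_rng(0)
W_in, W_ctx, W_out = rng.random((2, 4)), rng.random((4, 4)), rng.random((4, 2))
h = np.zeros(4)                        # context units start at zero
for x in ([1.0, 2.0], [1.5, 1.5]):     # feed a short two-step sequence
    h, y = elman_step(np.array(x), h, W_in, W_ctx, W_out, 0.1, 0.1)
print(round(float(y.sum()), 6))        # softmax probabilities sum to 1
```

Note that because `h` is threaded through the loop, the second step's output depends on the first input as well, which is exactly what the context units buy you over a feed-forward perceptron.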


""" coding: utf-8 @author: zhangxiang """ """ 在對腦電信號進行分類的時候,發現一篇文章對健康人,癲癇患者未發作時的腦電信號和癲癇發作時的腦電信號的分類使用了基于時序的 elman_RNN 神經網絡進行建模,于是想在預測麻醉深度類別及其它時序相關的分類問題上使用這一模型。 就寫了一個demo """ import numpy as npclass ELMAN_RNN(object):def __init__(self, input_num, hidden_num, output_num, learning_rate):self.input_num = input_numself.hidden_num = hidden_numself.output_num = output_numself.learning_rate = learning_rateself.hidden_weights = np.random.random((self.input_num, self.hidden_num))self.output_weights = np.random.random((self.hidden_num, self.output_num))self.rnn_weights = np.random.random((self.hidden_num, self.hidden_num))self.hidden_bias = np.random.rand(1)self.output_bias = np.random.rand(1)self.hidden_output = np.zeros((1, self.hidden_num))def training(self, train_input, train_output):"""training one time"""output = self.feed_forward(train_input)self.bptt(train_input, output, train_output)def calculate_the_cross_entropy(self, training_set):"""get the total error loss"""loss = 0for i in range(np.array(training_set).shape[0]):x, y = training_set[i]y = np.array(y).reshape(1,2)result = self.feed_forward(x)loss += self.get_the_total_error(y, result)return lossdef get_the_total_error(self, y, result):"""loss = -∑yi*ln(ai), y is the real label, result is the softmax result"""loss = -np.sum(y*np.log(result))return lossdef feed_forward(self, input):"""calculate feed_forward value"""self.hidden_output = self.sigmoid(np.dot(np.array(input).reshape(1,2), self.hidden_weights) + np.dot(self.hidden_output, self.rnn_weights) + self.hidden_bias)return self.softmax(np.dot(self.hidden_output, self.output_weights) + self.output_bias)def bptt(self,input, output, train_output):"""update the weights of all layers"""# claculate delta of output layersdelta_of_output_layers = [0]*self.output_numfor i in range(self.output_num):delta_of_output_layers[i] = self.calculate_output_wrt_rawout(output[0, i], train_output[i])# caculate delta of hidden layersdelta_of_hidden_layers = [0]*self.hidden_numfor i in 
range(self.hidden_num):d_error_wrt_hidden_output = 0.0for o in range(self.output_num):d_error_wrt_hidden_output += delta_of_output_layers[o]*self.output_weights[i, o]delta_of_hidden_layers[i] = d_error_wrt_hidden_output*self.calculate_output_wrt_netinput(self.hidden_output[0,i])# get the δw of output layers and update the weightsfor i in range(self.output_num):for weight_j in range(self.output_weights.shape[0]):delta_wrt_weight_j = delta_of_output_layers[i]*self.hidden_output[0,weight_j]self.output_weights[weight_j, i] -= self.learning_rate*delta_wrt_weight_j# get the δw of hidden layers and update the weightsfor i in range(self.hidden_num):for weight_j in range(self.hidden_weights.shape[0]):delta_wrt_weight_j = delta_of_hidden_layers[i]*input[weight_j]self.hidden_weights[weight_j, i] -= self.learning_rate*delta_wrt_weight_jdef sigmoid(self, x):"""activation function"""return 1.0/(1.0 + np.exp(-x))def softmax(self, x):"""the activation for multiple output function"""return np.exp(x)/np.sum(np.exp(x))def calculate_output_wrt_rawout(self, output, train_output):"""derivative of softmax function, actually in classification train_output equal to 1"""return (output - train_output)def calculate_output_wrt_netinput(self, output):"""the derivative of sigmoid function"""return output*(1 - output)if __name__ == "__main__":import matplotlib.pyplot as pltelman = ELMAN_RNN(input_num=2, hidden_num=4, output_num=2, learning_rate=0.02)train_x = [[1,2], [1,1], [1.5, 1.5], [2,1], [-1,-1], [-0.5, -0.5], [-1, -2], [-2, -1.5]]label_y = [[1,0], [1,0], [1,0], [1,0], [0,1], [0,1], [0,1], [0,1]]training_sets = [[[2,2],[1,0]], [[0.2, 0.8], [1,0]], [[-0.5, -0.8], [0, 1]], [[-1.2, -0.5], [0, 1]]]loss = []for i in range(1000):for x, y in zip(train_x, label_y):elman.training(x, y)loss.append(elman.calculate_the_cross_entropy(training_sets))plt.figure()plt.plot(loss)plt.title('the loss with the training')plt.show()print('training finished!')
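The `bptt` method above leans on the identity that for a softmax output with cross-entropy loss, the gradient with respect to the raw output is simply `output - train_output` (what `calculate_output_wrt_rawout` returns). As a sanity check, that identity can be verified numerically with a central finite difference; this standalone sketch uses made-up values and is independent of the class above.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def cross_entropy(z, y):
    """loss = -sum(yi * ln(softmax(z)i))"""
    return -np.sum(y * np.log(softmax(z)))

z = np.array([0.3, -1.2])          # raw (pre-softmax) outputs
y = np.array([1.0, 0.0])           # one-hot label

analytic = softmax(z) - y          # the closed-form gradient used in bptt
eps = 1e-6
numeric = np.array([
    (cross_entropy(z + eps * np.eye(2)[i], y)
     - cross_entropy(z - eps * np.eye(2)[i], y)) / (2 * eps)
    for i in range(2)
])
print(np.allclose(analytic, numeric, atol=1e-6))  # True
```

If the two gradients did not match, the weight updates in `bptt` would be wrong, so this is a cheap check worth running whenever the loss or output activation changes.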

The loss curve over training looks like this:


Summary

The above is the complete implementation of the Elman neural network demo; I hope it helps you solve the problems you ran into.