Analysis of the Keras example file reuters_mlp.py

This file is a text-classification example. The dataset is Reuters newswire stories; we can decode its contents back into text to see what it contains:

from keras.datasets import reuters

# Load the data the same way reuters_mlp.py does
# (num_words caps the vocabulary; rarer words become the OOV index 2)
max_words = 1000
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=max_words,
                                                         test_split=0.2)

# Shift the word index by 3 to make room for the reserved indices
word_index = reuters.get_word_index()
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["?"] = 2  # unknown
word_index["<UNUSED>"] = 3

reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])

for i in range(10):
    print(decode_review(x_train[i]))
    print(y_train[i])

This prints news stories like the following:

<START> ? ? said as a result of its december acquisition of ? co it expects earnings per share in 1987 of 1 15 to 1 30 dlrs per share up from 70 cts in 1986 the company said pretax net should rise to nine to 10 mln dlrs from six mln dlrs in 1986 and ? ? revenues to 19 to 22 mln dlrs from 12 5 mln dlrs it said cash ? per share this year should be 2 50 to three dlrs reuter 3

The 3 at the end of the sample appears because indices 0, 1, 2 and 3 are reserved, standing for "padding", "start of sequence", "unknown" and "unused" respectively.
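This +3 offset mirrors the loader's own defaults (start_char=1, oov_char=2, index_from=3 in keras.datasets.reuters.load_data), which is why the decoder maps 1 to <START> and renders 2 as '?'. A quick sanity check against the reverse_word_index built earlier:

# The reserved slots, as set up above:
print(reverse_word_index[0])  # '<PAD>'
print(reverse_word_index[1])  # '<START>'
print(reverse_word_index[2])  # '?'  (unknown / out of vocabulary)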

Next, the example encodes the sequences with a Tokenizer. Let's add a few prints to see the relationship between a sequence before and after encoding:

from keras.preprocessing.text import Tokenizer

tmp_list = x_train[0]
tmp_list.sort()          # sorts the first sample's word indices in place
print(tmp_list)

print('Vectorizing sequence data...')
tokenizer = Tokenizer(num_words=max_words)  # max_words = 1000, as above
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
print(x_train[0])        # a multi-hot vector of length max_words

The output is:

[1, 2, 2, 2, 2, 2, 2, 4, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 10, 11, 11, 11, 11, 12, 15, 15, 15, 15, 15, 15, 16, 16, 17, 19, 19, 22, 22, 22, 25, 26, 29, 30, 32, 39, 43, 44, 48, 48, 49, 52, 67, 67, 67, 83, 84, 89, 90, 90, 90, 102, 109, 111, 124, 132, 134, 151, 154, 155, 186, 197, 207, 209, 209, 258, 270, 272, 369, 447, 482, 504, 864]
Vectorizing sequence data...
[0. 1. 1. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 1. 1. 1. 0. 1. 0. 0. 1. 0.
 0. 1. 1. 0. 0. 1. 1. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 1. 1. 0. 0. 0.
 1. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0.
 ...
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
(1000 entries in total; the 1s sit exactly at the index positions listed above)

In other words, if the text contains a given word, the corresponding position is set to 1; otherwise it stays 0. Note, however, that this encoding discards word order: "I want the dog" and "the dog bites me" become indistinguishable once reduced to a bag of words (see the sketch below). The upside is that it saves space; compared with an Embedding-based encoding, the input shrinks by at least an order of magnitude.
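A minimal illustration of this order-insensitivity, using hypothetical toy index sequences rather than dataset samples:

from keras.preprocessing.text import Tokenizer
import numpy as np

tok = Tokenizer(num_words=10)
a = tok.sequences_to_matrix([[3, 5, 7]], mode='binary')  # words 3, 5, 7 in one order
b = tok.sequences_to_matrix([[7, 3, 5]], mode='binary')  # the same words, reordered
print(np.array_equal(a, b))  # True: identical multi-hot vectors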


After encoding, the shapes are:

x_train shape: (8982, 1000)
y_train shape: (8982, 46)
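
The (8982, 46) label shape comes from one-hot encoding the 46 topic ids, which reuters_mlp.py does with keras.utils.to_categorical:

import numpy as np
import keras

num_classes = np.max(y_train) + 1  # 46 topics
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)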

That is, there are 46 topic classes, and a simple MLP performs the classification.
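For reference, the model in reuters_mlp.py is built along these lines (a sketch following the Keras example: a single 512-unit hidden layer with dropout 0.5):

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation

model = Sequential()
model.add(Dense(512, input_shape=(max_words,)))  # 1000 inputs -> 512 units
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))                    # 46 output classes
model.add(Activation('softmax'))

Its model.summary() output is: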

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 512)               512512
_________________________________________________________________
activation_1 (Activation)    (None, 512)               0
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_2 (Dense)              (None, 46)                23598
_________________________________________________________________
activation_2 (Activation)    (None, 46)                0
=================================================================
Total params: 536,110
Trainable params: 536,110
Non-trainable params: 0
_________________________________________________________________
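
The example then compiles and trains the model. The settings below follow reuters_mlp.py as far as I recall them (batch size 32, 5 epochs, 10% of the training data held out for validation); treat the exact hyperparameters as assumptions:

# Hyperparameters recalled from the Keras example, not verified here
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

history = model.fit(x_train, y_train,
                    batch_size=32,
                    epochs=5,
                    verbose=1,
                    validation_split=0.1)
score = model.evaluate(x_test, y_test, batch_size=32, verbose=1)
print('Test accuracy:', score[1])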


The companion script reuters_mlp_relu_vs_selu.py uses the same dataset; all it does is compare two networks to see which performs better. The two differ in activation function, dropout layer and rate, and weight initialization.
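In that file the two variants are described by parameter dictionaries, approximately like this (a sketch; the dropout rates, initializers, and optimizer are recalled from the Keras example rather than confirmed by the summaries below):

from keras.layers import Dropout, AlphaDropout

network1 = {
    'n_dense': 6,                            # six 16-unit hidden layers
    'dense_units': 16,
    'activation': 'relu',
    'dropout': Dropout,
    'dropout_rate': 0.5,
    'kernel_initializer': 'glorot_uniform',
    'optimizer': 'sgd',
}
network2 = {
    'n_dense': 6,
    'dense_units': 16,
    'activation': 'selu',                    # self-normalizing activation
    'dropout': AlphaDropout,                 # preserves mean/variance under SELU
    'dropout_rate': 0.1,
    'kernel_initializer': 'lecun_normal',
    'optimizer': 'sgd',
}

Their model summaries: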

network1

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 16)                16016
_________________________________________________________________
activation_1 (Activation)    (None, 16)                0
_________________________________________________________________
dropout_1 (Dropout)          (None, 16)                0
_________________________________________________________________
dense_2 (Dense)              (None, 16)                272
_________________________________________________________________
activation_2 (Activation)    (None, 16)                0
_________________________________________________________________
dropout_2 (Dropout)          (None, 16)                0
_________________________________________________________________
dense_3 (Dense)              (None, 16)                272
_________________________________________________________________
activation_3 (Activation)    (None, 16)                0
_________________________________________________________________
dropout_3 (Dropout)          (None, 16)                0
_________________________________________________________________
dense_4 (Dense)              (None, 16)                272
_________________________________________________________________
activation_4 (Activation)    (None, 16)                0
_________________________________________________________________
dropout_4 (Dropout)          (None, 16)                0
_________________________________________________________________
dense_5 (Dense)              (None, 16)                272
_________________________________________________________________
activation_5 (Activation)    (None, 16)                0
_________________________________________________________________
dropout_5 (Dropout)          (None, 16)                0
_________________________________________________________________
dense_6 (Dense)              (None, 16)                272
_________________________________________________________________
activation_6 (Activation)    (None, 16)                0
_________________________________________________________________
dropout_6 (Dropout)          (None, 16)                0
_________________________________________________________________
dense_7 (Dense)              (None, 46)                782
_________________________________________________________________
activation_7 (Activation)    (None, 46)                0
=================================================================
Total params: 18,158
Trainable params: 18,158
Non-trainable params: 0
_________________________________________________________________


network2

________________________________________________________________________________
Layer (type)                        Output Shape                    Param #
================================================================================
dense_1 (Dense)                     (None, 16)                      16016
________________________________________________________________________________
activation_1 (Activation)           (None, 16)                      0
________________________________________________________________________________
alpha_dropout_1 (AlphaDropout)      (None, 16)                      0
________________________________________________________________________________
dense_2 (Dense)                     (None, 16)                      272
________________________________________________________________________________
activation_2 (Activation)           (None, 16)                      0
________________________________________________________________________________
alpha_dropout_2 (AlphaDropout)      (None, 16)                      0
________________________________________________________________________________
dense_3 (Dense)                     (None, 16)                      272
________________________________________________________________________________
activation_3 (Activation)           (None, 16)                      0
________________________________________________________________________________
alpha_dropout_3 (AlphaDropout)      (None, 16)                      0
________________________________________________________________________________
dense_4 (Dense)                     (None, 16)                      272
________________________________________________________________________________
activation_4 (Activation)           (None, 16)                      0
________________________________________________________________________________
alpha_dropout_4 (AlphaDropout)      (None, 16)                      0
________________________________________________________________________________
dense_5 (Dense)                     (None, 16)                      272
________________________________________________________________________________
activation_5 (Activation)           (None, 16)                      0
________________________________________________________________________________
alpha_dropout_5 (AlphaDropout)      (None, 16)                      0
________________________________________________________________________________
dense_6 (Dense)                     (None, 16)                      272
________________________________________________________________________________
activation_6 (Activation)           (None, 16)                      0
________________________________________________________________________________
alpha_dropout_6 (AlphaDropout)      (None, 16)                      0
________________________________________________________________________________
dense_7 (Dense)                     (None, 46)                      782
________________________________________________________________________________
activation_7 (Activation)           (None, 46)                      0
================================================================================
Total params: 18,158
Trainable params: 18,158
Non-trainable params: 0
________________________________________________________________________________

