

TensorFlow 1D convolution input: a note on using conv1d in TensorFlow to process language sequences

發(fā)布時間:2023/12/19 编程问答 31 豆豆
生活随笔 收集整理的這篇文章主要介紹了 tensorflow一维卷积输入_tensorflow中一维卷积conv1d处理语言序列的一点记录 小編覺得挺不錯的,現(xiàn)在分享給大家,幫大家做個參考.

While using convolutional methods for natural language processing (NLP) tasks at work, I used the following TensorFlow functions and techniques:

tf.nn.conv1d

tf.layers.conv1d

Implementing conv1d with conv2d

Two pooling operations

Convolution with different kernel sizes

Each is described below.

tf.nn.conv1d:

Function signature:

tf.nn.conv1d(value, filters, stride, padding,
             use_cudnn_on_gpu=None, data_format=None,
             name=None)

Example:

import tensorflow as tf
import numpy as np

sess = tf.InteractiveSession()

# --------------- tf.nn.conv1d -------------------
inputs = tf.ones((64, 10, 3))  # [batch, n_sqs, embedsize]
w = tf.constant(1, tf.float32, (5, 3, 32))  # [filter_width, embedsize, n_filters]
conv1 = tf.nn.conv1d(inputs, w, stride=2, padding='SAME')  # conv1 = [batch, ceil(n_sqs/stride), n_filters]; stride is the step size
tf.global_variables_initializer().run()
out = sess.run(conv1)
print(out)
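As a quick sanity check (a minimal sketch, assuming TensorFlow 1.x as in the example above), the static shape already reflects the SAME-padding formula ceil(n_sqs / stride) = ceil(10 / 2) = 5:

print(conv1.shape)  # (64, 5, 32)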

tf.layers.conv1d:

Function signature:

tf.layers.conv1d(inputs,
                 filters,
                 kernel_size,
                 strides=1,
                 padding='valid',
                 data_format='channels_last',
                 dilation_rate=1,
                 activation=None,
                 use_bias=True, ...)

Example:

import tensorflow as tf
import numpy as np

sess = tf.InteractiveSession()

# --------------- tf.layers.conv1d -------------------
inputs = tf.ones((64, 10, 3))  # [batch, n_sqs, embedsize]
num_filters = 32
kernel_size = 5
conv2 = tf.layers.conv1d(inputs, num_filters, kernel_size, strides=2, padding='valid',
                         name='conv2')  # shape = (batch, (n_sqs - kernel_size)//strides + 1, num_filters) = (64, 3, 32)
tf.global_variables_initializer().run()
out = sess.run(conv2)
print(out)
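For comparison, a small sketch (again assuming TensorFlow 1.x; the layer names conv_valid and conv_same are placeholders) contrasting the two padding modes on the same input. With kernel_size=5 and strides=2, 'valid' gives (10 - 5) // 2 + 1 = 3 output steps, while 'same' gives ceil(10 / 2) = 5:

conv_valid = tf.layers.conv1d(inputs, num_filters, kernel_size, strides=2, padding='valid', name='conv_valid')
conv_same = tf.layers.conv1d(inputs, num_filters, kernel_size, strides=2, padding='same', name='conv_same')
print(conv_valid.shape)  # (64, 3, 32)
print(conv_same.shape)   # (64, 5, 32)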

Implementing 1D convolution with 2D convolution:

import tensorflow as tf

sess = tf.InteractiveSession()

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def avg_pool_1x2(x):
    # Note: despite being presented as max pooling in the original, this is average pooling.
    return tf.nn.avg_pool(x, ksize=[1, 1, 2, 1], strides=[1, 1, 2, 1], padding='SAME')

'''
ksize   = [x, pool_height, pool_width, x]
strides = [x, pool_height, pool_width, x]
'''

x = tf.Variable([[1, 2, 3, 4]], dtype=tf.float32)
x = tf.reshape(x, [1, 1, 4, 1])  # This step is required; otherwise TF reports a dimension mismatch.
'''
[batch, in_height, in_width, in_channels] = [1, 1, 4, 1]
'''

W_conv1 = tf.Variable([1, 1, 1], dtype=tf.float32)  # weights
W_conv1 = tf.reshape(W_conv1, [1, 3, 1, 1])  # This step is likewise required.
'''
[filter_height, filter_width, in_channels, out_channels]
'''

h_conv1 = conv2d(x, W_conv1)  # zero-padded sums over a window of 3: [3, 6, 9, 7]
h_pool1 = avg_pool_1x2(h_conv1)
tf.global_variables_initializer().run()
print(sess.run(h_conv1))  # [3, 6, 9, 7]
print(sess.run(h_pool1))  # [4.5, 8]
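To confirm that the 2D trick reproduces the native 1D op, here is a small cross-check (a sketch under the same TensorFlow 1.x assumptions; x1d, w1d and h1d are hypothetical names). tf.nn.conv1d expects [batch, width, channels], so the same data and weights are reshaped accordingly:

x1d = tf.reshape(x, [1, 4, 1])        # [batch, in_width, in_channels]
w1d = tf.reshape(W_conv1, [3, 1, 1])  # [filter_width, in_channels, out_channels]
h1d = tf.nn.conv1d(x1d, w1d, stride=1, padding='SAME')
print(sess.run(h1d))  # same values as h_conv1: [3, 6, 9, 7]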

Two pooling operations:

# 1: strided max pooling
convs = tf.expand_dims(conv, axis=-1)  # shape=[?,596,256,1]
smp = tf.nn.max_pool(value=convs, ksize=[1, 3, self.config.num_filters, 1], strides=[1, 3, 1, 1],
                     padding='SAME')  # shape=[?,199,256,1], since ceil(596/3) = 199
smp = tf.squeeze(smp, -1)  # shape=[?,199,256]
smp = tf.reshape(smp, shape=(-1, 199 * self.config.num_filters))

# 2: global max pooling layer
gmp = tf.reduce_max(conv, reduction_indices=[1], name='gmp')
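Because the snippet above depends on self.config and an existing conv tensor, here is a self-contained sketch of both strategies on dummy data (assumed shapes: batch 64, sequence length 9, 32 filters; the pooling window here spans only the time axis):

conv = tf.ones((64, 9, 32))            # [batch, seq_len, num_filters]
# 1: strided max pooling over the time axis
convs = tf.expand_dims(conv, axis=-1)  # [64, 9, 32, 1]
smp = tf.nn.max_pool(convs, ksize=[1, 3, 1, 1], strides=[1, 3, 1, 1], padding='SAME')
smp = tf.squeeze(smp, -1)              # [64, 3, 32]
smp = tf.reshape(smp, (-1, 3 * 32))    # [64, 96]
# 2: global max pooling -- one value per filter
gmp = tf.reduce_max(conv, axis=[1])    # [64, 32]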

Convolution with different kernel sizes:

kernel_sizes = [3, 4, 5]  # convolution kernels with window sizes 3, 4 and 5
with tf.name_scope("mul_cnn"):
    pooled_outputs = []
    for kernel_size in kernel_sizes:
        # CNN layer
        conv = tf.layers.conv1d(embedding_inputs, self.config.num_filters, kernel_size,
                                name='conv-%s' % kernel_size)
        # global max pooling layer
        gmp = tf.reduce_max(conv, reduction_indices=[1], name='gmp')
        pooled_outputs.append(gmp)
    self.h_pool = tf.concat(pooled_outputs, 1)  # concatenate the pooled outputs
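A self-contained version of the same pattern (embedding_inputs and num_filters are placeholder values, not from the original model) makes the concatenated shape explicit:

embedding_inputs = tf.ones((64, 10, 3))  # [batch, n_sqs, embedsize]
num_filters = 32
pooled_outputs = []
for kernel_size in [3, 4, 5]:
    conv = tf.layers.conv1d(embedding_inputs, num_filters, kernel_size, name='conv-%d' % kernel_size)
    pooled_outputs.append(tf.reduce_max(conv, axis=[1]))  # global max pooling over time
h_pool = tf.concat(pooled_outputs, 1)  # (64, 3 * 32) = (64, 96)
print(h_pool.shape)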
