TensorFlow Official API Reference


tf.summary.scalar

tf.summary.FileWriter

tf.summary.histogram

tf.summary.merge_all

tf.equal

tf.argmax

tf.cast

tf.div(x, y, name=None)

tf.pow(x, y, name=None)


tf.unstack(value, num=None, axis=0, name='unstack')

tf.stack(values, axis=0, name='stack')


tf.transpose(a, perm=None, name='transpose')

tf.set_random_seed(seed)

tf.reshape(tensor, shape, name=None)
tf.multiply(x, y, name=None)

tf.name_scope(*args, **kwds)
tf.variable_scope(*args, **kwds)


class tf.contrib.rnn.BasicLSTMCell

tf.nn.dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)

tf.nn.softmax_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, dim=-1, name=None)

tf.nn.moments(x, axes, shift=None, name=None, keep_dims=False)

tf.contrib.legacy_seq2seq.sequence_loss_by_example(logits, targets, weights, average_across_timesteps=True, softmax_loss_function=None, name=None)


tf.gradients

apply_gradients

tf.distributions.Normal




tf.summary.scalar

https://www.tensorflow.org/api_docs/python/tf/summary/scalar

tf.summary.FileWriter:

https://www.tensorflow.org/api_docs/python/tf/summary/FileWriter

tf.summary.histogram

https://www.tensorflow.org/api_docs/python/tf/summary/histogram

tf.summary.merge_all

https://www.tensorflow.org/api_docs/python/tf/summary/merge_all
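
The four summary ops above are typically used together: scalar and histogram summaries are attached to tensors, merged into a single op, and the serialized result is written to an event file that TensorBoard reads. A minimal sketch (TensorFlow 1.x; the './logs' path and the placeholder names are arbitrary choices for illustration):

import tensorflow as tf

loss = tf.placeholder(tf.float32, name='loss')
weights = tf.placeholder(tf.float32, shape=[None], name='weights')

tf.summary.scalar('loss', loss)            # one curve per scalar value
tf.summary.histogram('weights', weights)   # distribution of a tensor over time
merged = tf.summary.merge_all()            # single op that evaluates every summary

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./logs', sess.graph)  # writes events for TensorBoard
    for step in range(3):
        summary = sess.run(merged, feed_dict={loss: 1.0 / (step + 1),
                                              weights: [0.1 * step, 0.2, 0.3]})
        writer.add_summary(summary, step)
    writer.close()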

tf.equal:

https://www.tensorflow.org/api_docs/python/tf/equal

tf.argmax:

https://www.tensorflow.org/api_docs/python/tf/argmax

tf.cast

https://www.tensorflow.org/api_docs/python/tf/cast
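
These three ops are commonly chained to compute classification accuracy: tf.argmax picks the predicted class per row, tf.equal compares predictions with labels element-wise, and tf.cast converts the boolean result to floats so it can be averaged. A minimal sketch (TensorFlow 1.x, with made-up logits and one-hot labels):

import tensorflow as tf

logits = tf.constant([[0.1, 0.8, 0.1],
                      [0.7, 0.2, 0.1]])
labels = tf.constant([[0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])

correct = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))  # element-wise comparison -> bool
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))         # bool -> float32 before averaging

with tf.Session() as sess:
    print(sess.run(accuracy))  # 0.5: the first example is right, the second is wrong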

tf.div(x, y, name=None)

Reference link: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/div

tf.pow(x, y, name=None)

Reference link: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/pow
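
A minimal sketch (TensorFlow 1.x) showing that both ops work element-wise on tensors of the same shape:

import tensorflow as tf

x = tf.constant([4.0, 9.0])
y = tf.constant([2.0, 3.0])

d = tf.div(x, y)   # element-wise division -> [2. 3.]
p = tf.pow(x, y)   # element-wise power    -> [16. 729.]

with tf.Session() as sess:
    print(sess.run(d))
    print(sess.run(p))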


tf.unstack(value, num=None, axis=0, name='unstack')

https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/unstack

tf.stack(values, axis=0, name='stack')

https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/stack

###tf.stack()/unstack():
import tensorflow as tf

a = tf.constant([1, 2, 3])
b = tf.constant([4, 5, 6])
c = tf.stack([a, b], axis=0)
d = tf.stack([a, b], axis=1)
e = tf.unstack(c, axis=0)
f = tf.unstack(c, axis=1)
with tf.Session() as sess:
    print(sess.run(c))
    print(sess.run(d))
    print(sess.run(e))
    print(sess.run(f))
[[1 2 3]
 [4 5 6]]
[[1 4]
 [2 5]
 [3 6]]

[array([1, 2, 3], dtype=int32), 
array([4, 5, 6], dtype=int32)]

[array([1, 4], dtype=int32), 
array([2, 5], dtype=int32), 
array([3, 6], dtype=int32)]

Reference link: https://blog.csdn.net/u012193416/article/details/77411535

###tf.stack()/unstack():
import tensorflow as tf

g = tf.constant([[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]], [[13, 14, 15, 16], [17, 18, 19, 20], [21, 22, 23, 24]]])
h = tf.unstack(g)
with tf.Session() as sess:
    print(sess.run(h))
[array([[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12]], dtype=int32), 
array([[13, 14, 15, 16],
       [17, 18, 19, 20],
       [21, 22, 23, 24]], dtype=int32)]

tf.transpose(a, perm=None, name='transpose')

Official link: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/transpose

###tf.transpose()
import tensorflow as tf

a = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
b = tf.constant([[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]], [[13, 14, 15, 16], [17, 18, 19, 20], [21, 22, 23, 24]]])
c = tf.transpose(a, [0, 1])
d = tf.transpose(a, [1, 0])
e = tf.transpose(b, [0, 1, 2])
f = tf.transpose(b, [1, 0, 2])
g = tf.transpose(b, [0, 2, 1])
with tf.Session() as sess:
    print(sess.run(c))
    print(sess.run(d))
    print(sess.run(e))
    print(sess.run(f))
    print(sess.run(g))
[[1 2 3]
 [4 5 6]]
[[1 4]
 [2 5]
 [3 6]]
[[[ 1  2  3  4]
  [ 5  6  7  8]
  [ 9 10 11 12]]

 [[13 14 15 16]
  [17 18 19 20]
  [21 22 23 24]]]
[[[ 1  2  3  4]
  [13 14 15 16]]

 [[ 5  6  7  8]
  [17 18 19 20]]

 [[ 9 10 11 12]
  [21 22 23 24]]]
[[[ 1  5  9]
  [ 2  6 10]
  [ 3  7 11]
  [ 4  8 12]]

 [[13 17 21]
  [14 18 22]
  [15 19 23]
  [16 20 24]]]

Blog link: https://www.cnblogs.com/studyDetail/p/6533316.html

tf.set_random_seed(seed)

For a runnable example, see the Jupyter notebook: TensorFlowAPI

https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/set_random_seed
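
A minimal sketch (TensorFlow 1.x): setting the graph-level seed makes the values produced by random ops repeatable across runs of the script.

import tensorflow as tf

tf.set_random_seed(1234)        # graph-level seed; op-level seeds can still be set per op
a = tf.random_uniform([2])
b = tf.random_normal([2])

with tf.Session() as sess:
    print(sess.run(a))
    print(sess.run(b))
# Running the script again prints the same values, because the graph-level seed is fixed.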

tf.reshape(tensor, shape, name=None)

Args:

tensor: A Tensor.
shape: A Tensor. Must be one of the following types: int32, int64. Defines the shape of the output tensor.
name: A name for the operation (optional).

import tensorflow as tf

t = tf.constant([1, 2, 3, 4, 5, 6, 7, 8, 9])
m = tf.constant([1, 2, 3, 4, 5, 6, 7, 8])
n = tf.constant([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18])

with tf.Session() as sess:
    print('t->[3, 3]:\n\n', sess.run(tf.reshape(t, [3, 3])), '\n')
    print('m->[2, 4]:\n\n', sess.run(tf.reshape(m, [2, 4])), '\n')
    print('n->[3, 2, 3]:\n\n', sess.run(tf.reshape(n, [3, 2, 3])), '\n')
    print('n->[2, -1]:\n\n', sess.run(tf.reshape(n, [2, -1])), '\n')
    print('n->[-1, 9]:\n\n', sess.run(tf.reshape(n, [-1, 9])), '\n')
    print('n->[2, -1, 3]:\n\n', sess.run(tf.reshape(n, [2, -1, 3])), '\n')
t->[3, 3]:

 [[1 2 3]
 [4 5 6]
 [7 8 9]] 

m->[2, 4]:

 [[1 2 3 4]
 [5 6 7 8]] 

n->[3, 2, 3]:

 [[[ 1  2  3]
  [ 4  5  6]]

 [[ 7  8  9]
  [10 11 12]]

 [[13 14 15]
  [16 17 18]]] 

n->[2, -1]:

 [[ 1  2  3  4  5  6  7  8  9]
 [10 11 12 13 14 15 16 17 18]] 

n->[-1, 9]:

 [[ 1  2  3  4  5  6  7  8  9]
 [10 11 12 13 14 15 16 17 18]] 

n->[2, -1, 3]:

 [[[ 1  2  3]
  [ 4  5  6]
  [ 7  8  9]]

 [[10 11 12]
  [13 14 15]
  [16 17 18]]] 

https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/reshape

tf.multiply(x, y, name=None)

import tensorflow as tf

# Element-wise product of two matrices
x = tf.constant([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])
y = tf.constant([[0, 0, 1.0], [0, 0, 1.0], [0, 0, 1.0]])
# Note: x and y must have the same dtype, otherwise a type-mismatch error is raised
z = tf.multiply(x, y)

# Product of two scalars
x1 = tf.constant(1)
y1 = tf.constant(2)
# Note: x1 and y1 must have the same dtype, otherwise a type-mismatch error is raised
z1 = tf.multiply(x1, y1)

# Product of a scalar and a matrix (broadcasting)
x2 = tf.constant([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])
y2 = tf.constant(2.0)
# Note: x2 and y2 must have the same dtype, otherwise a type-mismatch error is raised
z2 = tf.multiply(x2, y2)

with tf.Session() as sess:
    print(sess.run(z))
    print(sess.run(z1))
    print(sess.run(z2))
[[0. 0. 3.]
 [0. 0. 3.]
 [0. 0. 3.]]
2
[[2. 4. 6.]
 [2. 4. 6.]
 [2. 4. 6.]]

https://blog.csdn.net/m0_37041325/article/details/77036513

https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/multiply 

tf.name_scope(*args, **kwds)

Returns a context manager for use when defining a Python op.

https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/name_scope

tf.variable_scope(*args, **kwds)

Returns a context manager for defining ops that creates variables (layers).

This context manager validates that the (optional) values are from the same graph, ensures that graph is the default graph, and pushes a name scope and a variable scope.

https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/variable_scope   
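
The practical difference between the two scopes is how they interact with tf.get_variable: tf.name_scope prefixes only op and tf.Variable names, while tf.variable_scope also prefixes variables created with tf.get_variable (and supports reuse). A minimal sketch (TensorFlow 1.x):

import tensorflow as tf

with tf.name_scope('ns'):
    v1 = tf.Variable(1.0, name='v')         # name is prefixed: 'ns/v:0'
    g1 = tf.get_variable('g1', shape=[1])   # name_scope is ignored by get_variable: 'g1:0'

with tf.variable_scope('vs'):
    v2 = tf.Variable(1.0, name='v')         # 'vs/v:0'
    g2 = tf.get_variable('g2', shape=[1])   # variable_scope also prefixes get_variable: 'vs/g2:0'

print(v1.name, g1.name)  # ns/v:0  g1:0
print(v2.name, g2.name)  # vs/v:0  vs/g2:0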

class tf.contrib.rnn.BasicLSTMCell

Link: https://tensorflow.google.cn/versions/r1.9/api_docs/python/tf/contrib/rnn/BasicLSTMCell

tf.nn.dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)

Official link: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/nn/dynamic_rnn

Link: https://github.com/MorvanZhou/tutorials/blob/master/tensorflowTUT/tf20_RNN2/full_code.py
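
A minimal sketch (TensorFlow 1.x) wiring the two APIs together; the sizes below are arbitrary. With time_major=False (the default) the input has shape [batch, time, depth], outputs has shape [batch, time, n_hidden], and final_state is an LSTMStateTuple (c, h):

import tensorflow as tf

batch_size, n_steps, n_inputs, n_hidden = 4, 10, 8, 16   # arbitrary sizes for illustration

x = tf.placeholder(tf.float32, [batch_size, n_steps, n_inputs])
cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
init_state = cell.zero_state(batch_size, dtype=tf.float32)

outputs, final_state = tf.nn.dynamic_rnn(cell, x, initial_state=init_state, time_major=False)
print(outputs.get_shape())   # (4, 10, 16)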


tf.nn.softmax_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, dim=-1, name=None)

Link: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/nn/softmax_cross_entropy_with_logits
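
A minimal sketch (TensorFlow 1.x): labels are one-hot (or probability) rows, logits are the raw unnormalized scores, and because of the _sentinel parameter both should be passed by keyword; the op applies softmax internally and returns one cross-entropy value per example:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])
labels = tf.constant([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])

loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)  # shape [2]
mean_loss = tf.reduce_mean(loss)

with tf.Session() as sess:
    print(sess.run(loss))
    print(sess.run(mean_loss))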

tf.nn.moments(x, axes, shift=None, name=None, keep_dims=False)

Link: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/nn/moments
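
A minimal sketch (TensorFlow 1.x): with axes=[0] the op returns the per-column mean and variance, which is the typical usage in batch normalization:

import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

mean, variance = tf.nn.moments(x, axes=[0])   # statistics over the batch (row) dimension

with tf.Session() as sess:
    print(sess.run(mean))       # [2.5 3.5 4.5]
    print(sess.run(variance))   # [2.25 2.25 2.25]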

tf.contrib.legacy_seq2seq.sequence_loss_by_example(logits, targets, weights, average_across_timesteps=True, softmax_loss_function=None, name=None)

Weighted cross-entropy loss for a sequence of logits (per example).

Link: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/contrib/legacy_seq2seq/sequence_loss_by_example
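
A minimal sketch (TensorFlow 1.x contrib API) with arbitrary sizes: logits, targets and weights are Python lists with one entry per time step, and the op returns one weighted cross-entropy value per example in the batch:

import tensorflow as tf

batch_size, num_steps, vocab_size = 2, 3, 5   # arbitrary sizes for illustration

logits = [tf.random_normal([batch_size, vocab_size]) for _ in range(num_steps)]   # one [batch, vocab] tensor per step
targets = [tf.constant([1, 2], dtype=tf.int32) for _ in range(num_steps)]         # one [batch] int tensor per step
weights = [tf.ones([batch_size]) for _ in range(num_steps)]                       # one [batch] weight tensor per step

loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
    logits, targets, weights, average_across_timesteps=True)   # shape [batch_size]

with tf.Session() as sess:
    print(sess.run(loss))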

tf.distributions.Normal

Aliases:

Class tf.contrib.distributions.Normal
Class tf.distributions.Normal

The Normal distribution with location loc and scale parameters.

The probability density function is pdf(x; mu, sigma) = exp(-0.5 (x - mu)**2 / sigma**2) / Z, with Z = (2 pi sigma**2)**0.5, where loc = mu is the mean, scale = sigma is the std. deviation, and Z is the normalization constant.

Methods include sample, prob, log_prob, cdf, mean, and stddev.
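
A minimal sketch (TensorFlow 1.x) of the most common methods: construct a standard normal, draw samples, and evaluate the pdf and cdf:

import tensorflow as tf

dist = tf.distributions.Normal(loc=0.0, scale=1.0)   # standard normal: mu = 0, sigma = 1

samples = dist.sample([3])      # three draws from N(0, 1)
density = dist.prob(0.0)        # pdf at x = 0 -> about 0.3989
log_density = dist.log_prob(0.0)
cdf = dist.cdf(0.0)             # -> 0.5

with tf.Session() as sess:
    print(sess.run([samples, density, log_density, cdf]))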
