
cs20_8-1


1. VAE

  • lecture link: https://docs.google.com/presentation/d/1VSNlkGcR-b39tMcuREjzZdhYOPvoZudpcbuNlf5hOIM/edit#slide=id.g334db163d4_0_41
  • TODO: I printed out a PDF of a tutorial on VAEs; the standard ELBO objective is sketched below for reference.
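  • For reference (standard VAE material, not taken from the lecture itself), the objective that the code in section 3 maximizes is the evidence lower bound (ELBO):

    \mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big)

    In the code, the expectation is estimated with a single sample from the encoder (posterior.sample()) and the KL term is computed analytically with tfd.kl_divergence(posterior, prior).
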

2. TensorFlow Distributions (not distributed computing, but probability distributions)

  • PPL (probabilistic programming languages)
    • https://www.zhihu.com/question/59442141
    • http://webppl.org/ (try out a PPL)
  • TensorFlow Distributions
    • https://zhuanlan.zhihu.com/p/36032114
    • https://zhuanlan.zhihu.com/p/35782672
    • The TensorFlow Distributions library has moved to TensorFlow Probability
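  • A minimal sketch of the API (my own example, not from the lecture), using the TF1-era tf.contrib.distributions that the code below relies on; in current releases the same classes live in TensorFlow Probability as tfp.distributions:

    import tensorflow as tf

    tfd = tf.contrib.distributions  # now tfp.distributions in TensorFlow Probability

    # A 2-D diagonal Gaussian; the parameter shapes determine batch/event shapes.
    normal = tfd.MultivariateNormalDiag(loc=[0., 0.], scale_diag=[1., 1.])
    sample = normal.sample(5)        # shape [5, 2]: five draws
    log_p = normal.log_prob(sample)  # shape [5]: log density of each draw

    # Analytic KL divergence between two distributions of the same family.
    other = tfd.MultivariateNormalDiag(loc=[1., 1.], scale_diag=[2., 2.])
    kl = tfd.kl_divergence(normal, other)  # scalar

    with tf.Session() as sess:
        print(sess.run([sample, log_p, kl]))
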

3. VAE in TensorFlow

  • Test code below:

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"

    import tensorflow as tf
    import numpy as np
    import scipy.misc  # scipy.misc.imsave requires SciPy < 1.2

    tfd = tf.contrib.distributions

    def make_prior(code_size=2):
        # Standard normal prior p(z) over the latent code.
        mean, stddev = tf.zeros([code_size]), tf.ones([code_size])
        return tfd.MultivariateNormalDiag(mean, stddev)

    def make_encoder(images, code_size=2):
        # Approximate posterior q(z|x): a diagonal Gaussian parameterized by a small MLP.
        images = tf.layers.flatten(images)
        hidden = tf.layers.dense(images, 100, tf.nn.relu)
        mean = tf.layers.dense(hidden, code_size)
        stddev = tf.layers.dense(hidden, code_size, tf.nn.softplus)
        return tfd.MultivariateNormalDiag(mean, stddev)

    def make_decoder(code, data_shape=[28, 28]):
        # Likelihood p(x|z): independent Bernoulli pixels.
        hidden = tf.layers.dense(code, 100, tf.nn.relu)
        logit = tf.layers.dense(hidden, np.prod(data_shape))
        logit = tf.reshape(logit, [-1] + data_shape)
        return tfd.Independent(tfd.Bernoulli(logit), len(data_shape))

    images = tf.placeholder(tf.float32, [None, 28, 28])

    prior = make_prior()
    posterior = make_encoder(images)
    dist = make_decoder(posterior.sample())

    elbo = dist.log_prob(images) - tfd.kl_divergence(posterior, prior)
    optimize = tf.train.AdamOptimizer().minimize(-elbo)
    samples = make_decoder(prior.sample(10)).mean()  # for visualization

    print("samples-shape: ", tf.shape(samples))
    print("samples: ", samples)
    # Rearrange from 10x28x28 to 28x28x10 with tf.transpose.
    samples = tf.transpose(samples, [1, 2, 0])
    samples = samples[:, :, 0]  # first image; looping over the last axis gives the other 9
    print("samples-1: ", samples)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        img_numpy = samples.eval(session=sess)  # tensor to numpy array
        print(type(img_numpy))
        scipy.misc.imsave('VAE_TF.png', img_numpy)  # numpy array to image file

    # tfd.Independent(dist, 2) tells TensorFlow to treat the two innermost dimensions as
    # data dimensions rather than batch dimensions. This means dist.log_prob(images)
    # returns a number per image rather than per pixel: as the name tfd.Independent()
    # says, it just sums the pixel log probabilities.
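  • The block above only builds the graph; nothing is ever fed through the images placeholder, so the decoder samples come from an untrained model. A possible training loop (my own sketch, not from the original post; loading MNIST via tf.keras.datasets and the batch size of 100 are assumptions) that reuses the images, elbo, and optimize ops defined above:

    # Hypothetical training loop for the graph built above.
    (x_train, _), _ = tf.keras.datasets.mnist.load_data()
    x_train = (x_train / 255.0).astype(np.float32)  # scale pixels to [0, 1] for the Bernoulli decoder

    batch_size = 100                                # assumed value
    mean_neg_elbo = tf.reduce_mean(-elbo)           # scalar loss for monitoring

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(10):
            np.random.shuffle(x_train)
            for i in range(0, len(x_train), batch_size):
                sess.run(optimize, feed_dict={images: x_train[i:i + batch_size]})
            print("epoch", epoch, "negative ELBO",
                  sess.run(mean_neg_elbo, feed_dict={images: x_train[:1000]}))
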

4. BNN in TensorFlow

  • Example code below:

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"

    import tensorflow as tf
    import numpy as np

    tfd = tf.contrib.distributions

    # Bayesian NN: place a distribution over the weights of a single linear layer.
    def define_network(images, num_classes=10):
        mean = tf.get_variable('mean', [28 * 28, num_classes])
        stddev = tf.get_variable('stddev', [28 * 28, num_classes])
        prior = tfd.MultivariateNormalDiag(tf.zeros_like(mean), tf.ones_like(stddev))
        posterior = tfd.MultivariateNormalDiag(mean, tf.nn.softplus(stddev))
        bias = tf.get_variable('bias', [num_classes])  # could be Bayesian, too
        # images is assumed to have shape [batch, 28 * 28]
        logit = tf.nn.relu(tf.matmul(images, posterior.sample()) + bias)
        return tfd.Categorical(logit), posterior, prior

    images = None  # to do
    label = None   # to do

    dist, posterior, prior = define_network(images)
    elbo = (tf.reduce_mean(dist.log_prob(label)) -
            tf.reduce_mean(tfd.kl_divergence(posterior, prior)))
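  • Training would again minimize -elbo with an optimizer, just as in the VAE above. Since the weights are a distribution rather than a point estimate, predictions are usually made by averaging over several posterior samples. A sketch of that idea (my own addition; it assumes images has shape [batch, 28 * 28], that define_network is extended to also return bias, and num_samples = 20 is arbitrary):

    # Monte Carlo prediction for the Bayesian layer above: average the softmax
    # over several draws of the weight posterior.
    def predict(images, posterior, bias, num_samples=20):
        probs = []
        for _ in range(num_samples):
            weights = posterior.sample()  # [28 * 28, num_classes]
            probs.append(tf.nn.softmax(tf.matmul(images, weights) + bias))
        return tf.reduce_mean(tf.stack(probs), axis=0)  # [batch, num_classes]
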
  • Reposted from: https://www.cnblogs.com/LS1314/p/10371229.html
