TENSORFLOW GUIDE: EXPONENTIAL MOVING AVERAGE FOR IMPROVED CLASSIFICATION

Published: 2025/3/15

Parameter Selection via Exponential Moving Average

When training a classifier via gradient descent, we update the current classifier's parameters $\theta$ via

$$\theta_{t+1} = \theta_t + \alpha \Delta\theta_t,$$

where $\theta_t$ is the current state of the parameters and $\Delta\theta_t$ is the update step proposed by your favorite optimizer. Often, after $N$ iterations, we simply stop the optimization procedure (where $N$ is chosen using some sort of decision rule) and use $\theta_N$ as our trained classifier's parameters.
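As a toy illustration of this update rule (a sketch of my own, not code from the post), here is plain-Python gradient descent on $f(\theta) = \theta^2$, where the optimizer's proposed step is $\Delta\theta_t = -\nabla f(\theta_t)$:

```python
# Minimal gradient-descent sketch: minimize f(theta) = theta^2,
# so grad f(theta) = 2 * theta, and the "optimizer" proposes delta = -grad.

def train(theta0, alpha, num_steps):
    theta = theta0
    trajectory = [theta]
    for _ in range(num_steps):
        grad = 2.0 * theta           # gradient of f(theta) = theta^2
        delta = -grad                # update step proposed by the optimizer
        theta = theta + alpha * delta
        trajectory.append(theta)
    return trajectory

trajectory = train(theta0=1.0, alpha=0.1, num_steps=50)
print(trajectory[-1])  # theta_N, close to the minimizer 0.0
```

Stopping here and keeping `trajectory[-1]` is exactly the "use $\theta_N$" strategy described above.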

However, we often observe empirically that a post-processing step can be applied to improve the classifier's performance. One such example is Polyak averaging. A closely related and quite popular procedure is to take an exponential moving average (EMA) of the optimization trajectory $(\theta_n)$,

$$\theta_{\text{ema}} = (1 - \lambda) \sum_{i=0}^{N} \lambda^i \theta_{N-i},$$

where $\lambda \in [0, 1)$ is the decay rate or momentum of the EMA. It's a simple modification to the optimization procedure that often yields better generalization than simply selecting $\theta_N$, and has also been used quite effectively in semi-supervised learning.
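In practice, the weighted sum above is never materialized; it is equivalent to the cheap recursive update $\theta_{\text{ema}} \leftarrow \lambda\,\theta_{\text{ema}} + (1 - \lambda)\,\theta_t$ applied at every step, starting from a zero-initialized accumulator. A quick pure-Python check of that equivalence (my own illustration, not code from the post):

```python
# The EMA weighted sum and the recursive update compute the same quantity.

def ema_weighted_sum(thetas, lam):
    # theta_ema = (1 - lam) * sum_{i=0}^{N} lam^i * theta_{N-i}
    N = len(thetas) - 1
    return (1 - lam) * sum(lam ** i * thetas[N - i] for i in range(N + 1))

def ema_recursive(thetas, lam):
    # theta_ema <- lam * theta_ema + (1 - lam) * theta_t, zero-initialized
    ema = 0.0
    for theta in thetas:
        ema = lam * ema + (1 - lam) * theta
    return ema

thetas = [1.0, 2.0, 3.0, 4.0]
a = ema_weighted_sum(thetas, lam=0.9)
b = ema_recursive(thetas, lam=0.9)
print(a, b)  # the two forms agree
```

This recursive form is what `tf.train.ExponentialMovingAverage` implements under the hood, which is why a single extra op per training step suffices.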

Implementation-wise, the best way to apply EMA to a classifier is to use the built-in tf.train.ExponentialMovingAverage class. However, the documentation doesn't provide a guide for how to cleanly use tf.train.ExponentialMovingAverage to construct an EMA classifier. Since I've been playing with EMA recently, I thought it would be helpful to write a gentle guide to implementing an EMA classifier in TensorFlow.

Understanding tf.train.ExponentialMovingAverage

For those who wish to dive straight into the full codebase, you can find it here. For self-containedness, let's start with the code that constructs the classifier.

def classifier(x, phase, scope='class', reuse=None, internal_update=False, getter=None):
    with tf.variable_scope(scope, reuse=reuse, custom_getter=getter):
        with arg_scope([leaky_relu], a=0.1), \
             arg_scope([conv2d, dense], activation=leaky_relu, bn=True, phase=phase), \
             arg_scope([batch_norm], internal_update=internal_update):

            x = conv2d(x, 96, 3, 1)
            x = conv2d(x, 96, 3, 1)
            x = conv2d(x, 96, 3, 1)
            x = max_pool(x, 2, 2)
            x = dropout(x, training=phase)

            x = conv2d(x, 192, 3, 1)
            x = conv2d(x, 192, 3, 1)
            x = conv2d(x, 192, 3, 1)
            x = max_pool(x, 2, 2)
            x = dropout(x, training=phase)

            x = conv2d(x, 192, 3, 1)
            x = conv2d(x, 192, 3, 1)
            x = conv2d(x, 192, 3, 1)
            x = avg_pool(x, global_pool=True)

            x = dense(x, 10, activation=None)

        return x

Here, I use a fairly standard CNN architecture. The first thing to note is the use of variable scoping. This puts all of the classifier's variables within the scope class/. To create the classifier, simply call

train_y_pred = classifier(train_x, phase=True, internal_update=True)

Once the classifier is created in the computational graph, variable scoping allows for easy access of the classifier’s trainable variables via

# Get a list of the classifier's trainable variables
var_class = tf.get_collection('trainable_variables', 'class')
ema = tf.train.ExponentialMovingAverage(decay=0.998)
ema_op = ema.apply(var_class)

After getting the list of trainable variables via tf.get_collection, we use ema.apply, which serves two purposes. First, it constructs an auxiliary "shadow" variable for each corresponding variable in var_class to hold the exponential moving average. Next, it returns a TensorFlow op which updates the EMA variables. The object ema can then access the EMA via the function ema.average
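Conceptually, each run of ema_op performs, for every tracked variable, shadow ← decay · shadow + (1 − decay) · var. The following plain-Python sketch mimics that bookkeeping so the mechanics are visible; the class and method names are my own, not the TF API:

```python
# Plain-Python sketch of the bookkeeping behind ema.apply / ema_op / ema.average.

class SimpleEMA:
    def __init__(self, decay):
        self.decay = decay
        self.shadow = {}            # variable name -> EMA ("shadow") value

    def apply(self, variables):
        # Create one shadow value per tracked variable,
        # initialized to the variable's current value.
        for name, value in variables.items():
            self.shadow[name] = value

    def update(self, variables):
        # What running the returned update op does after each training step.
        d = self.decay
        for name, value in variables.items():
            self.shadow[name] = d * self.shadow[name] + (1 - d) * value

    def average(self, name):
        # Analogue of ema.average(var): look up the shadow value.
        return self.shadow[name]

params = {'w': 0.0}
ema = SimpleEMA(decay=0.5)
ema.apply(params)                  # shadow['w'] initialized to 0.0
params['w'] = 1.0                  # a training step moves the variable
ema.update(params)                 # shadow['w'] = 0.5 * 0.0 + 0.5 * 1.0
print(ema.average('w'))            # 0.5
```

With decay=0.998 as in the snippet above, the shadow variables lag the raw parameters by roughly the last few hundred steps' worth of averaging.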

# Demonstration of ema.average
var_ema_at_index_0 = ema.average(var_class[0])

Populating the Classifier with the EMA Variables

So far, we've figured out how to create the EMA variables and how to access them. But what's the easiest way to make the classifier use the EMA variables? Here, we leverage the custom_getter argument that appears in tf.variable_scope. According to the documentation, whenever you call tf.get_variable, the default getter retrieves an existing tensor according to the variable's name. However, a custom getter can be applied to change the tensor that is returned by tf.get_variable.
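Stripped of the framework, a custom getter is just a function that wraps the default getter and may return a substitute value. A toy, framework-free sketch of the idea (all names here are hypothetical, chosen for illustration only):

```python
# Toy sketch of the custom-getter pattern (not the TF API itself).

store = {'class/w': 10.0}           # "trainable" variable values
ema_store = {'class/w': 9.5}        # EMA shadow values

def default_getter(name):
    # The default behavior: look up the variable by name.
    return store[name]

def ema_getter(getter, name):
    # A custom getter: return the EMA value if one exists,
    # otherwise fall back to whatever the wrapped getter returns.
    return ema_store.get(name, getter(name))

def get_variable(name, custom_getter=None):
    # Mimics tf.get_variable: route through the custom getter when provided.
    if custom_getter is not None:
        return custom_getter(default_getter, name)
    return default_getter(name)

print(get_variable('class/w'))                            # 10.0 (raw value)
print(get_variable('class/w', custom_getter=ema_getter))  # 9.5 (EMA value)
```

The same classifier-building code can therefore produce either the raw-parameter graph or the EMA graph, depending solely on which getter is threaded through the variable scope.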

To construct the custom getter, locally define ema_getter after you've already created the ema object

def ema_getter(getter, name, *args, **kwargs):
    var = getter(name, *args, **kwargs)
    ema_var = ema.average(var)
    # Fall back to the original variable if no EMA shadow exists for it
    return ema_var if ema_var is not None else var

To apply the EMA classifier during test time, we simply call classifier again, this time with the custom ema_getter

test_y_pred = classifier(test_x, phase=False, internal_update=False, getter=ema_getter)

And that’s it! We can now verify that applying EMA does in fact improve the performance of the classifier on the CIFAR-10 test data set.

You can find the full code for training the CIFAR-10 classifier below.


CODE ON GITHUB


http://ruishu.io/2017/11/22/ema/
