
The MXNet Model API


The MXNet API

The model API in mxnet is not really an API in the strict sense; it is just a wrapper around ndarray that makes it easier to use.

Training a Model

To train a model, follow two steps: first construct the network with symbols, then call model.FeedForward.create to create the model. The following code creates a two-layer neural network.

# configure a two layer neural network
data = mx.symbol.Variable('data')
fc1 = mx.symbol.FullyConnected(data, name='fc1', num_hidden=128)
act1 = mx.symbol.Activation(fc1, name='relu1', act_type='relu')
fc2 = mx.symbol.FullyConnected(act1, name='fc2', num_hidden=64)
softmax = mx.symbol.SoftmaxOutput(fc2, name='sm')

# create a model
model = mx.model.FeedForward.create(
    softmax, X=data_set, num_epoch=num_epoch, learning_rate=0.01)

You can also construct and fit a model in scikit-learn style:

# create a model using sklearn-style two step way
model = mx.model.FeedForward(softmax, num_epoch=num_epoch, learning_rate=0.01)
model.fit(X=data_set)

If you want more functionality, see the Model API Reference.

Saving a Model

# save a model to mymodel-symbol.json and mymodel-0100.params
prefix = 'mymodel'
iteration = 100
model.save(prefix, iteration)

# load model back
model_loaded = mx.model.FeedForward.load(prefix, iteration)

A common workflow is to train in one script, saving the model as a prefix plus a zero-padded iteration number (e.g. mymodel-0100.params), and then load the model in a second script to run prediction.
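A minimal sketch of that second, prediction-only script; test_iter is a placeholder for your own test DataIter, not something from the text:

import mxnet as mx

# load the checkpoint written by the training script
model = mx.model.FeedForward.load('mymodel', 100)

# run prediction on the assumed test DataIter
y_pred = model.predict(test_iter)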

Periodic Checkpointing

Checkpointing the model periodically during training is often necessary. To do so, simply pass the callback do_checkpoint(path) to the training function; training will then automatically checkpoint to the given path at the end of every epoch.

prefix = 'models/chkpt'
model = mx.model.FeedForward.create(
    softmax,
    X=data_set,
    iter_end_callback=mx.callback.do_checkpoint(prefix),
    ...)

You can load the checkpointed model back with FeedForward.load.
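You can also resume training from such a checkpoint. A sketch under stated assumptions: an epoch-50 checkpoint exists, and data_set is your training iterator (both placeholders, not from the text):

import mxnet as mx

# rebuild the model state from the epoch-50 checkpoint
sym, arg_params, aux_params = mx.model.load_checkpoint('models/chkpt', 50)
model = mx.model.FeedForward(
    sym,
    arg_params=arg_params,
    aux_params=aux_params,
    begin_epoch=50,      # continue the epoch count from the checkpoint
    num_epoch=100,
    learning_rate=0.01)  # forwarded to the optimizer
model.fit(X=data_set)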

Using Multiple Devices

Simply set ctx to the list of devices (CPUs, GPUs) you want to train on.

devices = [mx.gpu(i) for i in range(num_device)]
model = mx.model.FeedForward.create(softmax, X=dataset, ctx=devices, ...)

Training will then run in parallel on the GPUs you specified.

Model API

The MXNet Model Module

mxnet.model.BatchEndParam

alias of BatchEndParams

mxnet.model.save_checkpoint(prefix, epoch, symbol, arg_params, aux_params)

Checkpoint the model data into file.

Parameters:
  • prefix (str) – Prefix of model name.
  • epoch (int) – The epoch number of the model.
  • symbol (Symbol) – The input symbol.
  • arg_params (dict of str to NDArray) – Model parameter, dict of name to NDArray of net’s weights.
  • aux_params (dict of str to NDArray) – Model parameter, dict of name to NDArray of net’s auxiliary states.

Notes

  • prefix-symbol.json will be saved for symbol.
  • prefix-epoch.params will be saved for parameters.

Note that prefix may include a directory path, and that an epoch here means one full pass over all the training samples (forward and backward propagation), a coarser unit of measure than a single iteration.

A model's symbol file is uniquely determined, while there may be many params files; you can delete the ones you no longer need. Typically one params file is written per epoch, and since later epochs are usually better, it is often enough to keep only the last one.
mxnet.model.load_checkpoint(prefix, epoch)

Load model checkpoint from file.

Parameters:
  • prefix (str) – Prefix of model name.
  • epoch (int) – Epoch number of model we would like to load.

Returns:
  • symbol (Symbol) – The symbol configuration of computation network.
  • arg_params (dict of str to NDArray) – Model parameter, dict of name to NDArray of net’s weights.
  • aux_params (dict of str to NDArray) – Model parameter, dict of name to NDArray of net’s auxiliary states.
When loading, the epoch argument is usually the largest saved epoch number.
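A round-trip sketch using these two functions directly; softmax, arg_params and aux_params are assumed to come from an already trained network:

import mxnet as mx

# writes mymodel-symbol.json and mymodel-0010.params
mx.model.save_checkpoint('mymodel', 10, softmax, arg_params, aux_params)

# returns the (symbol, arg_params, aux_params) triple described above
sym, arg_params, aux_params = mx.model.load_checkpoint('mymodel', 10)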
class mxnet.model.FeedForward(symbol, ctx=None, num_epoch=None, epoch_size=None, optimizer='sgd', initializer=<mxnet.initializer.Uniform object>, numpy_batch_size=128, arg_params=None, aux_params=None, allow_extra_params=False, begin_epoch=0, **kwargs)

Model class of MXNet for training and predicting feedforward nets. This class is designed for a single-data, single-output supervised network.

Parameters:
  • symbol (Symbol) – The symbol configuration of computation network.
  • ctx (Context or list of Context, optional) – The device context of training and prediction. To use multi-GPU training, pass in a list of GPU contexts.
  • num_epoch (int, optional) – Training parameter, number of training epochs.
  • epoch_size (int, optional) – Number of batches in an epoch. By default, it is set to ceil(num_train_examples / batch_size).
  • optimizer (str or Optimizer, optional) – Training parameter, name or optimizer object for training.
  • initializer (initializer function, optional) – Training parameter, the initialization scheme used.
  • numpy_batch_size (int, optional) – The batch size of training data. Only needed when the input arrays are numpy.
  • arg_params (dict of str to NDArray, optional) – Model parameter, dict of name to NDArray of net’s weights.
  • aux_params (dict of str to NDArray, optional) – Model parameter, dict of name to NDArray of net’s auxiliary states.
  • allow_extra_params (boolean, optional) – Whether to allow extra parameters that are not needed by the symbol to be passed via aux_params and arg_params. If True, no error is thrown when aux_params and arg_params contain more parameters than needed.
  • begin_epoch (int, optional) – The beginning training epoch.
  • kwargs (dict) – The additional keyword arguments passed to the optimizer.
Notes: epoch_size defaults to ceil(num_train_examples / batch_size), i.e. the number of training samples divided by the batch size, rounded up. begin_epoch is the epoch training starts from; every epoch after it will be trained (again). Extra keyword arguments such as learning_rate are forwarded to the optimizer.
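For illustration, a hedged sketch of a constructor call exercising most of these parameters; the two-GPU context and all hyperparameter values are assumptions, not values from the text:

model = mx.model.FeedForward(
    softmax,                                # symbol defined earlier
    ctx=[mx.gpu(0), mx.gpu(1)],             # multi-GPU training
    num_epoch=10,
    optimizer='sgd',
    initializer=mx.initializer.Xavier(),
    numpy_batch_size=128,                   # only used for numpy inputs
    learning_rate=0.1, momentum=0.9, wd=0.0001)  # **kwargs go to the optimizer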
predict(X, num_batch=None, return_data=False, reset=True)

Run the prediction; always uses only one device.

Parameters:
  • X (mxnet.DataIter) – the data to run prediction on.
  • num_batch (int or None) – the number of batches to run. Goes through all batches if None.

Returns:
  y – The predicted value of the output; a numpy.ndarray, or a list of numpy.ndarray if the network has multiple outputs.
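A small usage sketch; test_iter is an assumed DataIter, and the argmax step assumes a softmax output:

y_prob = model.predict(test_iter)   # shape (num_examples, num_classes)
y_hat = y_prob.argmax(axis=1)       # hard class predictions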
score(X, eval_metric='acc', num_batch=None, batch_end_callback=None, reset=True)

Run the model on X and calculate the score with eval_metric.

Parameters:
  • X (mxnet.DataIter) – the data to evaluate on.
  • eval_metric (metric.EvalMetric) – The metric for calculating the score.
  • num_batch (int or None) – the number of batches to run. Goes through all batches if None.

Returns:
  s (float) – the final score.
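Usage sketch, assuming a validation iterator named val_iter:

acc = model.score(val_iter, eval_metric='acc')
print('validation accuracy: %f' % acc)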
fit(X, y=None, eval_data=None, eval_metric='acc', epoch_end_callback=None, batch_end_callback=None, kvstore='local', logger=None, work_load_list=None, monitor=None, eval_batch_end_callback=None)

Fit the model.

Parameters:
  • X (DataIter, or numpy.ndarray/NDArray) – Training data. If X is a DataIter, the name or, if not available, position of its outputs should match the corresponding variable names defined in the symbolic graph.
  • y (numpy.ndarray/NDArray, optional) – Training set label. If X is numpy.ndarray/NDArray, y is required to be set. While y can be 1D or 2D (with the 2nd dimension being 1), its 1st dimension must be the same as X, i.e. the number of data points and labels must be equal.
  • eval_data (DataIter or numpy.ndarray/list/NDArray pair) – If eval_data is a numpy.ndarray/list/NDArray pair, it should be (valid_data, valid_label).
  • eval_metric (metric.EvalMetric or str or callable) – The evaluation metric: the name of an evaluation metric, or a custom evaluation function that returns statistics based on a minibatch.
  • epoch_end_callback (callable(epoch, symbol, arg_params, aux_states)) – A callback that is invoked at the end of each epoch. This can be used to checkpoint the model each epoch.
  • batch_end_callback (callable(epoch)) – A callback that is invoked at the end of each batch, for printing purposes.
  • kvstore (KVStore or str, optional) – The KVStore or a string kvstore type: ‘local’, ‘dist_sync’, ‘dist_async’. Defaults to ‘local’, which usually does not need to be changed on a single machine.
  • logger (logging logger, optional) – When not specified, the default logger will be used.
  • work_load_list (float or int, optional) – The list of work load for different devices, in the same order as ctx.
Notes: eval_data is the validation data, passed as a (valid_data, valid_label) pair (despite the name, it has nothing to do with an eval function). y may be 2D as long as the second dimension is 1, and the number of labels must equal the number of data points. epoch_end_callback is typically used for checkpointing; batch_end_callback is mainly for printing progress. kvstore can usually be left at 'local' on a single machine.
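Putting the common arguments together, a hedged sketch; train_iter, val_iter and the batch size are placeholders:

model.fit(
    X=train_iter,
    eval_data=val_iter,
    eval_metric='acc',
    epoch_end_callback=mx.callback.do_checkpoint('models/chkpt'),
    batch_end_callback=mx.callback.Speedometer(128, 50))  # print speed every 50 batches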
save(prefix, epoch=None)

Checkpoint the model into file. You can also use pickle to do the job if you only work in Python. The advantage of load/save is that the file is language agnostic: a file saved with save can be loaded by mxnet's other language bindings. You also get the benefit of being able to load/save directly from cloud storage (S3, HDFS).

Parameters:
prefix (str) – Prefix of model name.

Notes

  • prefix-symbol.json will be saved for symbol.
  • prefix-epoch.params will be saved for parameters.
static load(prefix, epoch, ctx=None, **kwargs)

Load model checkpoint from file.

Parameters:
  • prefix (str) – Prefix of model name.
  • epoch (int) – Epoch number of model we would like to load.
  • ctx (Context or list of Context, optional) – The device context of training and prediction.
  • kwargs (dict) – Other parameters for model, including num_epoch, optimizer and numpy_batch_size.

Returns:
  model (FeedForward) – The loaded model that can be used for prediction.

Saving and loading are straightforward, so I won't go into them further.
static create(symbol, X, y=None, ctx=None, num_epoch=None, epoch_size=None, optimizer='sgd', initializer=<mxnet.initializer.Uniform object>, eval_data=None, eval_metric='acc', epoch_end_callback=None, batch_end_callback=None, kvstore='local', logger=None, work_load_list=None, eval_batch_end_callback=None, **kwargs)

Functional style to create a model. This function will be more consistent with functional languages such as R, where mutation is not allowed.

Parameters:
  • symbol (Symbol) – The symbol configuration of computation network.
  • X (DataIter) – Training data.
  • y (numpy.ndarray, optional) – If X is numpy.ndarray, y is required to be set.
  • ctx (Context or list of Context, optional) – The device context of training and prediction. To use multi-GPU training, pass in a list of GPU contexts.
  • num_epoch (int, optional) – Training parameter, number of training epochs.
  • epoch_size (int, optional) – Number of batches in an epoch. By default, it is set to ceil(num_train_examples / batch_size).
  • optimizer (str or Optimizer, optional) – Training parameter, name or optimizer object for training.
  • initializer (initializer function, optional) – Training parameter, the initialization scheme used.
  • eval_data (DataIter or numpy.ndarray pair) – If eval_data is a numpy.ndarray pair, it should be (valid_data, valid_label).
  • eval_metric (metric.EvalMetric or str or callable) – The evaluation metric: the name of an evaluation metric, or a custom evaluation function that returns statistics based on a minibatch.
  • epoch_end_callback (callable(epoch, symbol, arg_params, aux_states)) – A callback that is invoked at the end of each epoch. This can be used to checkpoint the model each epoch.
  • batch_end_callback (callable(epoch)) – A callback that is invoked at the end of each batch, for printing purposes.
  • kvstore (KVStore or str, optional) – The KVStore or a string kvstore type: ‘local’, ‘dist_sync’, ‘dist_async’. Defaults to ‘local’, which usually does not need to be changed on a single machine.
  • logger (logging logger, optional) – When not specified, the default logger will be used.
  • work_load_list (list of float or int, optional) – The list of work load for different devices, in the same order as ctx.
Creating a model with this API is much the same as what we saw above.

The APIs below are less commonly used.


Initializer API Reference

class mxnet.initializer.Initializer

Base class for Initializer.

__call__(name, arr)

Override the () operator to perform initialization.

Parameters:
  • name (str) – name of corresponding ndarray
  • arr (NDArray) – ndarray to be initialized
class mxnet.initializer.Load(param, default_init=None, verbose=False)

Initialize by loading a pretrained param from file or dict.

Parameters:
  • param (str or dict of str->NDArray) – param file or dict mapping name to NDArray.
  • default_init (Initializer) – default initializer when name is not found in param.
  • verbose (bool) – log source when initializing.

class mxnet.initializer.Mixed(patterns, initializers)

Initialize with mixed Initializers.

Parameters:
  • patterns (list of str) – list of regular expression patterns to match parameter names.
  • initializers (list of Initializer) – list of Initializers corresponding to patterns.
class mxnet.initializer.Uniform(scale=0.07)

Initialize the weight with uniform [-scale, scale].

Parameters:
scale (float, optional) – The scale of the uniform distribution.

class mxnet.initializer.Normal(sigma=0.01)

Initialize the weight with normal(0, sigma).

Parameters:
sigma (float, optional) – Standard deviation of the gaussian distribution.
class mxnet.initializer.Orthogonal(scale=1.414, rand_type='uniform')

Initialize the weight as an orthogonal matrix.

Parameters:
  • scale (float, optional) – scaling factor of weight
  • rand_type (string, optional) – use “uniform” or “normal” random numbers to initialize the weight

Reference: Exact solutions to the nonlinear dynamics of learning in deep linear neural networks, arXiv preprint.
class mxnet.initializer.Xavier(rnd_type='uniform', factor_type='avg', magnitude=3)

Initialize the weight with Xavier or a similar initialization scheme.

Parameters:
  • rnd_type (str, optional) – Use `gaussian` or `uniform` to init
  • factor_type (str, optional) – Use `avg`, `in`, or `out` to init
  • magnitude (float, optional) – scale of random number range
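For example, an initializer can be passed to FeedForward.create. The gaussian/'in'/magnitude-2 settings below are an assumption, in the style often used for ReLU networks, not a recommendation from the text:

init = mx.initializer.Xavier(rnd_type='gaussian', factor_type='in', magnitude=2)
model = mx.model.FeedForward.create(
    softmax, X=data_set, num_epoch=10, initializer=init)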



Evaluation Metric API

Online evaluation metric module.

mxnet.metric.check_label_shapes(labels, preds, shape=0)

Check to see if the two arrays are the same size.

class mxnet.metric.EvalMetric(name, num=None)

Base class of all evaluation metrics.

update(label, pred)

Update the internal evaluation.

Parameters:
  • labels (list of NDArray) – The labels of the data.
  • preds (list of NDArray) – Predicted values.
reset()

Clear the internal statistics to initial state.

get()

Get the current evaluation result.

Returns:
  • name (str) – Name of the metric.
  • value (float) – Value of the evaluation.
get_name_value()

Get zipped name and value pairs

class mxnet.metric.CompositeEvalMetric(**kwargs)

Manage multiple evaluation metrics.

add(metric)

Add a child metric.

get_metric(index)

Get a child metric.
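A sketch of combining metrics so that both are reported during evaluation:

metric = mx.metric.CompositeEvalMetric()
metric.add(mx.metric.Accuracy())
metric.add(mx.metric.CrossEntropy())
# metric can now be passed wherever an eval_metric is accepted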

class mxnet.metric.Accuracy

Calculate accuracy

class mxnet.metric.TopKAccuracy(**kwargs)

Calculate top-k predictions accuracy

class mxnet.metric.F1

Calculate the F1 score of a binary classification problem.

class mxnet.metric.MAE

Calculate Mean Absolute Error loss

class mxnet.metric.MSE

Calculate Mean Squared Error loss

class mxnet.metric.RMSE

Calculate Root Mean Squared Error loss

class mxnet.metric.CrossEntropy

Calculate Cross Entropy loss

class mxnet.metric.Torch

Dummy metric for torch criterions

class mxnet.metric.CustomMetric(feval, name=None, allow_extra_outputs=False)

Custom evaluation metric that takes an NDArray function.

Parameters:
  • feval (callable(label, pred)) – Customized evaluation function.
  • name (str, optional) – The name of the metric.
  • allow_extra_outputs (bool) – If true, the prediction outputs can have extra outputs. This is useful in RNNs, where the states are also produced in outputs for forwarding.
mxnet.metric.np(numpy_feval, name=None, allow_extra_outputs=False)

Create a customized metric from a numpy function.

Parameters:
  • numpy_feval (callable(label, pred)) – Customized evaluation function.
  • name (str, optional) – The name of the metric.
  • allow_extra_outputs (bool) – If true, the prediction outputs can have extra outputs. This is useful in RNNs, where the states are also produced in outputs for forwarding.
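A sketch of a custom metric written as a numpy function; top1_acc is a hypothetical helper, and the label/pred shapes assume a plain softmax classifier:

def top1_acc(label, pred):
    # label: (n,) true class ids; pred: (n, num_classes) probabilities
    return float((pred.argmax(axis=1) == label).mean())

metric = mx.metric.np(top1_acc, name='top1')
# e.g. model.score(val_iter, eval_metric=metric)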
mxnet.metric.create(metric, **kwargs)

Create an evaluation metric.

Parameters:
metric (str or callable) – The name of the metric, or a function providing statistics given pred, label NDArray.
Optimizer API

Common Optimization algorithms with regularizations.

class mxnet.optimizer.Optimizer(rescale_grad=1.0, param_idx2name=None, wd=0.0, clip_gradient=None, learning_rate=0.01, lr_scheduler=None, sym=None)

Base class of all optimizers.

static register(klass)

Register optimizers to the optimizer factory.

static create_optimizer(name, rescale_grad=1, **kwargs)

Create an optimizer with specified name.

Parameters:
  • name (str) – Name of required optimizer. Should be the name of a subclass of Optimizer. Case insensitive.
  • rescale_grad (float) – Rescaling factor on gradient.
  • kwargs (dict) – Parameters for optimizer.

Returns:
  opt (Optimizer) – The resulting optimizer.

create_state(index, weight)

Create additional optimizer state such as momentum. Override in implementations.

update(index, weight, grad, state)

Update the parameters. Override in implementations.

set_lr_scale(args_lrscale)

set_lr_scale is deprecated. Use set_lr_mult instead.

set_lr_mult(args_lr_mult)

Set individual learning rate multipliers for parameters.

Parameters:
args_lr_mult (dict of string/int to float) – set the lr multiplier for name/index to float. Setting the multiplier by index is supported for backward compatibility, but we recommend using name and symbol.

set_wd_mult(args_wd_mult)

Set individual weight decay multipliers for parameters. By default the wd multiplier is 0 for all params whose name doesn’t end with _weight, if param_idx2name is provided.

Parameters:
args_wd_mult (dict of string/int to float) – set the wd multiplier for name/index to float. Setting the multiplier by index is supported for backward compatibility, but we recommend using name and symbol.
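A sketch of these multipliers in use; the parameter names fc1_weight and fc2_bias are assumptions about how the network's layers were named:

opt = mx.optimizer.SGD(learning_rate=0.1, momentum=0.9)
opt.set_lr_mult({'fc1_weight': 0.1})  # train fc1 weights 10x slower
opt.set_wd_mult({'fc2_bias': 0.0})    # no weight decay on the fc2 bias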
mxnet.optimizer.register(klass)

Register optimizers to the optimizer factory.

class mxnet.optimizer.SGD(momentum=0.0, **kwargs)

A very simple SGD optimizer with momentum and weight regularization.

Parameters:
  • learning_rate (float, optional) – learning rate of SGD
  • momentum (float, optional) – momentum value
  • wd (float, optional) – L2 regularization coefficient added to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]
  • param_idx2name (dict of string/int to float, optional) – special-cases weight decay for parameters whose names end with bias, gamma, or beta
create_state(index, weight)

Create additional optimizer state such as momentum.

Parameters:
weight (NDArray) – The weight data

update(index, weight, grad, state)

Update the parameters.

Parameters:
  • index (int) – A unique integer key used to index the parameters
  • weight (NDArray) – weight ndarray
  • grad (NDArray) – grad ndarray
  • state (NDArray or other objects returned by init_state) – The auxiliary state used in optimization.
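A sketch of constructing an SGD optimizer explicitly and handing it to the model API; the hyperparameter values, including rescaling by an assumed batch size of 128, are illustrative only:

opt = mx.optimizer.SGD(
    learning_rate=0.1,
    momentum=0.9,
    wd=0.0001,
    rescale_grad=1.0 / 128,  # e.g. average gradients over the batch
    clip_gradient=10.0)
model = mx.model.FeedForward.create(softmax, X=data_set, num_epoch=10, optimizer=opt)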
class mxnet.optimizer.NAG(**kwargs)

SGD with Nesterov momentum. It is implemented according to https://github.com/torch/optim/blob/master/sgd.lua

update(index, weight, grad, state)

Update the parameters.

Parameters:
  • index (int) – A unique integer key used to index the parameters
  • weight (NDArray) – weight ndarray
  • grad (NDArray) – grad ndarray
  • state (NDArray or other objects returned by init_state) – The auxiliary state used in optimization.
class mxnet.optimizer.SGLD(**kwargs)

Stochastic Gradient Langevin Dynamics updater to sample from a distribution.

Parameters:
  • learning_rate (float, optional) – learning rate of SGD
  • wd (float, optional) – L2 regularization coefficient added to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]
  • param_idx2name (dict of string/int to float, optional) – special-cases weight decay for parameters whose names end with bias, gamma, or beta

create_state(index, weight)

Create additional optimizer state such as momentum.

Parameters:
weight (NDArray) – The weight data

update(index, weight, grad, state)

Update the parameters.

Parameters:
  • index (int) – A unique integer key used to index the parameters
  • weight (NDArray) – weight ndarray
  • grad (NDArray) – grad ndarray
  • state (NDArray or other objects returned by init_state) – The auxiliary state used in optimization.
class mxnet.optimizer.ccSGD(momentum=0.0, **kwargs)

A very simple SGD optimizer with momentum and weight regularization. Implemented in C++.

Parameters:
  • learning_rate (float, optional) – learning rate of SGD
  • momentum (float, optional) – momentum value
  • wd (float, optional) – L2 regularization coefficient added to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]

update(index, weight, grad, state)

Update the parameters.

Parameters:
  • index (int) – A unique integer key used to index the parameters
  • weight (NDArray) – weight ndarray
  • grad (NDArray) – grad ndarray
  • state (NDArray or other objects returned by init_state) – The auxiliary state used in optimization.
class mxnet.optimizer.Adam(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, decay_factor=0.99999999, **kwargs)

Adam optimizer as described in [King2014].

[King2014] Diederik Kingma, Jimmy Ba, Adam: A Method for Stochastic Optimization, http://arxiv.org/abs/1412.6980

The code in this class was adapted from https://github.com/mila-udem/blocks/blob/master/blocks/algorithms/__init__.py#L765

Parameters:
  • learning_rate (float, optional) – Step size. Default value is set to 0.001.
  • beta1 (float, optional) – Exponential decay rate for the first moment estimates. Default value is set to 0.9.
  • beta2 (float, optional) – Exponential decay rate for the second moment estimates. Default value is set to 0.999.
  • epsilon (float, optional) – Default value is set to 1e-8.
  • decay_factor (float, optional) – Default value is set to 1 - 1e-8.
  • wd (float, optional) – L2 regularization coefficient added to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]

create_state(index, weight)

Create additional optimizer state: mean, variance.

Parameters:
weight (NDArray) – The weight data

update(index, weight, grad, state)

Update the parameters.

Parameters:
  • index (int) – A unique integer key used to index the parameters
  • weight (NDArray) – weight ndarray
  • grad (NDArray) – grad ndarray
  • state (NDArray or other objects returned by init_state) – The auxiliary state used in optimization.
class mxnet.optimizer.AdaGrad(eps=1e-07, **kwargs)

AdaGrad optimizer of Duchi et al., 2011.

This code follows the version in http://arxiv.org/pdf/1212.5701v1.pdf Eq(5) by Matthew D. Zeiler, 2012. AdaGrad will help the network converge faster in some cases.

Parameters:
  • learning_rate (float, optional) – Step size. Default value is set to 0.05.
  • wd (float, optional) – L2 regularization coefficient added to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • eps (float, optional) – A small float number to make the update process stable. Default value is set to 1e-7.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]
class mxnet.optimizer.RMSProp(gamma1=0.95, gamma2=0.9, **kwargs)

RMSProp optimizer of Tieleman & Hinton, 2012.

This code follows the version in http://arxiv.org/pdf/1308.0850v5.pdf Eq(38) - Eq(45) by Alex Graves, 2013.

Parameters:
  • learning_rate (float, optional) – Step size. Default value is set to 0.002.
  • gamma1 (float, optional) – decay factor of the moving average for gradient and gradient^2. Default value is set to 0.95.
  • gamma2 (float, optional) – “momentum” factor. Default value is set to 0.9.
  • wd (float, optional) – L2 regularization coefficient added to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]

create_state(index, weight)

Create additional optimizer state: mean, variance.

Parameters:
weight (NDArray) – The weight data

update(index, weight, grad, state)

Update the parameters.

Parameters:
  • index (int) – A unique integer key used to index the parameters
  • weight (NDArray) – weight ndarray
  • grad (NDArray) – grad ndarray
  • state (NDArray or other objects returned by init_state) – The auxiliary state used in optimization.
class mxnet.optimizer.AdaDelta(rho=0.9, epsilon=1e-05, **kwargs)

AdaDelta optimizer as described in Zeiler, M. D. (2012). ADADELTA: An adaptive learning rate method. http://arxiv.org/abs/1212.5701

Parameters:
  • rho (float) – Decay rate for both squared gradients and delta x
  • epsilon (float) – The constant as described in the paper
  • wd (float) – L2 regularization coefficient added to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]
class mxnet.optimizer.Test(**kwargs)

For test use.

create_state(index, weight)

Create a state to duplicate weight.

update(index, weight, grad, state)

Performs w += rescale_grad * grad.

mxnet.optimizer.create(name, rescale_grad=1, **kwargs)

Create an optimizer with specified name.

Parameters:
  • name (str) – Name of required optimizer. Should be the name of a subclass of Optimizer. Case insensitive.
  • rescale_grad (float) – Rescaling factor on gradient.
  • kwargs (dict) – Parameters for optimizer.

Returns:
  opt (Optimizer) – The resulting optimizer.

mxnet.optimizer.get_updater(optimizer)

Return a closure of the updater needed for kvstore.

Parameters:
optimizer (Optimizer) – The optimizer

Returns:
updater (function) – The closure of the updater.
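A minimal sketch of driving an optimizer by hand through get_updater; the toy weight and gradient arrays are placeholders:

import mxnet as mx

opt = mx.optimizer.create('sgd', learning_rate=0.1)
updater = mx.optimizer.get_updater(opt)

weight = mx.nd.ones((2, 2))
grad = mx.nd.ones((2, 2))
updater(0, grad, weight)  # index, grad, weight: applies one SGD step in place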


