

python sklearn.neural_network.MLPClassifier(): four ways to change the complexity of a neural network model

Published: 2025/3/19
  • MLPClassifier(): four ways to change model complexity (a minimal sketch follows the official docstring below)
  • adjust the number of nodes in each hidden layer of the neural network
  • adjust the number of hidden layers of the neural network
  • adjust the activation function
  • adjust the alpha value to change the strength of regularization (a larger alpha lowers model complexity and makes the model simpler)
  • Official doc:

    Init signature:
    MLPClassifier(
        hidden_layer_sizes=(100,), activation='relu', solver='adam',
        alpha=0.0001, batch_size='auto', learning_rate='constant',
        learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True,
        random_state=None, tol=0.0001, verbose=False, warm_start=False,
        momentum=0.9, nesterovs_momentum=True, early_stopping=False,
        validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08,
        n_iter_no_change=10,
    )

    Docstring:
    Multi-layer Perceptron classifier.

    This model optimizes the log-loss function using LBFGS or stochastic
    gradient descent.

    .. versionadded:: 0.18

    Parameters
    ----------
    hidden_layer_sizes : tuple, length = n_layers - 2, default (100,)
        The ith element represents the number of neurons in the ith
        hidden layer.
    activation : {'identity', 'logistic', 'tanh', 'relu'}, default 'relu'
        Activation function for the hidden layer.
        - 'identity', no-op activation, useful to implement linear bottleneck,
          returns f(x) = x
        - 'logistic', the logistic sigmoid function,
          returns f(x) = 1 / (1 + exp(-x)).
        - 'tanh', the hyperbolic tan function,
          returns f(x) = tanh(x).
        - 'relu', the rectified linear unit function,
          returns f(x) = max(0, x)
    solver : {'lbfgs', 'sgd', 'adam'}, default 'adam'
        The solver for weight optimization.
        - 'lbfgs' is an optimizer in the family of quasi-Newton methods.
        - 'sgd' refers to stochastic gradient descent.
        - 'adam' refers to a stochastic gradient-based optimizer proposed
          by Kingma, Diederik, and Jimmy Ba
        Note: The default solver 'adam' works pretty well on relatively
        large datasets (with thousands of training samples or more) in terms of
        both training time and validation score.
        For small datasets, however, 'lbfgs' can converge faster and perform
        better.
    alpha : float, optional, default 0.0001
        L2 penalty (regularization term) parameter.
    batch_size : int, optional, default 'auto'
        Size of minibatches for stochastic optimizers.
        If the solver is 'lbfgs', the classifier will not use minibatch.
        When set to "auto", `batch_size=min(200, n_samples)`
    learning_rate : {'constant', 'invscaling', 'adaptive'}, default 'constant'
        Learning rate schedule for weight updates.
        - 'constant' is a constant learning rate given by
          'learning_rate_init'.
        - 'invscaling' gradually decreases the learning rate at each
          time step 't' using an inverse scaling exponent of 'power_t'.
          effective_learning_rate = learning_rate_init / pow(t, power_t)
        - 'adaptive' keeps the learning rate constant to
          'learning_rate_init' as long as training loss keeps decreasing.
          Each time two consecutive epochs fail to decrease training loss by at
          least tol, or fail to increase validation score by at least tol if
          'early_stopping' is on, the current learning rate is divided by 5.
        Only used when ``solver='sgd'``.
    learning_rate_init : double, optional, default 0.001
        The initial learning rate used. It controls the step-size
        in updating the weights. Only used when solver='sgd' or 'adam'.
    power_t : double, optional, default 0.5
        The exponent for inverse scaling learning rate.
        It is used in updating effective learning rate when the learning_rate
        is set to 'invscaling'. Only used when solver='sgd'.
    max_iter : int, optional, default 200
        Maximum number of iterations. The solver iterates until convergence
        (determined by 'tol') or this number of iterations. For stochastic
        solvers ('sgd', 'adam'), note that this determines the number of epochs
        (how many times each data point will be used), not the number of
        gradient steps.
    shuffle : bool, optional, default True
        Whether to shuffle samples in each iteration. Only used when
        solver='sgd' or 'adam'.
    random_state : int, RandomState instance or None, optional, default None
        If int, random_state is the seed used by the random number generator;
        If RandomState instance, random_state is the random number generator;
        If None, the random number generator is the RandomState instance used
        by `np.random`.
    tol : float, optional, default 1e-4
        Tolerance for the optimization. When the loss or score is not improving
        by at least ``tol`` for ``n_iter_no_change`` consecutive iterations,
        unless ``learning_rate`` is set to 'adaptive', convergence is
        considered to be reached and training stops.
    verbose : bool, optional, default False
        Whether to print progress messages to stdout.
    warm_start : bool, optional, default False
        When set to True, reuse the solution of the previous
        call to fit as initialization, otherwise, just erase the
        previous solution. See :term:`the Glossary <warm_start>`.
    momentum : float, default 0.9
        Momentum for gradient descent update. Should be between 0 and 1. Only
        used when solver='sgd'.
    nesterovs_momentum : boolean, default True
        Whether to use Nesterov's momentum. Only used when solver='sgd' and
        momentum > 0.
    early_stopping : bool, default False
        Whether to use early stopping to terminate training when validation
        score is not improving. If set to true, it will automatically set
        aside 10% of training data as validation and terminate training when
        validation score is not improving by at least tol for
        ``n_iter_no_change`` consecutive epochs. The split is stratified,
        except in a multilabel setting.
        Only effective when solver='sgd' or 'adam'
    validation_fraction : float, optional, default 0.1
        The proportion of training data to set aside as validation set for
        early stopping. Must be between 0 and 1.
        Only used if early_stopping is True
    beta_1 : float, optional, default 0.9
        Exponential decay rate for estimates of first moment vector in adam,
        should be in [0, 1). Only used when solver='adam'
    beta_2 : float, optional, default 0.999
        Exponential decay rate for estimates of second moment vector in adam,
        should be in [0, 1). Only used when solver='adam'
    epsilon : float, optional, default 1e-8
        Value for numerical stability in adam. Only used when solver='adam'
    n_iter_no_change : int, optional, default 10
        Maximum number of epochs to not meet ``tol`` improvement.
        Only effective when solver='sgd' or 'adam'

        .. versionadded:: 0.20

    Attributes
    ----------
    classes_ : array or list of array of shape (n_classes,)
        Class labels for each output.
    loss_ : float
        The current loss computed with the loss function.
    coefs_ : list, length n_layers - 1
        The ith element in the list represents the weight matrix corresponding
        to layer i.
    intercepts_ : list, length n_layers - 1
        The ith element in the list represents the bias vector corresponding to
        layer i + 1.
    n_iter_ : int,
        The number of iterations the solver has ran.
    n_layers_ : int
        Number of layers.
    n_outputs_ : int
        Number of outputs.
    out_activation_ : string
        Name of the output activation function.

    Notes
    -----
    MLPClassifier trains iteratively since at each time step the partial
    derivatives of the loss function with respect to the model parameters
    are computed to update the parameters.

    It can also have a regularization term added to the loss function that
    shrinks model parameters to prevent overfitting.

    This implementation works with data represented as dense numpy arrays or
    sparse scipy arrays of floating point values.

    References
    ----------
    Hinton, Geoffrey E. "Connectionist learning procedures."
        Artificial intelligence 40.1 (1989): 185-234.
    Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of
        training deep feedforward neural networks." International Conference
        on Artificial Intelligence and Statistics. 2010.
    He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level
        performance on imagenet classification." arXiv preprint
        arXiv:1502.01852 (2015).
    Kingma, Diederik, and Jimmy Ba. "Adam: A method for stochastic
        optimization." arXiv preprint arXiv:1412.6980 (2014).

    File:        c:\users\huawei\appdata\local\programs\python\python36\lib\site-packages\sklearn\neural_network\multilayer_perceptron.py
    Type:        ABCMeta
    Subclasses:

    Summary

    The above is the full content on python sklearn.neural_network.MLPClassifier() and the four ways to change the complexity of a neural network model; hopefully it helps you solve the problem you ran into.
