The difference between stacking and blending

發(fā)布時(shí)間:2025/3/19 22 豆豆
生活随笔 收集整理的這篇文章主要介紹了 stacking与blending的区别 小編覺得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

網(wǎng)上通用的解釋:

Stacking uses k-fold cross-validation. The meta-model's training data is the same size as the base models' training data, because an out-of-fold meta-feature is generated for every training sample; the model that produces each sample's meta-feature differs (whatever k is, that is how many copies of each base model get trained). To generate meta-features for the test set, the predictions of the k fold models are averaged (k refers to folds here, not distinct model types).
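As a concrete illustration, here is a minimal sketch of the out-of-fold scheme just described. The function name and variables are hypothetical; it assumes numpy arrays and a scikit-learn-style base model, and uses raw `predict` outputs as meta-features (for classifiers one would typically use `predict_proba` instead):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.base import clone

def stacking_meta_features(base_model, X_train, y_train, X_test, k=5):
    """Generate one meta-feature column via k-fold out-of-fold predictions."""
    kf = KFold(n_splits=k, shuffle=True, random_state=0)
    train_meta = np.zeros(len(X_train))      # one meta-feature per training sample
    test_meta = np.zeros((k, len(X_test)))   # k predictions per test sample

    for i, (fit_idx, oof_idx) in enumerate(kf.split(X_train)):
        model = clone(base_model)            # a fresh copy of the base model per fold
        model.fit(X_train[fit_idx], y_train[fit_idx])
        # out-of-fold predictions become the meta-features for the training set
        train_meta[oof_idx] = model.predict(X_train[oof_idx])
        # each fold's model also predicts the full test set
        test_meta[i] = model.predict(X_test)

    # the k test-set predictions are averaged into a single meta-feature
    return train_meta, test_meta.mean(axis=0)
```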

Blending uses a holdout: the training set is split directly into two parts, and only the holdout portion (say, 10%) is used to train the meta-model.
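A corresponding sketch for blending, again with hypothetical names, assuming binary classifiers that support `predict_proba` and a 10% holdout:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def blending(base_models, meta_model, X, y, X_test, holdout=0.1):
    """Train base models on 90% of the data; train the meta-model on the 10% holdout."""
    X_base, X_hold, y_base, y_hold = train_test_split(
        X, y, test_size=holdout, random_state=0)

    hold_meta, test_meta = [], []
    for model in base_models:
        model.fit(X_base, y_base)  # base models see only the 90% split
        hold_meta.append(model.predict_proba(X_hold)[:, 1])  # meta-features for the holdout
        test_meta.append(model.predict_proba(X_test)[:, 1])  # meta-features for the test set

    # the meta-model is trained only on the holdout's meta-features
    meta_model.fit(np.column_stack(hold_meta), y_hold)
    return meta_model.predict(np.column_stack(test_meta))
```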

For now, let's use this distinction.

Now let's look at the answer on Quora:


Stacking and Blending are two similar approaches to combining classifiers (ensembling).

First of all, let me refer you to the Kaggle Ensembling Guide. I believe it is very simple and easy to understand (easier than the paper).

The difference is that Stacking uses out-of-fold predictions for the train set, and Blending uses a validation set (let's say, 10% of the training set) to train the next layer.

Ensembling

Ensembling approaches train several classifiers in the hope that combining their predictions will outperform any single classifier (worst case scenario, be better than the worst classifier). The combination rule can be majority vote, mean, max, min, product, etc.; the average rule is the most used.
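For instance, a minimal sketch of the average rule applied to predicted probabilities from three classifiers (the values are made up for illustration):

```python
import numpy as np

# positive-class probabilities from three classifiers over three samples
preds = np.array([
    [0.9, 0.2, 0.6],   # classifier 1
    [0.8, 0.4, 0.5],   # classifier 2
    [0.7, 0.1, 0.9],   # classifier 3
])

avg = preds.mean(axis=0)           # average rule: [0.8, 0.233, 0.667]
labels = (avg >= 0.5).astype(int)  # final prediction: [1, 0, 1]
```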

Blending and Stacking

As said before, blending and stacking are two very similar approaches. In fact, some people use the terms as synonyms. Such approaches train a first layer of classifiers and use their outputs (i.e. probabilities) to train a second layer of classifiers. Any number of layers can be used. The final prediction is usually performed by the average rule or by a final base classifier (such as Logistic Regression in binary classification).

Figure from the Kaggle Ensembling Guide

You can't (or shouldn't) use the training set itself to pass to the next layer. For this reason, there are rules such as using k-fold cross-validation (the out-of-fold predictions are used to train the next layer), which is Stacking, or using a holdout validation (part of the training set is used for the first layer, another part for the second), which is Blending.

Keep in Mind

Keep in mind that, even though the examples from the Kaggle Ensembling Guide show the same base classifier (XGB) several times, the classifiers must be diverse enough for ensembling to produce good results. This might be accomplished by using different base classifiers, training on different features, training on different parts of the training set, or using different parameters, as in the sketch below.
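To make the diversity point concrete, here is a hedged sketch using scikit-learn's built-in `StackingClassifier`, which implements the out-of-fold scheme described above; the choice of base models and parameters is illustrative, not prescriptive:

```python
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

# a diverse first layer: tree ensemble, kernel method, instance-based method
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(),  # second layer, as in the text
    cv=5,                                  # 5-fold out-of-fold predictions
)
# usage: stack.fit(X_train, y_train); stack.predict(X_test)
```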

Now let's look at the CSDN explanation:

Part of the CSDN content is reproduced from: https://blog.csdn.net/maqunfi/article/details/82220115

And finally, a very nice diagram (figure not reproduced in this copy):

Reproduced from: https://blog.csdn.net/weixin_38526306/article/details/81356325
