
Complete Guide to Parameter Tuning in XGBoost (with codes in Python)


Introduction

If things don’t go your way in predictive modeling, use XGBoost. The XGBoost algorithm has become the ultimate weapon of many data scientists. It’s a highly sophisticated algorithm, powerful enough to deal with all sorts of irregularities in data.

Building a model using XGBoost is easy. But improving the model with XGBoost is difficult (at least I struggled a lot). This algorithm uses multiple parameters, so to improve the model, parameter tuning is a must. It is very difficult to get answers to practical questions like: Which set of parameters should you tune? What is the ideal value of these parameters to obtain optimal output?

This article is best suited to people who are new to XGBoost. In this article, we’ll learn the art of parameter tuning along with some useful information about XGBoost. We’ll also practice this algorithm using a data set in Python.


What should you know?

XGBoost (eXtreme Gradient Boosting) is an advanced implementation of the gradient boosting algorithm. Since I covered Gradient Boosting Machine in detail in my previous article – Complete Guide to Parameter Tuning in Gradient Boosting (GBM) in Python – I highly recommend going through that before reading further. It will help you bolster your understanding of boosting in general and of parameter tuning for GBM.

Special Thanks: Personally, I would like to acknowledge the timeless support provided by Mr. Sudalai Rajkumar (aka SRK), currently AV Rank 2. This article wouldn’t have been possible without his help; he helps guide thousands of data scientists. A big thanks to SRK!


Table of Contents

  • The XGBoost Advantage
  • Understanding XGBoost Parameters
  • Tuning Parameters (with Example)

    1. The XGBoost Advantage

    I’ve always admired the boosting capabilities that this algorithm infuses into a predictive model. When I explored more about its performance and the science behind its high accuracy, I discovered many advantages:

  • Regularization:
    • The standard GBM implementation has no regularization like XGBoost has; the regularization therefore also helps XGBoost reduce overfitting.
    • In fact, XGBoost is also known as a ‘regularized boosting’ technique.
  • Parallel Processing:
    • XGBoost implements parallel processing and is blazingly fast compared to GBM.
    • But hang on, we know that boosting is a sequential process, so how can it be parallelized? We know that each tree can be built only after the previous one, so what stops us from making a tree using all cores? I hope you get where I’m coming from. Check this link out to explore further.
    • XGBoost also supports implementation on Hadoop.
  • High Flexibility
    • XGBoost allows users to define custom optimization objectives and evaluation criteria.
    • This adds a whole new dimension to the model and there is no limit to what we can do.
  • Handling Missing Values
    • XGBoost has an in-built routine to handle missing values.
    • The user is required to supply a value different from other observations and pass that as a parameter. XGBoost tries different things as it encounters a missing value on each node and learns which path to take for missing values in the future (see the short sketch after this list).
  • Tree Pruning:
    • A GBM would stop splitting a node when it encounters a negative loss in the split. Thus it is more of a greedy algorithm.
    • XGBoost, on the other hand, makes splits up to the max_depth specified and then starts pruning the tree backwards, removing splits beyond which there is no positive gain.
    • Another advantage is that sometimes a split of negative loss, say -2, may be followed by a split of positive loss +10. GBM would stop as it encounters the -2. But XGBoost will go deeper, see the combined effect of +8 for the splits, and keep both.
  • Built-in Cross-Validation
    • XGBoost allows the user to run cross-validation at each iteration of the boosting process, and thus it is easy to get the exact optimum number of boosting iterations in a single run.
    • This is unlike GBM, where we have to run a grid search and only a limited number of values can be tested.
  • Continue on Existing Model
    • The user can start training an XGBoost model from the last iteration of a previous run. This can be a significant advantage in certain specific applications.
    • The GBM implementation of sklearn also has this feature, so they are even on this point.
    I hope you now understand the sheer power of the XGBoost algorithm. Note that these are the points which I could muster; do you know a few more? Feel free to drop a comment below and I will update the list.
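
    As a small illustration of the missing-value handling described above, here is a minimal, hedged sketch using the native DMatrix interface; the data is synthetic and the parameter values are arbitrary, chosen only for demonstration:

    #A minimal sketch of XGBoost's missing-value handling (synthetic data).
    #The `missing` argument tells XGBoost which placeholder marks a missing
    #entry; at each split, a default direction is learned for such values.
    import numpy as np
    import xgboost as xgb

    X = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, np.nan], [5.0, 6.0]])
    y = np.array([0, 1, 0, 1])
    dtrain = xgb.DMatrix(X, label=y, missing=np.nan)
    params = {'objective': 'binary:logistic', 'max_depth': 2}
    bst = xgb.train(params, dtrain, num_boost_round=5)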

    Did I whet your appetite? Good. You can refer to the following web pages for a deeper understanding:

    • XGBoost Guide – Introduction to Boosted Trees
    • Words from the Author of XGBoost?[Video]


    2. XGBoost Parameters

    The overall parameters have been divided into 3 categories by the XGBoost authors:

  • General Parameters: Guide the overall functioning
  • Booster Parameters: Guide the individual booster (tree/regression) at each step
  • Learning Task Parameters: Guide the optimization performed

    I will give analogies to GBM here and highly recommend reading this article to learn from the very basics.

    General Parameters

    These define the overall functionality of XGBoost.

  • booster [default=gbtree]
    • Select the type of model to run at each iteration. It has 2 options:
      • gbtree: tree-based models
      • gblinear: linear models
  • silent [default=0]:
    • Silent mode is activated if it is set to 1, i.e. no running messages will be printed.
    • It’s generally good to keep it at 0, as the messages might help in understanding the model.
  • nthread [default to maximum number of threads available if not set]
    • This is used for parallel processing; the number of cores in the system should be entered.
    • If you wish to run on all cores, this value should not be entered and the algorithm will detect it automatically.

    There are 2 more parameters which are set automatically by XGBoost, and you need not worry about them. Let’s move on to the booster parameters.


    Booster Parameters

    Though there are 2 types of boosters, I’ll consider only the tree booster here because it always outperforms the linear booster, and thus the latter is rarely used.

  • eta [default=0.3]
    • Analogous to learning rate in GBM
    • Makes the model more robust by shrinking the weights on each step
    • Typical final values to be used: 0.01-0.2
  • min_child_weight [default=1]
    • Defines the minimum?sum of weights of all observations required in a child.
    • This is similar to min_samples_leaf in GBM, but not exactly: this refers to the minimum “sum of weights” of observations, while GBM counts the minimum “number of observations”.
    • Used to control over-fitting. Higher values prevent a model from learning relations which might be highly?specific to the?particular sample selected for a tree.
    • Too high values can lead to under-fitting hence, it should be tuned using CV.
  • max_depth [default=6]
    • The maximum depth of a tree, same as GBM.
    • Used to control over-fitting as higher depth will allow model to learn relations very specific to a particular sample.
    • Should be tuned using CV.
    • Typical values: 3-10
  • max_leaf_nodes
    • The maximum number of terminal nodes or leaves in a tree.
    • Can be defined in place of?max_depth. Since binary trees are created, a depth of ‘n’ would produce a maximum of 2^n leaves.
    • If this is defined, XGBoost will ignore max_depth.
  • gamma [default=0]
    • A node is split only when the resulting split gives a positive reduction in the loss function. Gamma specifies the minimum loss reduction required to make a split.
    • Makes the algorithm conservative. The values can vary depending on the loss function and should be tuned.
  • max_delta_step [default=0]
    • This sets the maximum delta step we allow each tree’s weight estimation to be. If the value is set to 0, it means there is no constraint; if it is set to a positive value, it can help make the update step more conservative.
    • Usually this parameter is not needed, but it might help in logistic regression when class is extremely imbalanced.
    • This is generally not used but you can explore further if you wish.
  • subsample [default=1]
    • Same as the subsample of GBM. Denotes the fraction of observations to be randomly sampled for each tree.
    • Lower values make the algorithm more conservative and prevent overfitting, but values that are too small might lead to under-fitting.
    • Typical values: 0.5-1
  • colsample_bytree [default=1]
    • Similar to max_features in GBM. Denotes the fraction of columns to be randomly sampled for each tree.
    • Typical values: 0.5-1
  • colsample_bylevel [default=1]
    • Denotes the subsample ratio of columns for each split, at each level.
    • I don’t use this often because subsample and colsample_bytree will do the job for you, but you can explore further if you feel so.
  • lambda [default=1]
    • L2 regularization term on weights (analogous to Ridge regression)
    • This is used to handle the regularization part of XGBoost. Though many data scientists don’t use it often, it should be explored to reduce overfitting.
  • alpha [default=0]
    • L1 regularization term on weights (analogous to Lasso regression)
    • Can be used in case of very high dimensionality so that the algorithm runs faster when implemented
  • scale_pos_weight [default=1]
    • A value greater than 0 should be used in case of high class imbalance as it helps in faster convergence.
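    Before moving on to the learning task parameters, here is an illustrative sketch collecting the booster parameters above into the dictionary form used by the native xgboost interface. The values below are arbitrary starting points, not recommendations for any particular data set:

    #Booster parameters as a dict for the native interface (illustrative values)
    params = {
        'booster': 'gbtree',
        'eta': 0.1,                # learning rate
        'max_depth': 6,
        'min_child_weight': 1,
        'gamma': 0,
        'subsample': 0.8,
        'colsample_bytree': 0.8,
        'lambda': 1,               # L2 regularization
        'alpha': 0,                # L1 regularization
        'scale_pos_weight': 1
    }
    #bst = xgb.train(params, dtrain, num_boost_round=100)  #dtrain: an xgb.DMatrix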

    Learning Task Parameters

    These parameters are used to define the optimization objective and the metric to be calculated at each step.

  • objective [default=reg:linear]
    • This defines the loss function to be minimized. The most commonly used values are:
      • binary:logistic – logistic regression for binary classification; returns the predicted probability (not class)
      • multi:softmax – multiclass classification using the softmax objective; returns the predicted class (not probabilities)
        • you also need to set an additional num_class (number of classes) parameter defining the number of unique classes
      • multi:softprob – same as softmax, but returns the predicted probability of each data point belonging to each class.
  • eval_metric [default according to objective]
    • The metric to be used for validation data.
    • The default values are rmse for regression and error for classification.
    • Typical values are:
      • rmse – root mean square error
      • mae – mean absolute error
      • logloss – negative log-likelihood
      • error – binary classification error rate (0.5 threshold)
      • merror – multiclass classification error rate
      • mlogloss – multiclass logloss
      • auc – area under the curve
  • seed [default=0]
    • The random number seed.
    • Can be used for generating reproducible results and also for parameter tuning.
    If you’ve been using scikit-learn till now, these parameter names might not look familiar. The good news is that the xgboost module in Python has an sklearn wrapper called XGBClassifier, which uses the sklearn-style naming convention. The parameter names that change are:

  • eta –> learning_rate
  • lambda –> reg_lambda
  • alpha –> reg_alpha
    You must be wondering that we have defined everything except something similar to the “n_estimators” parameter in GBM. It does exist as a parameter in XGBClassifier; however, it has to be passed as “num_boost_round” when calling the train or cv function in the standard xgboost implementation.
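
    To make the mapping concrete, here is a small illustrative sketch (values are arbitrary) expressing the same configuration through both interfaces; note how n_estimators in the wrapper plays the role of num_boost_round in the native calls:

    #sklearn wrapper: uses sklearn-style names
    from xgboost.sklearn import XGBClassifier
    clf = XGBClassifier(learning_rate=0.1, reg_lambda=1, reg_alpha=0, n_estimators=100)

    #native interface: the same settings with xgboost's own names
    import xgboost as xgb
    params = {'eta': 0.1, 'lambda': 1, 'alpha': 0}
    #bst = xgb.train(params, dtrain, num_boost_round=100)  #dtrain: an xgb.DMatrix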

    I recommend going through the following parts of the xgboost guide to better understand the parameters and code:

  • XGBoost Parameters (official guide)
  • XGBoost Demo Codes (xgboost GitHub repository)
  • Python API Reference (official guide)

    3. Parameter Tuning with Example

    We will take the data set from the Data Hackathon 3.x AV hackathon, the same as that used in the GBM article. The details of the problem can be found on the competition page. You can download the data set from here. I have performed the following steps:

  • City variable dropped because of too many categories
  • DOB converted to Age | DOB dropped
  • EMI_Loan_Submitted_Missing created which is 1 if EMI_Loan_Submitted was missing else 0 | Original variable EMI_Loan_Submitted dropped
  • EmployerName dropped because of too many categories
  • Existing_EMI imputed with 0 (median) since only 111 values were missing
  • Interest_Rate_Missing created which is 1 if Interest_Rate was missing else 0 | Original variable Interest_Rate dropped
  • Lead_Creation_Date dropped because it made little intuitive impact on the outcome
  • Loan_Amount_Applied, Loan_Tenure_Applied imputed with median values
  • Loan_Amount_Submitted_Missing created which is 1 if Loan_Amount_Submitted was missing else 0 | Original variable Loan_Amount_Submitted dropped
  • Loan_Tenure_Submitted_Missing created which is 1 if Loan_Tenure_Submitted was missing else 0 | Original variable Loan_Tenure_Submitted dropped
  • LoggedIn, Salary_Account dropped
  • Processing_Fee_Missing created which is 1 if Processing_Fee was missing else 0 | Original variable Processing_Fee dropped
  • Source – top 2 kept as is and all others combined into a different category
  • Numerical encoding and one-hot encoding performed
    For those who have the original data from the competition, you can check out these steps in the data_preparation iPython notebook in the repository. The sketch below illustrates the missing-indicator pattern used in several of the steps above.
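
    This is a hedged pandas sketch of that pattern; the raw file name 'train.csv' is an assumption for illustration only:

    #Create a 0/1 indicator for missingness, then drop the original column
    import pandas as pd

    df = pd.read_csv('train.csv')  #assumed raw competition file name
    df['EMI_Loan_Submitted_Missing'] = df['EMI_Loan_Submitted'].isnull().astype(int)
    df = df.drop('EMI_Loan_Submitted', axis=1)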

    Let’s start by importing the required libraries and loading the data:

    #Import libraries:
    import pandas as pd
    import numpy as np
    import xgboost as xgb
    from xgboost.sklearn import XGBClassifier
    from sklearn import cross_validation, metrics   #Additional scikit-learn functions
    from sklearn.grid_search import GridSearchCV    #Performing grid search
    #Note: in scikit-learn >= 0.18 these two modules were merged into sklearn.model_selection

    import matplotlib.pylab as plt
    %matplotlib inline
    from matplotlib.pylab import rcParams
    rcParams['figure.figsize'] = 12, 4

    train = pd.read_csv('train_modified.csv')
    target = 'Disbursed'
    IDcol = 'ID'

    Note that I have imported 2 forms of XGBoost:

  • xgb – this is the direct xgboost library. I will use a specific function, “cv”, from this library.
  • XGBClassifier – this is an sklearn wrapper for XGBoost. This allows us to use sklearn’s grid search with parallel processing in the same way we did for GBM.

    Before proceeding further, let’s define a function which will help us create XGBoost models and perform cross-validation. The best part is that you can take this function as it is and use it later for your own models.

    def modelfit(alg, dtrain, predictors, useTrainCV=True, cv_folds=5, early_stopping_rounds=50):
        if useTrainCV:
            xgb_param = alg.get_xgb_params()
            xgtrain = xgb.DMatrix(dtrain[predictors].values, label=dtrain[target].values)
            cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'],
                              nfold=cv_folds, metrics='auc',
                              early_stopping_rounds=early_stopping_rounds, show_progress=False)
            alg.set_params(n_estimators=cvresult.shape[0])

        #Fit the algorithm on the data
        alg.fit(dtrain[predictors], dtrain['Disbursed'], eval_metric='auc')

        #Predict training set:
        dtrain_predictions = alg.predict(dtrain[predictors])
        dtrain_predprob = alg.predict_proba(dtrain[predictors])[:, 1]

        #Print model report (print() works in both Python 2 and 3):
        print("\nModel Report")
        print("Accuracy : %.4g" % metrics.accuracy_score(dtrain['Disbursed'].values, dtrain_predictions))
        print("AUC Score (Train): %f" % metrics.roc_auc_score(dtrain['Disbursed'], dtrain_predprob))

        #Plot feature importances
        feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
        feat_imp.plot(kind='bar', title='Feature Importances')
        plt.ylabel('Feature Importance Score')

    This code is slightly different from what I used for GBM. The focus of this article is on the concepts, not the code; please feel free to drop a note in the comments if you find any part of it hard to understand. Note that xgboost’s sklearn wrapper doesn’t have a “feature_importances” metric, but a get_fscore() function which does the same job.
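
    One caveat for readers on newer xgboost versions: the booster() accessor used in modelfit above was later renamed to get_booster(), and a built-in plotting helper is available. A hedged sketch of the modern equivalents, assuming alg is a fitted XGBClassifier such as the models built below:

    #In recent xgboost releases, use get_booster() instead of booster();
    #xgb.plot_importance() is a built-in alternative to the manual bar plot.
    import pandas as pd
    import xgboost as xgb
    import matplotlib.pyplot as plt

    feat_imp = pd.Series(alg.get_booster().get_fscore()).sort_values(ascending=False)
    xgb.plot_importance(alg.get_booster())
    plt.show()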


    General Approach for Parameter Tuning

    We will use an approach similar to that of GBM here. The various steps to be performed are:

  • Choose a relatively high learning rate. Generally a learning rate of 0.1 works, but somewhere between 0.05 and 0.3 should work for different problems. Determine the optimum number of trees for this learning rate. XGBoost has a very useful function called “cv” which performs cross-validation at each boosting iteration and thus returns the optimum number of trees required.
  • Tune tree-specific parameters (max_depth, min_child_weight, gamma, subsample, colsample_bytree) for the decided learning rate and number of trees. Note that we can choose different parameters to define a tree, and I’ll take up an example here.
  • Tune regularization parameters (lambda, alpha) for xgboost, which can help reduce model complexity and enhance performance.
  • Lower the learning rate and decide the optimal parameters.

    Let us look at a more detailed step-by-step approach.


    Step 1: Fix learning rate and number of estimators for tuning tree-based parameters

    In order to decide on the boosting parameters, we need to set some initial values of the other parameters. Let’s take the following values:

  • max_depth = 5 : This should be between 3 and 10. I’ve started with 5, but you can choose a different number as well; 4-6 can be good starting points.
  • min_child_weight = 1 : A smaller value is chosen because it is a highly imbalanced class problem and leaf nodes can have smaller-size groups.
  • gamma = 0 : A smaller value like 0.1-0.2 can also be chosen for starting. This will be tuned later anyway.
  • subsample, colsample_bytree = 0.8 : This is a commonly used start value. Typical values range between 0.5 and 0.9.
  • scale_pos_weight = 1 : Because of the high class imbalance.

    Please note that all the above are just initial estimates and will be tuned later. Let’s take the default learning rate of 0.1 here and check the optimum number of trees using the cv function of xgboost. The function defined above will do it for us.

    #Choose all predictors except target & IDcols
    predictors = [x for x in train.columns if x not in [target, IDcol]]
    xgb1 = XGBClassifier(
        learning_rate=0.1,
        n_estimators=1000,
        max_depth=5,
        min_child_weight=1,
        gamma=0,
        subsample=0.8,
        colsample_bytree=0.8,
        objective='binary:logistic',
        nthread=4,
        scale_pos_weight=1,
        seed=27)
    modelfit(xgb1, train, predictors)

    As you can see, here we get 140 as the optimal number of estimators for a 0.1 learning rate. Note that this value might be too high for your system, depending on its processing power. In that case you can increase the learning rate and re-run the command to get a reduced number of estimators.

    Note: You will see the test AUC as “AUC Score (Test)” in the outputs here. But this would not appear if you try to run the command on your system, as the data is not made public. It’s provided here just for reference. The part of the code which generates this output has been removed here.


    Step 2: Tune max_depth and min_child_weight

    We tune these first as they will have the highest impact on the model outcome. To start with, let’s set wider ranges; then we will perform another iteration for smaller ranges.

    Important Note: I’ll be doing some heavy-duty grid searches in this section, which can take 15-30 minutes or even more to run depending on your system. You can vary the number of values you are testing based on what your system can handle.

    param_test1 = {
        'max_depth': range(3, 10, 2),
        'min_child_weight': range(1, 6, 2)
    }
    gsearch1 = GridSearchCV(
        estimator=XGBClassifier(learning_rate=0.1, n_estimators=140, max_depth=5,
                                min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8,
                                objective='binary:logistic', nthread=4, scale_pos_weight=1, seed=27),
        param_grid=param_test1, scoring='roc_auc', n_jobs=4, iid=False, cv=5)
    gsearch1.fit(train[predictors], train[target])
    gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_

    Here, we have run 12 combinations with wider intervals between values. The ideal values are 5 for max_depth and 5 for min_child_weight. Let’s go one step deeper and look for optimum values. We’ll search for values 1 above and below the optimum values because we took an interval of two.

    param_test2 = {
        'max_depth': [4, 5, 6],
        'min_child_weight': [4, 5, 6]
    }
    gsearch2 = GridSearchCV(
        estimator=XGBClassifier(learning_rate=0.1, n_estimators=140, max_depth=5,
                                min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8,
                                objective='binary:logistic', nthread=4, scale_pos_weight=1, seed=27),
        param_grid=param_test2, scoring='roc_auc', n_jobs=4, iid=False, cv=5)
    gsearch2.fit(train[predictors], train[target])
    gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_

    Here, we get the optimum values as 4 for max_depth and 6 for min_child_weight. Also, we can see the CV score increasing slightly. Note that as the model performance increases, it becomes exponentially difficult to achieve even marginal gains in performance. You would have noticed that here we got 6 as the optimum value for min_child_weight, but we haven’t tried values higher than 6. We can do that as follows:

    param_test2b = {
        'min_child_weight': [6, 8, 10, 12]
    }
    gsearch2b = GridSearchCV(
        estimator=XGBClassifier(learning_rate=0.1, n_estimators=140, max_depth=4,
                                min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8,
                                objective='binary:logistic', nthread=4, scale_pos_weight=1, seed=27),
        param_grid=param_test2b, scoring='roc_auc', n_jobs=4, iid=False, cv=5)
    gsearch2b.fit(train[predictors], train[target])
    modelfit(gsearch2b.best_estimator_, train, predictors)
    gsearch2b.grid_scores_, gsearch2b.best_params_, gsearch2b.best_score_

    We see 6 as the optimal value.


    Step 3: Tune gamma

    Now let’s tune the gamma value using the parameters already tuned above. Gamma can take various values, but I’ll check 5 values here. You can go into more precise values as you like.

    param_test3 = {
        'gamma': [i/10.0 for i in range(0, 5)]
    }
    gsearch3 = GridSearchCV(
        estimator=XGBClassifier(learning_rate=0.1, n_estimators=140, max_depth=4,
                                min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
                                objective='binary:logistic', nthread=4, scale_pos_weight=1, seed=27),
        param_grid=param_test3, scoring='roc_auc', n_jobs=4, iid=False, cv=5)
    gsearch3.fit(train[predictors], train[target])
    gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_

    This shows that our original value of gamma, i.e. 0, is the optimum one. Before proceeding, a good idea would be to re-calibrate the number of boosting rounds for the updated parameters.

    xgb2 = XGBClassifier(
        learning_rate=0.1,
        n_estimators=1000,
        max_depth=4,
        min_child_weight=6,
        gamma=0,
        subsample=0.8,
        colsample_bytree=0.8,
        objective='binary:logistic',
        nthread=4,
        scale_pos_weight=1,
        seed=27)
    modelfit(xgb2, train, predictors)

    Here, we can see the improvement in score. So the final parameters are:

    • max_depth: 4
    • min_child_weight: 6
    • gamma: 0


    Step 4: Tune subsample and colsample_bytree

    The next step would be to try different subsample and colsample_bytree values. Let’s do this in 2 stages as well, taking values 0.6, 0.7, 0.8, 0.9 for both to start with.

    param_test4 = {
        'subsample': [i/10.0 for i in range(6, 10)],
        'colsample_bytree': [i/10.0 for i in range(6, 10)]
    }
    gsearch4 = GridSearchCV(
        estimator=XGBClassifier(learning_rate=0.1, n_estimators=177, max_depth=4,
                                min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
                                objective='binary:logistic', nthread=4, scale_pos_weight=1, seed=27),
        param_grid=param_test4, scoring='roc_auc', n_jobs=4, iid=False, cv=5)
    gsearch4.fit(train[predictors], train[target])
    gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_

    Here, we found 0.8 as the optimum value for both subsample and colsample_bytree. Now we should try values in 0.05 intervals around these.

    param_test5 = {
        'subsample': [i/100.0 for i in range(75, 90, 5)],
        'colsample_bytree': [i/100.0 for i in range(75, 90, 5)]
    }
    gsearch5 = GridSearchCV(
        estimator=XGBClassifier(learning_rate=0.1, n_estimators=177, max_depth=4,
                                min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
                                objective='binary:logistic', nthread=4, scale_pos_weight=1, seed=27),
        param_grid=param_test5, scoring='roc_auc', n_jobs=4, iid=False, cv=5)
    gsearch5.fit(train[predictors], train[target])

    Again we got the same values as before. Thus the optimum values are:

    • subsample: 0.8
    • colsample_bytree: 0.8


    Step 5: Tuning Regularization Parameters

    The next step is to apply regularization to reduce overfitting. Many people don’t use these parameters much, since gamma provides a substantial way of controlling complexity, but we should always try them. I’ll tune the ‘reg_alpha’ value here and leave it up to you to try different values of ‘reg_lambda’.

    param_test6 = {
        'reg_alpha': [1e-5, 1e-2, 0.1, 1, 100]
    }
    gsearch6 = GridSearchCV(
        estimator=XGBClassifier(learning_rate=0.1, n_estimators=177, max_depth=4,
                                min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8,
                                objective='binary:logistic', nthread=4, scale_pos_weight=1, seed=27),
        param_grid=param_test6, scoring='roc_auc', n_jobs=4, iid=False, cv=5)
    gsearch6.fit(train[predictors], train[target])
    gsearch6.grid_scores_, gsearch6.best_params_, gsearch6.best_score_

    We can see that the CV score is less than in the previous case. But the values tried are very widespread, so we should try values closer to the optimum here (0.01) to see if we get something better.

    param_test7 = {
        'reg_alpha': [0, 0.001, 0.005, 0.01, 0.05]
    }
    gsearch7 = GridSearchCV(
        estimator=XGBClassifier(learning_rate=0.1, n_estimators=177, max_depth=4,
                                min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8,
                                objective='binary:logistic', nthread=4, scale_pos_weight=1, seed=27),
        param_grid=param_test7, scoring='roc_auc', n_jobs=4, iid=False, cv=5)
    gsearch7.fit(train[predictors], train[target])
    gsearch7.grid_scores_, gsearch7.best_params_, gsearch7.best_score_

    You can see that we got a better CV score. Now we can apply this regularization in the model and look at the impact:

    xgb3 = XGBClassifier(
        learning_rate=0.1,
        n_estimators=1000,
        max_depth=4,
        min_child_weight=6,
        gamma=0,
        subsample=0.8,
        colsample_bytree=0.8,
        reg_alpha=0.005,
        objective='binary:logistic',
        nthread=4,
        scale_pos_weight=1,
        seed=27)
    modelfit(xgb3, train, predictors)

    Again we can see a slight improvement in the score.

    Step 6: Reducing Learning Rate

    Lastly, we should lower the learning rate and add more trees. Let’s use the cv function of XGBoost to do the job again.

    xgb4 = XGBClassifier(
        learning_rate=0.01,
        n_estimators=5000,
        max_depth=4,
        min_child_weight=6,
        gamma=0,
        subsample=0.8,
        colsample_bytree=0.8,
        reg_alpha=0.005,
        objective='binary:logistic',
        nthread=4,
        scale_pos_weight=1,
        seed=27)
    modelfit(xgb4, train, predictors)

    Now we can see a significant boost in performance and the effect of parameter tuning is clearer.

    As we come to the end, I would like to share 2 key thoughts:

  • It is difficult to get a very big leap in performance by just using parameter tuning or slightly better models. The max score for GBM was 0.8487 while XGBoost gave 0.8494. This is a decent improvement, but not something very substantial.
  • A significant jump can be obtained by other methods like feature engineering, creating ensembles of models, stacking, etc.

    You can also download the iPython notebook with all these model codes from my GitHub account. For codes in R, you can refer to this article.


    End Notes

    This article was about developing an XGBoost model end-to-end. We started by discussing why XGBoost has superior performance over GBM, followed by a detailed discussion of the various parameters involved. We also defined a generic function which you can re-use for making your own models.

    Finally, we discussed the general approach towards tackling a problem with XGBoost and also worked out the AV Data Hackathon 3.x problem through that approach.

    I hope you found this useful and now feel more confident about applying XGBoost to solve data science problems. You can try this out in our upcoming hackathons.

    Did you like this article? Would you like to share some other hacks which you implement while making XGBoost models? Please feel free to drop a note in the comments below and I’ll be glad to discuss.

    Reposted from: https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/
