Using a custom loss function in XGBoost

This post shows how to train XGBoost with a user-defined objective (loss) function and a matching custom evaluation metric, walking through the official demo.

Below is the official demo code:

Reference: https://github.com/dmlc/xgboost/blob/master/demo/guide-python/custom_objective.py
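The core of the demo is the logregobj function, which must return the first and second derivatives of the loss with respect to the raw margin. For the binary logistic loss this is the standard derivation: with label y ∈ {0, 1}, raw margin x, and predicted probability p,

p = \sigma(x) = \frac{1}{1 + e^{-x}}, \qquad L(x) = -\bigl[\, y \log p + (1 - y)\log(1 - p) \,\bigr]

\frac{\partial L}{\partial x} = p - y, \qquad \frac{\partial^2 L}{\partial x^2} = p\,(1 - p)

These two expressions are exactly the grad = preds - labels and hess = preds * (1.0 - preds) lines in the code that follows.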

#!/usr/bin/python
import numpy as np
import xgboost as xgb

###
# advanced: customized loss function
#
print('start running example to use customized objective function')

dtrain = xgb.DMatrix('../data/agaricus.txt.train')
dtest = xgb.DMatrix('../data/agaricus.txt.test')

# note: for a customized objective function, we leave 'objective' at its default
# note: what we get in prediction is then the raw margin value
# you must know what you are doing
# ('silent' has been replaced by 'verbosity' in newer XGBoost releases)
param = {'max_depth': 2, 'eta': 1, 'silent': 1}
watchlist = [(dtest, 'eval'), (dtrain, 'train')]
num_round = 2

# user-defined objective function: given the predictions (raw margins),
# return the gradient and the second-order gradient (hessian)
# this is the log likelihood loss
def logregobj(preds, dtrain):
    labels = dtrain.get_label()
    preds = 1.0 / (1.0 + np.exp(-preds))  # sigmoid: margin -> probability
    grad = preds - labels
    hess = preds * (1.0 - preds)
    return grad, hess

# user-defined evaluation function: return a pair (metric_name, result)
# NOTE: with a customized loss function, the prediction value is the margin,
# which may make the builtin evaluation metrics misbehave.
# For example, with logistic loss the prediction is the score before the
# logistic transformation, while the builtin error metric assumes the input
# is after the logistic transformation. Keep this in mind when customizing;
# you may need to write a customized evaluation function as well.
def evalerror(preds, dtrain):
    labels = dtrain.get_label()
    # since preds are margins (before the logistic transformation), the cutoff is at 0
    return 'error', float(sum(labels != (preds > 0.0))) / len(labels)

# training with the customized objective; we can also do step-by-step training,
# simply look at xgboost.py's implementation of train
bst = xgb.train(param, dtrain, num_round, watchlist, logregobj, evalerror)
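As the demo's comments stress, predictions from a booster trained this way are raw margins, not probabilities. A minimal sketch of post-processing them, reusing the bst, dtest, and np names defined above (the explicit sigmoid and the 0.5 probability threshold are illustrative choices, equivalent to cutting the margin at 0):

# bst.predict returns raw margin scores because the objective is custom
margin_preds = bst.predict(dtest)

# apply the logistic transformation manually to recover probabilities
probs = 1.0 / (1.0 + np.exp(-margin_preds))

# thresholding the probability at 0.5 is the same as thresholding the margin at 0.0
hard_labels = (probs > 0.5).astype(int)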

Summary

That covers using a custom loss function with XGBoost: the objective function returns the gradient and hessian of the loss with respect to the raw margin, and because predictions then stay on the margin scale, you usually pair it with a custom evaluation function. Hopefully this helps solve the problem you ran into.
