Because of syntax-rendering issues that hurt the reading experience here, please read this post on the blog~
GitPage link for this post
Python Machine Learning
```bash
sudo pip3.7 install -i https://pypi.tuna.tsinghua.edu.cn/simple scikit-learn
sudo pip3.7 install -i https://pypi.tuna.tsinghua.edu.cn/simple xgboost
```
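As a quick, optional sanity check (a minimal sketch, assuming the packages were installed into the same Python 3.7 interpreter that `pip3.7` targets), both libraries expose a standard `__version__` attribute:

```python
import sklearn
import xgboost

# Print the installed versions to confirm this interpreter sees both packages
print(sklearn.__version__)
print(xgboost.__version__)
```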
```python
from sklearn.datasets import load_iris
import xgboost as xgb
from xgboost import plot_importance
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split

## read in the iris data
iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1234565)

params = {
    'booster': 'gbtree',
    'objective': 'multi:softmax',
    'num_class': 3,
    'gamma': 0.1,
    'max_depth': 6,
    'lambda': 2,
    'subsample': 0.7,
    'colsample_bytree': 0.7,
    'min_child_weight': 3,
    'eta': 0.1,
    'seed': 1000,
    'nthread': 4,
}

dtrain = xgb.DMatrix(X_train, y_train)
num_rounds = 500
model = xgb.train(params, dtrain, num_rounds)

## predict on the test set
dtest = xgb.DMatrix(X_test)
ans = model.predict(dtest)

## compute the accuracy
cnt1 = 0
cnt2 = 0
for i in range(len(y_test)):
    if ans[i] == y_test[i]:
        cnt1 += 1
    else:
        cnt2 += 1
print("Accuracy: %.2f %%" % (100 * cnt1 / (cnt1 + cnt2)))

## plot feature importance
plot_importance(model)
plt.show()
```
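If you prefer the scikit-learn style interface, the same experiment can also be written with xgboost's `XGBClassifier` and `sklearn.metrics.accuracy_score` in place of the manual counting loop. This is only a sketch: the hyper-parameter values below simply mirror the dict above (`eta` becomes `learning_rate`, `lambda` becomes `reg_lambda`, `num_rounds` becomes `n_estimators`) and are not tuned, and the wrapper infers the multiclass objective from the labels automatically.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Same data and split as above
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=1234565)

# Parameter values copied from the params dict above, not tuned
clf = XGBClassifier(
    n_estimators=500,      # corresponds to num_rounds
    max_depth=6,
    learning_rate=0.1,     # corresponds to eta
    gamma=0.1,
    reg_lambda=2,          # corresponds to lambda
    subsample=0.7,
    colsample_bytree=0.7,
    min_child_weight=3,
    n_jobs=4,
    random_state=1000,
)
clf.fit(X_train, y_train)

# accuracy_score replaces the manual cnt1/cnt2 loop
pred = clf.predict(X_test)
print("Accuracy: %.2f %%" % (100 * accuracy_score(y_test, pred)))
```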
Enjoy~
GitHub: Karobben
Blog: Karobben
BiliBili: 史上最不正經的生物狗
