Data Mining: Model Evaluation
- 1. Classification Model Evaluation (Part 1)
- 1.1 Binary Classification Models
- 1.2 Multi-class Classification Models
- 1.3 Code
- 2. Classification Model Evaluation (Part 2)
- 2.1 ROC and AUC
- 2.2 Code
1. Classification Model Evaluation (Part 1)
1.1 Binary Classification Models
- In most cases we care more about the positive class.
- Confusion matrix:
TP (True Positive): positive samples correctly predicted as positive
FN (False Negative): positive samples incorrectly predicted as negative
FP (False Positive): negative samples incorrectly predicted as positive
TN (True Negative): negative samples correctly predicted as negative

|                 | Predicted negative | Predicted positive |
|-----------------|--------------------|--------------------|
| Actual negative | TN                 | FP                 |
| Actual positive | FN                 | TP                 |

- Key metrics
Accuracy Rate: (TP+TN)/(TP+TN+FP+FN)
TPR (True Positive Rate) / Recall Rate: TP/(TP+FN)
F-measure (F1): 2·Precision·Recall/(Precision+Recall), the harmonic mean of precision and recall
Precision Rate: TP/(TP+FP)
FPR (False Positive Rate, false acceptance rate): FP/(FP+TN)
FRR (False Reject Rate): FN/(TP+FN)
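A minimal sketch of these formulas in Python, using the TN/FP/FN/TP layout above. The labels and predictions are made-up counts purely for illustration:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 1, 1, 0]   # made-up labels, 1 = positive class
y_pred = [0, 1, 0, 1, 1, 0, 1, 0]   # made-up predictions

# For binary labels, ravel() yields the matrix entries in TN, FP, FN, TP order
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy  = (tp + tn) / (tp + tn + fp + fn)
recall    = tp / (tp + fn)                           # TPR
precision = tp / (tp + fp)
f1        = 2 * precision * recall / (precision + recall)
fpr       = fp / (fp + tn)                           # false acceptance rate
frr       = fn / (tp + fn)                           # false reject rate

print(accuracy, recall, precision, f1, fpr, frr)
```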
1.2 Multi-class Classification Models
The confusion matrix generalises to an n×n table whose rows are the true classes Y1, Y2, ..., Yn and whose columns are the predicted classes Y1, Y2, ..., Yn.
- Key metrics:
(1) Micro average: sum the TP and FN counts over all classes first, then compute the metric from the pooled counts as in the binary case.
(2) Macro average: treat each class in turn as the positive class, compute a recall or F-score for it, and then take the weighted or unweighted average across classes (a small sketch of both follows below).
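A minimal sketch of the two averaging schemes, assuming three classes and made-up labels; the manual computation is checked against sklearn's `average="micro"` and `average="macro"` options:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, recall_score

y_true = [0, 0, 1, 1, 2, 2, 2, 1]   # made-up 3-class labels
y_pred = [0, 1, 1, 1, 2, 0, 2, 2]   # made-up predictions

cm = confusion_matrix(y_true, y_pred)   # rows: true class, columns: predicted class
tp = np.diag(cm)                        # per-class true positives
fn = cm.sum(axis=1) - tp                # per-class false negatives

# Micro average: pool the counts over all classes, then apply the binary formula
micro_recall = tp.sum() / (tp.sum() + fn.sum())

# Macro average: per-class recall first, then an unweighted mean
macro_recall = np.mean(tp / (tp + fn))

print(micro_recall, recall_score(y_true, y_pred, average="micro"))
print(macro_recall, recall_score(y_true, y_pred, average="macro"))
```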
1.3 Code
```python
# sklearn implementation
from sklearn.datasets import load_iris
from sklearn.metrics import recall_score, accuracy_score, f1_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
import numpy as np
import pandas as pd

data = load_iris()
X = data["data"]
Y = data["target"]

X_train, X_test, Y_train, Y_test = train_test_split(X, Y)

knn_model = KNeighborsClassifier(n_neighbors=6)
knn_model.fit(X_train, Y_train)
Y_predict = knn_model.predict(X_test)

print("*" * 8, "metrics", "*" * 8)
print("ACC:", accuracy_score(Y_test, Y_predict))
print("recall for micro:", recall_score(Y_test, Y_predict, average="micro"))
print("recall for macro:", recall_score(Y_test, Y_predict, average="macro"))
print("f1 for micro:", f1_score(Y_test, Y_predict, average="micro"))
print("f1 for macro:", f1_score(Y_test, Y_predict, average="macro"))
```

Output:

```
******** metrics ********
ACC: 0.9736842105263158
recall for micro: 0.9736842105263158
recall for macro: 0.9629629629629629
f1 for micro: 0.9736842105263158
f1 for macro: 0.9696394686907022
```

2. Classification Model Evaluation (Part 2)
2.1 ROC and AUC
- ROC: Receiver Operating Characteristic curve, obtained by plotting TPR (recall) against FPR as the classification threshold is swept over the predicted positive-class probabilities.
- AUC: Area Under the Curve, i.e. the area under the ROC curve; a value closer to 1 means the model ranks positives above negatives more reliably.
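A minimal sketch of how the curve is built, using made-up scores: each candidate threshold yields one (FPR, TPR) point, which is exactly what `sklearn.metrics.roc_curve` returns in the full example below.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = np.array([0, 0, 1, 1, 0, 1])                 # made-up binary labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.6, 0.7])    # made-up positive-class probabilities

# Manual sweep: one (FPR, TPR) point per candidate threshold
for t in sorted(set(y_score), reverse=True):
    y_pred = (y_score >= t).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    print(f"threshold={t:.2f}  FPR={fp/(fp+tn):.2f}  TPR={tp/(tp+fn):.2f}")

# sklearn performs the same sweep and also gives the AUC directly
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))
```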
2.2 Code
```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import os
os.environ["PATH"] += os.pathsep + "D://bin/"

# Prepare the data
features = pd.read_excel("./data.xlsx", sheet_name="features", header=0)
label = pd.read_excel("./data.xlsx", sheet_name="label", header=0)

# Split into training, validation, and test sets
def data_split(X, Y):
    X_tt, X_validation, Y_tt, Y_validation = train_test_split(X, Y, test_size=0.2)
    X_train, X_test, Y_train, Y_test = train_test_split(X_tt, Y_tt, test_size=0.25)
    return X_train, Y_train, X_validation, Y_validation, X_test, Y_test

X_train, Y_train, X_validation, Y_validation, X_test, Y_test = data_split(features.values, label.values)

# Build the neural network
from keras.models import Sequential

# Model skeleton
nn_model = Sequential()

# Input layer
from keras.layers import Dense, Activation
nn_model.add(Dense(50, input_dim=len(X_train[0])))
nn_model.add(Activation("sigmoid"))

# Hidden layer
nn_model.add(Dense(10))
nn_model.add(Activation("sigmoid"))

# Output layer
nn_model.add(Dense(2))
nn_model.add(Activation("softmax"))

# Compile the network
from keras.optimizers import SGD, Adam
sgd = SGD(lr=0.51)    # lr is the learning rate
adam = Adam(lr=0.01)
nn_model.compile(loss="mean_squared_error", optimizer=adam)
"""
Switching to the Adam optimizer gives better results.
"""

# Train the network
Y_train_nn = np.array([[1, 0] if i == 0 else [0, 1] for i in Y_train])
nn_model.fit(X_train, Y_train_nn, nb_epoch=1000, batch_size=4000)
# nb_epoch is the maximum number of iterations; batch_size is the number of samples per batch

# Predict with the model
validation_predict = nn_model.predict_classes(X_validation)
test_predict = nn_model.predict_classes(X_test)
train_predict = nn_model.predict_classes(X_train)

# Evaluate the predictions
def model_metrics(x1, x2, name):
    from sklearn.metrics import f1_score, recall_score, accuracy_score, precision_score
    print(name, ":")
    print("\tf1_score", f1_score(x1, x2))
    print("\taccuracy_score", accuracy_score(x1, x2))
    print("\trecall_score", recall_score(x1, x2))
    print("\tprecision_score", precision_score(x1, x2))

model_metrics(train_predict, Y_train, "training set")
model_metrics(validation_predict, Y_validation, "validation set")
model_metrics(test_predict, Y_test, "test set")

# Predicted probability of the positive class on the test set
Y_predict_test = nn_model.predict(X_test)
Y_predict_test = Y_predict_test[:, 1]

from sklearn.metrics import roc_curve, auc, roc_auc_score
import matplotlib.pyplot as plt

# Compute FPR (false acceptance rate) and TPR (recall)
fpr, tpr, threshold = roc_curve(Y_test, Y_predict_test)   # threshold holds the decision thresholds
plt.plot(fpr, tpr)
plt.xlabel("fpr")
plt.ylabel("tpr")
plt.show()

print("AUC", auc(fpr, tpr))
print("AUC", roc_auc_score(Y_test, Y_predict_test))
```

by CyrusMay 2022 04 06