[Machine Learning] Selecting the K Value and weight Parameter with Cross-Validation
Cross-Validation
Import the packages
```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn import datasets
# model_selection: model selection utilities
# cross_val_score: scores a model by cross-validation
from sklearn.model_selection import cross_val_score
```

Load the iris dataset from `sklearn.datasets`:
```python
X, y = datasets.load_iris(return_X_y=True)
X.shape  # (150, 4)
```
As a rule of thumb, K generally should not exceed the square root of the number of samples.

```python
# reference: 150 ** 0.5 ≈ 12.25, so try K in the range 1–13
150 ** 0.5  # 12.24744871391589
```
```python
knn = KNeighborsClassifier()
score = cross_val_score(knn, X, y, scoring='balanced_accuracy', cv=11)
score.mean()  # 0.968181818181818
```
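Note that `cross_val_score` returns one score per fold, not a single number; taking `.mean()` is what collapses it. A minimal self-contained sketch (using the default 5 folds for brevity):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier()

# one accuracy per fold: cv=5 yields an array of 5 scores
scores = cross_val_score(knn, X, y, scoring='accuracy', cv=5)
print(scores.shape)               # (5,)
print(scores.mean(), scores.std())
```

The standard deviation across folds is also worth checking: a high-variance score suggests the estimate of model quality is unstable.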
Using cross_val_score to choose the n_neighbors (K) value
```python
errors = []
for k in range(1, 14):
    knn = KNeighborsClassifier(n_neighbors=k)
    score = cross_val_score(knn, X, y, scoring='accuracy', cv=6).mean()
    # the smaller the error, the better this K fits the data
    errors.append(1 - score)

import matplotlib.pyplot as plt
%matplotlib inline

# the error is smallest at k = 11, so it is the most suitable K value
plt.plot(np.arange(1, 14), errors)
```
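Instead of reading the best K off the plot by eye, it can be picked programmatically with `argmin` over the error list. A sketch reusing the same loop:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

errors = []
for k in range(1, 14):
    knn = KNeighborsClassifier(n_neighbors=k)
    score = cross_val_score(knn, X, y, scoring='accuracy', cv=6).mean()
    errors.append(1 - score)

# index of the smallest error, mapped back to the K it came from
best_k = np.arange(1, 14)[np.argmin(errors)]
print(best_k)
```

If several K values tie for the lowest error, `argmin` returns the first (smallest) of them.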
Using cross_val_score to choose the weights parameter
```python
weights = ['uniform', 'distance']
for w in weights:
    knn = KNeighborsClassifier(n_neighbors=11, weights=w)
    print(w, cross_val_score(knn, X, y, scoring='accuracy', cv=6).mean())
```

```
uniform 0.98070987654321
distance 0.9799382716049383
```
How the model is tuned: searching both parameters together
```python
result = {}
for k in range(1, 14):
    for w in weights:
        knn = KNeighborsClassifier(n_neighbors=k, weights=w)
        sm = cross_val_score(knn, X, y, scoring='accuracy', cv=6).mean()
        result[w + str(k)] = sm

a = result.values()
np.array(list(a)).argmax()  # 20

list(result)[20]  # 'uniform11'
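The nested loop above is exactly what scikit-learn's `GridSearchCV` automates: it cross-validates every combination in a parameter grid and records the best one. A sketch over the same candidates (the grid values match the manual loops; this is an alternative, not the post's original code):

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# same search space as the hand-written double loop
param_grid = {'n_neighbors': list(range(1, 14)),
              'weights': ['uniform', 'distance']}

gs = GridSearchCV(KNeighborsClassifier(), param_grid,
                  scoring='accuracy', cv=6)
gs.fit(X, y)
print(gs.best_params_, gs.best_score_)
```

After fitting, `gs.best_estimator_` is the refit model with the winning parameters, ready to predict on new data.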