Using the Logistic Regression Classification Algorithm in R
Logistic regression is a probabilistic classification algorithm that predicts a class label from one or more features. In R you perform logistic regression by calling the glm function with the family argument (the response distribution) set to binomial; glm then fits the model through the logit link and can output predicted probabilities.
Steps
As in the previous sections, prepare the training dataset (trainset) and the test dataset (testset).
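If the datasets are not already in the workspace, here is a minimal sketch of one way to prepare them; it assumes the churn data shipped with the C50 package and a 70/30 split, which may differ from the original setup.
# Hypothetical data preparation (assumes the C50 churn data)
library(C50)
data(churn)   # loads churnTrain and churnTest
set.seed(2)
ind = sample(2, nrow(churnTrain), replace = TRUE, prob = c(0.7, 0.3))
trainset = churnTrain[ind == 1, ]
testset = churnTrain[ind == 2, ]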
fit = glm(churn ~ ., data = trainset, family = binomial)
summary(fit)
Call:
glm(formula = churn ~ ., family = binomial, data = trainset)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.1519   0.1983   0.3460   0.5186   2.1284  

Coefficients:
                                Estimate Std. Error z value Pr(>|z|)    
(Intercept)                    8.3462866  0.8364914   9.978  < 2e-16 ***
international_plan1           -2.0534243  0.1726694 -11.892  < 2e-16 ***
voice_mail_plan1               1.3445887  0.6618905   2.031 0.042211 *  
number_vmail_messages         -0.0155101  0.0209220  -0.741 0.458496    
total_day_minutes              0.2398946  3.9168466   0.061 0.951163    
total_day_calls               -0.0014003  0.0032769  -0.427 0.669141    
total_day_charge              -1.4855284 23.0402950  -0.064 0.948592    
total_eve_minutes              0.3600678  1.9349825   0.186 0.852379    
total_eve_calls               -0.0028484  0.0033061  -0.862 0.388928    
total_eve_charge              -4.3204432 22.7644698  -0.190 0.849475    
total_night_minutes            0.4431210  1.0478105   0.423 0.672367    
total_night_calls              0.0003978  0.0033188   0.120 0.904588    
total_night_charge            -9.9162795 23.2836376  -0.426 0.670188    
total_intl_minutes             0.4587114  6.3524560   0.072 0.942435    
total_intl_calls               0.1065264  0.0304318   3.500 0.000464 ***
total_intl_charge             -2.0803428 23.5262100  -0.088 0.929538    
number_customer_service_calls -0.5109077  0.0476289 -10.727  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 1938.8  on 2314  degrees of freedom
Residual deviance: 1515.3  on 2298  degrees of freedom
AIC: 1549.3

Number of Fisher Scoring iterations: 6
The summary reveals non-significant variables in the classification model, which may lead to misclassification; retrain the model using only the significant variables. (A programmatic way to list them is sketched below.)
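As a convenience (not part of the original text), the significant terms can be pulled from the fitted model's coefficient table by their Wald-test p-values:
# Sketch: list the model terms whose p-value is below 0.05
coefs = summary(fit)$coefficients
rownames(coefs)[coefs[, "Pr(>|z|)"] < 0.05]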
fit = glm(churn ~ international_plan + voice_mail_plan + number_customer_service_calls, data = trainset, family = binomial)
summary(fit)
Call:
glm(formula = churn ~ international_plan + voice_mail_plan + 
    number_customer_service_calls, family = binomial, data = trainset)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.6485   0.3067   0.4500   0.5542   1.6509  

Coefficients:
                              Estimate Std. Error z value Pr(>|z|)    
(Intercept)                    2.68272    0.12064  22.237  < 2e-16 ***
international_plan1           -1.97626    0.15998 -12.353  < 2e-16 ***
voice_mail_plan1               0.79423    0.16352   4.857 1.19e-06 ***
number_customer_service_calls -0.44341    0.04445  -9.975  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 1938.8  on 2314  degrees of freedom
Residual deviance: 1678.5  on 2311  degrees of freedom
AIC: 1686.5

Number of Fisher Scoring iterations: 5
Use the fitted model to predict the class response of the testset observations; the class label is assigned by checking whether each predicted probability exceeds 0.5, and changing this threshold changes the output labels.
# type = "response" applies when the response is binomial (two-valued);
# it makes predict() return the predicted probability that the response
# equals 1, rather than the value of the linear predictor.
pred = predict(fit, testset, type = "response")
# Entries of pred above 0.5 become TRUE, representing "no" (customer did not churn), coded 1
# Entries of pred at or below 0.5 become FALSE, representing "yes" (customer churned), coded 0
Class = pred > 0.5
summary(Class)
   Mode   FALSE    TRUE 
logical      28     990 
Tabulate the predicted classes against the actual labels of the test set:
tb = table(testset$churn, Class)
> tb
     Class
      FALSE TRUE
  yes    15  126
  no     13  864
Turn the counts from the previous step into a classification table and build a confusion matrix. First recode the actual labels, mapping "yes" (churned) to 1 and "no" to 0:
churn.mod = ifelse(testset$churn == "yes", 1, 0)
> churn.mod
   [1] 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0
  [44] 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0
  [87] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0
 [130] 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0
 [173] 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
 [216] 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0
 [259] 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0
 [302] 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0
 [345] 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
 [388] 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0
 [431] 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
 [474] 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0
 [517] 0 0 0 0 0 0 0 0 1 0 1 1 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0
 [560] 0 0 0 0 0 0 0 1 0 1 0 1 1 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1
 [603] 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0
 [646] 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0
 [689] 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0
 [732] 1 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
 [775] 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
 [818] 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1
 [861] 1 1 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [904] 0 0 1 1 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1
 [947] 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0
 [990] 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0
Convert Class to a numeric vector:
ABC = as.numeric(Class)
In ABC, the meanings of 0 and 1 are the reverse of those in churn.mod, so negate the values:
BC = 1 - ABC
Compute the confusion matrix:
library(caret)  # confusionMatrix() is provided by the caret package
confusionMatrix(churn.mod, BC)
Confusion Matrix and Statistics

          Reference
Prediction   0   1
         0 864  13
         1 126  15

               Accuracy : 0.8635
                 95% CI : (0.8408, 0.884)
    No Information Rate : 0.9725
    P-Value [Acc > NIR] : 1

                  Kappa : 0.138
 Mcnemar's Test P-Value : <2e-16

            Sensitivity : 0.8727
            Specificity : 0.5357
         Pos Pred Value : 0.9852
         Neg Pred Value : 0.1064
             Prevalence : 0.9725
         Detection Rate : 0.8487
   Detection Prevalence : 0.8615
      Balanced Accuracy : 0.7042

       'Positive' Class : 0
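As a quick sanity check (not in the original text), the reported accuracy can be recomputed directly from the two recoded vectors:
# Fraction of test cases where prediction agrees with truth: (864 + 15) / 1018
sum(BC == churn.mod) / length(churn.mod)  # 0.8635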
Logistic regression is very similar to linear regression. The difference is that the response variable in linear regression is continuous, whereas in logistic regression it is a binary (nominal) variable. The main purpose of logistic regression is to use the logit model to estimate the probability that the nominal response takes a given value as a function of the predictor variables. The logit of a probability P is ln(P/(1-P)), where P is the probability that the event occurs.
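As a small worked illustration (added here, not from the original text): the model is linear on the logit scale, and the inverse logit maps a linear predictor back to a probability.
# The logit of P = 0.8 and its inverse (plogis is R's inverse logit)
p = 0.8
eta = log(p / (1 - p))  # ln(P / (1 - P)), about 1.386
plogis(eta)             # inverse logit recovers 0.8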
The strengths of logistic regression are that the algorithm is easy to understand and that it directly outputs the model's predicted probabilities together with confidence intervals for the results. Unlike decision trees, which are hard to update, logistic regression can quickly incorporate new data to refresh the classification model. Its weakness is that it cannot handle multicollinearity, so the explanatory variables must be linearly independent. glm provides a generalized linear modeling framework in which the response distribution is chosen through the family argument; setting the family to binomial yields binary classification.
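For reference (a sketch, not from the original text): family = binomial uses the logit link by default, so spelling the link out fits the identical model, which can be verified against the reduced model above.
# Explicit logit link; coefficients match the default binomial fit
fit2 = glm(churn ~ international_plan + voice_mail_plan +
           number_customer_service_calls,
           data = trainset, family = binomial(link = "logit"))
all.equal(coef(fit), coef(fit2))  # TRUE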
Calling predict on the fitted model scores the class response of the test dataset testset, outputting a probability for each observation; comparing each probability against the 0.5 threshold converts it into a predicted class label (here, a probability above 0.5 corresponds to "no", i.e. the customer did not churn). Calling summary on the thresholded result shows how the predictions split, and tabulating them against the actual labels produces the counts and the confusion matrix shown above.
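As an aside (a sketch, not in the original), the same table can be produced without the 0/1 recoding detour by mapping the probabilities straight to factor labels:
# Map probabilities directly to the original class labels
pred.class = factor(ifelse(pred > 0.5, "no", "yes"),
                    levels = levels(testset$churn))
table(testset$churn, pred.class)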