Kaggle Graduate Admissions (Part 1)
From my daily browse of Kaggle:
https://www.kaggle.com/mohansacharya/graduate-admissions
This appears to be quite a well-known dataset. Its features and their ranges (which can be confirmed with the quick inspection sketch after this list):
- GRE Score (290 to 340)
- TOEFL Score (92 to 120)
- University Rating (1 to 5)
- Statement of Purpose (1 to 5)
- Letter of Recommendation Strength (1 to 5)
- Undergraduate CGPA (6.8 to 9.92)
- Research Experience (0 or 1)
- Chance of Admit (0.34 to 0.97)
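A quick way to confirm these ranges is to load the data and call describe(). A minimal sketch, using the same file path that the training code later in this post uses:

```python
# Imports used throughout this post
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Load the Kaggle "Graduate Admissions" dataset
df = pd.read_csv("../input/Admission_Predict.csv", sep=",")
print(df.describe())  # the min/max rows match the ranges listed above
```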
The three most important features for Master's admission: CGPA, GRE score, and TOEFL score.
The three least important features: Research, LOR, and SOP.
Correlation matrix:
```python
fig, ax = plt.subplots(figsize=(10, 10))
sns.heatmap(df.corr(), ax=ax, annot=True, linewidths=0.05, fmt='.2f', cmap="magma")
plt.show()
```
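The heatmap backs up the importance ranking above; the same ordering can be read off numerically by sorting correlations with the target. A minimal sketch, assuming the target column is named "Chance of Admit" as in the training code below:

```python
# Rank features by absolute correlation with the admission chance
target_corr = df.corr()["Chance of Admit"].drop("Chance of Admit")
print(target_corr.abs().sort_values(ascending=False))
```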
However, most candidates in the dataset have research experience. As a result, Research turns out to be an unimportant feature for the chance of admission.
print("Not Having Research:",len(df[df.Research == 0])) print("Having Research:",len(df[df.Research == 1])) y = np.array([len(df[df.Research == 0]),len(df[df.Research == 1])]) x = ["Not Having Research","Having Research"] plt.bar(x,y) plt.title("Research Experience") plt.xlabel("Canditates") plt.ylabel("Frequency") plt.show()
In the data, the lowest TOEFL score is 92 and the highest is 120; the average is 107.41.
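These figures come straight from the TOEFL column; a minimal sketch, assuming the column is named "TOEFL Score" as in the dataset:

```python
# Min, max, and mean of the TOEFL Score column
print("min :", df["TOEFL Score"].min())
print("max :", df["TOEFL Score"].max())
print("mean:", round(df["TOEFL Score"].mean(), 2))
```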
GRE scores:
This histogram shows the frequency of GRE scores.
The density is concentrated between 310 and 330; scoring above this range is a good way for a candidate to stand out.
df["GRE Score"].plot(kind = 'hist',bins = 200,figsize = (6,6)) plt.title("GRE Scores") plt.xlabel("GRE Score") plt.ylabel("Frequency") plt.show()
CGPA scores by university rating:
As the quality of the university increases, CGPA scores increase with it.
plt.scatter(df["University Rating"],df.CGPA) plt.title("CGPA Scores for University Ratings") plt.xlabel("University Rating") plt.ylabel("CGPA") plt.show()
Individuals with high GRE scores usually also have high CGPA scores.
Candidates who graduate from higher-rated universities are more likely to be admitted.
Candidates with high CGPA scores usually have high SOP scores.
Candidates with high GRE scores usually have high SOP scores.
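Each of these pairwise claims can be eyeballed with a scatter plot; a minimal sketch for the GRE-vs-CGPA pair (the other pairs follow the same pattern):

```python
# GRE Score vs CGPA: high GRE scores tend to go with high CGPA
plt.scatter(df["GRE Score"], df["CGPA"])
plt.title("CGPA for GRE Scores")
plt.xlabel("GRE Score")
plt.ylabel("CGPA")
plt.show()
```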
That completes the exploratory analysis; now we move on to training models.
First, read the dataset and drop the serial number column:

```python
# reading the dataset
df = pd.read_csv("../input/Admission_Predict.csv", sep=",")

# keep the serial numbers in case they are needed later
serialNo = df["Serial No."].values
df.drop(["Serial No."], axis=1, inplace=True)

y = df["Chance of Admit"].values
x = df.drop(["Chance of Admit"], axis=1)

# separating train (80%) and test (20%) sets
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=42)
```

Scale the features to a fixed range (0 to 1):
```python
# normalization: fit the scaler on the training set only,
# then apply the same transform to the test set
from sklearn.preprocessing import MinMaxScaler
scalerX = MinMaxScaler(feature_range=(0, 1))
x_train[x_train.columns] = scalerX.fit_transform(x_train[x_train.columns])
x_test[x_test.columns] = scalerX.transform(x_test[x_test.columns])
```

Linear regression:
```python
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(x_train, y_train)
y_head_lr = lr.predict(x_test)

print("real value of y_test[1]: " + str(y_test[1]) + " -> the predict: " + str(lr.predict(x_test.iloc[[1], :])))
print("real value of y_test[2]: " + str(y_test[2]) + " -> the predict: " + str(lr.predict(x_test.iloc[[2], :])))

from sklearn.metrics import r2_score
print("r_square score: ", r2_score(y_test, y_head_lr))

y_head_lr_train = lr.predict(x_train)
print("r_square score (train dataset): ", r2_score(y_train, y_head_lr_train))
```

Output:

```
real value of y_test[1]: 0.68 -> the predict: [0.72368741]
real value of y_test[2]: 0.9 -> the predict: [0.93536809]
r_square score: 0.821208259148699
r_square score (train dataset): 0.7951946003191085
```
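Since the inputs were scaled to 0-1, the fitted coefficients give a rough sense of each feature's relative weight. A small sketch, not part of the original kernel:

```python
# Coefficients of the fitted linear model, largest magnitude first
for name, coef in sorted(zip(x_train.columns, lr.coef_), key=lambda t: -abs(t[1])):
    print(f"{name}: {coef:.3f}")
```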
Random forest:
```python
from sklearn.ensemble import RandomForestRegressor
rfr = RandomForestRegressor(n_estimators=100, random_state=42)
rfr.fit(x_train, y_train)
y_head_rfr = rfr.predict(x_test)

from sklearn.metrics import r2_score
print("r_square score: ", r2_score(y_test, y_head_rfr))
print("real value of y_test[1]: " + str(y_test[1]) + " -> the predict: " + str(rfr.predict(x_test.iloc[[1], :])))
print("real value of y_test[2]: " + str(y_test[2]) + " -> the predict: " + str(rfr.predict(x_test.iloc[[2], :])))

y_head_rf_train = rfr.predict(x_train)
print("r_square score (train dataset): ", r2_score(y_train, y_head_rf_train))
```

Output:

```
r_square score: 0.8074111823415694
real value of y_test[1]: 0.68 -> the predict: [0.7249]
real value of y_test[2]: 0.9 -> the predict: [0.9407]
r_square score (train dataset): 0.9634880602889714
```
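The random forest also exposes feature importances, which can be checked against the CGPA/GRE/TOEFL ranking claimed earlier. A small sketch, not part of the original kernel:

```python
# Impurity-based feature importances from the fitted forest, largest first
for name, imp in sorted(zip(x_train.columns, rfr.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```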
Decision tree:
```python
from sklearn.tree import DecisionTreeRegressor
dtr = DecisionTreeRegressor(random_state=42)
dtr.fit(x_train, y_train)
y_head_dtr = dtr.predict(x_test)

from sklearn.metrics import r2_score
print("r_square score: ", r2_score(y_test, y_head_dtr))
print("real value of y_test[1]: " + str(y_test[1]) + " -> the predict: " + str(dtr.predict(x_test.iloc[[1], :])))
print("real value of y_test[2]: " + str(y_test[2]) + " -> the predict: " + str(dtr.predict(x_test.iloc[[2], :])))

y_head_dtr_train = dtr.predict(x_train)
print("r_square score (train dataset): ", r2_score(y_train, y_head_dtr_train))
```

Output:

```
r_square score: 0.6262105228127393
real value of y_test[1]: 0.68 -> the predict: [0.73]
real value of y_test[2]: 0.9 -> the predict: [0.94]
r_square score (train dataset): 1.0
```
The linear regression and random forest regressors outperform the decision tree regressor; the tree's perfect train-set score (1.0) against a much lower test score (0.63) is a classic sign of overfitting.
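One standard remedy is to cap the tree depth. A hypothetical variant, not part of the original kernel (max_depth=4 is chosen arbitrarily for illustration):

```python
# A depth-limited tree usually generalizes better than a fully grown one
from sklearn.tree import DecisionTreeRegressor
dtr_shallow = DecisionTreeRegressor(max_depth=4, random_state=42)
dtr_shallow.fit(x_train, y_train)
print("r_square score: ", r2_score(y_test, dtr_shallow.predict(x_test)))
```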
Finally, visualize the three algorithms' test-set scores side by side:

```python
y = np.array([r2_score(y_test, y_head_lr), r2_score(y_test, y_head_rfr), r2_score(y_test, y_head_dtr)])
x = ["LinearRegression", "RandomForestReg.", "DecisionTreeReg."]
plt.bar(x, y)
plt.title("Comparison of Regression Algorithms")
plt.xlabel("Regressor")
plt.ylabel("r2_score")
plt.show()
```
Summary
CGPA, GRE, and TOEFL scores correlate most strongly with the chance of admission, and on the test set the linear regression and random forest models clearly beat the fully grown decision tree.