[Kaggle] Titanic Prediction (Code Walkthrough)
Load the data with pandas:
```python
import pandas as pd
import numpy as np
from pandas import Series, DataFrame

data_train = pd.read_csv("train.csv")
data_train.columns
data_train.info()
```
The output above tells us that the training set contains 891 passengers in total, but unfortunately some attributes have incomplete data.
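To pin down exactly which columns are incomplete, a quick per-column count of missing values helps (a minimal sketch; `data_train` is the DataFrame we just loaded):

```python
# Count missing values per column; in this dataset Age, Cabin
# and Embarked are the attributes with gaps
data_train.isnull().sum()
```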
```python
import matplotlib.pyplot as plt

# Survival by port of embarkation
fig = plt.figure()
fig.set(alpha=0.2)  # set the figure's alpha parameter

Survived_0 = data_train.Embarked[data_train.Survived == 0].value_counts()
Survived_1 = data_train.Embarked[data_train.Survived == 1].value_counts()
df = pd.DataFrame({u'Survived_1': Survived_1, u'Survived_0': Survived_0})
df.plot(kind='bar')
plt.xlabel(u"Embarked")
plt.ylabel(u"number")
plt.show()

# Survival by sex
Survived_male = data_train.Survived[data_train.Sex == 'male'].value_counts()
Survived_female = data_train.Survived[data_train.Sex == 'female'].value_counts()
df = pd.DataFrame({'male': Survived_male, 'female': Survived_female})
df.plot(kind='bar')
plt.xlabel('sex')
plt.ylabel('number')

# Now let's look at survival by sex within each passenger class
fig = plt.figure()
fig.set(alpha=0.65)  # figure transparency, not important
plt.title(u"pclass and sex")

plt.subplot(221)
data_train.Survived[data_train.Sex == 'female'][data_train.Pclass != 3].value_counts().plot(kind='bar')
plt.xlabel('female in class 1/2')
plt.ylabel('number')

plt.subplot(222)
data_train.Survived[data_train.Sex == 'male'][data_train.Pclass != 3].value_counts().plot(kind='bar')
plt.xlabel('male in class 1/2')
plt.ylabel('number')

plt.subplot(223)
data_train.Survived[data_train.Sex == 'female'][data_train.Pclass == 3].value_counts().plot(kind='bar')
plt.xlabel('female in class 3')
plt.ylabel('number')

plt.subplot(224)
data_train.Survived[data_train.Sex == 'male'][data_train.Pclass == 3].value_counts().plot(kind='bar')
plt.xlabel('male in class 3')
plt.ylabel('number')
plt.show()

# What about siblings/spouses and parents/children? Do big families have an edge?
g = data_train.groupby(['SibSp', 'Survived'])
df = pd.DataFrame(g.count()['PassengerId'])
df

g = data_train.groupby(['Parch', 'Survived'])
df = pd.DataFrame(g.count()['PassengerId'])
df

# Ticket is the ticket number and should be unique; it has little bearing
# on the final outcome, so we leave it out of the feature set.
# Cabin has values for only 204 passengers; let's look at its distribution first
data_train.Cabin.value_counts()

Survived_cabin = data_train.Survived[pd.notnull(data_train.Cabin)].value_counts()
Survived_nocabin = data_train.Survived[pd.isnull(data_train.Cabin)].value_counts()
df = pd.DataFrame({'have': Survived_cabin, 'not have': Survived_nocabin})
df.plot(kind='bar', stacked=True)
plt.show()
```
Passengers with a Cabin record seem to survive at a somewhat higher rate, so let's try splitting this into two classes, has-Cabin / no-Cabin, and add it to the categorical features later.
When we run into missing values, there are several common ways to handle them:
- If the samples with missing values make up a very large share of the total, we may simply drop the attribute; including it as a feature could instead introduce noise and hurt the final result.
- If a moderate share is missing and the attribute is non-continuous (e.g. categorical), we can treat NaN as a new category and add it to the categorical features.
- If a moderate share is missing and the attribute is continuous, we sometimes pick a step size (for Age here, say a bin every 2-3 years), discretize the values, and then add NaN as one more type among the categories (see the sketch after this list).
- In some cases, not that many values are missing, so we can also try fitting a model on the existing values and filling in the gaps.
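As a concrete illustration of the discretization option, here is a minimal sketch using pandas' `cut`; the 3-year bin width and the `Age_bin` / `'Unknown'` names are just assumptions for the example:

```python
import numpy as np
import pandas as pd

# Discretize Age into 3-year bins; passengers with a missing Age
# become their own 'Unknown' category instead of being dropped
bins = np.arange(0, 84, 3)  # Titanic ages run roughly from 0 to 80
age_bin = pd.cut(data_train['Age'], bins=bins)
data_train['Age_bin'] = age_bin.cat.add_categories(['Unknown']).fillna('Unknown')
```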
Here we use scikit-learn's RandomForest to fit the missing age values:
```python
from sklearn.ensemble import RandomForestRegressor

def set_miss_ages(df):
    age_df = df[['Age', 'Fare', 'Parch', 'SibSp', 'Pclass']]
    known_age = age_df[age_df.Age.notnull()].as_matrix()
    unknown_age = age_df[age_df.Age.isnull()].as_matrix()
    y = known_age[:, 0]
    X = known_age[:, 1:]
    rf = RandomForestRegressor(random_state=0, n_estimators=2000, n_jobs=-1)
    rf.fit(X, y)
    predictedAge = rf.predict(unknown_age[:, 1:])
    df.loc[(df.Age.isnull()), 'Age'] = predictedAge
    return df, rf

def set_Cabin_type(df):
    df.loc[(df.Cabin.notnull()), 'Cabin'] = 'YES'
    df.loc[(df.Cabin.isnull()), 'Cabin'] = 'NO'
    return df

data_train, rf = set_miss_ages(data_train)
data_train = set_Cabin_type(data_train)
data_train
```

Since logistic regression needs numeric input features, we usually factorize / one-hot encode the categorical features first.
```python
dummies_Cabin = pd.get_dummies(data_train['Cabin'], prefix='Cabin')
dummies_Embarked = pd.get_dummies(data_train['Embarked'], prefix='Embarked')
dummies_Sex = pd.get_dummies(data_train['Sex'], prefix='Sex')
dummies_Pclass = pd.get_dummies(data_train['Pclass'], prefix='Pclass')

df = pd.concat([data_train, dummies_Cabin, dummies_Embarked, dummies_Sex, dummies_Pclass], axis=1)
df.drop(['Pclass', 'Name', 'Sex', 'Ticket', 'Cabin', 'Embarked'], axis=1, inplace=True)
df.head(5)
```

Next we continue with some preprocessing, such as scaling: squashing the features with large ranges into [-1, 1], which speeds up the convergence of logistic regression.
```python
import sklearn.preprocessing as preprocessing

scaler = preprocessing.StandardScaler()
age_scale_param = scaler.fit(df['Age'])
df['Age_scaled'] = scaler.fit_transform(df['Age'], age_scale_param)
fare_scale_param = scaler.fit(df['Fare'])
df['Fare_scaled'] = scaler.fit_transform(df['Fare'], fare_scale_param)
```

Now we pull out the feature columns we need, convert them to a numpy array, and build a model with scikit-learn's LogisticRegression.
```python
from sklearn import linear_model

train_df = df.filter(regex='Survived|Age_.*|SibSp|Parch|Fare_.*|Cabin_.*|Embarked_.*|Sex_.*|Pclass_.*')
train_np = train_df.as_matrix()

# y is the Survived outcome
y = train_np[:, 0]

# X is the feature matrix
X = train_np[:, 1:]

# fit a LogisticRegression model
clf = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6)
clf.fit(X, y)
clf
```

Next, let's apply the same operations to the test set that we applied to the training set.
data_test = pd.read_csv("test.csv") data_test.loc[ (data_test.Fare.isnull()), 'Fare' ] = 0 # 接著我們對test_data做和train_data中一致的特征變換 # 首先用同樣的RandomForestRegressor模型填上丟失的年齡 tmp_df = data_test[['Age','Fare', 'Parch', 'SibSp', 'Pclass']] null_age = tmp_df[data_test.Age.isnull()].as_matrix() # 根據特征屬性X預測年齡并補上 X = null_age[:, 1:] predictedAges = rf.predict(X) data_test.loc[ (data_test.Age.isnull()), 'Age' ] = predictedAgesdata_test = set_Cabin_type(data_test) dummies_Cabin = pd.get_dummies(data_test['Cabin'], prefix= 'Cabin') dummies_Embarked = pd.get_dummies(data_test['Embarked'], prefix= 'Embarked') dummies_Sex = pd.get_dummies(data_test['Sex'], prefix= 'Sex') dummies_Pclass = pd.get_dummies(data_test['Pclass'], prefix= 'Pclass')df_test = pd.concat([data_test, dummies_Cabin, dummies_Embarked, dummies_Sex, dummies_Pclass], axis=1) df_test.drop(['Pclass', 'Name', 'Sex', 'Ticket', 'Cabin', 'Embarked'], axis=1, inplace=True) df_test['Age_scaled'] = scaler.fit_transform(df_test['Age'], age_scale_param) df_test['Fare_scaled'] = scaler.fit_transform(df_test['Fare'], fare_scale_param) df_test test = df_test.filter(regex='Age_.*|SibSp|Parch|Fare_.*|Cabin_.*|Embarked_.*|Sex_.*|Pclass_.*') predictions = clf.predict(test) result = pd.DataFrame({'PassengerId':data_test['PassengerId'].as_matrix(), 'Survived':predictions.astype(np.int32)}) result.to_csv("logistic_regression_predictions.csv", index=False)0.76555,恩,結果還不錯。畢竟,這只是我們簡單分析過后出的一個baseline系統嘛.
We use scikit-learn's learning_curve to help us figure out what state our model is in (underfitting vs. overfitting):
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.learning_curve import learning_curve

# Use sklearn's learning_curve to get training_score and cv_score,
# then draw the learning curve with matplotlib
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1,
                        train_sizes=np.linspace(.05, 1., 20), verbose=0, plot=True):
    """
    Plot the learning curve of a model on the data.

    Parameters
    ----------
    estimator : the classifier you are using
    title : the title of the plot
    X : input features, numpy array
    y : input target vector
    ylim : tuple (ymin, ymax), the lower and upper bound of the y axis
    cv : number of folds for cross-validation; one fold is the cv set,
         the remaining n-1 are the training set (default 3)
    n_jobs : number of parallel jobs (default 1)
    """
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes, verbose=verbose)

    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)

    if plot:
        plt.figure()
        plt.title(title)
        if ylim is not None:
            plt.ylim(*ylim)
        plt.xlabel(u"number of training samples")
        plt.ylabel(u"score")
        plt.gca().invert_yaxis()
        plt.grid()
        plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                         train_scores_mean + train_scores_std, alpha=0.1, color="b")
        plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                         test_scores_mean + test_scores_std, alpha=0.1, color="r")
        plt.plot(train_sizes, train_scores_mean, 'o-', color="b", label=u"training score")
        plt.plot(train_sizes, test_scores_mean, 'o-', color="r", label=u"cross-validation score")
        plt.legend(loc="best")
        plt.draw()
        plt.gca().invert_yaxis()
        plt.show()

    midpoint = ((train_scores_mean[-1] + train_scores_std[-1]) +
                (test_scores_mean[-1] - test_scores_std[-1])) / 2
    diff = (train_scores_mean[-1] + train_scores_std[-1]) - (test_scores_mean[-1] - test_scores_std[-1])
    return midpoint, diff

plot_learning_curve(clf, u"learning curve", X, y)
```

Next, let's use cross-validation to help improve the accuracy:
```python
from sklearn import cross_validation

# A quick look at the cross-validation scores
clf = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6)
all_data = df.filter(regex='Survived|Age_.*|SibSp|Parch|Fare_.*|Cabin_.*|Embarked_.*|Sex_.*|Pclass_.*')
X = all_data.as_matrix()[:, 1:]
y = all_data.as_matrix()[:, 0]
print(cross_validation.cross_val_score(clf, X, y, cv=5))

# Split the data
split_train, split_cv = cross_validation.train_test_split(df, test_size=0.3, random_state=0)
train_df = split_train.filter(regex='Survived|Age_.*|SibSp|Parch|Fare_.*|Cabin_.*|Embarked_.*|Sex_.*|Pclass_.*')
# Build the model
clf = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6)
clf.fit(train_df.as_matrix()[:, 1:], train_df.as_matrix()[:, 0])

# Predict on the cross-validation split
cv_df = split_cv.filter(regex='Survived|Age_.*|SibSp|Parch|Fare_.*|Cabin_.*|Embarked_.*|Sex_.*|Pclass_.*')
predictions = clf.predict(cv_df.as_matrix()[:, 1:])
# split_cv['PredictResult'] = predictions

# Pull the mispredicted cases out of the original dataframe for inspection
origin_data_train = pd.read_csv("train.csv")
bad_cases = origin_data_train.loc[origin_data_train['PassengerId'].isin(
    split_cv[predictions != cv_df.as_matrix()[:, 0]]['PassengerId'].values)]
bad_cases

data_train[data_train['Name'].str.contains("Major")]
```

Looking at the bad cases (for example the passengers whose names contain "Major") suggests engineering new features. Let's rerun the pipeline from scratch, this time adding a Sex_Pclass cross feature:

```python
data_train = pd.read_csv("train.csv")
data_train['Sex_Pclass'] = data_train.Sex + "_" + data_train.Pclass.map(str)

from sklearn.ensemble import RandomForestRegressor

### Fill in the missing Age attribute with RandomForestRegressor
def set_missing_ages(df):
    # Take the existing numeric features and feed them into the Random Forest Regressor
    age_df = df[['Age', 'Fare', 'Parch', 'SibSp', 'Pclass']]
    # Split the passengers into known-age and unknown-age groups
    known_age = age_df[age_df.Age.notnull()].as_matrix()
    unknown_age = age_df[age_df.Age.isnull()].as_matrix()
    # y is the target age
    y = known_age[:, 0]
    # X is the feature matrix
    X = known_age[:, 1:]
    # Fit the RandomForestRegressor
    rfr = RandomForestRegressor(random_state=0, n_estimators=2000, n_jobs=-1)
    rfr.fit(X, y)
    # Predict the unknown ages with the fitted model
    predictedAges = rfr.predict(unknown_age[:, 1:])
    # Fill the original missing values with the predictions
    df.loc[(df.Age.isnull()), 'Age'] = predictedAges
    return df, rfr

def set_Cabin_type(df):
    df.loc[(df.Cabin.notnull()), 'Cabin'] = "Yes"
    df.loc[(df.Cabin.isnull()), 'Cabin'] = "No"
    return df

data_train, rfr = set_missing_ages(data_train)
data_train = set_Cabin_type(data_train)

dummies_Cabin = pd.get_dummies(data_train['Cabin'], prefix='Cabin')
dummies_Embarked = pd.get_dummies(data_train['Embarked'], prefix='Embarked')
dummies_Sex = pd.get_dummies(data_train['Sex'], prefix='Sex')
dummies_Pclass = pd.get_dummies(data_train['Pclass'], prefix='Pclass')
dummies_Sex_Pclass = pd.get_dummies(data_train['Sex_Pclass'], prefix='Sex_Pclass')

df = pd.concat([data_train, dummies_Cabin, dummies_Embarked, dummies_Sex,
                dummies_Pclass, dummies_Sex_Pclass], axis=1)
df.drop(['Pclass', 'Name', 'Sex', 'Ticket', 'Cabin', 'Embarked', 'Sex_Pclass'], axis=1, inplace=True)

import sklearn.preprocessing as preprocessing
scaler = preprocessing.StandardScaler()
age_scale_param = scaler.fit(df['Age'])
df['Age_scaled'] = scaler.fit_transform(df['Age'], age_scale_param)
fare_scale_param = scaler.fit(df['Fare'])
df['Fare_scaled'] = scaler.fit_transform(df['Fare'], fare_scale_param)

from sklearn import linear_model

train_df = df.filter(regex='Survived|Age_.*|SibSp|Parch|Fare_.*|Cabin_.*|Embarked_.*|Sex_.*|Pclass.*')
train_np = train_df.as_matrix()

# y is the Survived outcome
y = train_np[:, 0]
# X is the feature matrix
X = train_np[:, 1:]

# Fit a LogisticRegression model
clf = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6)
clf.fit(X, y)
clf

data_test = pd.read_csv("test.csv")
data_test.loc[(data_test.Fare.isnull()), 'Fare'] = 0
```
```python
data_test['Sex_Pclass'] = data_test.Sex + "_" + data_test.Pclass.map(str)

# Apply the same feature transformations to test_data as to train_data
# First fill in the missing ages with the same RandomForestRegressor model
tmp_df = data_test[['Age', 'Fare', 'Parch', 'SibSp', 'Pclass']]
null_age = tmp_df[data_test.Age.isnull()].as_matrix()
# Predict age from the feature columns X and fill it in
X = null_age[:, 1:]
predictedAges = rfr.predict(X)
data_test.loc[(data_test.Age.isnull()), 'Age'] = predictedAges

data_test = set_Cabin_type(data_test)
dummies_Cabin = pd.get_dummies(data_test['Cabin'], prefix='Cabin')
dummies_Embarked = pd.get_dummies(data_test['Embarked'], prefix='Embarked')
dummies_Sex = pd.get_dummies(data_test['Sex'], prefix='Sex')
dummies_Pclass = pd.get_dummies(data_test['Pclass'], prefix='Pclass')
dummies_Sex_Pclass = pd.get_dummies(data_test['Sex_Pclass'], prefix='Sex_Pclass')

df_test = pd.concat([data_test, dummies_Cabin, dummies_Embarked, dummies_Sex,
                     dummies_Pclass, dummies_Sex_Pclass], axis=1)
df_test.drop(['Pclass', 'Name', 'Sex', 'Ticket', 'Cabin', 'Embarked', 'Sex_Pclass'], axis=1, inplace=True)
df_test['Age_scaled'] = scaler.fit_transform(df_test['Age'], age_scale_param)
df_test['Fare_scaled'] = scaler.fit_transform(df_test['Fare'], fare_scale_param)
df_test

test = df_test.filter(regex='Age_.*|SibSp|Parch|Fare_.*|Cabin_.*|Embarked_.*|Sex_.*|Pclass.*')
predictions = clf.predict(test)
result = pd.DataFrame({'PassengerId': data_test['PassengerId'].as_matrix(),
                       'Survived': predictions.astype(np.int32)})
result.to_csv("logistic_regression_predictions2.csv", index=False)
```

Model ensembling: Bagging
```python
from sklearn.ensemble import BaggingRegressor

train_df = df.filter(regex='Survived|Age_.*|SibSp|Parch|Fare_.*|Cabin_.*|Embarked_.*|Sex_.*|Pclass.*|Mother|Child|Family|Title')
train_np = train_df.as_matrix()

# y is the Survived outcome
y = train_np[:, 0]
# X is the feature matrix
X = train_np[:, 1:]

# Wrap the logistic regression in a BaggingRegressor
clf = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6)
bagging_clf = BaggingRegressor(clf, n_estimators=10, max_samples=0.8, max_features=1.0,
                               bootstrap=True, bootstrap_features=False, n_jobs=-1)
bagging_clf.fit(X, y)

test = df_test.filter(regex='Age_.*|SibSp|Parch|Fare_.*|Cabin_.*|Embarked_.*|Sex_.*|Pclass.*|Mother|Child|Family|Title')
predictions = bagging_clf.predict(test)
result = pd.DataFrame({'PassengerId': data_test['PassengerId'].as_matrix(),
                       'Survived': predictions.astype(np.int32)})
result.to_csv("./logistic_regression_predictions2.csv", index=False)
```
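A design note: since this is really a classification task, sklearn's BaggingClassifier is arguably a more natural wrapper than BaggingRegressor, because it votes over discrete 0/1 predictions instead of averaging floats that we then truncate. A minimal sketch of that alternative, reusing the `X`, `y` and `test` defined above:

```python
from sklearn.ensemble import BaggingClassifier

# Bag 10 logistic-regression classifiers, each trained on 80% of the rows;
# predict() then returns 0/1 labels directly, so no astype() cast is needed
bagging_clf = BaggingClassifier(linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6),
                                n_estimators=10, max_samples=0.8, max_features=1.0,
                                bootstrap=True, bootstrap_features=False, n_jobs=-1)
bagging_clf.fit(X, y)
predictions = bagging_clf.predict(test)
```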
Summary

That's the whole walkthrough of the Titanic prediction code; hopefully it helps you with the problems you've been running into.