CatBoost classification with GPU mode enabled
The code is as follows:
########################### Helpers
#################################################################################

import gc
import os
import random

import numpy as np
import pandas as pd
from catboost import CatBoostClassifier, Pool, cv
from sklearn.model_selection import train_test_split

SEED = 0  # assumption: the seed constant is used below but never shown in the source

## -------------------
## Seeder
# :seed to make all processes deterministic
# type: int
def seed_everything(seed=0):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)

## -------------------
def reduce_mem_usage(df, verbose=True):
    # Downcast each numeric column to the smallest dtype that can hold its range.
    numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
    start_mem = df.memory_usage(deep=True).sum() / 1024**2
    for col in df.columns:
        col_type = df[col].dtypes
        if col_type in numerics:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)
            else:
                c_prec = df[col].apply(lambda x: np.finfo(x).precision).max()
                if c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max and c_prec == np.finfo(np.float32).precision:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
    end_mem = df.memory_usage().sum() / 1024**2
    if verbose:
        print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))
    return df
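A quick usage sketch for the two helpers. The toy frame below is illustrative only (my addition, not competition data):

# Illustrative toy frame (not competition data) to exercise the helpers.
seed_everything(SEED)
toy = pd.DataFrame({'a': np.arange(100, dtype=np.int64),
                    'b': np.random.rand(100)})
toy = reduce_mem_usage(toy)  # 'a' fits into int8; 'b' needs float64 precision and stays put
print(toy.dtypes)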
########################### DATA LOAD
#################################################################################
seed_everything(SEED)

print('Load Data')
train_df = pd.read_pickle('../input/ieee-data-minification/train_transaction.pkl')
test_df = pd.read_pickle('../input/ieee-data-minification/test_transaction.pkl')
train_identity = pd.read_pickle('../input/ieee-data-minification/train_identity.pkl')
test_identity = pd.read_pickle('../input/ieee-data-minification/test_identity.pkl')
base_columns = list(train_df) + list(train_identity)
train_df = pd.merge(train_df, train_identity, how='left', on='TransactionID', validate='many_to_one')
test_df = pd.merge(test_df, test_identity, how='left', on='TransactionID', validate='many_to_one')
train_df.drop(["TransactionID", "TransactionDT"], axis=1, inplace=True)
test_df.drop(["TransactionDT"], axis=1, inplace=True)
X = train_df.drop(["isFraud"], axis=1)
y = train_df["isFraud"]
X_Test = test_df.copy()
X_Test.drop(['TransactionID', 'isFraud'], axis=1, inplace=True)  # getting rid of the TransactionID that is in our submission file anyway
# X = reduce_mem_usage(X)
# X_Test = reduce_mem_usage(X_Test)
del train_df, test_df, train_identity, test_identity
gc.collect()
Basic data cleaning
print(f"Before dropna, top missing columns:\n{X.isna().sum().sort_values(ascending = False).head(5)}\n")thresh = 0.80 #how many NA values (%) I think anything more than 80% is a bit too much. This is of course only my opinionX_less_nas = X.dropna(thresh=X.shape[0]*(1-thresh), axis='columns')cols_dropped = list(set(X.columns)-set(X_less_nas.columns))X_Test.drop(cols_dropped, axis=1, inplace=True)# X_less_nas = reduce_mem_usage(X_less_nas) # X_Test = reduce_mem_usage(X_Test)print(f"After dropna, top missing columns:\n{X_less_nas.isna().sum().sort_values(ascending = False).head(5)}")print(f"\nNo. of cols dropped = {len(set(X.columns)-set(X_less_nas.columns))}, or {len(set(X.columns)-set(X_less_nas.columns))/len(X.columns)*100:.2f}% of columns")del X ; gc.collect()Let's build a dictionary containing the categorical features for?catboost's API
Let's build a list containing the categorical features for CatBoost's API:
# according to https://www.kaggle.com/c/ieee-fraud-detection/discussion/101203#latest-607486
Catfeats = ['ProductCD'] + \
           ["card" + f"{i+1}" for i in range(6)] + \
           ["addr" + f"{i+1}" for i in range(2)] + \
           ["P_emaildomain", "R_emaildomain"] + \
           ["M" + f"{i+1}" for i in range(9)] + \
           ["DeviceType", "DeviceInfo"] + \
           ["id_" + f"{i}" for i in range(12, 39)]

# removing columns dropped earlier when we weeded out the empty columns
Catfeats = list(set(Catfeats) - set(cols_dropped))
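As a small sanity check (my addition, not in the original kernel), confirm that every listed categorical feature actually survived the column pruning:

# Sanity check (my addition): all categorical features should still be columns.
missing = [c for c in Catfeats if c not in X_less_nas.columns]
assert not missing, f"categorical features missing from X_less_nas: {missing}"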
Let's define our numerical features as well:
Numfeats = list(set(X_less_nas.columns) - set(cols_dropped) - set(Catfeats))
X_less_nas[Catfeats].head()
Seems good :)

According to CatBoost's official tutorial, it is a good idea to transform NaN values into a number far outside the feature's distribution, so the model can treat missingness as its own signal:
https://github.com/catboost/tutorials/blob/master/python_tutorial.ipynb
Let's do that:
X_less_nas.fillna(-10000, inplace=True)
X_Test.fillna(-10000, inplace=True)
X_less_nas.head()
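One caveat (my addition, not in the original kernel): CatBoost requires categorical feature values to be integers or strings, so a numeric fill in an object-typed column can leave mixed types behind. A minimal safeguard sketch:

# Safeguard sketch (my addition): cast categorical columns to str so the
# -10000 filler becomes just another category value for CatBoost.
X_less_nas[Catfeats] = X_less_nas[Catfeats].astype(str)
X_Test[Catfeats] = X_Test[Catfeats].astype(str)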
Model Fitting
## quick test with AUC
X_tr, X_val, y_tr, y_val = train_test_split(X_less_nas, y, test_size=0.2, random_state=SEED, stratify=y)

cat_params = {
    'loss_function': 'Logloss',
    'custom_loss': ['AUC'],
    'logging_level': 'Silent',
    'task_type': 'GPU',
    'early_stopping_rounds': 100,
}

simple_model = CatBoostClassifier(**cat_params)
simple_model.fit(
    X_tr, y_tr,
    cat_features=Catfeats,
    eval_set=(X_val, y_val),
    plot=True,
)

# cv_params = simple_model.get_params()
# cv_data = cv(
#     Pool(X.iloc[:2000, :5], y[:2000], cat_features=[1]),
#     cv_params, nfold=4,
#     plot=True,
# )
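After the quick fit, the validation metrics can be read back; get_best_score and get_best_iteration are part of CatBoost's public API. On a multi-GPU machine you could also pass devices='0' alongside task_type='GPU' to pin the run to one card.

# Read back the quick-test metrics; AUC is present because it was requested
# via custom_loss above.
print(simple_model.get_best_score())      # e.g. {'learn': {...}, 'validation': {'Logloss': ..., 'AUC': ...}}
print(simple_model.get_best_iteration())  # iteration picked by early stopping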
Looks very promising. Let's train on all available data; I'll do cross-validation later.
# final training on the whole training set
simple_model.fit(
    X_less_nas, y,
    cat_features=Catfeats,
    logging_level='Silent',
)
submission = pd.read_csv('../input/ieee-fraud-detection/sample_submission.csv')
submission['isFraud'] = simple_model.predict_proba(X_Test)[:, 1]  # we must predict a probability for the isFraud variable
submission.to_csv('simple_model_Catboost.csv', index=False)
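To reuse the trained model later (for instance, for the planned cross-validation) without refitting, CatBoost can serialize it via its save_model/load_model API. A minimal sketch:

# Persist the fitted classifier and restore it in a later session.
simple_model.save_model('simple_model_catboost.cbm')

restored = CatBoostClassifier()
restored.load_model('simple_model_catboost.cbm')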
Reference:
[1] https://www.kaggle.com/pipboyguy/catboost-and-eda/output