DL/DNN: Training a MultiLayerNetExtend model [6*100 + ReLU + SGD, dropout] on the MNIST dataset to suppress overfitting
Contents
Output Results
Design Approach
Core Code
More Output
Output Results

In brief: without dropout, training accuracy reaches 1.0 while test accuracy plateaus around 0.771; with dropout(0.2), training accuracy stays near 0.63 and test accuracy reaches 0.5269, with a much smaller train/test gap. The full training logs are listed under "More Output" below.
設(shè)計(jì)思路
190417更新
?
?
?
Core Code
# Imports assumed from the "Deep Learning from Scratch" repository layout,
# which provides MultiLayerNetExtend, Trainer, and the MNIST loader.
import numpy as np
from dataset.mnist import load_mnist
from common.multi_layer_net_extend import MultiLayerNetExtend
from common.trainer import Trainer


class RMSprop:
    """RMSprop: decaying average of squared gradients scales each step.
    Defined here for reference; the run below uses plain SGD."""
    def __init__(self, lr=0.01, decay_rate=0.99):
        self.lr = lr
        self.decay_rate = decay_rate
        self.h = None

    def update(self, params, grads):
        if self.h is None:
            self.h = {}
            for key, val in params.items():
                self.h[key] = np.zeros_like(val)
        for key in params.keys():
            self.h[key] *= self.decay_rate
            self.h[key] += (1 - self.decay_rate) * grads[key] * grads[key]
            params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)


class Nesterov:
    """Nesterov's Accelerated Gradient; also unused by the run below."""
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.v = None

    def update(self, params, grads):
        if self.v is None:
            self.v = {}
            for key, val in params.items():
                self.v[key] = np.zeros_like(val)
        for key in params.keys():
            self.v[key] *= self.momentum
            self.v[key] -= self.lr * grads[key]
            params[key] += self.momentum * self.momentum * self.v[key]
            params[key] -= (1 + self.momentum) * self.lr * grads[key]


# Data loading is not shown in the original post; this follows the book's
# overfit_dropout.py, which shrinks the training set to 300 samples to provoke
# overfitting (the logged train accuracies are all multiples of 1/300).
(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True)
x_train = x_train[:300]
t_train = t_train[:300]

use_dropout = True      # set to False for the no-dropout baseline run
dropout_ratio = 0.2

# 6 hidden layers of 100 units each; 'dropout_ration' is the parameter name
# as spelled in the book's MultiLayerNetExtend.
network = MultiLayerNetExtend(input_size=784,
                              hidden_size_list=[100, 100, 100, 100, 100, 100],
                              output_size=10,
                              use_dropout=use_dropout, dropout_ration=dropout_ratio)
trainer = Trainer(network, x_train, t_train, x_test, t_test,
                  epochs=301, mini_batch_size=100,
                  optimizer='sgd', optimizer_param={'lr': 0.01}, verbose=True)
trainer.train()

train_acc_list, test_acc_list = trainer.train_acc_list, trainer.test_acc_list
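The regularization effect comes from a Dropout layer that MultiLayerNetExtend inserts between its hidden layers when use_dropout=True. The original post does not show that layer; the following is a minimal sketch modeled on the Dropout class in the book's common/layers.py, included here only to illustrate the mechanism:

import numpy as np

class Dropout:
    """Minimal dropout layer (illustrative sketch)."""
    def __init__(self, dropout_ratio=0.5):
        self.dropout_ratio = dropout_ratio
        self.mask = None

    def forward(self, x, train_flg=True):
        if train_flg:
            # Keep each unit with probability (1 - dropout_ratio)
            self.mask = np.random.rand(*x.shape) > self.dropout_ratio
            return x * self.mask
        # At test time, scale outputs by the keep probability instead of masking
        return x * (1.0 - self.dropout_ratio)

    def backward(self, dout):
        # Gradients flow only through the units that were kept
        return dout * self.mask

Each training forward pass samples a fresh random mask, so the network behaves like an ensemble of thinned sub-networks; at test time nothing is dropped and the output is scaled by the keep probability instead.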
More Output
1. DNN [6*100 + ReLU, SGD]: accuracy without dropout on the MNIST dataset
train loss:2.3364575765992637
=== epoch:1, train acc:0.10333333333333333, test acc:0.1088 ===
train loss:2.414526554119518
train loss:2.341182306768928
train loss:2.3072782723352496
=== epoch:2, train acc:0.09666666666666666, test acc:0.1103 ===
train loss:2.2600377181768887
train loss:2.263350960525319
train loss:2.2708260374887645
...
=== epoch:298, train acc:1.0, test acc:0.7709 ===
train loss:0.00755416896470134
train loss:0.009934657874546435
train loss:0.008421672959852643
=== epoch:299, train acc:1.0, test acc:0.7712 ===
train loss:0.007142981215285884
train loss:0.008205245499586114
train loss:0.007319626293763803
=== epoch:300, train acc:1.0, test acc:0.7707 ===
train loss:0.00752230499930163
train loss:0.008431046288276818
train loss:0.008067532729014863
=== epoch:301, train acc:1.0, test acc:0.7707 ===
train loss:0.010729407851274233
train loss:0.007776889701033221
=============== Final Test Accuracy ===============
test acc:0.771
2. DNN [6*100 + ReLU, SGD]: accuracy with dropout(0.2) on the MNIST dataset
train loss:2.3064018541384437
=== epoch:1, train acc:0.11, test acc:0.1112 ===
train loss:2.316626942558816
train loss:2.314434337198633
train loss:2.318862771955365
=== epoch:2, train acc:0.11333333333333333, test acc:0.1128 ===
train loss:2.3241989320140717
train loss:2.317694982413387
train loss:2.3079716553885006
...
=== epoch:298, train acc:0.6266666666666667, test acc:0.5168 ===
train loss:1.2359381134877185
train loss:1.2833380447791383
train loss:1.2728131428100005
=== epoch:299, train acc:0.63, test acc:0.52 ===
train loss:1.1687601000183936
train loss:1.1435412548991142
train loss:1.3854277174616834
=== epoch:300, train acc:0.6333333333333333, test acc:0.5244 ===
train loss:1.3039470016588997
train loss:1.2359979876607923
train loss:1.2871396654831204
=== epoch:301, train acc:0.63, test acc:0.5257 ===
train loss:1.1690084424502523
train loss:1.1820777530873694
=============== Final Test Accuracy ===============
test acc:0.5269
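Reading the two logs side by side: without dropout the network memorizes the training set (train acc 1.0) while test accuracy stalls at 0.771, a gap of roughly 0.23; with dropout(0.2) both accuracies are lower, but the train/test gap narrows to about 0.10, which is the overfitting suppression the title refers to. To see this as curves rather than logs, the accuracy lists collected by the Trainer can be plotted. The original post does not include plotting code; below is a minimal matplotlib sketch using the train_acc_list and test_acc_list variables from the core code above:

import numpy as np
import matplotlib.pyplot as plt

# Plot per-epoch train/test accuracy collected by the Trainer
x = np.arange(len(train_acc_list))
plt.plot(x, train_acc_list, marker='o', label='train', markevery=10)
plt.plot(x, test_acc_list, marker='s', label='test', markevery=10)
plt.xlabel("epochs")
plt.ylabel("accuracy")
plt.ylim(0, 1.0)
plt.legend(loc='lower right')
plt.show()

Running this once with use_dropout=False and once with use_dropout=True makes the shrinking train/test gap directly visible.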