DL之AlexNet: Cat-vs-Dog Classification with an AlexNet-Style Convolutional Neural Network (Image Data Augmentation → Saving an h5 Model)
Contents

Design Approach
Processing and Results
Data Augmentation with ImageDataGenerator
AlexNet-Style Model Code
Cat-vs-Dog Classification with an AlexNet-Style CNN (Image Data Augmentation → Saving an h5 Model)
Design Approach
Processing and Results
Found 17500 images belonging to 2 classes.
Found 7500 images belonging to 2 classes.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 150, 150, 3)       0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 148, 148, 64)      1792
_________________________________________________________________
batch_normalization_1 (Batch (None, 148, 148, 64)      256
_________________________________________________________________
activation_1 (Activation)    (None, 148, 148, 64)      0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 74, 74, 64)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 72, 72, 64)        36928
_________________________________________________________________
batch_normalization_2 (Batch (None, 72, 72, 64)        256
_________________________________________________________________
activation_2 (Activation)    (None, 72, 72, 64)        0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 36, 36, 64)        0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 34, 34, 128)       73856
_________________________________________________________________
batch_normalization_3 (Batch (None, 34, 34, 128)       512
_________________________________________________________________
activation_3 (Activation)    (None, 34, 34, 128)       0
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 17, 17, 128)       0
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 15, 15, 128)       147584
_________________________________________________________________
batch_normalization_4 (Batch (None, 15, 15, 128)       512
_________________________________________________________________
activation_4 (Activation)    (None, 15, 15, 128)       0
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 7, 7, 128)         0
_________________________________________________________________
flatten_1 (Flatten)          (None, 6272)              0
_________________________________________________________________
dense_1 (Dense)              (None, 64)                401472
_________________________________________________________________
batch_normalization_5 (Batch (None, 64)                256
_________________________________________________________________
activation_5 (Activation)    (None, 64)                0
_________________________________________________________________
dropout_1 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 65
_________________________________________________________________
activation_6 (Activation)    (None, 1)                 0
=================================================================
Total params: 663,489
Trainable params: 662,593
Non-trainable params: 896
_________________________________________________________________

Epoch 1/10 - 837s - loss: 0.8109 - binary_accuracy: 0.5731 - val_loss: 0.7552 - val_binary_accuracy: 0.6275
Epoch 2/10 - 972s - loss: 0.6892 - binary_accuracy: 0.6184 - val_loss: 0.6323 - val_binary_accuracy: 0.6538
Epoch 3/10 - 888s - loss: 0.6773 - binary_accuracy: 0.6275 - val_loss: 0.6702 - val_binary_accuracy: 0.6475
Epoch 4/10 - 827s - loss: 0.6503 - binary_accuracy: 0.6522 - val_loss: 1.4757 - val_binary_accuracy: 0.5437
Epoch 5/10 - 775s - loss: 0.6024 - binary_accuracy: 0.6749 - val_loss: 0.5872 - val_binary_accuracy: 0.6975
Epoch 6/10 - 775s - loss: 0.5855 - binary_accuracy: 0.6935 - val_loss: 1.6343 - val_binary_accuracy: 0.5075
Epoch 7/10 - 781s - loss: 0.5725 - binary_accuracy: 0.7117 - val_loss: 1.0417 - val_binary_accuracy: 0.5850
Epoch 8/10 - 770s - loss: 0.5594 - binary_accuracy: 0.7268 - val_loss: 0.6793 - val_binary_accuracy: 0.6150
Epoch 9/10 - 774s - loss: 0.5619 - binary_accuracy: 0.7239 - val_loss: 0.7271 - val_binary_accuracy: 0.5737
Epoch 10/10 - 772s - loss: 0.5206 - binary_accuracy: 0.7485 - val_loss: 1.2269 - val_binary_accuracy: 0.5564

train_history.history:
{'val_loss': [0.7552271389961243, 0.6323019933700561, 0.6702361726760864, 1.4756725096702576, 0.5872411811351776, 1.6343200182914734, 1.0417238283157348, 0.679338448047638, 0.7270535206794739, 1.2268943945566813],
 'val_binary_accuracy': [0.6275, 0.65375, 0.6475, 0.54375, 0.6975, 0.5075, 0.585, 0.615, 0.57375, 0.5564102564102564],
 'loss': [0.8109277236846185, 0.6891729639422509, 0.6772915293132106, 0.6502932430275025, 0.6023876513204267, 0.5855168705025027, 0.5725259766463311, 0.5594036031153894, 0.561434359863551, 0.5205760602989504],
 'binary_accuracy': [0.5730846774193549, 0.6184475806451613, 0.6275201612903226, 0.6522177419354839, 0.6748991935483871, 0.6935483870967742, 0.7116935483870968, 0.7268145161290323, 0.7242424240015974, 0.7484879032258065]}
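The Param # column of the summary above can be double-checked by hand from the standard Keras parameter formulas (Conv2D: kh·kw·in_ch·out_ch + out_ch; BatchNormalization: 4·ch; Dense: in·out + out). A minimal pure-Python sketch (the helper names are illustrative, not from the original code):

```python
def conv_params(k, in_ch, out_ch):
    # k x k kernel weights per input/output channel pair, plus one bias per filter
    return k * k * in_ch * out_ch + out_ch

def bn_params(ch):
    # gamma, beta, moving mean, moving variance
    return 4 * ch

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

counts = [
    conv_params(3, 3, 64),          # conv2d_1  -> 1792
    bn_params(64),                  # bn_1      -> 256
    conv_params(3, 64, 64),         # conv2d_2  -> 36928
    bn_params(64),                  # bn_2      -> 256
    conv_params(3, 64, 128),        # conv2d_3  -> 73856
    bn_params(128),                 # bn_3      -> 512
    conv_params(3, 128, 128),       # conv2d_4  -> 147584
    bn_params(128),                 # bn_4      -> 512
    dense_params(7 * 7 * 128, 64),  # dense_1   -> 401472 (Flatten gives 6272)
    bn_params(64),                  # bn_5      -> 256
    dense_params(64, 1),            # dense_2   -> 65
]
total = sum(counts)
print(total)  # 663489, matching "Total params: 663,489"
```

The non-trainable 896 parameters are the moving mean/variance of the five BatchNormalization layers: 2 · (64 + 64 + 128 + 128 + 64) = 896.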
Data Augmentation with ImageDataGenerator
Data augmentation enlarges the dataset and improves the model's ability to generalize, for example through rotation, deformation, and normalization.

- Expanding the data volume: apply simple preprocessing to the images (e.g. scaling, changing the range of pixel values); randomly shuffle the image order and loop over the image set indefinitely, so the data never runs out; add perturbations to the images, which greatly increases the effective amount of data and avoids the overfitting caused by feeding the model the same training images repeatedly.
- Optimizing training efficiency: training a neural network usually requires splitting the data into small batches (e.g. feeding the network 16 images at a time); with ImageDataGenerator this takes a single argument, batch_size = 16.
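The shuffle-and-loop-forever batching behavior described above can be sketched without Keras. This is not the ImageDataGenerator implementation, just a hypothetical minimal generator (all names here are illustrative) showing infinite shuffled batches with an optional per-image transform:

```python
import random

def infinite_batches(images, labels, batch_size=16, augment=None, seed=None):
    """Yield shuffled (batch_images, batch_labels) pairs forever,
    optionally applying an augmentation/preprocessing function to each image."""
    rng = random.Random(seed)
    indices = list(range(len(images)))
    while True:                          # the generator never runs out of data
        rng.shuffle(indices)             # a fresh random order on every pass
        for start in range(0, len(indices) - batch_size + 1, batch_size):
            batch = indices[start:start + batch_size]
            xs = [images[i] for i in batch]
            if augment is not None:
                xs = [augment(x) for x in xs]
            yield xs, [labels[i] for i in batch]

# Toy usage: "images" are plain pixel values; the transform mimics rescale=1./255
gen = infinite_batches(list(range(100)), [0] * 100, batch_size=16,
                       augment=lambda px: px / 255.0)
xs, ys = next(gen)
print(len(xs))  # 16
```

In Keras itself the same three ideas (preprocessing, shuffling, fixed-size batches on an endless loop) are configured declaratively on ImageDataGenerator rather than written out by hand.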
AlexNet-Style Model Code
# Model definition (Keras functional API); imports and image_size added for
# completeness -- (150, 150) matches the input shape in the model summary.
from keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                          MaxPooling2D, Flatten, Dense, Dropout)

image_size = (150, 150)
n_channels = 3
input_shape = (*image_size, n_channels)

input_layer = Input(input_shape)
z = input_layer

z = Conv2D(64, (3, 3))(z)
z = BatchNormalization()(z)
z = Activation('relu')(z)
z = MaxPooling2D(pool_size=(2, 2))(z)

z = Conv2D(64, (3, 3))(z)
z = BatchNormalization()(z)
z = Activation('relu')(z)
z = MaxPooling2D(pool_size=(2, 2))(z)

z = Conv2D(128, (3, 3))(z)
z = BatchNormalization()(z)
z = Activation('relu')(z)
z = MaxPooling2D(pool_size=(2, 2))(z)

z = Conv2D(128, (3, 3))(z)
z = BatchNormalization()(z)
z = Activation('relu')(z)
z = MaxPooling2D(pool_size=(2, 2))(z)

z = Flatten()(z)            # flatten the feature maps into a 1-D vector
z = Dense(64)(z)
z = BatchNormalization()(z)
z = Activation('relu')(z)
z = Dropout(0.5)(z)
z = Dense(1)(z)
z = Activation('sigmoid')(z)   # single sigmoid unit for binary cat/dog output
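As a sanity check on the Output Shape column of the model summary, the spatial size after each layer can be traced by hand: a 3×3 'valid' convolution with stride 1 shrinks each side by 2, and a 2×2 non-overlapping max pool halves it (floor division). A small sketch (the function name is illustrative):

```python
def trace_spatial_size(size, n_blocks=4, kernel=3, pool=2):
    """Follow one spatial dimension through n conv->pool blocks
    ('valid' padding, stride-1 convs, non-overlapping pooling)."""
    sizes = [size]
    for _ in range(n_blocks):
        size = size - (kernel - 1)   # 3x3 valid conv: 150 -> 148, ...
        sizes.append(size)
        size = size // pool          # 2x2 max pool: 148 -> 74, ...
        sizes.append(size)
    return sizes

print(trace_spatial_size(150))
# [150, 148, 74, 72, 36, 34, 17, 15, 7] -- matching the summary, so the
# Flatten layer sees 7 * 7 * 128 = 6272 features.
```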