Deep Learning: Recognizing Ling Long (灵笼) Characters with a CNN and VGG19
VGG19 is one of the deeper networks in the VGG family; compared with VGG16 it has more weight layers.
Like the other VGG variants, VGG19 uses the same convolution kernel size (3×3) and max-pooling size (2×2) throughout. Its drawbacks are a long training time, difficult hyperparameter tuning, and a large storage footprint, which makes it inconvenient to deploy.
In this post, Ling Long characters are classified with a plain CNN and with VGG19. For VGG19, both the official Keras model and a hand-built version are used.
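A quick back-of-the-envelope calculation illustrates the storage cost mentioned above. The numbers below are the standard ImageNet VGG19 layer sizes, not anything specific to this post:

```python
# VGG19's footprint is dominated by its fully connected head.
# In the standard ImageNet configuration the last pooling output is 7x7x512.
fc1 = 7 * 7 * 512 * 4096 + 4096   # flatten -> fc1: weights + biases
fc2 = 4096 * 4096 + 4096          # fc1 -> fc2
fc3 = 4096 * 1000 + 1000          # fc2 -> 1000-class ImageNet output
print(fc1)              # 102764544
print(fc1 + fc2 + fc3)  # 123642856 of VGG19's ~143.7M total parameters
```

So the three dense layers alone account for roughly 86% of the parameters; at 4 bytes per float32 weight the full model is on the order of 550 MB, which is why it is heavy to store and deploy.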
1. Importing the libraries
```python
import os
import pathlib

import numpy as np
import PIL
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers, models, Sequential, Input
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
from tensorflow.keras.models import Model
```

2. Loading the data
① Add the file path
```python
data_dir = "E:/tmp/.keras/datasets/linglong_photos"
data_dir = pathlib.Path(data_dir)
```

② Build an ImageDataGenerator
```python
# The training and validation images live in a single folder,
# so one ImageDataGenerator with validation_split is enough.
train_data_gen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=45,
    shear_range=0.2,
    validation_split=0.2,  # split training/validation sets 8:2
    horizontal_flip=True
)
```

③ Split the training and validation sets
```python
train_ds = train_data_gen.flow_from_directory(
    directory=data_dir,
    target_size=(height, width),
    batch_size=batch_size,
    shuffle=True,
    class_mode='categorical',
    subset='training'
)
test_ds = train_data_gen.flow_from_directory(
    directory=data_dir,
    target_size=(height, width),
    batch_size=batch_size,
    class_mode='categorical',
    subset='validation'
)
```

The output is:
```
Found 225 images belonging to 6 classes.
Found 55 images belonging to 6 classes.
```

(225 + 55 = 280 images in total; validation_split is applied within each class folder, so per-class rounding gives 225/55 rather than exactly 224/56.)

3. Setting the hyperparameters
```python
height = 224
width = 224
batch_size = 16
epochs = 10
```

These values are referenced by the flow_from_directory calls above, so in practice they need to be assigned before the code in section 2 runs.

4. Building the CNN
Network structure: four convolution + max-pooling blocks, a Flatten layer, and fully connected layers.
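As a sanity check on this structure: with "same"-padded convolutions, only the pooling layers shrink the feature maps, each halving the height and width, so the Flatten size can be computed by hand:

```python
# 224x224 input; each of the four MaxPooling2D layers halves height and width.
size = 224
for _ in range(4):
    size //= 2
flat = size * size * 128  # the last conv block has 128 channels
print(size, flat)         # 14 25088 -> 25088 features feed the Dense(1024)
```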
```python
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu",
                           input_shape=(height, width, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(6, activation="softmax")
])
```

Compiling and running the network
```python
# class_mode='categorical' with a 6-way softmax calls for categorical
# cross-entropy. The original BinaryCrossentropy(from_logits=True) was a bug:
# the task is not binary, and the model already outputs probabilities,
# not logits.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["acc"])
history = model.fit(train_ds,
                    validation_data=test_ds,
                    epochs=epochs)
```

The results are as follows:
Disappointing; the accuracy just oscillates up and down. Increasing epochs gives the following:
Even after 20 epochs there is no real improvement.
Changing the optimizer:
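The post does not record which optimizer was substituted; as a purely hypothetical sketch, a typical change is from the default Adam to SGD with momentum, or to Adam with an explicitly lowered learning rate:

```python
import tensorflow as tf

# Hypothetical settings -- the original post does not show its exact choice.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9)
# or: optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)

# Recompile with the new optimizer before calling fit() again:
# model.compile(optimizer=optimizer,
#               loss="categorical_crossentropy",
#               metrics=["acc"])
```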
After changing the optimizer, the results improved considerably and the accuracy went up.
5. The VGG19 network
The network structure is as follows:
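In text form, the standard VGG19 ("E") configuration is 16 convolutional layers plus 3 fully connected ones:

```python
# Numbers are conv output channels (all 3x3, 'same' padding); 'M' is a 2x2 max-pool.
VGG19_CFG = [64, 64, 'M',
             128, 128, 'M',
             256, 256, 256, 256, 'M',
             512, 512, 512, 512, 'M',
             512, 512, 512, 512, 'M']
n_conv = sum(1 for v in VGG19_CFG if v != 'M')
print(n_conv)  # 16 conv layers; with fc1, fc2 and the output layer: 19 weight layers
```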
① The official model
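The post does not include the code for the official model, so here is a minimal sketch using tf.keras.applications, assuming training from scratch with a 6-class head (the exact call used in the post is unknown):

```python
import tensorflow as tf

# Hypothetical sketch: the 1000-class ImageNet head is replaced with 6 classes.
# weights=None trains from scratch; weights='imagenet' with include_top=False
# would instead be the usual transfer-learning setup.
model = tf.keras.applications.VGG19(weights=None,
                                    input_shape=(224, 224, 3),
                                    classes=6)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["acc"])
```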
The results (epochs = 10) are:
Compared with the CNN above, this works reasonably well: there are no large oscillations, and the final accuracy is fairly high.
② Building VGG19 by hand
Based on a reference implementation by K同學啊.
```python
def VGG19(nb_classes, input_shape):
    input_ten = Input(shape=input_shape)
    # Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(input_ten)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv4')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv4')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv4')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
    # Fully connected head
    x = Flatten()(x)
    x = Dense(4096, activation='relu', name='fc1')(x)
    x = Dense(4096, activation='relu', name='fc2')(x)
    output_ten = Dense(nb_classes, activation='softmax', name='predictions')(x)
    model = Model(input_ten, output_ten)
    return model

model = VGG19(6, (width, height, 3))
```

The results are as follows:
Honestly baffling; I have no idea why the gap is so large. Comments and corrections from more experienced readers are welcome.
Summary
After changing the optimizer and increasing epochs, the hand-built CNN reaches decent accuracy, but with large fluctuations. VGG19 trains noticeably more slowly, but the official model's accuracy is good and shows none of the CNN's oscillation. Because of hardware limits, the epochs used in these experiments were small; it is worth trying what happens with larger values. The hand-built VGG19 also reaches fairly high final accuracy, but it fluctuates a lot and is unstable, for reasons that remain unclear.