Deep Learning: Gender Classification with a Convolutional Neural Network (VGG16)
I stumbled upon a dataset on Kaggle aimed at improving the accuracy of models that distinguish men from women. Using transfer learning, I experimented with several convolutional neural networks; the best model reached an accuracy of about 95%. The data was split into training, test, and validation sets, and the final accuracy on the validation set was 93.63%.
1. Importing Libraries
import tensorflow as tf
import matplotlib.pyplot as plt
import os, PIL, pathlib
import pandas as pd
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, models, Input
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, Dense, Flatten, Dropout, BatchNormalization, Activation
from tensorflow.keras.layers import MaxPooling2D, AveragePooling2D, Concatenate, Lambda, GlobalAveragePooling2D
from tensorflow.keras import backend as K

# Support Chinese characters in plots
plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # display minus signs correctly

2. Importing the Data
The original dataset downloaded from Kaggle contains 20,000+ images. Due to hardware constraints, I selected 4,286 of them for the training and test sets and used the rest for validation.
data_dir = "E:/tmp/.keras/datasets/Man_Women/faces_test"
data_dir = pathlib.Path(data_dir)
img_count = len(list(data_dir.glob('*/*')))
print(img_count)

all_images_paths = list(data_dir.glob('*'))
all_images_paths = [str(path) for path in all_images_paths]
# index 6 matches the depth of this particular Windows directory path
all_label_names = [path.split("\\")[6].split(".")[0] for path in all_images_paths]
print(all_label_names)

4286
['man', 'woman']

Parameter settings:
height = 224
width = 224
epochs = 15
batch_size = 32

3. Training and Test Sets
The data is split into training and test sets at an 8:2 ratio.
train_data_gen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,        # normalization
    validation_split=0.2,
    horizontal_flip=True   # horizontal flipping as data augmentation
)

train_ds = train_data_gen.flow_from_directory(
    directory=data_dir,
    target_size=(height, width),
    batch_size=batch_size,
    shuffle=True,
    class_mode='categorical',
    subset='training'
)
test_ds = train_data_gen.flow_from_directory(
    directory=data_dir,
    target_size=(height, width),
    batch_size=batch_size,
    shuffle=True,
    class_mode='categorical',
    subset='validation'
)

Found 3430 images belonging to 2 classes.
Found 856 images belonging to 2 classes.

Displaying sample images:
plt.figure(figsize=(15, 10))  # figure 15 wide by 10 high

for images, labels in train_ds:
    for i in range(30):
        ax = plt.subplot(5, 6, i + 1)
        plt.imshow(images[i])
        plt.title(all_label_names[np.argmax(labels[i])])
        plt.axis("off")
    break
plt.show()

3. Transfer Learning with the VGG16 Network
base_model = tf.keras.applications.VGG16(
    include_top=False,
    weights="imagenet",
    input_shape=(height, width, 3),
    pooling='max'
)
x = base_model.output
x = tf.keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
x = tf.keras.layers.Dense(256, activation='relu')(x)
x = tf.keras.layers.Dropout(rate=.45, seed=123)(x)
output = tf.keras.layers.Dense(2, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=output)

Setting the optimizer:
# Set the optimizer
# initial learning rate
init_learning_rate = 1e-4
lr_sch = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=init_learning_rate,
    decay_steps=50,
    decay_rate=0.96,
    staircase=True
)
gen_optimizer = tf.keras.optimizers.Adam(learning_rate=lr_sch)

Compiling and Training the Network
model.compile(
    optimizer=gen_optimizer,
    # the final layer already applies softmax, so from_logits must be False;
    # categorical cross-entropy matches class_mode='categorical'
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=['accuracy']
)

history = model.fit(
    train_ds,
    epochs=epochs,
    validation_data=test_ds
)

The training results are as follows:
The final model accuracy is around 95%; among all the networks I experimented with, VGG16 achieved the highest accuracy.
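The article does not show how the training curves were produced. Below is a minimal sketch of the usual plotting code for the `history` object returned by `model.fit`; the numeric values in `history_dict` here are a hypothetical stand-in for `history.history`, not results from the article.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so the script also runs headless
import matplotlib.pyplot as plt

# Hypothetical stand-in for history.history returned by model.fit
history_dict = {
    "accuracy":     [0.80, 0.88, 0.92, 0.94, 0.95],
    "val_accuracy": [0.78, 0.85, 0.90, 0.92, 0.93],
    "loss":         [0.55, 0.35, 0.22, 0.16, 0.12],
    "val_loss":     [0.60, 0.42, 0.30, 0.25, 0.22],
}

epochs_range = range(1, len(history_dict["accuracy"]) + 1)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, history_dict["accuracy"], label="Training Accuracy")
plt.plot(epochs_range, history_dict["val_accuracy"], label="Validation Accuracy")
plt.legend()
plt.title("Accuracy")

plt.subplot(1, 2, 2)
plt.plot(epochs_range, history_dict["loss"], label="Training Loss")
plt.plot(epochs_range, history_dict["val_loss"], label="Validation Loss")
plt.legend()
plt.title("Loss")

plt.savefig("training_curves.png")
```

In the real script, replace `history_dict` with `history.history`.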
4. Plotting the Confusion Matrix
Saving the model:
model.save("E:/Users/yqx/PycharmProjects/Man_Women_Rec/model.h5")

Loading the model:
model = tf.keras.models.load_model("E:/Users/yqx/PycharmProjects/Man_Women_Rec/model.h5")

Testing the model on the validation data:
plt.figure(figsize=(50, 50))

# validation_ds: a generator over the held-out validation images,
# built the same way as test_ds above
for images, labels in validation_ds:
    num = 0
    total = 0
    for i in range(64):
        total += 1
        ax = plt.subplot(8, 8, i + 1)
        plt.imshow(images[i])
        img_array = tf.expand_dims(images[i], 0)
        pre = model.predict(img_array)
        if np.argmax(pre) == np.argmax(labels[i]):
            num += 1
        plt.title(all_label_names[np.argmax(pre)])
        plt.axis("off")
    print(total)
    print(num)
    break
plt.suptitle("The acc rating of validation is:{}".format(num / total))
plt.show()
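The loop above counts a prediction as correct when the argmax of the softmax output matches the argmax of the one-hot label. That core accuracy computation can be vectorized with NumPy; the arrays below are hypothetical example values, not outputs of the trained model.

```python
import numpy as np

# Hypothetical softmax outputs for four images and their one-hot labels
preds = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.4, 0.6],   # misclassified: the true class is 0 ('man')
                  [0.3, 0.7]])
labels = np.array([[1, 0],
                   [0, 1],
                   [1, 0],
                   [0, 1]])

# Correct when argmax of the prediction matches argmax of the one-hot label,
# which is the same test the loop above applies per image
correct = np.argmax(preds, axis=1) == np.argmax(labels, axis=1)
acc = correct.mean()
print(acc)  # → 0.75
```

In practice, `preds` would be `model.predict` outputs and `labels` the one-hot batches from the generator.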
Plotting the confusion matrix:
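The article announces a confusion matrix but omits the code for it. Here is a minimal NumPy sketch; the class indices (0 = man, 1 = woman) and the sample label/prediction lists are hypothetical. In practice `y_true` and `y_pred` would come from `np.argmax` over the generator's one-hot labels and over `model.predict` outputs.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=2):
    # cm[i, j] = number of samples whose true class is i and predicted class is j
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical class indices: 0 = man, 1 = woman
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
cm = confusion_matrix(y_true, y_pred)
print(cm)
# [[2 1]
#  [1 2]]
```

The resulting matrix can then be drawn with `plt.imshow(cm)` plus axis labels for the two classes.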
5. Evaluating the Validation Set
model = tf.keras.models.load_model("E:/Users/yqx/PycharmProjects/Man_Women_Rec/model.h5")
model.evaluate(validation_ds)

The final results are as follows:
716/716 [==============================] - 418s 584ms/step - loss: 0.5345 - accuracy: 0.9363
[0.5345107175451417, 0.936279]  # loss value and accuracy

The model's accuracy is fairly high. On Kaggle I have seen models reaching about 99% accuracy; anyone passing by is welcome to give that a try.
Keep at it!