[PyTorch Series - 35]: Convolutional Neural Networks - Building the LeNet-5 Network with the CIFAR-10 Classification Dataset
Author's homepage (文火冰糖的硅基工坊): 文火冰糖(王文兵)的博客 on CSDN
Article URL: https://blog.csdn.net/HiWangWenBing/article/details/121072835
Table of Contents
Preface: LeNet in Detail
Chapter 1: Business Domain Analysis
1.1 Step 1-1: Business Domain Analysis
1.2 Step 1-2: Business Modeling
1.3 Training the Model
1.4 Validating the Model
1.5 Overall Architecture
1.6 Prerequisites for the Code Example
Chapter 2: Defining the Forward Model
2.1 Step 2-1: Dataset Selection
2.2 Step 2-2: Data Preprocessing - Not Needed in This Case
2.3 Step 2-3: Building the Neural Network
2.4 Step 2-4: Instantiating the Network and Checking Its Output
Chapter 3: Defining the Backward Pass
3.1 Step 3-1: Defining the Loss
3.2 Step 3-2: Defining the Optimizer
3.3 Step 3-3: Training the Model (epochs = 10)
3.4 Visualizing the Loss over Iterations
3.5 Visualizing Per-Batch Training Accuracy
Chapter 4: Model Performance Validation
4.1 Manual Verification
4.2 Accuracy on the Full Training Set: Only ~58%
4.3 Accuracy on the Full Test Set: Only ~55%
Preface: LeNet in Detail
(1) LeNet explained in detail
[AI - Deep Learning - 33]: Convolutional Neural Networks (CNN) - Common Classification Networks - LeNet Structure Analysis: https://blog.csdn.net/HiWangWenBing/article/details/120893764
(2) The official PyTorch definition of LeNet
Neural Networks — PyTorch Tutorials: https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html?highlight=lenet
Chapter 1: Business Domain Analysis
1.1 Step 1-1: Business Domain Analysis
(1) Business requirement: the dataset itself defines the task
[PyTorch Series - 33]: Datasets - torchvision and CIFAR-10 in Detail: https://blog.csdn.net/HiWangWenBing/article/details/121055970
(2) Task analysis
At its core, this task is multi-class classification, specifically a 10-class problem: given an image's feature data (here, the pixel values of its three channels), the model must decide which object class the image belongs to.
Many existing convolutional neural networks can solve classification problems; this article uses LeNet to tackle this fairly simple one.
There are two possible approaches:
- Use the LeNet network that ships with the framework to build the model directly.
- Follow the LeNet architecture and assemble the network yourself from the convolution layers PyTorch provides.
Since LeNet is fairly simple, and to get familiar with PyTorch's nn module, we will try both approaches.
For the more complex networks that come later, we can simply use the ready-made implementations the platform provides rather than building them by hand.
1.2 Step 1-2: Business Modeling
In fact, there is no need to design a new data model here; we can use the existing LeNet model directly, shown below for reference:
1.3 Training the Model
1.4 Validating the Model
1.5 Overall Architecture
1.6 Prerequisites for the Code Example

```python
# Environment setup
import numpy as np                            # NumPy array library
import math                                   # math library
import matplotlib.pyplot as plt              # plotting library
from collections import OrderedDict          # named layers for nn.Sequential

import torch                                 # PyTorch core
import torch.nn as nn                        # neural-network building blocks
import torch.nn.functional as F              # functional API
import torch.utils.data as data_utils        # DataLoader
import torchvision.datasets as dataset       # CIFAR-10 dataset
import torchvision.transforms as transforms  # ToTensor
import torchvision.utils as utils            # make_grid

print("Hello World")
print(torch.__version__)
print(torch.cuda.is_available())
```

Chapter 2: Defining the Forward Model
2.1 Step 2-1: Dataset Selection
(1) The CIFAR-10 dataset
[PyTorch Series - 33]: Datasets - torchvision and CIFAR-10 in Detail: https://blog.csdn.net/HiWangWenBing/article/details/121055970
(2) Sample data and label format
(3) Source code example: download and load the data
```python
# 2-1 Prepare the datasets
train_data = dataset.CIFAR10(root="cifar10", train=True,
                             transform=transforms.ToTensor(), download=True)
test_data = dataset.CIFAR10(root="cifar10", train=False,
                            transform=transforms.ToTensor(), download=True)

print(train_data)
print("size=", len(train_data))
print("")
print(test_data)
print("size=", len(test_data))
```

```
Files already downloaded and verified
Files already downloaded and verified
Dataset CIFAR10
    Number of datapoints: 50000
    Root location: cifar10
    Split: Train
    StandardTransform
Transform: ToTensor()
size= 50000

Dataset CIFAR10
    Number of datapoints: 10000
    Root location: cifar10
    Split: Test
    StandardTransform
Transform: ToTensor()
size= 10000
```

2.2 Step 2-2: Data Preprocessing - Not Needed in This Case
(1) Batched data loading: a DataLoader draws batches from the dataset

```python
# Batched data loading
train_loader = data_utils.DataLoader(dataset=train_data,  # training data
                                     batch_size=64,       # images per batch
                                     shuffle=True)        # shuffle the order
test_loader = data_utils.DataLoader(dataset=test_data,    # test data
                                    batch_size=64,
                                    shuffle=True)

print(train_loader)
print(test_loader)
print(len(train_data), len(train_data)/64)
print(len(test_data), len(test_data)/64)
```

(2) Display one batch of images (for debugging only)

```python
# Display one batch of images
print("Fetch one batch of images")
imgs, labels = next(iter(train_loader))
print(imgs.shape)
print(labels.shape)
print(labels.size()[0])

print("\nMerge into a single grid image")
images = utils.make_grid(imgs)
print(images.shape)
print(labels.shape)

print("\nConvert to imshow format")
images = images.numpy().transpose(1, 2, 0)
print(images.shape)
print(labels.shape)

print("\nShow the sample labels")
for i in range(64):
    print(labels[i], end=" ")
    if (i + 1) % 8 == 0:  # line break every 8 labels
        print()

print("\nShow the images")
plt.imshow(images)
plt.show()
```

```
Fetch one batch of images
torch.Size([64, 3, 32, 32])
torch.Size([64])
64

Merge into a single grid image
torch.Size([3, 274, 274])
torch.Size([64])

Convert to imshow format
(274, 274, 3)
torch.Size([64])

Show the sample labels
tensor(3) tensor(7) tensor(9) tensor(8) tensor(9) tensor(0) tensor(6) tensor(4)
tensor(1) tensor(1) tensor(3) tensor(9) tensor(7) tensor(6) tensor(9) tensor(7)
...

Show the images
```
2.3 Step 2-3: Building the Neural Network
(1) The model
The LeNet-5 network has five layers in total, where each convolution layer and its pooling layer can be treated as a single unit. The structure is:
input → convolution → pooling → convolution → pooling → fully connected → fully connected → fully connected → output.
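This layer chain can be sanity-checked by tracing the feature-map size through the stack. A minimal sketch (the helper names below are hypothetical, not from this post; it assumes unpadded "valid" convolutions, which is what LeNet-5 uses):

```python
# Spatial size after an unpadded convolution or pooling step:
# out = floor((in - kernel) / stride) + 1
def conv_out(size, kernel, stride=1):
    return (size - kernel) // stride + 1

def lenet5_feature_sizes(input_size=32):
    """Trace the spatial size through conv1 -> pool1 -> conv2 -> pool2."""
    sizes = [input_size]
    sizes.append(conv_out(sizes[-1], 5))     # conv1: 5x5 kernel
    sizes.append(conv_out(sizes[-1], 2, 2))  # pool1: 2x2, stride 2
    sizes.append(conv_out(sizes[-1], 5))     # conv2: 5x5 kernel
    sizes.append(conv_out(sizes[-1], 2, 2))  # pool2: 2x2, stride 2
    return sizes

print(lenet5_feature_sizes(32))  # [32, 28, 14, 10, 5]  (CIFAR-10 input)
print(lenet5_feature_sizes(28))  # [28, 24, 12, 8, 4]   (MNIST input)
```

The final 5×5 map with 16 channels is exactly where the first fully-connected layer's in_features = 16 * 5 * 5 = 400 comes from; with 28×28 MNIST inputs the last map shrinks to 4×4, which is why the layer sizes would need adjusting.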
(2) PyTorch nn.Conv2d usage in detail
https://blog.csdn.net/HiWangWenBing/article/details/121051650
(3) PyTorch nn.MaxPool2d usage in detail
https://blog.csdn.net/HiWangWenBing/article/details/121053578
(4) Building the LeNet network from PyTorch convolution layers
In PyTorch, an image batch fed to a network is stored in the order (batch, channels, height, width): batch size, number of channels, height, and width.
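To make that ordering concrete: in a contiguous NCHW tensor, element [n, c, h, w] sits at flat offset ((n*C + c)*H + h)*W + w. A tiny illustration (the helper name is mine, not a PyTorch API):

```python
# Flat offset of element [n, c, h, w] in a contiguous NCHW buffer
def nchw_offset(n, c, h, w, C, H, W):
    return ((n * C + c) * H + h) * W + w

# One CIFAR-10 image is C=3 channels of H=W=32 pixels, i.e. 3072 values;
# the last element of image 0 therefore sits at offset 3071
print(nchw_offset(0, 2, 31, 31, C=3, H=32, W=32))  # 3071
print(nchw_offset(1, 0, 0, 0, C=3, H=32, W=32))    # 3072: image 1 starts here
```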
A special note:
LeNet-5's default input size is 32×32, while MNIST images are 28×28.
Therefore, when using the MNIST dataset, each layer's output feature map differs in size from LeNet-5's defaults and needs slight adjustment. (The CIFAR-10 images used here are already 32×32, so no adjustment is needed.)
For the details of such an adjustment, refer to the code implementation.
Below, the LeNet network is defined in two equivalent ways:
- the official PyTorch way
- a custom way
(5) Code example: building the LeNet network (official version)
```python
# From the official PyTorch tutorial
class LeNet5A(nn.Module):
    def __init__(self):
        super(LeNet5A, self).__init__()
        # 3 input image channels, 6 output channels, 5x5 square convolution kernel
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=6,  kernel_size=5)  # 6 * 28 * 28
        self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5)  # 16 * 10 * 10
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(in_features=16 * 5 * 5, out_features=120)  # 16 * 5 * 5
        self.fc2 = nn.Linear(in_features=120, out_features=84)
        self.fc3 = nn.Linear(in_features=84, out_features=10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square, you can specify it with a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = torch.flatten(x, 1)  # flatten all dimensions except the batch dimension
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        # x = F.log_softmax(x, dim=1)
        return x
```

(6) Code example: building the LeNet network (custom version)
```python
class LeNet5B(nn.Module):
    def __init__(self):
        super(LeNet5B, self).__init__()
        self.feature_convnet = nn.Sequential(OrderedDict([
            ('conv1', nn.Conv2d(in_channels=3, out_channels=6, kernel_size=(5, 5), stride=1)),  # 6 * 28 * 28
            ('relu1', nn.ReLU()),
            ('pool1', nn.MaxPool2d(kernel_size=(2, 2))),  # 6 * 14 * 14
            ('conv2', nn.Conv2d(in_channels=6, out_channels=16, kernel_size=(5, 5))),  # 16 * 10 * 10
            ('relu2', nn.ReLU()),
            ('pool2', nn.MaxPool2d(kernel_size=(2, 2))),  # 16 * 5 * 5
        ]))
        self.class_fc = nn.Sequential(OrderedDict([
            ('fc1', nn.Linear(in_features=16 * 5 * 5, out_features=120)),
            ('relu3', nn.ReLU()),
            ('fc2', nn.Linear(in_features=120, out_features=84)),
            ('relu4', nn.ReLU()),
            ('fc3', nn.Linear(in_features=84, out_features=10)),
        ]))

    def forward(self, img):
        output = self.feature_convnet(img)
        output = output.view(-1, 16 * 5 * 5)  # equivalent to Flatten()
        output = self.class_fc(output)
        return output
```

2.4 Step 2-4: Instantiating the Network and Checking Its Output
```python
net_a = LeNet5A()
print(net_a)

net_b = LeNet5B()
print(net_b)

# 2-4 Check the network's forward output
# (a quick smoke test that the network runs)
print("Define test data")
input = torch.randn(1, 3, 32, 32)

print("\nnet_a, call style 1:")
out = net_a(input)
print(out)
print("net_a, call style 2:")
out = net_a.forward(input)
print(out)

print("\nnet_b, call style 1:")
out = net_b(input)
print(out)
print("net_b, call style 2:")
out = net_b.forward(input)
print(out)
```

```
LeNet5A(
  (conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
LeNet5B(
  (feature_convnet): Sequential(
    (conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
    (relu1): ReLU()
    (pool1): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
    (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
    (relu2): ReLU()
    (pool2): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
  )
  (class_fc): Sequential(
    (fc1): Linear(in_features=400, out_features=120, bias=True)
    (relu3): ReLU()
    (fc2): Linear(in_features=120, out_features=84, bias=True)
    (relu4): ReLU()
    (fc3): Linear(in_features=84, out_features=10, bias=True)
  )
)
Define test data

net_a, call style 1:
tensor([[-0.0969, -0.1226,  0.0581,  0.0373,  0.0028,  0.1278, -0.1044, -0.1068,
         -0.0097,  0.0272]], grad_fn=<AddmmBackward>)
net_a, call style 2:
tensor([[-0.0969, -0.1226,  0.0581,  0.0373,  0.0028,  0.1278, -0.1044, -0.1068,
         -0.0097,  0.0272]], grad_fn=<AddmmBackward>)

net_b, call style 1:
tensor([[-0.0052,  0.0682, -0.1567,  0.0173,  0.0977, -0.0599,  0.0969, -0.0656,
          0.0591, -0.1179]], grad_fn=<AddmmBackward>)
net_b, call style 2:
tensor([[-0.0052,  0.0682, -0.1567,  0.0173,  0.0977, -0.0599,  0.0969, -0.0656,
          0.0591, -0.1179]], grad_fn=<AddmmBackward>)
```

Chapter 3: Defining the Backward Pass
3.1 Step 3-1: Defining the Loss
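Before the one-line definition below, it may help to see what nn.CrossEntropyLoss actually computes on raw logits: log-softmax followed by negative log-likelihood. A hand-rolled single-sample sketch (an illustration only, not the library's implementation):

```python
import math

def cross_entropy(logits, target):
    """Cross-entropy of one sample: log(sum(exp(z))) - z[target]."""
    m = max(logits)  # subtract the max for numerical stability
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[target]

# An untrained 10-class net produces roughly uninformative logits, so its
# loss starts near ln(10)
print(round(cross_entropy([0.0] * 10, 3), 4))  # 2.3026
```

This matches the flat ~2.30 losses printed in the first training epochs below: the network has not yet learned anything beyond chance.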
```python
# 3-1 Define the loss function
loss_fn = nn.CrossEntropyLoss()
print(loss_fn)
```

3.2 Step 3-2: Defining the Optimizer
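The optimizer used below is SGD with momentum. As a rough pure-Python sketch of the per-parameter update it performs (simplified: weight decay, dampening, and Nesterov are ignored; the real logic lives inside torch.optim.SGD):

```python
# Momentum SGD update (simplified):
#   v <- momentum * v + grad
#   w <- w - lr * v
def sgd_momentum_step(w, grad, v, lr=0.001, momentum=0.9):
    v = momentum * v + grad
    w = w - lr * v
    return w, v

w, v = 1.0, 0.0
w, v = sgd_momentum_step(w, grad=2.0, v=v)  # v -> 2.0, w -> 0.998
w, v = sgd_momentum_step(w, grad=2.0, v=v)  # v -> 3.8, the step grows
print(round(w, 6), round(v, 6))  # 0.9942 3.8
```

The velocity term accumulates gradients that keep pointing the same way, which is why momentum = 0.9 helps the model move through the flat early epochs seen in the training log.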
```python
# 3-2 Define the optimizer
net = net_a

Learning_rate = 0.001  # learning rate

# optimizer = SGD: gradient descent (here with momentum)
# parameters: the parameter list to optimize
# lr: the learning rate
# optimizer = torch.optim.Adam(net.parameters(), lr = Learning_rate)
optimizer = torch.optim.SGD(net.parameters(), lr=Learning_rate, momentum=0.9)
print(optimizer)
```

```
SGD (
Parameter Group 0
    dampening: 0
    lr: 0.001
    momentum: 0.9
    nesterov: False
    weight_decay: 0
)
```

3.3 Step 3-3: Training the Model (epochs = 10)
```python
# 3-3 Model training
# number of epochs
epochs = 10

loss_history = []      # loss values recorded during training
accuracy_history = []  # intermediate accuracy results
accuracy_batch = 0.0

for i in range(0, epochs):
    for j, (x_train, y_train) in enumerate(train_loader):
        # (0) reset the optimizer's gradients
        optimizer.zero_grad()

        # (1) forward pass
        y_pred = net(x_train)

        # (2) compute the loss
        loss = loss_fn(y_pred, y_train)

        # (3) backward pass
        loss.backward()

        # (4) update step
        optimizer.step()

        # record the loss for this batch
        loss_history.append(loss.item())

        # record this batch's accuracy on the training set
        number_batch = y_train.size()[0]                     # images in the batch
        _, predicted = torch.max(y_pred.data, dim=1)         # most likely prediction
        correct_batch = (predicted == y_train).sum().item()  # correct predictions
        accuracy_batch = 100 * correct_batch / number_batch  # batch accuracy
        accuracy_history.append(accuracy_batch)

        if (j % 100 == 0):
            print('epoch {} batch {} In {} loss = {:.4f} accuracy = {:.4f}%'.format(
                i, j, len(train_data)/64, loss.item(), accuracy_batch))

print("\nTraining finished")
print("final loss =", loss.item())
print("final accu =", accuracy_batch)
```

```
epoch 0 batch 0 In 781.25 loss = 2.2998 accuracy = 7.8125%
epoch 0 batch 100 In 781.25 loss = 2.3023 accuracy = 9.3750%
epoch 0 batch 200 In 781.25 loss = 2.2958 accuracy = 14.0625%
epoch 0 batch 300 In 781.25 loss = 2.3062 accuracy = 9.3750%
epoch 0 batch 400 In 781.25 loss = 2.2994 accuracy = 7.8125%
epoch 0 batch 500 In 781.25 loss = 2.3016 accuracy = 10.9375%
epoch 0 batch 600 In 781.25 loss = 2.3040 accuracy = 6.2500%
epoch 0 batch 700 In 781.25 loss = 2.3030 accuracy = 4.6875%
...
epoch 19 batch 0 In 781.25 loss = 1.1903 accuracy = 57.8125%
epoch 19 batch 100 In 781.25 loss = 1.1407 accuracy = 62.5000%
epoch 19 batch 200 In 781.25 loss = 1.2654 accuracy = 56.2500%
epoch 19 batch 300 In 781.25 loss = 1.2199 accuracy = 57.8125%
epoch 19 batch 400 In 781.25 loss = 1.2242 accuracy = 64.0625%
epoch 19 batch 500 In 781.25 loss = 1.0288 accuracy = 67.1875%
epoch 19 batch 600 In 781.25 loss = 1.0358 accuracy = 60.9375%
epoch 19 batch 700 In 781.25 loss = 1.1985 accuracy = 65.6250%

Training finished
final loss = 0.8100204467773438
final accu = 68.75
```

3.4 Visualizing the Loss over Iterations
```python
# Plot the loss history
plt.grid()
plt.xlabel("iters")
plt.ylabel("")
plt.title("loss", fontsize=12)
plt.plot(loss_history, "r")
plt.show()
```
3.5 Visualizing Per-Batch Training Accuracy

```python
# Plot the accuracy history
plt.grid()
plt.xlabel("iters")
plt.ylabel("%")
plt.title("accuracy", fontsize=12)
plt.plot(accuracy_history, "b+")
plt.show()
```
Chapter 4: Model Performance Validation
4.1 Manual Verification
```python
# Manual check
index = 0

print("Fetch one batch of samples")
images, labels = next(iter(test_loader))
print(images.shape)
print(labels.shape)
print(labels)

print("\nPredict all samples in the batch")
outputs = net(images)
print(outputs.data.shape)

print("\nPick the most likely class for each sample")
_, predicted = torch.max(outputs, 1)
print(predicted.data.shape)
print(predicted)

print("\nCompare all predictions in the batch")
bool_results = (predicted == labels)
print(bool_results.shape)
print(bool_results)

print("\nCount correct predictions and accuracy")
corrects = bool_results.sum().item()
accuracy = corrects / (len(bool_results))
print("corrects=", corrects)
print("accuracy=", accuracy)

print("\nsample index =", index)
print("label        :", labels[index].item())
print("class scores :", outputs.data[index].numpy())
print("prediction   :", predicted.data[index].item())
print("correct      :", bool_results.data[index].item())
```

```
Fetch one batch of samples
torch.Size([64, 3, 32, 32])
torch.Size([64])
tensor([6, 5, 9, 6, 9, 4, 1, 1, 2, 0, 0, 2, 6, 9, 2, 9, 1, 1, 6, 1, 1, 4, 4, 3,
        1, 9, 9, 7, 3, 9, 6, 3, 7, 9, 1, 4, 8, 0, 9, 4, 5, 8, 8, 4, 4, 8, 0, 4,
        0, 8, 6, 3, 5, 9, 2, 0, 7, 2, 3, 7, 1, 6, 7, 5])

Predict all samples in the batch
torch.Size([64, 10])

Pick the most likely class for each sample
torch.Size([64])
tensor([8, 2, 9, 6, 4, 6, 1, 1, 5, 0, 0, 0, 6, 1, 9, 1, 1, 3, 8, 1, 1, 6, 6, 2,
        1, 9, 9, 7, 5, 9, 6, 1, 5, 9, 1, 1, 8, 0, 0, 4, 6, 8, 8, 2, 4, 8, 0, 4,
        0, 8, 6, 3, 5, 9, 2, 8, 1, 7, 4, 8, 8, 6, 7, 7])

Compare all predictions in the batch
torch.Size([64])
tensor([False, False,  True,  True, False, False,  True,  True, False,  True,
         True, False,  True, False, False, False,  True, False, False,  True,
         True, False, False, False,  True,  True,  True,  True, False,  True,
         True, False, False,  True,  True, False,  True,  True, False,  True,
        False,  True,  True, False,  True,  True,  True,  True,  True,  True,
         True,  True,  True,  True,  True, False, False, False, False, False,
        False,  True,  True, False])

Count correct predictions and accuracy
corrects= 36
accuracy= 0.5625

sample index = 0
label        : 6
class scores : [-0.29385602 -3.5680068  -0.11207527  2.3141491  -1.4549704   0.86066306
  1.0907507  -3.04807     3.4386015  -1.4474211 ]
prediction   : 8
correct      : False
```

4.2 Accuracy on the Full Training Set: Only ~58%
```python
# Evaluate the trained model: overall accuracy on the training set
correct_dataset = 0
total_dataset = 0
accuracy_dataset = 0.0

# no gradient updates during evaluation
with torch.no_grad():
    for i, data in enumerate(train_loader):
        # fetch one batch of samples
        images, labels = data

        # predict all samples in the batch
        outputs = net(images)

        # pick the most likely class for each sample
        _, predicted = torch.max(outputs.data, 1)

        # accumulate the number of samples in the batch
        total_dataset += labels.size()[0]

        # compare all predictions in the batch
        bool_results = (predicted == labels)

        # count correct predictions
        correct_dataset += bool_results.sum().item()

        # running accuracy so far
        accuracy_dataset = 100 * correct_dataset / total_dataset

        if (i % 100 == 0):
            print('batch {} In {} accuracy = {:.4f}'.format(
                i, len(train_data)/64, accuracy_dataset))

print('Final result with the model on the dataset, accuracy =', accuracy_dataset)
```

```
batch 0 In 781.25 accuracy = 50.0000
batch 100 In 781.25 accuracy = 57.7970
batch 200 In 781.25 accuracy = 57.9291
batch 300 In 781.25 accuracy = 57.6256
batch 400 In 781.25 accuracy = 57.6761
batch 500 In 781.25 accuracy = 57.6129
batch 600 In 781.25 accuracy = 57.6071
batch 700 In 781.25 accuracy = 57.4715
Final result with the model on the dataset, accuracy = 57.598
```

4.3 Accuracy on the Full Test Set: Only ~55%
```python
# Evaluate the trained model: overall accuracy on the test set
correct_dataset = 0
total_dataset = 0
accuracy_dataset = 0.0

# no gradient updates during evaluation
with torch.no_grad():
    for i, data in enumerate(test_loader):
        # fetch one batch of samples
        images, labels = data

        # predict all samples in the batch
        outputs = net(images)

        # pick the most likely class for each sample
        _, predicted = torch.max(outputs.data, 1)

        # accumulate the number of samples in the batch
        total_dataset += labels.size()[0]

        # compare all predictions in the batch
        bool_results = (predicted == labels)

        # count correct predictions
        correct_dataset += bool_results.sum().item()

        # running accuracy so far
        accuracy_dataset = 100 * correct_dataset / total_dataset

        if (i % 100 == 0):
            print('batch {} In {} accuracy = {:.4f}'.format(
                i, len(test_data)/64, accuracy_dataset))

print('Final result with the model on the dataset, accuracy =', accuracy_dataset)
```

```
batch 0 In 156.25 accuracy = 54.6875
batch 100 In 156.25 accuracy = 54.5947
Final result with the model on the dataset, accuracy = 54.94
```

Conclusion:
On the MNIST dataset, LeNet can reach accuracy as high as 98%.
On CIFAR-10, its accuracy stays below 60%, so a deeper neural network is needed.