PyTorch ResNet for Image Classification
生活随笔
- 100th Anniversary of the Party's Founding
- ResNet
- Deep Network Degradation
- Implementation
- Residual Block
- Hyperparameters
- ResNet-18 Network
- Getting the Data
- Training
- Testing
- Full Code
100th Anniversary of the Party's Founding
A hundred years of wind and rain, of armored steeds and clashing arms.
Looking back, the years were turbulent;
today, the achievements shine bright.
ResNet
The deep residual network (ResNet), like AlexNet, is a milestone of deep learning.
For a TensorFlow implementation of ResNet, see:
TensorFlow2 千層神經網絡, 始步于此 (TensorFlow 2: thousand-layer networks start here)
深度網(wǎng)絡退化
當網(wǎng)絡深度從 0 增加到 20 的時候, 結果會隨著網(wǎng)絡的深度而變好. 但當網(wǎng)絡超過 20 層的時候, 結果會隨著網(wǎng)絡深度的增加而下降. 網(wǎng)絡的層數(shù)越深, 梯度之間的相關性會越來越差, 模型也更難優(yōu)化.
殘差網(wǎng)絡 (ResNet) 通過增加映射 (Identity) 來解決網(wǎng)絡退化問題. H(x) = F(x) + x通過集合殘差而不是恒等隱射, 保證了網(wǎng)絡不會退化.
Implementation
Residual Block
```python
class BasicBlock(torch.nn.Module):
    """Residual block."""

    def __init__(self, inplanes, planes, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = torch.nn.Conv2d(in_channels=inplanes, out_channels=planes, kernel_size=(3, 3),
                                     stride=(stride, stride), padding=1)  # conv layer 1
        self.bn1 = torch.nn.BatchNorm2d(planes)  # batch norm 1
        self.conv2 = torch.nn.Conv2d(in_channels=planes, out_channels=planes,
                                     kernel_size=(3, 3), padding=1)  # conv layer 2
        self.bn2 = torch.nn.BatchNorm2d(planes)  # batch norm 2

        # If stride != 1, downsample the shortcut with a 1x1 convolution
        if stride != 1:
            self.downsample = torch.nn.Sequential(
                torch.nn.Conv2d(in_channels=inplanes, out_channels=planes,
                                kernel_size=(1, 1), stride=(stride, stride))
            )
        else:
            self.downsample = lambda x: x  # identity shortcut

    def forward(self, input):
        """Forward pass."""
        out = self.conv1(input)
        out = self.bn1(out)
        out = F.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)

        identity = self.downsample(input)
        output = torch.add(out, identity)
        output = F.relu(output)
        return output
```
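A quick way to sanity-check the shapes in BasicBlock is the standard convolution output-size formula, out = floor((n + 2p - k) / s) + 1. This is a framework-free sketch (the helper name is mine); the numbers mirror the 15×15 → 8×8 transition of the first stride-2 block:

```python
def conv_out(n, k, p, s):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# 3x3 conv, padding 1, stride 2: a 15x15 feature map shrinks to 8x8
print(conv_out(15, k=3, p=1, s=2))  # -> 8
# 1x1 downsample conv, padding 0, stride 2: also 15 -> 8, so the
# shortcut and the main branch can be added elementwise
print(conv_out(15, k=1, p=0, s=2))  # -> 8
```

The 1×1 shortcut convolution exists precisely so both branches end up with the same spatial size and channel count.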
Hyperparameters
```python
# Hyperparameters
batch_size = 1024       # samples per training step
learning_rate = 0.0001  # learning rate
iteration_num = 20      # number of epochs
network = ResNet_18
optimizer = torch.optim.Adam(network.parameters(), lr=learning_rate)  # optimizer

# GPU acceleration
use_cuda = torch.cuda.is_available()
if use_cuda:
    network.cuda()
print("Using GPU acceleration:", use_cuda)
print(summary(network, (3, 32, 32)))
```
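With batch_size = 1024 and CIFAR-100's 50,000 training images, each epoch runs 49 steps, which is why the training log later stops at Step 40 (the loop prints every 10th step). A small check:

```python
import math

batch_size = 1024      # as defined above
train_images = 50_000  # CIFAR-100 training-set size

steps_per_epoch = math.ceil(train_images / batch_size)
print(steps_per_epoch)  # -> 49

# The training loop prints every 10th step, so each epoch logs steps:
print([s for s in range(steps_per_epoch) if s % 10 == 0])  # -> [0, 10, 20, 30, 40]
```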
ResNet-18 Network
```python
ResNet_18 = torch.nn.Sequential(
    # Stem
    torch.nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1)),  # convolution
    torch.nn.BatchNorm2d(64),
    torch.nn.ReLU(),
    torch.nn.MaxPool2d((2, 2)),  # pooling

    # 8 blocks (two conv layers each)
    BasicBlock(64, 64, stride=1),
    BasicBlock(64, 64, stride=1),
    BasicBlock(64, 128, stride=2),
    BasicBlock(128, 128, stride=1),
    BasicBlock(128, 256, stride=2),
    BasicBlock(256, 256, stride=1),
    BasicBlock(256, 512, stride=2),
    BasicBlock(512, 512, stride=1),

    torch.nn.AvgPool2d(2),  # pooling
    torch.nn.Flatten(),     # flatten

    # Fully connected layer
    torch.nn.Linear(512, 100)  # 100 classes
)
```
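The per-layer parameter counts in the torchsummary output further down can be reproduced by hand with the standard formulas (helper names below are mine): a Conv2d layer with bias has c_in·c_out·k² + c_out parameters, a Linear layer n_in·n_out + n_out.

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a Conv2d layer with bias: weights + biases."""
    return c_in * c_out * k * k + c_out

def linear_params(n_in, n_out):
    """Parameter count of a fully connected layer with bias."""
    return n_in * n_out + n_out

print(conv_params(3, 64, 3))    # -> 1792: matches Conv2d-1 in the summary output
print(linear_params(512, 100))  # -> 51300: matches Linear-50
```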
Getting the Data
```python
def get_data():
    """Load the data."""
    # Training set
    train = torchvision.datasets.CIFAR100(root="./data", train=True, download=True,
                                          transform=torchvision.transforms.Compose([
                                              torchvision.transforms.ToTensor(),  # convert to tensor
                                              # standardize (note: these are the MNIST statistics;
                                              # CIFAR-specific per-channel stats would be more appropriate)
                                              torchvision.transforms.Normalize((0.1307,), (0.3081,))
                                          ]))
    train_loader = DataLoader(train, batch_size=batch_size)  # batch the training set

    # Test set
    test = torchvision.datasets.CIFAR100(root="./data", train=False, download=True,
                                         transform=torchvision.transforms.Compose([
                                             torchvision.transforms.ToTensor(),  # convert to tensor
                                             torchvision.transforms.Normalize((0.1307,), (0.3081,))  # standardize
                                         ]))
    test_loader = DataLoader(test, batch_size=batch_size)  # batch the test set

    # Return the batched training and test sets
    return train_loader, test_loader
```
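The arithmetic behind transforms.Normalize is simple standardization; a minimal sketch (helper name is mine) of what it does to a single pixel value after ToTensor has scaled it to [0, 1]:

```python
def normalize(pixel, mean=0.1307, std=0.3081):
    """What transforms.Normalize does to each (scaled) pixel value."""
    return (pixel - mean) / std

print(normalize(0.1307))                     # -> 0.0 (a pixel exactly at the mean)
print(round(normalize(0.1307 + 0.3081), 6))  # -> 1.0 (one standard deviation above)
```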
Training
```python
def train(model, epoch, train_loader):
    """Train for one epoch."""
    model.train()  # training mode

    for step, (x, y) in enumerate(train_loader):
        # Move to GPU
        if use_cuda:
            model = model.cuda()
            x, y = x.cuda(), y.cuda()

        optimizer.zero_grad()              # clear gradients
        output = model(x)
        loss = F.cross_entropy(output, y)  # compute the loss
        loss.backward()                    # backpropagate
        optimizer.step()                   # update the parameters

        # Print the loss every 10 steps
        if step % 10 == 0:
            print('Epoch: {}, Step {}, Loss: {}'.format(epoch, step, loss))
```
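F.cross_entropy computes -log(softmax(logits)[target]) per sample. A framework-free sketch (function name is mine) shows why the very first logged loss is close to ln(100): before training, the 100-class logits are essentially uninformative.

```python
import math

def cross_entropy(logits, target):
    """Cross-entropy of one sample: -log(softmax(logits)[target])."""
    m = max(logits)  # subtract the max for numerical stability
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[target]

# With 100 classes and uniform logits the loss is ln(100) ~= 4.605,
# close to the 4.73 seen at step 0 of epoch 0 in the output below.
print(round(cross_entropy([0.0] * 100, 7), 3))  # -> 4.605
```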
Testing
```python
def test(model, test_loader):
    """Evaluate on the test set."""
    model.eval()  # evaluation mode

    correct = 0  # number of correct predictions
    with torch.no_grad():
        for x, y in test_loader:
            # Move to GPU
            if use_cuda:
                model = model.cuda()
                x, y = x.cuda(), y.cuda()

            output = model(x)
            pred = output.argmax(dim=1, keepdim=True)         # predicted class
            correct += pred.eq(y.view_as(pred)).sum().item()  # count correct predictions

    accuracy = correct / len(test_loader.dataset) * 100
    print("Test Accuracy: {}%".format(accuracy))
```
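The argmax / eq / sum pipeline above boils down to top-1 accuracy; a plain-Python sketch (names are mine, not from the post) of the same computation:

```python
def top1_accuracy(outputs, labels):
    """Top-1 accuracy in percent, mirroring argmax + eq + sum above."""
    preds = [max(range(len(row)), key=row.__getitem__) for row in outputs]
    correct = sum(int(p == y) for p, y in zip(preds, labels))
    return correct / len(labels) * 100

outputs = [[0.1, 0.7, 0.2],   # argmax -> class 1
           [0.8, 0.1, 0.1],   # argmax -> class 0
           [0.2, 0.3, 0.5]]   # argmax -> class 2
print(round(top1_accuracy(outputs, [1, 0, 1]), 1))  # -> 66.7
```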
Full Code
Full code:
```python
import torch
import torchvision
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchsummary import summary


class BasicBlock(torch.nn.Module):
    """Residual block."""

    def __init__(self, inplanes, planes, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = torch.nn.Conv2d(in_channels=inplanes, out_channels=planes, kernel_size=(3, 3),
                                     stride=(stride, stride), padding=1)  # conv layer 1
        self.bn1 = torch.nn.BatchNorm2d(planes)  # batch norm 1
        self.conv2 = torch.nn.Conv2d(in_channels=planes, out_channels=planes,
                                     kernel_size=(3, 3), padding=1)  # conv layer 2
        self.bn2 = torch.nn.BatchNorm2d(planes)  # batch norm 2

        # If stride != 1, downsample the shortcut with a 1x1 convolution
        if stride != 1:
            self.downsample = torch.nn.Sequential(
                torch.nn.Conv2d(in_channels=inplanes, out_channels=planes,
                                kernel_size=(1, 1), stride=(stride, stride))
            )
        else:
            self.downsample = lambda x: x  # identity shortcut

    def forward(self, input):
        """Forward pass."""
        out = self.conv1(input)
        out = self.bn1(out)
        out = F.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)

        identity = self.downsample(input)
        output = torch.add(out, identity)
        output = F.relu(output)
        return output


ResNet_18 = torch.nn.Sequential(
    # Stem
    torch.nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1)),  # convolution
    torch.nn.BatchNorm2d(64),
    torch.nn.ReLU(),
    torch.nn.MaxPool2d((2, 2)),  # pooling

    # 8 blocks (two conv layers each)
    BasicBlock(64, 64, stride=1),
    BasicBlock(64, 64, stride=1),
    BasicBlock(64, 128, stride=2),
    BasicBlock(128, 128, stride=1),
    BasicBlock(128, 256, stride=2),
    BasicBlock(256, 256, stride=1),
    BasicBlock(256, 512, stride=2),
    BasicBlock(512, 512, stride=1),

    torch.nn.AvgPool2d(2),  # pooling
    torch.nn.Flatten(),     # flatten

    # Fully connected layer
    torch.nn.Linear(512, 100)  # 100 classes
)

# Hyperparameters
batch_size = 1024       # samples per training step
learning_rate = 0.0001  # learning rate
iteration_num = 20      # number of epochs
network = ResNet_18
optimizer = torch.optim.Adam(network.parameters(), lr=learning_rate)  # optimizer

# GPU acceleration
use_cuda = torch.cuda.is_available()
if use_cuda:
    network.cuda()
print("Using GPU acceleration:", use_cuda)
print(summary(network, (3, 32, 32)))


def get_data():
    """Load the data."""
    # Training set
    train = torchvision.datasets.CIFAR100(root="./data", train=True, download=True,
                                          transform=torchvision.transforms.Compose([
                                              torchvision.transforms.ToTensor(),  # convert to tensor
                                              torchvision.transforms.Normalize((0.1307,), (0.3081,))  # standardize
                                          ]))
    train_loader = DataLoader(train, batch_size=batch_size)  # batch the training set

    # Test set
    test = torchvision.datasets.CIFAR100(root="./data", train=False, download=True,
                                         transform=torchvision.transforms.Compose([
                                             torchvision.transforms.ToTensor(),  # convert to tensor
                                             torchvision.transforms.Normalize((0.1307,), (0.3081,))  # standardize
                                         ]))
    test_loader = DataLoader(test, batch_size=batch_size)  # batch the test set

    # Return the batched training and test sets
    return train_loader, test_loader


def train(model, epoch, train_loader):
    """Train for one epoch."""
    model.train()  # training mode

    for step, (x, y) in enumerate(train_loader):
        # Move to GPU
        if use_cuda:
            model = model.cuda()
            x, y = x.cuda(), y.cuda()

        optimizer.zero_grad()              # clear gradients
        output = model(x)
        loss = F.cross_entropy(output, y)  # compute the loss
        loss.backward()                    # backpropagate
        optimizer.step()                   # update the parameters

        # Print the loss every 10 steps
        if step % 10 == 0:
            print('Epoch: {}, Step {}, Loss: {}'.format(epoch, step, loss))


def test(model, test_loader):
    """Evaluate on the test set."""
    model.eval()  # evaluation mode

    correct = 0  # number of correct predictions
    with torch.no_grad():
        for x, y in test_loader:
            # Move to GPU
            if use_cuda:
                model = model.cuda()
                x, y = x.cuda(), y.cuda()

            output = model(x)
            pred = output.argmax(dim=1, keepdim=True)         # predicted class
            correct += pred.eq(y.view_as(pred)).sum().item()  # count correct predictions

    accuracy = correct / len(test_loader.dataset) * 100
    print("Test Accuracy: {}%".format(accuracy))


def main():
    # Load the data
    train_loader, test_loader = get_data()

    # Iterate over epochs
    for epoch in range(iteration_num):
        print("\n================ epoch: {} ================".format(epoch))
        train(network, epoch, train_loader)
        test(network, test_loader)


if __name__ == "__main__":
    main()
```
Output:
```
Using GPU acceleration: True
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 64, 30, 30]           1,792
       BatchNorm2d-2           [-1, 64, 30, 30]             128
              ReLU-3           [-1, 64, 30, 30]               0
         MaxPool2d-4           [-1, 64, 15, 15]               0
            Conv2d-5           [-1, 64, 15, 15]          36,928
       BatchNorm2d-6           [-1, 64, 15, 15]             128
            Conv2d-7           [-1, 64, 15, 15]          36,928
       BatchNorm2d-8           [-1, 64, 15, 15]             128
        BasicBlock-9           [-1, 64, 15, 15]               0
           Conv2d-10           [-1, 64, 15, 15]          36,928
      BatchNorm2d-11           [-1, 64, 15, 15]             128
           Conv2d-12           [-1, 64, 15, 15]          36,928
      BatchNorm2d-13           [-1, 64, 15, 15]             128
       BasicBlock-14           [-1, 64, 15, 15]               0
           Conv2d-15            [-1, 128, 8, 8]          73,856
      BatchNorm2d-16            [-1, 128, 8, 8]             256
           Conv2d-17            [-1, 128, 8, 8]         147,584
      BatchNorm2d-18            [-1, 128, 8, 8]             256
           Conv2d-19            [-1, 128, 8, 8]           8,320
       BasicBlock-20            [-1, 128, 8, 8]               0
           Conv2d-21            [-1, 128, 8, 8]         147,584
      BatchNorm2d-22            [-1, 128, 8, 8]             256
           Conv2d-23            [-1, 128, 8, 8]         147,584
      BatchNorm2d-24            [-1, 128, 8, 8]             256
       BasicBlock-25            [-1, 128, 8, 8]               0
           Conv2d-26            [-1, 256, 4, 4]         295,168
      BatchNorm2d-27            [-1, 256, 4, 4]             512
           Conv2d-28            [-1, 256, 4, 4]         590,080
      BatchNorm2d-29            [-1, 256, 4, 4]             512
           Conv2d-30            [-1, 256, 4, 4]          33,024
       BasicBlock-31            [-1, 256, 4, 4]               0
           Conv2d-32            [-1, 256, 4, 4]         590,080
      BatchNorm2d-33            [-1, 256, 4, 4]             512
           Conv2d-34            [-1, 256, 4, 4]         590,080
      BatchNorm2d-35            [-1, 256, 4, 4]             512
       BasicBlock-36            [-1, 256, 4, 4]               0
           Conv2d-37            [-1, 512, 2, 2]       1,180,160
      BatchNorm2d-38            [-1, 512, 2, 2]           1,024
           Conv2d-39            [-1, 512, 2, 2]       2,359,808
      BatchNorm2d-40            [-1, 512, 2, 2]           1,024
           Conv2d-41            [-1, 512, 2, 2]         131,584
       BasicBlock-42            [-1, 512, 2, 2]               0
           Conv2d-43            [-1, 512, 2, 2]       2,359,808
      BatchNorm2d-44            [-1, 512, 2, 2]           1,024
           Conv2d-45            [-1, 512, 2, 2]       2,359,808
      BatchNorm2d-46            [-1, 512, 2, 2]           1,024
       BasicBlock-47            [-1, 512, 2, 2]               0
        AvgPool2d-48            [-1, 512, 1, 1]               0
          Flatten-49                  [-1, 512]               0
           Linear-50                  [-1, 100]          51,300
================================================================
Total params: 11,223,140
Trainable params: 11,223,140
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 3.74
Params size (MB): 42.81
Estimated Total Size (MB): 46.56
----------------------------------------------------------------
None
Downloading https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz to ./data/cifar-100-python.tar.gz
169001984/? [00:07<00:00, 23425059.51it/s]
Extracting ./data/cifar-100-python.tar.gz to ./data
Files already downloaded and verified

================ epoch: 0 ================
Epoch: 0, Step 0, Loss: 4.73184871673584
Epoch: 0, Step 10, Loss: 4.262868881225586
Epoch: 0, Step 20, Loss: 3.946244239807129
Epoch: 0, Step 30, Loss: 3.7039854526519775
Epoch: 0, Step 40, Loss: 3.5138051509857178
Test Accuracy: 17.16%

================ epoch: 1 ================
Epoch: 1, Step 0, Loss: 3.3631155490875244
Epoch: 1, Step 10, Loss: 3.183103561401367
Epoch: 1, Step 20, Loss: 3.0515971183776855
Epoch: 1, Step 30, Loss: 2.913054943084717
Epoch: 1, Step 40, Loss: 2.8454060554504395
Test Accuracy: 26.76%

================ epoch: 2 ================
Epoch: 2, Step 0, Loss: 2.764857053756714
Epoch: 2, Step 10, Loss: 2.5304853916168213
Epoch: 2, Step 20, Loss: 2.3920257091522217
Epoch: 2, Step 30, Loss: 2.294809341430664
Epoch: 2, Step 40, Loss: 2.2125251293182373
Test Accuracy: 30.599999999999998%

================ epoch: 3 ================
Epoch: 3, Step 0, Loss: 2.15826678276062
Epoch: 3, Step 10, Loss: 1.9255717992782593
Epoch: 3, Step 20, Loss: 1.7490493059158325
Epoch: 3, Step 30, Loss: 1.6468313932418823
Epoch: 3, Step 40, Loss: 1.5404233932495117
Test Accuracy: 29.659999999999997%

================ epoch: 4 ================
Epoch: 4, Step 0, Loss: 1.4881120920181274
Epoch: 4, Step 10, Loss: 1.3130300045013428
Epoch: 4, Step 20, Loss: 1.119794249534607
Epoch: 4, Step 30, Loss: 1.07780921459198
Epoch: 4, Step 40, Loss: 0.9983140826225281
Test Accuracy: 27.04%

================ epoch: 5 ================
Epoch: 5, Step 0, Loss: 1.0429306030273438
Epoch: 5, Step 10, Loss: 0.9188315868377686
Epoch: 5, Step 20, Loss: 0.7664494514465332
Epoch: 5, Step 30, Loss: 0.8060574531555176
Epoch: 5, Step 40, Loss: 0.7700539231300354
Test Accuracy: 25.629999999999995%

================ epoch: 6 ================
Epoch: 6, Step 0, Loss: 0.8620188236236572
Epoch: 6, Step 10, Loss: 0.8017312288284302
Epoch: 6, Step 20, Loss: 0.6923062801361084
Epoch: 6, Step 30, Loss: 0.6696692109107971
Epoch: 6, Step 40, Loss: 0.6102812886238098
Test Accuracy: 25.45%

================ epoch: 7 ================
Epoch: 7, Step 0, Loss: 0.5835701823234558
Epoch: 7, Step 10, Loss: 0.5514459013938904
Epoch: 7, Step 20, Loss: 0.4809255301952362
Epoch: 7, Step 30, Loss: 0.3889707326889038
Epoch: 7, Step 40, Loss: 0.42040011286735535
Test Accuracy: 25.3%

================ epoch: 8 ================
Epoch: 8, Step 0, Loss: 0.4036518931388855
Epoch: 8, Step 10, Loss: 0.31424838304519653
Epoch: 8, Step 20, Loss: 0.2538606524467468
Epoch: 8, Step 30, Loss: 0.26636990904808044
Epoch: 8, Step 40, Loss: 0.23289920389652252
Test Accuracy: 28.22%

================ epoch: 9 ================
Epoch: 9, Step 0, Loss: 0.20370212197303772
Epoch: 9, Step 10, Loss: 0.21275906264781952
Epoch: 9, Step 20, Loss: 0.1724529266357422
Epoch: 9, Step 30, Loss: 0.16944238543510437
Epoch: 9, Step 40, Loss: 0.11199608445167542
Test Accuracy: 28.17%

================ epoch: 10 ================
Epoch: 10, Step 0, Loss: 0.14693205058574677
Epoch: 10, Step 10, Loss: 0.11063629388809204
Epoch: 10, Step 20, Loss: 0.08746964484453201
Epoch: 10, Step 30, Loss: 0.08660224825143814
Epoch: 10, Step 40, Loss: 0.09079966694116592
Test Accuracy: 29.12%

================ epoch: 11 ================
Epoch: 11, Step 0, Loss: 0.07582048326730728
Epoch: 11, Step 10, Loss: 0.07523166388273239
Epoch: 11, Step 20, Loss: 0.05015444755554199
Epoch: 11, Step 30, Loss: 0.06376209855079651
Epoch: 11, Step 40, Loss: 0.047050636261701584
Test Accuracy: 30.159999999999997%

================ epoch: 12 ================
Epoch: 12, Step 0, Loss: 0.03873936086893082
Epoch: 12, Step 10, Loss: 0.036511268466711044
Epoch: 12, Step 20, Loss: 0.03504694253206253
Epoch: 12, Step 30, Loss: 0.03236941248178482
Epoch: 12, Step 40, Loss: 0.04149263724684715
Test Accuracy: 30.69%

================ epoch: 13 ================
Epoch: 13, Step 0, Loss: 0.02524631842970848
Epoch: 13, Step 10, Loss: 0.02024298906326294
Epoch: 13, Step 20, Loss: 0.01565425843000412
Epoch: 13, Step 30, Loss: 0.03372647985816002
Epoch: 13, Step 40, Loss: 0.03173805773258209
Test Accuracy: 30.61%

================ epoch: 14 ================
Epoch: 14, Step 0, Loss: 0.013597095385193825
Epoch: 14, Step 10, Loss: 0.014107376337051392
Epoch: 14, Step 20, Loss: 0.010056688450276852
Epoch: 14, Step 30, Loss: 0.016869302839040756
Epoch: 14, Step 40, Loss: 0.016789773479104042
Test Accuracy: 30.79%

================ epoch: 15 ================
Epoch: 15, Step 0, Loss: 0.00870730821043253
Epoch: 15, Step 10, Loss: 0.0070304274559021
Epoch: 15, Step 20, Loss: 0.005506859626621008
Epoch: 15, Step 30, Loss: 0.02930188737809658
Epoch: 15, Step 40, Loss: 0.013658527284860611
Test Accuracy: 30.990000000000002%

================ epoch: 16 ================
Epoch: 16, Step 0, Loss: 0.006122640334069729
Epoch: 16, Step 10, Loss: 0.008687378838658333
Epoch: 16, Step 20, Loss: 0.008756318129599094
Epoch: 16, Step 30, Loss: 0.011087586171925068
Epoch: 16, Step 40, Loss: 0.011925156228244305
Test Accuracy: 31.25%

================ epoch: 17 ================
Epoch: 17, Step 0, Loss: 0.00833406113088131
Epoch: 17, Step 10, Loss: 0.004966908134520054
Epoch: 17, Step 20, Loss: 0.003708316246047616
Epoch: 17, Step 30, Loss: 0.020299237221479416
Epoch: 17, Step 40, Loss: 0.010047768242657185
Test Accuracy: 31.540000000000003%

================ epoch: 18 ================
Epoch: 18, Step 0, Loss: 0.0037587652914226055
Epoch: 18, Step 10, Loss: 0.0033208071254193783
Epoch: 18, Step 20, Loss: 0.004131313879042864
Epoch: 18, Step 30, Loss: 0.012251097708940506
Epoch: 18, Step 40, Loss: 0.00844736211001873
Test Accuracy: 31.8%

================ epoch: 19 ================
Epoch: 19, Step 0, Loss: 0.0030041378922760487
Epoch: 19, Step 10, Loss: 0.0028436880093067884
Epoch: 19, Step 20, Loss: 0.0026263371109962463
Epoch: 19, Step 30, Loss: 0.01706080697476864
Epoch: 19, Step 40, Loss: 0.007125745993107557
Test Accuracy: 31.72%
```
Summary
That is the whole of this PyTorch ResNet image-classification walkthrough; hopefully it helps you solve the problems you run into.