CV Algorithm Reproduction (Classification Algorithms 3/6): VGG (2014, University of Oxford)
Credits: 霹靂吧啦Wz: https://space.bilibili.com/18161609
Contents
1 Key Points
1.1 Python Syntax
1.2 Deep Learning Theory
2 Network Overview
2.1 Historical Significance
2.2 Network Highlights
2.3 Network Architecture
3 Code Structure
3.1 model.py
3.2 train.py
3.3 predict.py
1 Key Points
1.1 Python Syntax
- When defining a function in Python, parameters fall into four kinds: regular parameters, default parameters, non-keyword (variadic) parameters, and keyword parameters.
  - Non-keyword (variadic) parameters: *args
    - Definition: the number of arguments passed in is variable; the caller may supply 0, 1, 2, or many.
    - Purpose: pass many positional arguments to a function at once.
    - Signature: *args
  - Keyword parameters: **kwargs
    - Definition: allow 0 or more named arguments, which are collected into a dict inside the function. When calling, only the required parameters need to be supplied.
    - Purpose: extend a function's flexibility.
    - Signature: **kwargs
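The four parameter kinds above can be illustrated in one minimal sketch (the function name `describe` and its arguments are made up for demonstration):

```python
# regular parameter (name), default parameter (sep),
# variadic parameters (*args), keyword parameters (**kwargs)
def describe(name, sep=": ", *args, **kwargs):
    # args arrives as a tuple of extra positional values;
    # kwargs arrives as a dict of extra named values.
    parts = [str(a) for a in args]
    parts += ["{}={}".format(k, v) for k, v in sorted(kwargs.items())]
    return name + sep + ", ".join(parts)

print(describe("cfg", ": ", 64, "M", depth=16, init=True))
# cfg: 64, M, depth=16, init=True
```

This is exactly the mechanism `make_features(*layers)` and `vgg(**kwargs)` rely on in the model code below: `*` unpacks a list into positional arguments, `**` collects named arguments into a dict.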
1.2 Deep Learning Theory
- Receptive field: in a convolutional neural network, the region of the input that determines a single element of some layer's output is called that element's receptive field.
- RGB mean over all ImageNet images: [123.68, 116.78, 103.94]. When fine-tuning from an ImageNet-pretrained model, subtract this mean from the input images during training.
- Standardization when using an ImageNet-pretrained model: transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]).
- Without a pretrained model: transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]).
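A quick arithmetic sketch of how these numbers relate: `Normalize` runs after `ToTensor` has scaled pixels to [0, 1], and it computes (x - mean) / std per channel. The [0, 255] ImageNet means above correspond roughly to the [0, 1] means used by `Normalize`:

```python
# ImageNet RGB means on the [0, 255] pixel scale
imagenet_rgb_mean = [123.68, 116.78, 103.94]
scaled = [round(m / 255.0, 3) for m in imagenet_rgb_mean]
print(scaled)  # [0.485, 0.458, 0.408] -- close to the (0.485, 0.456, 0.406) used above

# What Normalize computes per channel:
def normalize(x, mean, std):
    return (x - mean) / std

# With mean=std=0.5 (the "no pretrained model" case), [0, 1] maps to [-1, 1]:
print(normalize(0.0, 0.5, 0.5))  # -1.0
print(normalize(1.0, 0.5, 0.5))  # 1.0
```

The small discrepancy (0.458 vs 0.456, 0.408 vs 0.406) comes from the two mean sets being measured by different projects; both are in common use.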
2 Network Overview
2.1 Historical Significance
- VGG was proposed in 2014 by the renowned Visual Geometry Group (VGG) at the University of Oxford. It took 1st place in the Localization Task and 2nd place in the Classification Task of that year's ImageNet competition.
2.2 Network Highlights
- Stacks multiple 3x3 convolutions (with no pooling layers in between) to replace a single large-kernel convolution, reducing the number of parameters while keeping the same receptive field: two 3x3 convolutions replace one 5x5 convolution, and three 3x3 convolutions replace one 7x7 convolution.
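Both claims in this highlight can be checked with a few lines of arithmetic, a minimal sketch:

```python
# Receptive field of stacked stride-1 convolutions: rf = 1 + sum(k - 1)
def receptive_field(kernel_sizes):
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

print(receptive_field([3, 3]))     # 5 -> two 3x3 layers see a 5x5 input region
print(receptive_field([3, 3, 3]))  # 7 -> three 3x3 layers see a 7x7 input region

# Parameter comparison for C input and C output channels (bias ignored):
C = 512
params_7x7 = 7 * 7 * C * C          # one 7x7 conv: 49 * C^2
params_3x3x3 = 3 * (3 * 3 * C * C)  # three stacked 3x3 convs: 27 * C^2
print(params_3x3x3 / params_7x7)    # 27/49 ~ 0.55, i.e. roughly 45% fewer parameters
```

As a bonus, the stacked version also interleaves extra ReLU nonlinearities between the 3x3 layers, which the single large kernel does not have.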
2.3 Network Architecture
- All VGG variants (11/13/16/19 weight layers) share the same layout: blocks of 3x3 convolutions separated by 2x2 max-pooling, followed by three fully connected layers. The per-variant configurations are listed in the cfgs dictionary in model.py below.
3 Code Structure
- model.py
- train.py
- predict.py
3.1 model.py
```python
import torch
import torch.nn as nn


class VGG(nn.Module):
    def __init__(self, features, num_classes=1000, init_weights=False):
        super(VGG, self).__init__()
        self.features = features
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(512 * 7 * 7, 2048),
            nn.ReLU(True),
            nn.Dropout(p=0.5),
            nn.Linear(2048, 2048),
            nn.ReLU(True),
            nn.Linear(2048, num_classes)
        )
        if init_weights:
            self._initialize_weights()

    def forward(self, x):
        # N x 3 x 224 x 224
        x = self.features(x)
        # N x 512 x 7 x 7
        x = torch.flatten(x, start_dim=1)
        # N x 25088
        x = self.classifier(x)
        return x

    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                nn.init.xavier_normal_(m.weight)
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                # nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)


def make_features(cfg: list):  # takes a configuration list
    layers = []
    in_channels = 3
    for v in cfg:
        if v == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            layers += [conv2d, nn.ReLU(inplace=True)]
            in_channels = v
    return nn.Sequential(*layers)  # "*": unpack the list as positional arguments


# Integers are the number of convolution kernels; 'M' marks a max-pooling layer.
cfgs = {
    'vgg11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'vgg19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}


# Instantiates the VGG class.
# **kwargs is a variable-length keyword dict; it may carry the number of
# classes, whether to initialize weights, and so on.
def vgg(model_name="vgg16", **kwargs):
    try:
        cfg = cfgs[model_name]
    except KeyError:
        print("Warning: model name {} not in cfgs dict!".format(model_name))
        exit(-1)
    model = VGG(make_features(cfg), **kwargs)
    return model
```
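As a quick sanity check on the naming: the number in each model name equals the count of conv layers in its cfg list plus the three fully connected layers in the classifier. A standalone sketch, repeating two of the lists from above:

```python
# Each integer entry is one 3x3 conv layer; 'M' is a max-pool (no weights).
cfgs = {
    'vgg11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
              512, 512, 512, 'M', 512, 512, 512, 'M'],
}

for name, cfg in cfgs.items():
    convs = sum(1 for v in cfg if v != 'M')
    print(name, convs + 3)  # conv layers + 3 FC layers
# vgg11 11
# vgg16 16
```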
3.2 train.py
```python
import os
import json

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import transforms, datasets

from model import vgg


def main():
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print("using {} device.".format(device))

    data_transform = {
        "train": transforms.Compose([transforms.RandomResizedCrop(224),
                                     transforms.RandomHorizontalFlip(),
                                     transforms.ToTensor(),
                                     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]),
        "val": transforms.Compose([transforms.Resize((224, 224)),
                                   transforms.ToTensor(),
                                   transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])}

    data_root = os.path.abspath(os.path.join(os.getcwd(), "../.."))  # get data root path
    image_path = os.path.join(data_root, "data_set", "flower_data")  # flower data set path
    assert os.path.exists(image_path), "{} path does not exist.".format(image_path)
    train_dataset = datasets.ImageFolder(root=os.path.join(image_path, "train"),
                                         transform=data_transform["train"])
    train_num = len(train_dataset)

    # {'daisy': 0, 'dandelion': 1, 'roses': 2, 'sunflower': 3, 'tulips': 4}
    flower_list = train_dataset.class_to_idx
    cla_dict = dict((val, key) for key, val in flower_list.items())
    # write dict into json file
    json_str = json.dumps(cla_dict, indent=4)
    with open('class_indices.json', 'w') as json_file:
        json_file.write(json_str)

    batch_size = 32
    nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, 8])  # number of workers
    print('Using {} dataloader workers every process'.format(nw))

    train_loader = torch.utils.data.DataLoader(train_dataset,
                                               batch_size=batch_size, shuffle=True,
                                               num_workers=nw)

    validate_dataset = datasets.ImageFolder(root=os.path.join(image_path, "val"),
                                            transform=data_transform["val"])
    val_num = len(validate_dataset)
    validate_loader = torch.utils.data.DataLoader(validate_dataset,
                                                  batch_size=batch_size, shuffle=False,
                                                  num_workers=nw)
    print("using {} images for training, {} images for validation.".format(train_num, val_num))

    # test_data_iter = iter(validate_loader)
    # test_image, test_label = test_data_iter.next()

    model_name = "vgg16"
    net = vgg(model_name=model_name, num_classes=5, init_weights=True)
    net.to(device)
    loss_function = nn.CrossEntropyLoss()
    optimizer = optim.Adam(net.parameters(), lr=0.0001)

    best_acc = 0.0
    save_path = './{}Net.pth'.format(model_name)
    for epoch in range(30):
        # train
        net.train()
        running_loss = 0.0
        for step, data in enumerate(train_loader, start=0):
            images, labels = data
            optimizer.zero_grad()
            outputs = net(images.to(device))
            loss = loss_function(outputs, labels.to(device))
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()
            # print training progress
            rate = (step + 1) / len(train_loader)
            a = "*" * int(rate * 50)
            b = "." * int((1 - rate) * 50)
            print("\rtrain loss: {:^3.0f}%[{}->{}]{:.3f}".format(int(rate * 100), a, b, loss), end="")
        print()

        # validate
        net.eval()
        acc = 0.0  # accumulate the number of correct predictions per epoch
        with torch.no_grad():
            for val_data in validate_loader:
                val_images, val_labels = val_data
                outputs = net(val_images.to(device))
                predict_y = torch.max(outputs, dim=1)[1]
                acc += (predict_y == val_labels.to(device)).sum().item()
            val_accurate = acc / val_num
            if val_accurate > best_acc:
                best_acc = val_accurate
                torch.save(net.state_dict(), save_path)
            print('[epoch %d] train_loss: %.3f  test_accuracy: %.3f' %
                  (epoch + 1, running_loss / step, val_accurate))

    print('Finished Training')


if __name__ == '__main__':
    main()
```
3.3 predict.py
```python
import json

import torch
from PIL import Image
from torchvision import transforms
import matplotlib.pyplot as plt

from model import vgg

data_transform = transforms.Compose([transforms.Resize((224, 224)),
                                     transforms.ToTensor(),
                                     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# load image
img = Image.open("../tulip.jpg")
plt.imshow(img)
# [N, C, H, W]
img = data_transform(img)
# expand batch dimension
img = torch.unsqueeze(img, dim=0)

# read class_indict
try:
    with open('./class_indices.json', 'r') as json_file:
        class_indict = json.load(json_file)
except Exception as e:
    print(e)
    exit(-1)

# create model
model = vgg(model_name="vgg16", num_classes=5)
# load model weights
model_weight_path = "./vgg16Net.pth"
model.load_state_dict(torch.load(model_weight_path))
model.eval()
with torch.no_grad():
    # predict class
    output = torch.squeeze(model(img))
    predict = torch.softmax(output, dim=0)
    predict_cla = torch.argmax(predict).numpy()
print(class_indict[str(predict_cla)])
plt.show()
```