Training YOLOv5 (PyTorch) on Your Own Dataset
1. Environment Setup
Clone the official repository: https://github.com/ultralytics/yolov5

2. Install Dependencies
```bash
pip install -U -r requirements.txt
```

3. Prepare the Data
Under the data folder, create three directories: Annotations, images, and ImageSets (the labels directory will be generated later by script). Annotations holds the XML annotation files and images holds the image files; inside ImageSets, create a Main subfolder to hold the train and test lists (generated by the script below). labels will hold the converted label files.
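If you prefer to script this step, here is a minimal sketch that creates the expected layout (assuming the data folder sits inside the cloned yolov5 directory):

```python
# Create the folder layout described above under data/.
import os

for sub in ('Annotations', 'images', 'ImageSets/Main', 'labels'):
    os.makedirs(os.path.join('data', sub), exist_ok=True)
```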
Split the data into a training set and a test set (to make full use of the dataset we only create these two). The resulting lists are written to ImageSets/Main:
```python
# Split the dataset into train/test lists under ImageSets/Main.
# Run this from inside the data/ directory.
import os
import random

trainval_percent = 0.2  # fraction of files written to test.txt; adjust as needed
train_percent = 1
xmlfilepath = 'Annotations'
txtsavepath = 'ImageSets/Main'
total_xml = os.listdir(xmlfilepath)

num = len(total_xml)
indices = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(indices, tv)
train = random.sample(trainval, tr)

# ftrainval = open('ImageSets/Main/trainval.txt', 'w')
ftest = open('ImageSets/Main/test.txt', 'w')
ftrain = open('ImageSets/Main/train.txt', 'w')
# fval = open('ImageSets/Main/val.txt', 'w')

for i in indices:
    name = total_xml[i][:-4] + '\n'  # file name without the .xml extension
    if i in trainval:
        # ftrainval.write(name)
        if i in train:
            ftest.write(name)
        # else:
        #     fval.write(name)
    else:
        ftrain.write(name)

# ftrainval.close()
ftrain.close()
# fval.close()
ftest.close()
```
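As a quick sanity check (run from the same data/ directory), the two lists should partition the dataset with no overlap:

```python
# Verify the split: no image id should appear in both lists.
train_ids = set(open('ImageSets/Main/train.txt').read().split())
test_ids = set(open('ImageSets/Main/test.txt').read().split())
print(len(train_ids), 'train /', len(test_ids), 'test, overlap:', len(train_ids & test_ids))
```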
Next, create a voc_labels script to generate the label files in YOLO format:

```python
# Convert VOC-style XML annotations into YOLO-format .txt label files,
# and write the train/test image-path lists used by training.
import xml.etree.ElementTree as ET
import os
from os import getcwd

sets = ['train', 'test']
classes = ['apple', 'orange']  # the classes you are training on


def convert(size, box):
    # VOC box (xmin, xmax, ymin, ymax) in pixels -> normalized
    # YOLO box (x_center, y_center, width, height).
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0
    y = (box[2] + box[3]) / 2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    return (x * dw, y * dh, w * dw, h * dh)


def convert_annotation(image_id):
    in_file = open('data/Annotations/%s.xml' % image_id)
    out_file = open('data/labels/%s.txt' % image_id, 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text),
             float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')


wd = getcwd()
for image_set in sets:
    if not os.path.exists('data/labels/'):
        os.makedirs('data/labels/')
    image_ids = open('data/ImageSets/Main/%s.txt' % image_set).read().strip().split()
    list_file = open('data/%s.txt' % image_set, 'w')
    for image_id in image_ids:
        list_file.write('data/images/%s.jpg\n' % image_id)
        convert_annotation(image_id)
    list_file.close()
```
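As a quick check of convert(): a box from (xmin=100, ymin=200) to (xmax=300, ymax=300) in a 640×480 image becomes:

```python
# convert takes (width, height) and (xmin, xmax, ymin, ymax).
print(convert((640, 480), (100.0, 300.0, 200.0, 300.0)))
# -> (0.3125, 0.5208333333333334, 0.3125, 0.20833333333333334)
```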
4. Configure the Training Files

Create fruit.yaml under the data directory to describe the training data:
```yaml
# COCO 2017 dataset http://cocodataset.org
# Download command: bash yolov5/data/get_coco2017.sh
# Train command: python train.py --data ./data/coco.yaml
# Dataset should be placed next to yolov5 folder:
#   /parent_folder
#     /coco
#     /yolov5

# train and val datasets (image directory or *.txt file with image paths)
train: xx/xx/train2017.txt  # the train list we generated above; change to your own path
val: xx/xx/val2017.txt      # the test list we generated above
# test: ../coco/test-dev2017.txt  # 20k images for submission to https://competitions.codalab.org/competitions/20794

# number of classes
nc: 2  # number of classes you are training

# class names
names: ['apple', 'orange']

# Print classes
# with open('data/coco.yaml') as f:
#     d = yaml.load(f, Loader=yaml.FullLoader)  # dict
#     for i, x in enumerate(d['names']):
#         print(i, x)
```
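A quick way to verify the config parses and that nc matches the class list, along the lines of the commented-out snippet above (a small sketch assuming PyYAML is installed):

```python
# Sanity-check fruit.yaml: nc must equal the number of class names.
import yaml

with open('data/fruit.yaml') as f:
    d = yaml.load(f, Loader=yaml.FullLoader)
assert d['nc'] == len(d['names']), (d['nc'], d['names'])
print(d['nc'], 'classes:', d['names'])
```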
Then edit the model config under models/ (change whichever yaml you plan to use); for example, yolov5s.yaml:

```yaml
# parameters
nc: 2  # number of classes you are training
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# yolov5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 1-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 2-P2/4
   [-1, 3, Bottleneck, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 4-P3/8
   [-1, 9, Bottleneck, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 6-P4/16
   [-1, 9, Bottleneck, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 8-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, Bottleneck, [1024]],  # 10
  ]

# yolov5 head
head:
  [[-1, 3, Bottleneck, [1024, False]],  # 11
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],  # 12 (P5/32-large)
   [-2, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 3, Bottleneck, [512, False]],
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],  # 17 (P4/16-medium)
   [-2, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 3, Bottleneck, [256, False]],
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],  # 22 (P3/8-small)
   [[], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
```
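For reference, each detection conv in the head outputs na * (nc + 5) channels, where na is the number of anchor pairs per level (3 in the anchors list above) and the 5 covers box x, y, w, h plus objectness:

```python
# Channel count of each nn.Conv2d detection layer for this config.
na, nc = 3, 2  # anchors per level, classes
print(na * (nc + 5))  # -> 21
```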
5. Training

```bash
python train.py --data data/fruit.yaml --cfg models/yolov5s.yaml --weights '' --batch-size 16 --epochs 100
```

If you want to start from pretrained weights, you need to change the code slightly first, otherwise loading will throw an error:
Around line 115 of train.py (as of 2020-06-09):
```python
try:
    # ckpt['model'] = \
    #     {k: v for k, v in ckpt['model'].state_dict().items()
    #      if model.state_dict()[k].numel() == v.numel()}
    # Keep only checkpoint weights whose names exist in the current model
    # AND whose shapes match; layers that depend on nc (the Detect head)
    # are skipped and re-initialized instead of raising a size mismatch.
    ckpt['model'] = \
        {k: v for k, v in ckpt['model'].state_dict().items()
         if k in model.state_dict().keys()
         and model.state_dict()[k].numel() == v.numel()}
    # ... the rest of the original try/except block is unchanged
```
Training command:

```bash
python train.py --data data/fruit.yaml --cfg models/yolov5s.yaml --weights weights/yolov5s.pt --batch-size 16 --epochs 100
```
6. Testing

```bash
python detect.py --source file.jpg  # image
                          file.mp4  # video
                          ./dir  # directory
                          0  # webcam
                          rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa  # rtsp stream
                          http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8  # http stream
```

A couple of small suggestions to close with; try them out for yourself.
A YOLOv5 tip: for training smaller custom datasets, Adam is usually the better choice (note that Adam's learning rate is typically set lower than SGD's), whereas on large datasets SGD tends to give better results than Adam for YOLOv5.
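For illustration only, a minimal sketch of how the two optimizers are typically constructed in PyTorch (the model and learning rates here are placeholders, not YOLOv5's actual setup):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())  # stand-in for the detector

small_dataset = True  # small custom dataset -> Adam; large dataset -> SGD
if small_dataset:
    # Adam usually starts from a smaller learning rate than SGD
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
else:
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.937, nesterov=True)
```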
The official guide is worth a read: https://github.com/ultralytics/yolov5