Deep Learning R-FCN: An Introduction to the R-FCN Algorithm (Paper Overview), Detailed Architecture Explanation, and Application Examples: A Detailed Illustrated Guide
Contents
Introduction to the R-FCN Algorithm (Paper Overview)
1、Motivation: Sharing is Caring
7、Experimental Results under Various Strategies
Detailed Explanation of the R-FCN Architecture
Application Examples of the R-FCN Algorithm
Related Articles
Deep Learning R-FCN: An Introduction to the R-FCN Algorithm (Paper Overview), Detailed Architecture Explanation, and Application Examples: A Detailed Illustrated Guide
Deep Learning R-FCN: Detailed Explanation of the R-FCN Architecture
Introduction to the R-FCN Algorithm (Paper Overview)
Abstract
We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast/Faster R-CNN [6, 18] that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets) [9], for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6% mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20× faster than the Faster R-CNN counterpart. Code is made publicly available at: https://github.com/daijifeng001/r-fcn.
Conclusion and Future Work
We presented Region-based Fully Convolutional Networks, a simple but accurate and efficient framework for object detection. Our system naturally adopts the state-of-the-art image classification backbones, such as ResNets, that are by design fully convolutional. Our method achieves accuracy competitive with the Faster R-CNN counterpart, but is much faster during both training and inference. We intentionally keep the R-FCN system presented in the paper simple. There have been a series of orthogonal extensions of FCNs that were developed for semantic segmentation (e.g., see [2]), as well as extensions of region-based methods for object detection (e.g., see [9, 1, 22]). We expect our system will easily enjoy the benefits of the progress in the field.
Paper
Jifeng Dai, Yi Li, Kaiming He, Jian Sun.
R-FCN: Object detection via region-based fully convolutional networks. NIPS, 2016
https://arxiv.org/abs/1605.06409
1、Motivation: Sharing is Caring
R-FCN restructures Faster R-CNN by moving all of the convolutional layers that followed the RoI layer to before it, and uses position-sensitive score maps to score each category. This greatly increases detection speed while maintaining high localization accuracy.
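The position-sensitive pooling this motivates can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's released implementation: the function name `ps_roi_pool`, the channel ordering, and the use of average pooling inside each bin are assumptions made for clarity.

```python
import numpy as np

def ps_roi_pool(score_maps, roi, k=3, num_classes=21):
    """Position-sensitive RoI pooling over k*k*num_classes score maps.

    score_maps: array of shape (k*k*num_classes, H, W)
    roi: (x1, y1, x2, y2) in feature-map coordinates
    Returns a (num_classes,) vector of per-class scores.
    """
    x1, y1, x2, y2 = roi
    bin_h = (y2 - y1) / k
    bin_w = (x2 - x1) / k
    pooled = np.zeros((num_classes, k, k))
    for i in range(k):            # bin row
        for j in range(k):        # bin column
            ys = int(np.floor(y1 + i * bin_h))
            ye = int(np.ceil(y1 + (i + 1) * bin_h))
            xs = int(np.floor(x1 + j * bin_w))
            xe = int(np.ceil(x1 + (j + 1) * bin_w))
            for c in range(num_classes):
                # the (i, j)-th bin reads only its dedicated channel group,
                # which is what makes the pooling position-sensitive
                ch = (i * k + j) * num_classes + c
                pooled[c, i, j] = score_maps[ch, ys:ye, xs:xe].mean()
    # voting: average the k*k bin responses into one score per class
    return pooled.mean(axis=(1, 2))
```

Because every layer before this pooling is convolutional and shared across the whole image, only this cheap pooling-and-voting step runs per RoI, which is where the speedup over Faster R-CNN's per-region subnetwork comes from.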
7、Experimental Results under Various Strategies
1、The Atrous Convolution Trick
Reduces the effective stride of ResNet-101 from 32 pixels to 16 pixels, which increases the resolution of the score maps.
Conv4 and all earlier layers (stride = 16) are left unchanged; the stride = 2 in the first conv5 block is changed to stride = 1.
All convolutional filters in conv5 are modified by the "hole algorithm" (algorithme à trous) to compensate for the reduced stride.
The à trous trick improves mAP by 2.6 points.
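The "holes" can be illustrated with a toy dilated convolution. This is a minimal single-channel NumPy sketch under assumed shapes, not ResNet's actual filters; `dilated_conv2d` is a hypothetical helper performing a "valid" cross-correlation with dilated kernel taps.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """'Valid' 2-D cross-correlation with holes inserted between kernel taps."""
    kh, kw = kernel.shape
    # effective kernel extent once (dilation - 1) holes separate the taps
    eh = dilation * (kh - 1) + 1
    ew = dilation * (kw - 1) + 1
    h, w = x.shape
    out = np.zeros((h - eh + 1, w - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the input at dilated positions, skipping the holes
            patch = x[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = (patch * kernel).sum()
    return out

x = np.arange(36, dtype=float).reshape(6, 6)
k3 = np.ones((3, 3))
dense = dilated_conv2d(x, k3, dilation=1)    # ordinary 3x3 conv
atrous = dilated_conv2d(x, k3, dilation=2)   # same taps, wider receptive field
```

With stride kept at 1 and dilation 2, each conv5 filter covers the same input extent it covered at stride 2, while the output, and hence the score maps, keeps the finer 1/16 resolution.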
2、Effect of Position Sensitivity on fully convolutional strategies
3、Standard Benchmarks: VOC 2007
4、Standard Benchmarks: VOC 2012
5、The Effect of Depth
Detection accuracy improves as the depth increases from 50 to 101, but it saturates when the depth reaches 152.
6、The Effect of Proposal Type
Works pretty well with any proposal method
Selective Search (SS) and Edge Boxes (EB)
Detailed Explanation of the R-FCN Architecture
To be updated.
Deep Learning R-FCN: Detailed Explanation of the R-FCN Architecture
Application Examples of the R-FCN Algorithm
To be updated.