FCN, Fully Convolutional Networks (3) -- Reading Fully Convolutional Networks for Semantic Segmentation (Abstract)
1. Abstract
1.1 Understanding it sentence by sentence:
Convolutional networks are powerful visual models that yield hierarchies of features.
That is, convolutional networks are powerful visual models precisely because they produce hierarchies of features.
We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation.
That is, convolutional networks by themselves, trained end-to-end and pixel-to-pixel, reach state-of-the-art results on semantic segmentation.
Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
That is, the key insight is to build “fully convolutional” (not fully connected) networks that read input of arbitrary size and, with efficient inference and learning, produce output of the corresponding size.
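To make the "arbitrary input size, correspondingly-sized output" point concrete, here is a minimal sketch, assuming PyTorch; the layer widths and the TinyFCN name are made up for illustration and are not the paper's architecture. Because every layer is a convolution, pooling, or upsampling layer, the same weights work for any spatial size, and the output map tracks the input size.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Toy fully convolutional net: no fixed-size fully connected layer anywhere."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                   # downsample by 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                   # downsample by 4 overall
        )
        self.score = nn.Conv2d(32, num_classes, kernel_size=1)          # per-location class scores
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=4, stride=4)     # x4, back to input size

    def forward(self, x):
        return self.upsample(self.score(self.features(x)))

net = TinyFCN()
for h, w in [(64, 64), (96, 128)]:                 # two different input sizes
    out = net(torch.randn(1, 3, h, w))
    print(out.shape)                               # torch.Size([1, 21, h, w]): output tracks input
```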
We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models.
That is, the paper defines and details the space of fully convolutional networks, explains how they apply to spatially dense prediction tasks, and draws connections to prior models.
We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task.
That is, the contemporary classification networks AlexNet, VGG, and GoogLeNet are adapted into fully convolutional networks, and their learned representations are transferred to the segmentation task by fine-tuning.
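The "adapt into fully convolutional networks" step amounts to rewriting the fully connected layers as convolutions whose weights are reshaped copies of the FC weights, so the pretrained representation carries over and can then be fine-tuned. A rough sketch of that idea, assuming PyTorch and torchvision (version 0.13 or later for the `weights` argument) with pretrained VGG16; the `conv6`/`conv7` names are just labels:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

vgg = vgg16(weights="IMAGENET1K_V1")   # downloads ImageNet weights

# fc6: Linear(25088 -> 4096) acts on a 7x7x512 feature map,
# which is equivalent to a 7x7 convolution with 4096 output channels.
fc6 = vgg.classifier[0]
conv6 = nn.Conv2d(512, 4096, kernel_size=7)
conv6.weight.data.copy_(fc6.weight.data.view(4096, 512, 7, 7))
conv6.bias.data.copy_(fc6.bias.data)

# fc7: Linear(4096 -> 4096) becomes a 1x1 convolution.
fc7 = vgg.classifier[3]
conv7 = nn.Conv2d(4096, 4096, kernel_size=1)
conv7.weight.data.copy_(fc7.weight.data.view(4096, 4096, 1, 1))
conv7.bias.data.copy_(fc7.bias.data)

# The convolutionalized head now slides over feature maps of any size,
# producing a spatial grid of scores instead of a single vector.
head = nn.Sequential(conv6, nn.ReLU(inplace=True), conv7, nn.ReLU(inplace=True))
features = vgg.features(torch.randn(1, 3, 320, 480))   # arbitrary input size
print(head(features).shape)                            # e.g. torch.Size([1, 4096, 4, 9])
```

After this rewrite the classifier outputs a coarse grid of scores rather than one prediction per image, which is what gets fine-tuned for segmentation.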
We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations.
That is, a skip architecture is then defined that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer, producing accurate and detailed segmentations.
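A minimal sketch of the skip idea, roughly the FCN-16s combination, assuming PyTorch; the `pool4`/`pool5` names follow the paper's VGG-based description, but the shapes and the 1x1 score layers here are illustrative. Coarse class scores from the deep layer are upsampled 2x and summed with scores from the shallower, finer layer, then upsampled to the input resolution.

```python
import torch
import torch.nn as nn

num_classes = 21
score_pool5 = nn.Conv2d(512, num_classes, kernel_size=1)   # scores from deep, coarse layer (stride 32)
score_pool4 = nn.Conv2d(512, num_classes, kernel_size=1)   # scores from shallower, finer layer (stride 16)
up2x  = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=2, stride=2)
up16x = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=16, stride=16)

pool4 = torch.randn(1, 512, 20, 30)   # stride-16 feature map for a 320x480 input
pool5 = torch.randn(1, 512, 10, 15)   # stride-32 feature map

fused = score_pool4(pool4) + up2x(score_pool5(pool5))   # skip: fine scores + upsampled coarse scores
output = up16x(fused)                                   # back to input resolution
print(output.shape)                                     # torch.Size([1, 21, 320, 480])
```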
Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.
That is, the fully convolutional network achieves state-of-the-art segmentation on PASCAL VOC (a 20% relative improvement, to 62.2% mean IU on the 2012 set), NYUDv2, and SIFT Flow, and inference on a typical image takes less than one fifth of a second.
1.2 Summary
1. Build a fully convolutional network: it takes an image of arbitrary size as input and produces output of the corresponding size (which I take to mean the same size as the input).
2. Adapt the contemporary classification networks AlexNet, VGGNet, and GoogLeNet into fully convolutional networks and fine-tune them for segmentation.
3. Experimental results: state-of-the-art results, i.e. the best results to date, on the PASCAL VOC, NYUDv2, and SIFT Flow datasets.