Deeper and wider fully convolutional network coupled with conditional random fields for scene labeling


Author(s): Nguyen Thanh, Kien; Fookes, Clinton; Sridharan, Sridha
Date(s)

25/09/2016

Abstract

Deep convolutional neural networks (DCNNs) have been employed in many computer vision tasks with great success due to their robustness in feature learning. One of the advantages of DCNNs is the invariance of their representations to object location, which is useful for object recognition tasks. However, this invariance also discards spatial information, which matters when dealing with the topological structure of the image (e.g. scene labeling, face recognition). In this paper, we propose a deeper and wider network architecture to tackle the scene labeling task. The depth is achieved by incorporating predictions from multiple early layers of the DCNN. The width is achieved by combining multiple outputs of the network. We then further refine the parsing task by adopting graphical models (GMs) as a post-processing step to incorporate spatial and contextual information into the network. The new strategy of a deeper, wider convolutional network coupled with graphical models has shown promising results on the PASCAL-Context dataset.
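A minimal sketch (not the authors' code) of the "deeper" idea described in the abstract: per-pixel class scores predicted at several layers of a fully convolutional network are upsampled to the input resolution and summed before the final per-pixel label decision. The shapes, the layer choices, and the nearest-neighbour upsampling are all illustrative assumptions.

```python
import numpy as np

def upsample_nearest(scores, factor):
    """Upsample a (C, H, W) score map by an integer factor (nearest neighbour)."""
    return scores.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_layer_scores(score_maps, target_hw):
    """Sum class-score maps from multiple layers after upsampling each to target_hw."""
    th, tw = target_hw
    fused = np.zeros((score_maps[0].shape[0], th, tw))
    for s in score_maps:
        factor = th // s.shape[1]       # assumes resolutions divide evenly
        fused += upsample_nearest(s, factor)
    return fused

# Toy example: 3 classes, score maps from layers at 1/4, 1/2, and full
# resolution, fused into an 8x8 per-pixel label map.
rng = np.random.default_rng(0)
maps = [rng.standard_normal((3, 2, 2)),   # coarse, deep layer
        rng.standard_normal((3, 4, 4)),   # intermediate layer
        rng.standard_normal((3, 8, 8))]   # early, fine layer
fused = fuse_layer_scores(maps, (8, 8))
labels = fused.argmax(axis=0)             # per-pixel class labels
```

In the paper this fused map would then be refined by a graphical model (CRF) post-processing step; the sketch stops at the raw per-pixel argmax.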

Format

application/pdf

Identifier

http://eprints.qut.edu.au/95434/

Relation

http://eprints.qut.edu.au/95434/1/ICIP2016.pdf

Nguyen Thanh, Kien, Fookes, Clinton, & Sridharan, Sridha (2016) Deeper and wider fully convolutional network coupled with conditional random fields for scene labeling. In 23rd IEEE International Conference on Image Processing (ICIP 2016), 25-28 September 2016, Phoenix, Arizona.

Rights

Copyright 2016 [Please consult the author]

Source

School of Electrical Engineering & Computer Science; Science & Engineering Faculty

Keywords #Computer Vision #Scene Understanding #Deep Learning

Type

Conference Paper