Semantic feature-based visual attention model for pedestrian detection


Author(s): Li, Ning; Gong, Yuan; Xu, Junlin; Gu, Xiaorong; Xu, Tao; Zhou, Huiyu
Date(s)

01/06/2016

Abstract

Objective

Pedestrian detection under video surveillance systems has long been a central topic in computer vision research. Such systems are widely used in train stations, airports, large commercial plazas, and other public places. However, pedestrian detection remains difficult because of complex backgrounds. Given its development in recent years, the visual attention mechanism has attracted increasing interest in object detection and tracking research, and previous studies have achieved substantial progress and breakthroughs. We propose a novel pedestrian detection method based on semantic features under the visual attention mechanism.

Method

The proposed semantic feature-based visual attention model is a spatial-temporal model that consists of two parts: a static visual attention model and a motion visual attention model. The static visual attention model in the spatial domain is constructed by combining bottom-up with top-down attention guidance. Based on the characteristics of pedestrians, Itti's bottom-up visual attention model is improved by intensifying the orientation vectors of elementary visual features to make the visual saliency map suitable for pedestrian detection. Among pedestrian attributes, skin color is selected as the semantic feature for pedestrian detection, and regional and Gaussian models are adopted to construct the skin color model. Skin feature-based visual attention guidance then completes the top-down process. The bottom-up and top-down attention maps are linearly combined using weights obtained from experiments to form the static visual attention model in the spatial domain. The spatial-temporal visual attention model is then constructed from motion features in the temporal domain: starting from the static model, the frame difference method is combined with optical flow to detect motion vectors, the motion vector field is filtered, and the saliency of the motion vectors is evaluated via motion entropy to make the selected motion feature more suitable for the spatial-temporal visual attention model.

Result

Standard datasets and practical videos are selected for the experiments, which are performed on the MATLAB R2012a platform. The experimental results show that our spatial-temporal visual attention model is robust across various scenes, including indoor train station surveillance videos and outdoor scenes with swaying leaves. The proposed model outperforms Itti's visual attention model, the graph-based visual saliency model, the phase spectrum of quaternion Fourier transform model, and Liu's motion channel model in pedestrian detection, achieving a 93% accuracy rate on the test video.

Conclusion

This paper proposes a novel pedestrian detection method based on the visual attention mechanism. A spatial-temporal visual attention model that uses low-level and semantic features is proposed to calculate the saliency map; based on this model, pedestrian targets can be detected through shifts of the focus of attention. The experimental results verify the effectiveness of the proposed attention model for detecting pedestrians.
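To make the skin-color step of the Method concrete, here is a minimal sketch of a 2-D Gaussian skin likelihood in the CbCr chrominance plane. The mean and covariance below are values commonly cited in the skin-detection literature, not the paper's fitted parameters, and the function name is illustrative.

```python
import numpy as np

# Illustrative Gaussian skin model in the (Cb, Cr) plane. These statistics
# are commonly cited in the skin-detection literature; the paper's own
# fitted values are not given in the abstract.
SKIN_MEAN = np.array([117.43, 148.56])
SKIN_COV = np.array([[97.09, 24.47],
                     [24.47, 141.99]])
SKIN_COV_INV = np.linalg.inv(SKIN_COV)

def skin_likelihood(cbcr):
    """Per-pixel Gaussian likelihood of skin given an (H, W, 2) CbCr image,
    scaled to [0, 1] so it can serve as a top-down guidance map."""
    diff = cbcr.astype(np.float64) - SKIN_MEAN
    mahal = np.einsum('...i,ij,...j->...', diff, SKIN_COV_INV, diff)
    likelihood = np.exp(-0.5 * mahal)
    return likelihood / likelihood.max()
```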
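The static model then linearly combines the bottom-up saliency map with the top-down skin-guidance map. A sketch with placeholder weights follows; the abstract only states that the weights were obtained experimentally, so `w_bu` and `w_td` here are assumptions.

```python
import numpy as np

def normalize(saliency_map):
    """Scale a saliency map to [0, 1]; a flat map becomes all zeros."""
    rng = saliency_map.max() - saliency_map.min()
    if rng == 0:
        return np.zeros_like(saliency_map)
    return (saliency_map - saliency_map.min()) / rng

def static_attention(bottom_up, top_down, w_bu=0.4, w_td=0.6):
    """Weighted sum of the normalized bottom-up and top-down maps.
    The weights are illustrative; the paper tunes them experimentally."""
    return w_bu * normalize(bottom_up) + w_td * normalize(top_down)
```

In practice the weights would be selected by a search on a validation set, which matches the abstract's statement that proper weights were obtained from experiments.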
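For the temporal part, one plausible reading of the Method is dense optical flow gated by frame differencing, with motion saliency scored by the entropy of the motion-vector orientations. The sketch below assumes Farneback flow and an orientation-histogram entropy; the abstract names neither the flow algorithm nor the exact entropy formulation, so both are assumptions, as are the function names and thresholds.

```python
import numpy as np
import cv2

def motion_vectors(prev_gray, curr_gray, diff_thresh=15):
    """Dense optical flow gated by frame differencing. Farneback flow is a
    stand-in; the abstract does not name the flow algorithm used."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    moving = cv2.absdiff(prev_gray, curr_gray) > diff_thresh
    flow[~moving] = 0.0  # suppress flow where the frames barely differ
    return flow

def motion_entropy(flow, bins=16):
    """Shannon entropy of motion-vector orientations: one way to evaluate
    the saliency of a (filtered) motion field, as the Method describes."""
    mag = np.hypot(flow[..., 0], flow[..., 1])
    angles = np.arctan2(flow[..., 1], flow[..., 0])[mag > 0]
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```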

Format

application/pdf

Identifier

http://pure.qub.ac.uk/portal/en/publications/semantic-featurebased-visual-attention-model-for-pedestrian-detection(81bbe64e-e2c5-4b55-99c8-f14083e2d348).html

http://dx.doi.org/10.11834/jig.20160605

http://pure.qub.ac.uk/ws/files/62017750/Semantic_Feature_based_Visual_Attention_Model_for_People_Detection.pdf

Language(s)

eng

Rights

info:eu-repo/semantics/openAccess

Source

Li, N., Gong, Y., Xu, J., Gu, X., Xu, T. & Zhou, H. 2016, 'Semantic feature-based visual attention model for pedestrian detection', Journal of Image and Graphics, vol. 21, no. 6, pp. 723-733. DOI: 10.11834/jig.20160605

Type

article