Modelling human behaviour and event detection from videos


Author(s): Oruganti, Venkata Ramana Murthy
Date(s)

2016

Abstract

Automatic analysis of human behaviour in large collections of videos is gaining interest, even more so with the advent of video-sharing sites such as YouTube. However, challenges remain owing to several factors such as inter- and intra-class variation, cluttered backgrounds, occlusion, camera motion, and changes in scale, viewpoint and illumination. This research focuses on modelling human behaviour for action recognition in videos. The developed techniques are validated on large-scale benchmark datasets and applied to real-world scenarios such as soccer videos. Three major contributions are made. The first contribution is the choice of a suitable feature representation for videos. This involved a study of state-of-the-art action recognition techniques, feature extraction pipelines and dimensionality reduction methods so as to yield the best performance with modest computational requirements. Secondly, temporal modelling of human behaviour is performed. This involved frequency analysis and temporal integration of local information in the video frames to yield a temporal feature vector, whereas current practice mostly averages frame information over an entire video and neglects the temporal order. Lastly, the proposed framework is applied and further adapted to a real-world scenario, soccer videos. To this end, a dataset of video sequences depicting players falling is created from actual match data and used to experimentally evaluate the proposed framework.
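The contrast between averaging frame information and encoding temporal order can be illustrated with a short sketch. The snippet below is not the thesis implementation: it assumes frame-level descriptors have already been extracted into a (T, D) array, and keeping the magnitudes of the first few DFT coefficients (num_coeffs) is purely an illustrative choice. It shows that average pooling is invariant to any reordering of the frames, while a frequency-based temporal encoding is not.

import numpy as np

def average_pooling(frame_features):
    # Mean over time; discards the temporal order of the frames.
    return frame_features.mean(axis=0)

def frequency_encoding(frame_features, num_coeffs=4):
    # Keep the magnitudes of the first DFT coefficients along the time axis,
    # so the descriptor reflects how each feature dimension evolves over time.
    spectrum = np.fft.rfft(frame_features, axis=0)    # shape (T // 2 + 1, D)
    return np.abs(spectrum[:num_coeffs]).reshape(-1)  # shape (num_coeffs * D,)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.standard_normal((60, 16))             # 60 frames, 16-dim descriptors
    shuffled = rng.permutation(video, axis=0)         # temporal order destroyed

    # Average pooling cannot tell the two apart ...
    assert np.allclose(average_pooling(video), average_pooling(shuffled))
    # ... while the frequency-based encoding changes once the order is lost.
    print(np.linalg.norm(frequency_encoding(video) - frequency_encoding(shuffled)))

The thesis uses richer frame-level features and temporal integration schemes; the point of the sketch is only that retaining frequency content preserves ordering information that simple averaging discards.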

Identifier

http://www.canberra.edu.au/researchrepository/items/af263bec-f480-4435-b4eb-78035653f17a/1/

Rights

©2016 Venkata Ramana Murthy Oruganti

Keywords #human activity recognition; computer simulation; body part interest points; body part trajectories; trajectories of moving objects
Type

Thesis - PhD