819 results for Visual surveillance, Human activity recognition, Video annotation
Abstract:
Most wearable activity recognition systems assume a predefined sensor deployment that remains unchanged during runtime. However, this assumption does not reflect real-life conditions. During the normal use of such systems, users may place the sensors in a position different from the predefined sensor placement. Also, sensors may move from their original location to a different one, due to a loose attachment. Activity recognition systems trained on activity patterns characteristic of a given sensor deployment are therefore likely to fail when sensors are displaced. In this work, we explore the effects of sensor displacement induced by both the intentional misplacement of sensors and self-placement by the user. The effects of sensor displacement are analyzed for standard activity recognition techniques, as well as for an alternative robust sensor fusion method proposed in a previous work. While classical recognition models show little tolerance to sensor displacement, the proposed method is shown to have notable capabilities to assimilate the changes introduced in the sensor position due to self-placement and provides considerable improvements for large misplacements.
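A minimal sketch (not the authors' method; the data, features and rotation model are assumptions) of how a displacement experiment like the one described above could be simulated: a classifier is trained on windows from the nominal placement and then evaluated on the same windows after rotating the sensor frame.

```python
# Sketch: simulate a sensor displacement as an axis rotation and measure how a
# classifier trained on the original placement degrades. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def features(window):
    # Simple per-axis mean/std features from a triaxial accelerometer window.
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Hypothetical data: 200 windows of 50 samples x 3 axes, two activity classes.
X_raw = rng.normal(size=(200, 50, 3)) + rng.integers(0, 2, 200)[:, None, None]
y = (X_raw.mean(axis=(1, 2)) > 0.5).astype(int)

clf = RandomForestClassifier(random_state=0).fit([features(w) for w in X_raw], y)

# Assumed displacement model: rotate the sensor frame by 30 degrees about z.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
X_disp = X_raw @ R.T

print("accuracy, original placement:", clf.score([features(w) for w in X_raw], y))
print("accuracy, displaced sensor:  ", clf.score([features(w) for w in X_disp], y))
```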
Abstract:
This paper presents a novel method that leverages reasoning capabilities in a computer vision system dedicated to human action recognition. The proposed methodology is decomposed into two stages. First, a machine learning based algorithm - known as bag of words - gives a first estimate of action classification from video sequences, by performing an image feature analysis. Those results are afterward passed to a common-sense reasoning system, which analyses, selects and corrects the initial estimation yielded by the machine learning algorithm. This second stage resorts to the knowledge implicit in the rationality that motivates human behaviour. Experiments are performed in realistic conditions, where poor recognition rates by the machine learning techniques are significantly improved by the second stage in which common-sense knowledge and reasoning capabilities have been leveraged. This demonstrates the value of integrating common-sense capabilities into a computer vision pipeline. © 2012 Elsevier B.V. All rights reserved.
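A minimal sketch of the two-stage idea under stated assumptions: the action names, detected objects and the `requires` table are hypothetical, and the rule-based re-ranking only stands in for the common-sense reasoning system described in the abstract.

```python
# Sketch: a bag-of-words classifier produces per-action scores, and a rule-based
# "common-sense" layer re-ranks them using knowledge about the scene context.
import numpy as np

# Hypothetical stage 1 output: normalised scores from a BoW + classifier pipeline.
bow_scores = {"drinking": 0.34, "phoning": 0.36, "typing": 0.30}

# Hypothetical stage 2 knowledge: objects in the scene imply which actions are
# plausible (the rationality behind the behaviour).
scene_objects = {"cup", "desk"}
requires = {"drinking": {"cup"}, "phoning": {"phone"}, "typing": {"keyboard"}}

def commonsense_rerank(scores, objects):
    adjusted = {}
    for action, s in scores.items():
        # Penalise actions whose required objects are absent from the scene.
        plausible = requires[action] <= objects
        adjusted[action] = s if plausible else s * 0.2
    total = sum(adjusted.values())
    return {a: v / total for a, v in adjusted.items()}

print(commonsense_rerank(bow_scores, scene_objects))  # "drinking" now ranks first
```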
Abstract:
Smart Spaces, Ambient Intelligence, and Ambient Assisted Living are environmental paradigms that strongly depend on their capability to recognize human actions. While most solutions rest on sensor value interpretations and video analysis applications, few have realized the importance of incorporating common-sense capabilities to support the recognition process. Unfortunately, human action recognition cannot be successfully accomplished by only analyzing body postures. On the contrary, this task should be supported by profound knowledge of human agency nature and its tight connection to the reasons and motivations that explain it. The combination of this knowledge and the knowledge about how the world works is essential for recognizing and understanding human actions without committing common-senseless mistakes. This work demonstrates the impact that episodic reasoning has in improving the accuracy of a computer vision system for human action recognition. This work also presents formalization, implementation, and evaluation details of the knowledge model that supports the episodic reasoning.
Abstract:
Face recognition from images or video footage requires a certain level of recorded image quality. This paper derives acceptable bitrates (relating to levels of compression and consequently quality) of footage with human faces, using an industry implementation of the standard H.264/MPEG-4 AVC and the Closed-Circuit Television (CCTV) recording systems on London buses. The London buses application is utilized as a case study for setting up a methodology and implementing suitable data analysis for face recognition from recorded footage, which has been degraded by compression. The majority of CCTV recorders on buses use a proprietary format based on the H.264/MPEG-4 AVC video coding standard, exploiting both spatial and temporal redundancy. Low bitrates are favored in the CCTV industry for saving storage and transmission bandwidth, but they compromise the image usefulness of the recorded imagery. In this context, usefulness is determined by the presence of enough facial information remaining in the compressed image to allow a specialist to recognize a person. The investigation includes four steps: (1) Development of a video dataset representative of typical CCTV bus scenarios. (2) Selection and grouping of video scenes based on local (facial) and global (entire scene) content properties. (3) Psychophysical investigations to identify the key scenes, which are most affected by compression, using an industry implementation of H.264/MPEG-4 AVC. (4) Testing of CCTV recording systems on buses with the key scenes and further psychophysical investigations. The results showed a dependency upon scene content properties. Very dark scenes and scenes with high levels of spatial–temporal busyness were the most challenging to compress, requiring higher bitrates to maintain useful information.
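A minimal sketch (an assumption, not the study's tool chain) of how scenes could be ranked by the spatial and temporal busyness that the abstract links to compression difficulty, using measures in the spirit of ITU-T P.910 spatial/temporal information.

```python
# Sketch: simple spatial/temporal "busyness" measures for a greyscale clip.
import numpy as np
from scipy import ndimage

def spatial_info(frame):
    # Spatial information: std-dev of the Sobel gradient magnitude.
    gx = ndimage.sobel(frame, axis=0)
    gy = ndimage.sobel(frame, axis=1)
    return np.hypot(gx, gy).std()

def temporal_info(frames):
    # Temporal information: max std-dev over consecutive-frame differences.
    diffs = np.diff(frames, axis=0)
    return max(d.std() for d in diffs)

# Hypothetical clip: 10 frames of 120x160 noise standing in for CCTV footage.
clip = np.random.default_rng(0).random((10, 120, 160))
print("SI:", spatial_info(clip[0]), "TI:", temporal_info(clip))
```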
Abstract:
This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed. Copyright (C) 2007 Hindawi Publishing Corporation. All rights reserved.
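A structural sketch of how the four modules could be chained per frame and fused across cameras; all function bodies are placeholders and the interfaces are hypothetical, not the system's API.

```python
# Sketch: per-camera processing chain plus cross-camera fusion (placeholders only).
from dataclasses import dataclass, field

@dataclass
class Detection:
    bbox: tuple                                  # (x, y, w, h)
    appearance: list = field(default_factory=list)
    label: str = "unknown"

def detect_motion(frame, background_model):
    """(i) Layered background model -> foreground detections (placeholder)."""
    return [Detection(bbox=(0, 0, 10, 20))]

def track_local_appearance(detections, tracks):
    """(ii) Associate detections with tracks by local appearance (placeholder)."""
    return tracks + detections

def classify_objects(tracks):
    """(iii) Hierarchical object recognition (placeholder)."""
    for t in tracks:
        t.label = "person"
    return tracks

def fuse_cameras(per_camera_tracks):
    """(iv) Fuse tracks using multiple features and geometric constraints (placeholder)."""
    return [t for tracks in per_camera_tracks for t in tracks]

def process(frames_by_camera, background_models):
    per_camera = []
    for cam, frame in frames_by_camera.items():
        dets = detect_motion(frame, background_models[cam])
        per_camera.append(classify_objects(track_local_appearance(dets, [])))
    return fuse_cameras(per_camera)
```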
Abstract:
This paper presents recent developments to a vision-based traffic surveillance system which relies extensively on the use of geometrical and scene context. Firstly, a highly parametrised 3-D model is reported, able to adopt the shape of a wide variety of different classes of vehicle (e.g. cars, vans, buses etc.), and its subsequent specialisation to a generic car class which accounts for commonly encountered types of car (including saloon, hatchback and estate cars). Sample data collected from video images, by means of an interactive tool, have been subjected to principal component analysis (PCA) to define a deformable model having 6 degrees of freedom. Secondly, a new pose refinement technique using “active” models is described, able to recover both the pose of a rigid object, and the structure of a deformable model; an assessment of its performance is examined in comparison with previously reported “passive” model-based techniques in the context of traffic surveillance. The new method is more stable, and requires fewer iterations, especially when the number of free parameters increases, but shows somewhat poorer convergence. Typical applications for this work include robot surveillance and navigation tasks.
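A minimal sketch of the PCA step under assumed data: hand-labelled shape vectors (standing in for measurements from the interactive tool) are mean-centred and decomposed so that the first six principal modes act as the deformable model's degrees of freedom.

```python
# Sketch: PCA over vehicle shape vectors to obtain a 6-mode deformable model.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training data: 40 vehicles, each described by 30 shape parameters.
shapes = rng.normal(size=(40, 30))

mean_shape = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)

modes = Vt[:6]                           # first 6 principal deformation modes
explained = (S[:6] ** 2) / (S ** 2).sum()

def deform(params):
    # A new car instance: mean shape plus a weighted sum of the 6 modes.
    return mean_shape + params @ modes

print("variance explained by 6 modes:", explained.sum())
print(deform(np.zeros(6)).shape)
```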
Abstract:
New low-cost sensors and new open, free libraries for 3D image processing are enabling important advances in robot vision applications such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, human detection and/or gesture recognition for human-machine interaction. In this paper, a method to recognize the human hand and to track the fingers is proposed. This new method is based on point clouds from RGB-D range images. It does not require visual markers, camera calibration, environment knowledge or complex, expensive acquisition systems. Furthermore, this method has been implemented to create a human interface for moving a robot hand. The human hand is recognized and the movement of the fingers is analyzed. Afterwards, the movement is imitated by a Barrett hand, using communication events programmed in ROS.
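A minimal sketch, under assumptions, of one way to isolate a hand candidate from a depth frame as the nearest connected blob; the paper's pipeline works on RGB-D point clouds and ROS, which are not reproduced here.

```python
# Sketch: segment the hand as the closest connected region in a depth image.
import numpy as np
from scipy import ndimage

def segment_hand(depth, near=0.3, margin=0.15):
    """Keep points within `margin` metres of the closest valid depth (the hand is
    assumed to be the object nearest to the camera), then take the largest blob."""
    valid = depth > near                          # drop invalid / too-close readings
    closest = depth[valid].min()
    mask = valid & (depth < closest + margin)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

# Hypothetical 240x320 depth frame in metres.
depth = np.full((240, 320), 2.0)
depth[100:160, 150:200] = 0.6                     # a hand-sized region near the camera
print(segment_hand(depth).sum(), "pixels kept")
```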
Abstract:
Automated human behaviour analysis has been, and still remains, a challenging problem. It has been addressed from different points of view, from primitive actions to human interaction recognition. This paper is focused on trajectory analysis, which allows a simple high-level understanding of complex human behaviour. A novel representation of trajectory data is proposed, called the Activity Description Vector (ADV), based on the number of times a person occupies a specific point of the scenario and the local movements performed there. The ADV is calculated for each cell into which the scenario is spatially sampled, providing a cue for different clustering methods. The ADV representation has been tested as the input to several classic classifiers and compared to other approaches using CAVIAR dataset sequences, obtaining high accuracy in the recognition of the behaviour of people in a shopping centre.
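A minimal sketch of an ADV-style descriptor based only on the description above (the exact components of the published ADV are assumed): each grid cell accumulates an occurrence count, a small histogram of local movement directions, and a "stays in place" count, and the per-cell vectors are concatenated as classifier input.

```python
# Sketch: per-cell occurrence and local-movement descriptor from trajectories.
import numpy as np

GRID = (10, 10)            # scenario sampled into 10x10 cells
N_DIRS = 4                 # local movements quantised to 4 directions (+ stay)

def adv(trajectories, scene_size=(100.0, 100.0)):
    desc = np.zeros(GRID + (1 + N_DIRS + 1,))    # [occurrences | 4 directions | stay]
    cw = np.array(scene_size) / np.array(GRID)
    for traj in trajectories:
        for (x0, y0), (x1, y1) in zip(traj[:-1], traj[1:]):
            i, j = int(x0 // cw[0]), int(y0 // cw[1])
            desc[i, j, 0] += 1                    # occurrence count
            dx, dy = x1 - x0, y1 - y0
            if abs(dx) < 1e-6 and abs(dy) < 1e-6:
                desc[i, j, -1] += 1               # "stays in place"
            else:
                ang = np.arctan2(dy, dx) % (2 * np.pi)
                desc[i, j, 1 + int(ang // (np.pi / 2))] += 1
    return desc.reshape(-1)                       # one long vector as classifier input

trajectory = [(5.0, 5.0), (15.0, 5.0), (15.0, 15.0), (15.0, 15.0)]
print(adv([trajectory]).shape)
```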
Abstract:
Automatic analysis of human behaviour in large collections of videos is gaining interest, even more so with the advent of file-sharing sites such as YouTube. However, challenges still exist owing to several factors such as inter- and intra-class variations, cluttered backgrounds, occlusion, camera motion, scale, view and illumination changes. This research focuses on modelling human behaviour for action recognition in videos. The developed techniques are validated on large-scale benchmark datasets and applied to real-world scenarios such as soccer videos. Three major contributions are made. The first contribution is the proper choice of a feature representation for videos. This involved a study of state-of-the-art techniques for action recognition, feature extraction and dimensionality reduction, so as to yield the best performance with optimal computational requirements. Secondly, temporal modelling of human behaviour is performed. This involved frequency analysis and temporal integration of local information in the video frames to yield a temporal feature vector. Current practices mostly average the frame information over an entire video and neglect the temporal order. Lastly, the proposed framework is applied and further adapted to a real-world scenario, soccer videos. To this end, a dataset consisting of video sequences depicting players falling is created from actual match data and used to experimentally evaluate the proposed framework.
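A minimal sketch contrasting the criticised average-over-the-clip baseline with a simple frequency-analysis alternative (assumed for illustration, not the thesis' exact method): low-order Fourier magnitudes of each descriptor dimension retain some temporal structure.

```python
# Sketch: mean pooling vs. a Fourier-based temporal feature vector.
import numpy as np

def mean_pooled(frame_feats):
    # Common baseline criticised above: temporal order is lost.
    return frame_feats.mean(axis=0)

def temporal_fourier(frame_feats, n_coeffs=4):
    # Frequency analysis per feature dimension; magnitudes of the first
    # coefficients summarise how each feature evolves over the clip.
    spectrum = np.fft.rfft(frame_feats, axis=0)
    return np.abs(spectrum[:n_coeffs]).T.reshape(-1)

# Hypothetical clip: 60 frames, each with a 16-D local feature descriptor.
clip = np.random.default_rng(0).random((60, 16))
print(mean_pooled(clip).shape, temporal_fourier(clip).shape)   # (16,) vs (64,)
```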
Abstract:
Recovering position from sensor information is an important problem in mobile robotics, known as localisation. Localisation requires a map or some other description of the environment to provide the robot with a context in which to interpret sensor data. The mobile robot system under discussion uses an artificial neural representation of position. Building a geometrical map of the environment with a single camera and artificial neural networks is difficult; instead, it would be simpler to learn position as a function of the visual input. Usually, when learning images, an intermediate representation is employed. An appropriate starting point for a biologically plausible image representation is the complex cells of the visual cortex, which have invariance properties that appear useful for localisation. The effectiveness of two different complex cell models for localisation is evaluated. Finally, the ability of a simple neural network with single-shot learning to recognise these representations and localise a robot is examined.
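A minimal sketch of one common complex-cell model, the Gabor energy model, assumed here for illustration: the responses of a quadrature pair of Gabor filters are combined to give locally phase-invariant responses of the kind the abstract describes as useful for localisation.

```python
# Sketch: Gabor energy model of a complex cell applied to an image.
import numpy as np
from scipy.signal import fftconvolve

def gabor_pair(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return (env * np.cos(2 * np.pi * xr / wavelength),
            env * np.sin(2 * np.pi * xr / wavelength))

def complex_cell_response(image, theta=0.0):
    even, odd = gabor_pair(theta=theta)
    re = fftconvolve(image, even, mode="same")
    im = fftconvolve(image, odd, mode="same")
    return np.hypot(re, im)       # energy model: invariant to local phase shifts

image = np.random.default_rng(0).random((64, 64))
print(complex_cell_response(image, theta=np.pi / 4).shape)
```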
Abstract:
Human activity-induced vibrations in slender structural systems become apparent in many different excitation modes and consequent action effects that cause discomfort to occupants, crowd panic and damage to public infrastructure. The resulting loss of public confidence in the safety of structures, economic losses, and costs of retrofit and repairs can be significant. Advanced computational and visualisation techniques enable engineers and architects to evolve bold and innovative structural forms, very often without precedence. New composite and hybrid materials that are making their presence felt in structural systems lack historical evidence of satisfactory performance over the anticipated design life. These structural systems are susceptible to multi-modal and coupled excitation that is very complex and has inadequate design guidance in the present codes and good-practice guides. Many incidents of amplified resonant response have been reported in buildings, footbridges, stadia and other crowded structures, with adverse consequences. As a result, attenuation of human-induced vibration of innovative and slender structural systems very often requires special studies during the design process. Dynamic activities possess variable characteristics and thereby induce complex responses in structures that are sensitive to parametric variations. Rigorous analytical techniques are available for investigation of such complex actions and responses to produce acceptable performance in structural systems. This paper presents an overview and a critique of existing code provisions for human-induced vibration, followed by studies on the performance of three contrasting structural systems that exhibit complex vibration. Analyses of the dynamic responses of these systems under human-induced vibrations have been carried out using experimentally validated computer simulation techniques. The outcomes of these studies will have engineering applications for safe and sustainable structures and provide a basis for developing design guidance.
Abstract:
Accurate and detailed measurement of an individual's physical activity is a key requirement for helping researchers understand the relationship between physical activity and health. Accelerometers have become the method of choice for measuring physical activity due to their small size, low cost, convenience and their ability to provide objective information about physical activity. However, interpreting accelerometer data once it has been collected can be challenging. In this work, we applied machine learning algorithms to the task of physical activity recognition from triaxial accelerometer data. We employed a simple but effective approach of dividing the accelerometer data into short non-overlapping windows, converting each window into a feature vector, and treating each feature vector as an i.i.d. training instance for a supervised learning algorithm. In addition, we improved on this simple approach with a multi-scale ensemble method that did not need to commit to a single window size and was able to leverage the fact that physical activities produce time series with repetitive patterns and that discriminative features occur at different temporal scales.
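A minimal sketch of the windowing-plus-ensemble idea described above; the window sizes, features and labels are assumptions, and the toy labels only stand in for annotated activity classes.

```python
# Sketch: non-overlapping windows -> feature vectors -> one classifier per scale.
import numpy as np
from sklearn.linear_model import LogisticRegression

def windows(signal, size):
    n = len(signal) // size
    return signal[:n * size].reshape(n, size, signal.shape[-1])

def featurize(win):
    # Per-axis mean, std and peak magnitude as a simple window descriptor.
    return np.concatenate([win.mean(axis=0), win.std(axis=0), np.abs(win).max(axis=0)])

def fit_multiscale(signal, labels_fn, sizes=(32, 64, 128)):
    models = []
    for s in sizes:
        W = windows(signal, s)
        X = np.array([featurize(w) for w in W])
        y = labels_fn(len(W))
        models.append((s, LogisticRegression(max_iter=1000).fit(X, y)))
    return models

# Hypothetical triaxial accelerometer stream and per-window labels.
rng = np.random.default_rng(0)
stream = rng.normal(size=(4096, 3))
models = fit_multiscale(stream, labels_fn=lambda n: rng.integers(0, 2, n))
print([(s, m.classes_) for s, m in models])
```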
Abstract:
One main challenge in developing a system for visual surveillance event detection is the annotation of target events in the training data. By making use of the assumption that events of security interest are often rare compared to regular behaviours, this paper presents a novel approach using Kullback-Leibler (KL) divergence for rare event detection in a weakly supervised learning setting, where only clip-level annotation is available. This approach is shown to outperform state-of-the-art methods on a popular real-world dataset, while preserving real-time performance.
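A minimal sketch (an assumed scoring scheme, not the paper's exact formulation) of how KL divergence can flag a clip whose feature distribution departs from a model estimated on clips annotated as normal.

```python
# Sketch: KL divergence between a clip's feature histogram and a "normal" model.
import numpy as np

def histogram(features, bins=20, rng=(0.0, 1.0)):
    h, _ = np.histogram(features, bins=bins, range=rng, density=True)
    return h + 1e-8                                    # avoid zeros in the KL term

def kl_divergence(p, q):
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng_np = np.random.default_rng(0)
normal_model = histogram(rng_np.random(5000))          # pooled normal-clip features
test_clip = histogram(0.5 + 0.5 * rng_np.random(200))  # shifted feature distribution

score = kl_divergence(test_clip, normal_model)
print("anomaly score:", score)                         # threshold -> rare-event alarm
```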
Abstract:
Visual tracking is an important task in various computer vision applications including visual surveillance, human-computer interaction, event detection, and video indexing and retrieval. Recent state-of-the-art sparse representation (SR) based trackers show better robustness than many other existing trackers. One of the issues with these SR trackers is low execution speed. The particle filter framework is one of the major aspects responsible for slow execution, and is common to most of the existing SR trackers. In this paper, we propose a robust interest-point-based tracker in an ℓ1-minimization framework that runs in real time with performance comparable to the state-of-the-art trackers. In the proposed tracker, the target dictionary is obtained from the patches around target interest points. Next, the interest points from the candidate window of the current frame are obtained. The correspondence between target and candidate points is obtained by solving the proposed ℓ1-minimization problem. In order to prune noisy matches, a robust matching criterion is proposed, where only the reliable candidate points that mutually match with target and candidate dictionary elements are considered for tracking. The object is localized by measuring the displacement of these interest points. The reliable candidate patches are used for updating the target dictionary. The performance and accuracy of the proposed tracker are benchmarked on several complex video sequences. The tracker is found to be considerably fast compared to the reported state-of-the-art trackers. The proposed tracker is further evaluated for various local patch sizes, numbers of interest points and regularization parameters. The performance of the tracker under various challenges, including illumination change, occlusion, and background clutter, has been quantified with a benchmark dataset containing 50 videos. (C) 2014 Elsevier B.V. All rights reserved.
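A minimal, illustrative sketch of the sparse-coding building block: a candidate patch is represented with an ℓ1-regularised least-squares (Lasso) fit against a dictionary of target patches, and the dominant coefficient indicates the matching atom. The dictionary and patch here are synthetic.

```python
# Sketch: sparse coding of a candidate patch against a target-patch dictionary.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
D = rng.random((64, 10))                   # 10 vectorised 8x8 target patches (columns)
D /= np.linalg.norm(D, axis=0)

candidate = D[:, 3] + 0.05 * rng.normal(size=64)   # a noisy view of target patch 3

lasso = Lasso(alpha=0.01, max_iter=10000, positive=True)
lasso.fit(D, candidate)
coeffs = lasso.coef_

best_match = int(np.argmax(coeffs))
residual = np.linalg.norm(candidate - D @ coeffs)
print("matched dictionary atom:", best_match, "residual:", residual)
```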