980 results for Multi-camera


Relevance:

100.00%

Publisher:

Abstract:

CCTV and surveillance networks are increasingly being used for operational as well as security tasks. One emerging area of technology that lends itself to operational analytics is soft biometrics. Soft biometrics can be used to describe a person and detect them throughout a sparse multi-camera network. This enables tasks such as determining the time taken to travel from point to point, and the paths taken through an environment, by detecting and matching people across disjoint views. However, in a busy environment such as an airport, where there are hundreds if not thousands of people, attempting to monitor everyone is highly unrealistic. In this paper we propose an average soft biometric that can be used to identify people who look distinct and are thus suitable for monitoring through a large, sparse camera network. We demonstrate how an average soft biometric can be used to identify unique people in order to calculate operational measures such as the time taken to travel from point to point.
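
A minimal sketch of the idea, not the authors' method: soft-biometric descriptors (for example, clothing-colour histogram bins plus an estimated height) are averaged over the population, and the people furthest from that average are flagged as distinct enough to monitor across the network. The feature layout, the Euclidean distance and the selection function are illustrative assumptions.

```python
import numpy as np

def distinctiveness_scores(descriptors: np.ndarray) -> np.ndarray:
    """Distance of each soft-biometric descriptor from the population mean.

    descriptors: (n_people, n_features) array, e.g. concatenated clothing
    colour histogram bins plus an estimated height value.
    """
    average = descriptors.mean(axis=0)            # the "average soft biometric"
    return np.linalg.norm(descriptors - average, axis=1)

def select_distinct_people(descriptors: np.ndarray, k: int = 10) -> np.ndarray:
    """Indices of the k people furthest from the average, i.e. those easiest
    to re-detect reliably across a sparse, disjoint camera network."""
    scores = distinctiveness_scores(descriptors)
    return np.argsort(scores)[::-1][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    people = rng.random((200, 16))                # 200 people, 16-D descriptors
    print(select_distinct_people(people, k=5))
```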

Relevance:

100.00%

Publisher:

Abstract:

Person re-identification involves recognising individuals in different locations across a network of cameras, and is a challenging task due to the large number of varying factors such as pose (of both subject and camera) and ambient lighting conditions. Existing databases do not adequately capture these variations, making evaluation of proposed techniques difficult. In this paper, we present a new, challenging multi-camera surveillance database designed for the task of person re-identification. This database consists of 150 unscripted sequences of subjects travelling through a building environment across up to eight camera views, appearing from various angles and under varying illumination conditions. A flexible XML-based evaluation protocol is provided to allow a highly configurable evaluation setup, enabling a variety of scenarios relating to pose and lighting conditions to be evaluated. A baseline person re-identification system consisting of colour, height and texture models is demonstrated on this database.
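
As an illustration of what such a baseline might look like, the sketch below fuses colour, height and texture cues into a single match score. The representations, weights and similarity measures are assumptions for the example, not the database's actual baseline system.

```python
import numpy as np

def histogram_similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """Histogram intersection between two L1-normalised histograms (0..1)."""
    return float(np.minimum(h1, h2).sum())

def match_score(gallery: dict, probe: dict, weights=(0.5, 0.2, 0.3)) -> float:
    """Weighted fusion of colour, height and texture similarities (higher = better)."""
    colour = histogram_similarity(gallery["colour_hist"], probe["colour_hist"])
    height = float(np.exp(-abs(gallery["height_m"] - probe["height_m"])))  # metres
    texture = histogram_similarity(gallery["texture_hist"], probe["texture_hist"])
    w_c, w_h, w_t = weights
    return w_c * colour + w_h * height + w_t * texture

if __name__ == "__main__":
    rng = np.random.default_rng(1)

    def person():
        c, t = rng.random(32), rng.random(16)
        return {"colour_hist": c / c.sum(), "texture_hist": t / t.sum(),
                "height_m": 1.5 + rng.random() * 0.4}

    gallery, probe = person(), person()
    print(f"match score: {match_score(gallery, probe):.3f}")
```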

Relevance:

100.00%

Publisher:

Abstract:

In my capacity as a television professional and teacher specialising in multi-camera live television production for over 40 years, I was drawn to the conclusion that opaque or inadequately formed understandings of how creativity applies to the field of live television have impeded the development of pedagogies suitable for the teaching of live television in universities. In pursuing this hypothesis, the thesis shows that television degrees were born out of film studies degrees, where intellectual creativity was aligned with single-camera production and the 'creative roles' of producers, directors and scriptwriters. At the same time, multi-camera live television production was subsumed under the 'mass communication' banner, leading to an understanding that roles other than producer and director are simply technical, and bereft of creative intent or acumen. The thesis goes on to show that this attitude to other television production personnel, for example the vision mixer, videotape operator and camera operator, relegates their roles to that of 'button pusher'. This has resulted in university teaching models with inappropriate resources and unsuitable teaching practices. As a result, the industry is struggling to find people with the skills to meet the demands of the multi-camera live television sector. In specific terms, the central hypothesis is pursued through the following sequenced approach. Firstly, the thesis outlines the problems and traces the origins of the misconception that intellectual creativity does not exist in live multi-camera television. Secondly, this more adequately conceptualised account of the origins of the misconceptions about live television and creativity is anchored to the field of examination by presenting the foundations of the roles involved in making live television programs using multi-camera production techniques. Thirdly, this more nuanced rendition of the field sets the stage for a thorough analysis of education and training in the industry, and of teaching models at Australian universities. The findings clearly establish that the pedagogical models are aimed at single-camera production, a position that de-emphasises the creative aspects of multi-camera live television production. Informed by an examination of theories of learning, qualitative interviews, professional reflective practice and observations, the roles of four multi-camera live production crew members (camera operator, vision mixer, EVS/videotape operator and director's assistant) demonstrate the existence of intellectual creativity during live production. Finally, supported by the theories of learning, and by the development and explication of a successful teaching model, a new approach to teaching students how to work in live television is proposed and substantiated.

Relevance:

100.00%

Publisher:

Abstract:

Automated crowd counting has become an active field of computer vision research in recent years. Existing approaches are scene-specific, as they are designed to operate in the single camera viewpoint that was used to train the system. Real-world camera networks often span multiple viewpoints within a facility, including many regions of overlap. This paper proposes a novel scene-invariant crowd counting algorithm that is designed to operate across multiple cameras. The approach uses camera calibration to normalise features between viewpoints and to compensate for regions of overlap. This compensation is performed by constructing an 'overlap map' which provides a measure of how much an object at one location is visible within other viewpoints. An investigation into the suitability of various feature types and regression models for scene-invariant crowd counting is also conducted. The features investigated include object size, shape, edges and keypoints. The regression models evaluated include neural networks, K-nearest neighbours, linear regression and Gaussian process regression. Our experiments demonstrate that accurate crowd counting was achieved across seven benchmark datasets, with optimal performance observed when all features were used in combination with Gaussian process regression. The combination of scene invariance and multi-camera crowd counting is evaluated by training the system on footage obtained from the QUT camera network and testing it on three cameras from the PETS 2009 database. Highly accurate crowd counting was observed, with a mean relative error of less than 10%. Our approach enables a pre-trained system to be deployed in a new environment without any additional training, bringing the field one step closer toward a 'plug and play' system.
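
To make the regression step concrete, here is a small sketch under stated assumptions: calibration-normalised foreground features, down-weighted by an overlap map where views intersect, are fed to a Gaussian process regressor (scikit-learn's implementation stands in for the paper's). The feature choice, kernel and synthetic data are all assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def frame_features(fg_mask, pixel_area_m2, overlap_map):
    """Per-frame features: overlap-compensated foreground area (in m^2) and the
    raw foreground pixel count, both derived from camera calibration."""
    weights = pixel_area_m2 / overlap_map        # overlap_map >= 1 where views overlap
    return np.array([float((fg_mask * weights).sum()), float(fg_mask.sum())])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Synthetic stand-in for annotated training frames from one viewpoint.
    X_train = rng.random((60, 2)) * [50.0, 5000.0]
    y_train = 0.4 * X_train[:, 0] + rng.normal(0.0, 1.0, 60)   # count grows with area
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[10.0, 1000.0]) + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(X_train, y_train)

    # A synthetic test frame processed through the same feature pipeline.
    fg_mask = (rng.random((120, 160)) > 0.9).astype(float)
    feats = frame_features(fg_mask, np.full((120, 160), 0.01), np.ones((120, 160)))
    count, std = gp.predict(feats.reshape(1, -1), return_std=True)
    print(f"estimated count: {count[0]:.1f} ± {std[0]:.1f}")
```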

Relevance:

100.00%

Publisher:

Abstract:

The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks, but most of them do not generalize well to large-scale networks. To tackle this, we propose a statistical formulation of the problem together with a trans-dimensional simulated annealing algorithm to deal with it effectively. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that our approach offers similar performance on small-scale problems. However, we also demonstrate the capability of our approach in dealing with large-scale problems, and show that it produces better results than two alternative heuristics designed to address the scalability issue of BIP. Lastly, we show the versatility of our approach using a number of specific scenarios.
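
The following toy sketch shows the general shape of a trans-dimensional simulated annealing search for camera placements: proposal moves may perturb, add or remove a camera, so the number of cameras is itself part of the search. The 2-D grid, coverage radius and per-camera cost are illustrative assumptions, not the paper's statistical objective.

```python
import math
import random

TARGETS = [(x, y) for x in range(10) for y in range(10)]   # points to observe
RADIUS, CAMERA_COST = 3.0, 5.0

def objective(cameras):
    """Coverage of target points minus a per-camera cost (to be maximised)."""
    coverage = sum(any(math.dist(c, p) <= RADIUS for c in cameras) for p in TARGETS)
    return coverage - CAMERA_COST * len(cameras)

def propose(cameras):
    """Trans-dimensional move: perturb one camera, add a camera, or remove one."""
    cams = list(cameras)
    move = random.choice(["perturb", "add", "remove"])
    if move == "perturb" and cams:
        i = random.randrange(len(cams))
        x, y = cams[i]
        cams[i] = (x + random.uniform(-1, 1), y + random.uniform(-1, 1))
    elif move == "add":
        cams.append((random.uniform(0, 9), random.uniform(0, 9)))
    elif move == "remove" and cams:
        cams.pop(random.randrange(len(cams)))
    return cams

def anneal(steps=5000, t0=5.0):
    cams = [(4.5, 4.5)]
    cur = objective(cams)
    best_cams, best_val = list(cams), cur
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3          # linear cooling schedule
        cand = propose(cams)
        val = objective(cand)
        if val >= cur or random.random() < math.exp((val - cur) / t):
            cams, cur = cand, val
            if cur > best_val:
                best_cams, best_val = list(cams), cur
    return best_cams, best_val

if __name__ == "__main__":
    random.seed(6)
    best, val = anneal()
    print(f"{len(best)} cameras, objective {val:.1f}")
```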

Relevance:

100.00%

Publisher:

Abstract:

The Rapid Oscillations in the Solar Atmosphere (ROSA) instrument is a synchronized, six-camera, high-cadence solar imaging instrument developed by Queen's University Belfast. The system is available on the Dunn Solar Telescope at the National Solar Observatory in Sunspot, New Mexico, USA, as a common-user instrument. Consisting of six 1k x 1k Peltier-cooled frame-transfer CCD cameras with very low noise (0.02 - 15 e- s^-1 pixel^-1), each ROSA camera is capable of full-chip readout speeds in excess of 30 Hz, or 200 Hz when the CCD is windowed. Combining multiple cameras and fast readout rates, ROSA will accumulate approximately 12 TB of data per 8 hours of observing. Following successful commissioning during August 2008, ROSA will allow for multi-wavelength studies of the solar atmosphere at a high temporal resolution.
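
A back-of-the-envelope check of the quoted data volume, assuming 16-bit pixels and continuous full-chip readout (neither is stated in the abstract; overheads and windowed high-rate modes would change the figure):

```python
# Rough data-rate estimate for six 1k x 1k cameras over an 8-hour run.
cameras, width, height = 6, 1024, 1024
bytes_per_pixel, frame_rate_hz, seconds = 2, 30, 8 * 3600   # 16-bit pixels assumed

total_bytes = cameras * width * height * bytes_per_pixel * frame_rate_hz * seconds
print(f"{total_bytes / 1e12:.1f} TB")   # ~10.9 TB, consistent with "approximately 12 TB"
```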

Relevance:

100.00%

Publisher:

Abstract:

The Rapid Oscillations in the Solar Atmosphere (ROSA) instrument is a synchronized, six-camera, high-cadence solar imaging instrument developed by Queen's University Belfast and recently commissioned at the Dunn Solar Telescope at the National Solar Observatory in Sunspot, New Mexico, USA, as a common-user instrument. Consisting of six 1k x 1k Peltier-cooled frame-transfer CCD cameras with very low noise (0.02 - 15 e/pixel/s), each ROSA camera is capable of full-chip readout speeds in excess of 30 Hz, and up to 200 Hz when the CCD is windowed. ROSA will allow for multi-wavelength studies of the solar atmosphere at a high temporal resolution. We will present the current instrument set-up and parameters, observing modes, and future plans, including a new high-QE camera allowing 15 Hz imaging in H-alpha. Interested parties should see https://habu.pst.qub.ac.uk/groups/arcresearch/wiki/de502/ROSA.html

Relevance:

100.00%

Publisher:

Abstract:

Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as light fields, geometric reconstruction and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects such as people entering and leaving the scene. The methods listed above have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.
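
A heavily reduced software illustration of the underlying idea, assuming samples of the same scene region have already been registered to one another: a moving occluder disagrees across time-lagged samples, while the static scene behind it can be recovered from whichever samples were unoccluded. This is an analogy for the space-time sampling concept, not the authors' lag-camera design.

```python
from collections import deque

import numpy as np

class LagBuffer:
    """Keeps a short history of registered samples of the same scene region."""
    def __init__(self, lag_frames: int = 15):
        self.frames = deque(maxlen=lag_frames)

    def update(self, frame: np.ndarray, threshold: float = 25.0):
        """Return (moving_mask, static_estimate) for the newest frame."""
        f = frame.astype(np.float32)
        self.frames.append(f)
        stack = np.stack(list(self.frames))
        static_estimate = np.median(stack, axis=0)   # transient occluders are voted out
        moving_mask = np.abs(f - static_estimate) > threshold
        return moving_mask, static_estimate

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    buf = LagBuffer()
    background = rng.integers(0, 255, (48, 64)).astype(np.float32)
    mask = None
    for t in range(20):
        frame = background.copy()
        frame[:, (t * 3) % 64] = 255.0               # a moving "occluder" column
        mask, recovered = buf.update(frame)
    print("moving pixels flagged in last frame:", int(mask.sum()))
```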

Relevance:

100.00%

Publisher:

Abstract:

This paper presents different application scenarios for which the registration of sub-sequence reconstructions or multi-camera reconstructions is essential for successful camera motion estimation and 3D reconstruction from video. The registration is achieved by merging unconnected feature point tracks between the reconstructions. One application is drift removal for sequential camera motion estimation of long sequences. The state of the art in drift removal is to apply a RANSAC approach to find unconnected feature point tracks. In this paper, an alternative spectral algorithm for pairwise matching of unconnected feature point tracks is used. It is then shown that the algorithms can be combined and applied to novel scenarios where independent camera motion estimations must be registered into a common global coordinate system. In the first scenario, multiple moving cameras that capture the same scene simultaneously are registered. A second new scenario occurs in situations where the tracking of feature points during sequential camera motion estimation fails completely, e.g., due to large occluding objects in the foreground, and the unconnected tracks of the independent reconstructions must be merged. In the third scenario, image sequences of the same scene captured under different illumination conditions are registered. Several experiments with challenging real video sequences demonstrate that the presented techniques work in practice.
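
Once unconnected tracks have been merged, registering two reconstructions amounts to estimating the similarity transform (scale, rotation, translation) between corresponding 3D points. The sketch below shows an Umeyama-style alignment as a stand-in for that registration step; it assumes the point correspondences are already given and says nothing about the paper's track-matching algorithms themselves.

```python
import numpy as np

def align_similarity(src: np.ndarray, dst: np.ndarray):
    """Return (s, R, t) such that dst ≈ s * R @ src + t, for (n, 3) point arrays."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    src = rng.random((100, 3))
    angle = 0.3
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    dst = 2.0 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
    s, R, t = align_similarity(src, dst)
    print(f"recovered scale: {s:.3f}")                # ~2.0
```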

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
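
A toy structural sketch of the PU/Central Unit split, using Python's multiprocessing in place of whatever middleware the paper's system uses; the camera count, frame source and fake detection step are placeholders, not the paper's implementation.

```python
import multiprocessing as mp
import random
import time

def processing_unit(camera_id: int, results, n_frames: int = 5):
    """One PU: acquisition phase (frame grab) plus processing phase (detection)."""
    for frame_idx in range(n_frames):
        time.sleep(0.01)                                  # stand-in for frame grab
        detections = random.randint(0, 3)                 # stand-in for 2D detection
        results.put((camera_id, frame_idx, detections))

def central_unit(n_cameras: int = 4, n_frames: int = 5):
    """Supervises the running PUs and collects their outputs over a shared queue."""
    results = mp.Queue()
    units = [mp.Process(target=processing_unit, args=(cam, results, n_frames))
             for cam in range(n_cameras)]
    for u in units:
        u.start()
    for _ in range(n_cameras * n_frames):
        cam, frame_idx, det = results.get()
        print(f"camera {cam} frame {frame_idx}: {det} detections")
    for u in units:
        u.join()

if __name__ == "__main__":
    central_unit()
```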

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present an adaptive multi-camera system for real-time object detection that is able to efficiently adjust the computational requirements of the video processing blocks to the available processing power and the activity of the scene. The system is based on a two-level adaptation strategy that works at both the local and the global level. Object detection is based on a Gaussian mixture model background subtraction algorithm. Results show that the system can efficiently adapt the algorithm parameters without a significant loss in detection accuracy.
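
As an illustration of the local-adaptation idea only (not the paper's strategy), the sketch below wraps OpenCV's MOG2 background subtractor, a Gaussian-mixture model, and trades input resolution for speed whenever the per-frame processing time exceeds a budget. The budget, scale steps and synthetic frames are assumptions.

```python
import time

import cv2
import numpy as np

class AdaptiveDetector:
    def __init__(self, budget_s: float = 0.010):
        self.subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
        self.budget_s = budget_s
        self.scale = 1.0

    def process(self, frame: np.ndarray) -> np.ndarray:
        start = time.perf_counter()
        small = cv2.resize(frame, None, fx=self.scale, fy=self.scale)
        mask = self.subtractor.apply(small)
        elapsed = time.perf_counter() - start
        # Local adaptation: trade resolution for speed when over budget,
        # and recover resolution when there is headroom.
        if elapsed > self.budget_s and self.scale > 0.25:
            self.scale *= 0.8
        elif elapsed < 0.5 * self.budget_s and self.scale < 1.0:
            self.scale = min(1.0, self.scale / 0.8)
        return mask

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    detector = AdaptiveDetector()
    for _ in range(50):
        frame = rng.integers(0, 255, (480, 640, 3), dtype=np.uint8)  # synthetic frame
        mask = detector.process(frame)
    print("final scale:", round(detector.scale, 2), "mask shape:", mask.shape)
```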