954 results for Video-camera
Abstract:
Scenes for the Spectrography experiment: Scenes were recorded following the tasks involved in spectrography experiments, which are carried out in front of the "J9" output radiation channel while it is in the open condition. These tasks may be executed by one or two persons. One person can do the tasks alone, but must crouch in front of "J9" to adjust the angular position of the experimental apparatus (a crystal that bends the neutron radiation toward the spectrograph), and then get up to verify data on a computer beside it; these movements are repeated until the right operational conditions are achieved. Two people may aid one another such that one remains crouched while the other remains in front of the computer. They may also interchange tasks so as to divide the received doses. To date, two scenes with one person and one scene with two persons are available. These scenes are described below: - Scene 2: Another take similar to Scene 1. Video file labels: "20140327180750_IPCAM": recorded by the left camera.
Abstract:
Scenes for the Spectrography experiment: Scenes were recorded following the tasks involved in spectrography experiments, which are carried out in front of the "J9" output radiation channel while it is in the open condition. These tasks may be executed by one or two persons. One person can do the tasks alone, but must crouch in front of "J9" to adjust the angular position of the experimental apparatus (a crystal that bends the neutron radiation toward the spectrograph), and then get up to verify data on a computer beside it; these movements are repeated until the right operational conditions are achieved. Two people may aid one another such that one remains crouched while the other remains in front of the computer. They may also interchange tasks so as to divide the received doses. To date, two scenes with one person and one scene with two persons are available. These scenes are described below: - Scene 3: Comprises the scene with two persons performing the spectrography experiment. Video file labels: "20140327182905_IPCAM": recorded by the right camera.
Abstract:
Scenes for the Spectrography experiment: Scenes were recorded following the tasks involved in spectrography experiments, which are carried out in front of the "J9" output radiation channel while it is in the open condition. These tasks may be executed by one or two persons. One person can do the tasks alone, but must crouch in front of "J9" to adjust the angular position of the experimental apparatus (a crystal that bends the neutron radiation toward the spectrograph), and then get up to verify data on a computer beside it; these movements are repeated until the right operational conditions are achieved. Two people may aid one another such that one remains crouched while the other remains in front of the computer. They may also interchange tasks so as to divide the received doses. To date, two scenes with one person and one scene with two persons are available. These scenes are described below: - Scene 3: Comprises the scene with two persons performing the spectrography experiment. Video file labels: "20140327182906_IPCAM": recorded by the left camera.
Abstract:
General simulated scenes: These scenes followed a pre-defined script (see the Thesis for details), with common movements corresponding to general experiments. People go to or stand still in front of "J9", and/or go to the side of the Argonauta reactor and come back again. The first type of movement is common during Irradiation experiments, where a material sample is put inside the "J9" channel, and also during neutrongraphy or gammagraphy experiments, where a sample is placed in front of "J9". Here, the detailed movements of placing samples were not reproduced in detail; only whole-body movements were simulated (such as crouching or standing still in front of "J9"). The second type of movement may occur when operators go to the side of Argonauta to verify some operational condition. - Scene 2: Comprises one of the scenes with two persons. Both wear dark-colored clothes. Both go to the side of the Argonauta reactor, then come back and leave. Video file labels: "20140326154755_IPCAM": recorded by the left camera.
Abstract:
Real operation scene: This scene was recorded during a real Irradiation operation, specifically during its final tasks (removing the irradiated sample). It was an extra recording beyond the scripted and planned ones. - Scene: Involved several persons: two operators, two staff members of the radiological protection service, and the "client" who requested the irradiation. Video file labels: "20140402150657_IPCAM": recorded by the right camera.
Abstract:
Real operation scene: This scene was recorded during a real Irradiation operation, specifically during its final tasks (removing the irradiated sample). It was an extra recording beyond the scripted and planned ones. - Scene: Involved several persons: two operators, two staff members of the radiological protection service, and the "client" who requested the irradiation. Video file labels: "20140402150658_IPCAM": recorded by the left camera.
Abstract:
Description of the Annotation files: Annotation files are supplied for each video, for benchmarking. Annotations correspond to ground truths of people's positions in the image plane, and also of their feet positions when these were visible. Annotations were performed manually, with the aid of a code developed by Silva et al. (2014; see the Thesis for details). Targets (people or feet) are marked at variable frame intervals and then linearly interpolated.
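The keyframe-and-interpolation scheme described above can be sketched as follows. This is a minimal illustration only; the function name and data layout are assumptions, not the dataset's actual annotation code:

```python
# Hypothetical sketch: ground-truth positions are marked only at sparse
# "key" frames, and every frame in between is filled in by linear
# interpolation between the two nearest keyframes.

def interpolate_annotations(keyframes):
    """keyframes: dict mapping frame index -> (x, y) marked position.
    Returns a dict covering every frame from the first to the last key."""
    frames = sorted(keyframes)
    result = {}
    for f0, f1 in zip(frames, frames[1:]):
        (x0, y0), (x1, y1) = keyframes[f0], keyframes[f1]
        span = f1 - f0
        for f in range(f0, f1):
            t = (f - f0) / span  # fractional position between the two keys
            result[f] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    result[frames[-1]] = keyframes[frames[-1]]
    return result
```

For example, marking a target at frames 0 and 4 yields interpolated positions for frames 1 through 3.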
Abstract:
This contribution discusses the effects of camera aperture correction in broadcast video on colour-based keying. Aperture correction is used to 'sharpen' an image and is one element that distinguishes the 'TV look' from the 'film look'. If a very high level of sharpening is applied, as is the case in many TV productions, it significantly shifts the colours around object boundaries with high contrast. This paper discusses these effects and their impact on keying, and describes a simple low-pass filter to compensate for them. Tests with colour-based segmentation algorithms show that the proposed compensation is an effective way of decreasing keying artefacts on object boundaries.
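The compensation idea can be illustrated with a minimal moving-average low-pass filter over a row of chroma samples; the 3-tap kernel below is an assumption for the sketch and may differ from the paper's actual filter:

```python
# Smoothing the chroma signal before colour-based keying attenuates the
# colour shifts that aperture correction ("sharpening") introduces around
# high-contrast edges. Illustrative 3-tap kernel, not the paper's design.

def low_pass(row, kernel=(0.25, 0.5, 0.25)):
    """Apply a 3-tap moving-average filter to a 1-D list of chroma
    samples, replicating the border samples at each end."""
    padded = [row[0]] + list(row) + [row[-1]]
    return [
        kernel[0] * padded[i] + kernel[1] * padded[i + 1] + kernel[2] * padded[i + 2]
        for i in range(len(row))
    ]
```

A sharp chroma spike such as `[0, 0, 8, 0, 0]` is spread into `[0, 2, 4, 2, 0]`, softening the boundary transition before keying.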
Abstract:
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular; its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
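The Central Unit / Processing Unit split described above might be sketched as below; the names, the per-frame dispatch, and the thread-based parallelism are assumptions for illustration, not the paper's actual API:

```python
# Illustrative sketch of the described architecture: a Central Unit
# supervises several Processing Units, one per camera, running their
# processing phases in parallel for each incoming frame.
from concurrent.futures import ThreadPoolExecutor

def processing_unit(camera_id, frame):
    # Placeholder per-camera processing (e.g. a 2D object detector).
    return camera_id, f"processed frame {frame}"

def central_unit(num_cameras, frame):
    # Dispatch the frame to every PU and collect the results.
    with ThreadPoolExecutor(max_workers=num_cameras) as pool:
        futures = [pool.submit(processing_unit, cam, frame)
                   for cam in range(num_cameras)]
        return [f.result() for f in futures]
```

Each PU is independent, so adding a camera only adds one more worker, which is one simple way to obtain the scaling behaviour the abstract reports.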
Abstract:
Digital still cameras capable of filming short video clips are readily available, but the quality of these recordings for telemedicine has not been reported. We performed a blinded study using four commonly available digital cameras. A simulated patient with a hemiplegic gait pattern was filmed by the same videographer in an identical, brightly lit indoor setting. Six neurologists viewed the blinded video clips on their PC and comparisons were made between cameras, between video clips recorded with and without a tripod, and between video clips filmed on high- or low-quality settings. Use of a tripod had a smaller effect than expected, while images taken on a high-quality setting were strongly preferred to those taken on a low-quality setting. Although there was some variability in video quality between selected cameras, all were of sufficient quality to identify physical signs such as gait and tremor. Adequate-quality video clips of movement disorders can be produced with low-cost cameras and transmitted by email for teleneurology purposes.
Abstract:
Surveillance networks are typically monitored by a few people, viewing several monitors displaying the camera feeds. It is then very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data, to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognize an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, and make it unnecessary for the task of detection to be carried out separately, except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow for multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter.
A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and a significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, while the tracking system benefits from the improved performance a particle filter provides in uncertain conditions arising from occlusion and noise. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually. Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improve security in areas under surveillance.
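The basic predict/weight/resample cycle that condensation-style particle filters build on can be sketched generically. This is not the thesis's Scalable Condensation Filter; the one-dimensional state and the Gaussian motion and observation models are assumptions for illustration:

```python
# One step of a generic particle filter: diffuse the particles with motion
# noise, weight each by the likelihood of the current observation, then
# resample proportionally to the weights.
import math
import random

def particle_filter_step(particles, observation, motion_std=1.0, obs_std=2.0):
    """particles: list of scalar positions. Returns the resampled set."""
    # Predict: apply Gaussian motion noise to each particle.
    predicted = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the observation under each particle.
    weights = [math.exp(-((p - observation) ** 2) / (2 * obs_std ** 2))
               for p in predicted]
    # Resample: draw particles with probability proportional to weight.
    return random.choices(predicted, weights=weights, k=len(particles))
```

In a full tracker this step runs once per frame, with the likelihood computed from extracted image features rather than a scalar observation.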
Abstract:
CCTV and surveillance networks are increasingly being used for operational as well as security tasks. One emerging area of technology that lends itself to operational analytics is soft biometrics. Soft biometrics can be used to describe a person and detect them throughout a sparse multi-camera network. This enables tasks such as determining the time taken to get from point to point, and the paths taken through an environment, by detecting and matching people across disjoint views. However, in a busy environment where there are hundreds if not thousands of people, such as an airport, attempting to monitor everyone is highly unrealistic. In this paper we propose an average soft biometric that can be used to identify people who look distinct, and are thus suitable for monitoring through a large, sparse camera network. We demonstrate how an average soft biometric can be used to identify unique people to calculate operational measures such as the time taken to travel from point to point.
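One plausible reading of the average-soft-biometric idea can be sketched as follows: build the population-average descriptor, then flag people whose descriptor lies far from that average as distinct and hence trackable. The feature layout and threshold are assumptions for the sketch, not the paper's method:

```python
# Flag "distinct" people as those whose soft-biometric descriptor (e.g.
# clothing-colour statistics) is far from the population average.
import math

def distinct_people(descriptors, threshold=2.0):
    """descriptors: dict mapping name -> feature vector (list of floats).
    Returns names whose Euclidean distance from the average exceeds
    the threshold."""
    n = len(descriptors)
    dim = len(next(iter(descriptors.values())))
    average = [sum(vec[i] for vec in descriptors.values()) / n
               for i in range(dim)]
    return [name for name, vec in descriptors.items()
            if math.dist(vec, average) > threshold]
```

Only the flagged, visually distinctive people would then be matched across disjoint camera views to compute measures such as point-to-point travel time.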