4 results for Video-camera

in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)


Relevance:

60.00%

Publisher:

Abstract:

Sprites have been detected in video camera observations from Niger over mesoscale convective systems in Nigeria during the 2006 AMMA (African Monsoon Multidisciplinary Analysis) campaign. The parent lightning flashes have been detected by multiple Extremely Low Frequency (ELF) receiving stations worldwide. The recorded charge moments of the parent lightning flashes are often in excellent agreement between different receiving sites, and are furthermore consistent with conventional dielectric breakdown in the mesosphere as the origin of the sprites. Analysis of the polarization of the horizontal magnetic field at the distant receivers provides evidence that the departure from linear magnetic polarization at ELF is caused primarily by the day-night asymmetry of the Earth-ionosphere cavity. Copyright (C) 2009 Royal Meteorological Society.
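The polarization analysis mentioned in this abstract can be illustrated with a short sketch. The Python snippet below is not from the paper; the helper name, the Stokes-parameter formulation, and the synthetic 20 Hz signal are illustrative assumptions. It estimates the ellipticity of the horizontal magnetic field from two orthogonal ELF time series, where a nonzero ellipticity angle marks a departure from linear polarization.

```python
import numpy as np
from scipy.signal import hilbert

def polarization_state(bx, by):
    """Hypothetical helper: estimate the polarization of the horizontal
    magnetic field from two orthogonal ELF time series (e.g. B_NS, B_EW)
    using Stokes parameters of the analytic signals."""
    zx, zy = hilbert(bx), hilbert(by)            # analytic signals
    I = np.mean(np.abs(zx)**2 + np.abs(zy)**2)   # total power
    Q = np.mean(np.abs(zx)**2 - np.abs(zy)**2)
    U = 2 * np.mean(np.real(zx * np.conj(zy)))
    V = 2 * np.mean(np.imag(zx * np.conj(zy)))
    # Ellipticity angle chi: 0 for purely linear polarization,
    # +/- pi/4 for circular polarization.
    chi = 0.5 * np.arcsin(V / np.sqrt(Q**2 + U**2 + V**2))
    return I, Q, U, V, chi

# Synthetic example: a slightly elliptical 20 Hz signal sampled at 1 kHz
t = np.arange(0, 1, 1e-3)
bx = np.cos(2 * np.pi * 20 * t)
by = 0.1 * np.sin(2 * np.pi * 20 * t)   # small quadrature component
print(polarization_state(bx, by)[-1])    # small but nonzero chi
```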

Relevance:

60.00%

Publisher:

Abstract:

Navigation is a broad topic that has been receiving considerable attention from the mobile robotics community over the years. In order to execute autonomous driving in outdoor urban environments, it is necessary to identify parts of the terrain that can be traversed and parts that should be avoided. This paper describes an analysis of terrain identification based on different visual information, using an MLP artificial neural network and combining the responses of many classifiers. Experimental tests using a vehicle and a video camera have been conducted in real scenarios to evaluate the proposed approach.
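As a rough illustration of the classifier-combination idea described in this abstract, the sketch below (not the authors' implementation; the feature sets, network sizes, and random data are placeholder assumptions) trains one MLP per visual feature type and averages the predicted class probabilities to label terrain patches as traversable or not.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder training data: one feature matrix per visual cue.
rng = np.random.default_rng(0)
n = 500
features = {
    "rgb":     rng.normal(size=(n, 6)),  # hypothetical colour statistics
    "texture": rng.normal(size=(n, 8)),  # hypothetical texture descriptors
}
y = rng.integers(0, 2, size=n)           # 1 = traversable, 0 = obstacle

# One MLP classifier per feature type.
classifiers = {
    name: MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
    for name, X in features.items()
}

def combined_prediction(patches):
    """Average the per-classifier probabilities for a batch of image patches,
    given as a dict with the same feature keys used for training."""
    probs = np.mean(
        [clf.predict_proba(patches[name]) for name, clf in classifiers.items()],
        axis=0,
    )
    return probs.argmax(axis=1)

print(combined_prediction({k: v[:5] for k, v in features.items()}))
```

Averaging probabilities is only one way of combining responses; a voting or weighted scheme would slot into the same structure.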

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a method to locate and track people by combining evidence from multiple cameras using the homography constraint. The proposed method uses foreground pixels from simple background subtraction to compute evidence of the location of people on a reference ground plane. The algorithm computes the amount of support, which basically corresponds to the "foreground mass" above each pixel; therefore, pixels that correspond to ground points have more support. The support is normalized to compensate for perspective effects and accumulated on the reference plane for all camera views. The detection of people on the reference plane then becomes a search for regions of local maxima in the accumulator. Many false positives are filtered out by checking the visibility consistency of the detected candidates against all camera views. The remaining candidates are tracked using Kalman filters and appearance models. Experimental results using challenging data from PETS`06 show good performance of the method in the presence of severe occlusion. Ground truth data also confirms the robustness of the method. (C) 2010 Elsevier B.V. All rights reserved.
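The accumulation and detection steps can be sketched as follows. This is a minimal illustration of the approach described in the abstract rather than the published code; the function names, the greedy non-maximum suppression, and the use of OpenCV's warpPerspective are assumptions made for the example (perspective normalization, visibility checks, and Kalman tracking are omitted).

```python
import numpy as np
import cv2

def ground_plane_accumulator(fg_masks, homographies, plane_size):
    """Project each camera's foreground mask onto a common reference ground
    plane with its camera-to-plane homography and sum the warped masks, so
    ground points supported by several views accumulate more 'foreground
    mass'.  plane_size is (rows, cols) of the reference plane grid."""
    acc = np.zeros(plane_size, dtype=np.float32)
    for mask, H in zip(fg_masks, homographies):
        warped = cv2.warpPerspective(
            mask.astype(np.float32), H, (plane_size[1], plane_size[0])
        )
        acc += warped
    return acc

def detect_people(acc, threshold, min_distance=10):
    """Greedy non-maximum suppression on the accumulator: pick the strongest
    local maxima above a threshold as candidate person locations."""
    candidates = []
    work = acc.copy()
    while True:
        y, x = np.unravel_index(np.argmax(work), work.shape)
        if work[y, x] < threshold:
            break
        candidates.append((x, y))
        cv2.circle(work, (x, y), min_distance, 0, -1)  # suppress neighbourhood
    return candidates
```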

Relevance:

30.00%

Publisher:

Abstract:

Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a free-viewpoint video system prototype based on multiple sparse cameras that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (the virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets, to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768 x 576 with several moving objects at about 11 fps. (C) 2011 Elsevier Ltd. All rights reserved.
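A minimal sketch of the billboard geometry and view-dependent texture selection described in this abstract is given below. It is not the authors' renderer; the quad construction, the vertical-axis-only rotation, and the nearest-viewing-direction camera choice are illustrative assumptions.

```python
import numpy as np

def billboard_corners(obj_pos, obj_height, obj_width, cam_pos):
    """Build a world-space quad for a moving object, anchored at its
    ground-plane position and rotated about the vertical axis so it faces
    the virtual camera (assumed geometry, z is 'up')."""
    up = np.array([0.0, 0.0, 1.0])
    to_cam = cam_pos - obj_pos
    to_cam[2] = 0.0                        # rotate only around the vertical axis
    to_cam /= np.linalg.norm(to_cam)
    right = np.cross(up, to_cam)           # horizontal axis of the quad
    half = 0.5 * obj_width * right
    return np.array([
        obj_pos - half,                    # bottom-left
        obj_pos + half,                    # bottom-right
        obj_pos + half + obj_height * up,  # top-right
        obj_pos - half + obj_height * up,  # top-left
    ])

def pick_texture_camera(obj_pos, virtual_cam_pos, real_cam_positions):
    """View-dependent texture selection: use the real camera whose viewing
    direction towards the object is closest to the virtual camera's."""
    v = (virtual_cam_pos - obj_pos) / np.linalg.norm(virtual_cam_pos - obj_pos)
    dirs = [(c - obj_pos) / np.linalg.norm(c - obj_pos) for c in real_cam_positions]
    return int(np.argmax([d @ v for d in dirs]))

# Example: a 1.8 m tall, 0.6 m wide billboard facing a virtual camera.
corners = billboard_corners(
    np.array([2.0, 3.0, 0.0]), 1.8, 0.6, np.array([0.0, 0.0, 1.7])
)
```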