834 results for Cameras
Abstract:
Because coyotes have learned to avoid humans, and because forested areas provide dense cover, observations of coyote activity in the Southeast are often very limited. In this study we used digital motion-sensor cameras to detect activity among coyote populations in various urban and rural habitats. Camera stations were placed adjacent to regenerating clear-cuts, forest trails and roads, agricultural fields, and residential areas, and within city parks, to determine coyote activity and presence in these areas. Cameras successfully detected coyotes in all study sites throughout the year. Coyotes appeared to show no avoidance of camera stations. Cameras may be helpful in gathering general biological and activity information on coyote populations in an area.
Abstract:
In this paper we address the problem of inserting virtual content into a video sequence. The method we propose uses image information only. We perform primitive tracking, camera calibration, real and virtual camera synchronisation, and finally rendering to insert the virtual content into the real video sequence. To simplify the calibration step we assume that cameras are mounted on a tripod (a common situation in practice). The primitive tracking procedure, which uses lines and circles as primitives, is performed by means of a CART (Classification and Regression Tree). Finally, the virtual and real camera synchronisation and rendering are performed using functions of OpenGL (Open Graphics Library). We have applied the proposed method to sport event scenarios, specifically soccer matches. To illustrate its performance, it has been applied to real HD (High Definition) video sequences. The quality of the proposed method is validated by inserting virtual elements into such HD video sequences.
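As a rough illustration of the insertion step described in this abstract, the sketch below projects a virtual 3D point into a video frame with a calibrated camera. It uses OpenCV's projection routine in Python rather than the OpenGL pipeline the authors describe, and the intrinsics, pose, and point coordinates are placeholder assumptions, not values from the paper.

```python
# Hedged sketch: project a virtual 3D point into a frame using assumed
# calibration results, then draw it. Not the authors' OpenGL pipeline.
import numpy as np
import cv2

K = np.array([[1400.0, 0.0, 960.0],       # assumed intrinsics for an HD frame
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
rvec = np.zeros(3)                         # assumed camera rotation (Rodrigues)
tvec = np.array([0.0, 0.0, 10.0])          # assumed camera translation (metres)

virtual_pts = np.array([[0.0, 0.0, 0.0]])  # a virtual marker placed on the pitch
img_pts, _ = cv2.projectPoints(virtual_pts, rvec, tvec, K, None)

frame = np.zeros((1080, 1920, 3), np.uint8)   # stand-in for a real HD frame
u, v = img_pts[0, 0]
cv2.circle(frame, (int(u), int(v)), 8, (0, 255, 0), -1)  # render the virtual element
```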
Abstract:
Low-cost real-time depth cameras offer new sensing capabilities for a wide range of applications beyond the gaming world. Other active research scenarios, such as surveillance, can take advantage of the capabilities offered by this kind of sensor, which integrates depth and visual information. In this paper, we present a system that operates in a novel application context for these devices: troublesome scenarios where illumination conditions can change suddenly. We focus on the people counting problem, with re-identification and trajectory analysis.
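The abstract does not detail the counting stage; a common realisation is a virtual-line counter over tracked trajectories. The sketch below assumes per-frame detections (e.g., from the depth map) have already been grouped into tracks, and shows only the line-crossing logic; the line position and the track format are hypothetical.

```python
# Hedged sketch of a virtual-line people counter over tracked trajectories.
import numpy as np

LINE_Y = 240  # virtual counting line (pixel row); an assumption

def count_crossings(trajectories, line_y=LINE_Y):
    """Count up/down crossings of a horizontal line by tracked people.

    trajectories: dict track_id -> list of (x, y) centroids over time.
    Note: y grows downward in image coordinates.
    """
    ups = downs = 0
    for path in trajectories.values():
        ys = np.array([p[1] for p in path])
        signs = np.sign(ys - line_y)
        flips = np.diff(signs[signs != 0])   # ignore points exactly on the line
        ups += int((flips < 0).sum())        # y decreasing: moving up in image
        downs += int((flips > 0).sum())      # y increasing: moving down
    return ups, downs

# Toy example: one person crosses downward, another crosses upward.
tracks = {1: [(100, 200), (102, 230), (105, 260)],
          2: [(300, 300), (298, 250), (295, 210)]}
print(count_crossings(tracks))  # expect (1, 1)
```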
Abstract:
An Internet survey demonstrated the existence of problems related to intraoperative tracking camera set-up and alignment. It is hypothesized that these problems are a result of the limited field of view of today's optoelectronic camera systems, which is usually insufficiently large to keep the entire site of surgical action in view during an intervention. A method is proposed to augment a camera's field of view by actively controlling camera orientation, enabling it to track instruments as they are used intraoperatively. In an experimental study, an increase of almost 300% was found in the effective volume in which instruments could be tracked.
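A minimal sketch of the active-orientation idea, assuming a pan-tilt camera and a tracker that reports the instrument's pixel position: a proportional controller converts the pixel error from the image centre into pan/tilt corrections. Gains, sign conventions, and the small-angle pixel-to-degree mapping are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of re-aiming a pan-tilt tracking camera toward an instrument.
import numpy as np

def pan_tilt_update(marker_px, image_size, fov_deg, gain=0.5):
    """Return (d_pan, d_tilt) in degrees that move the marker toward centre."""
    err = np.asarray(marker_px) - np.asarray(image_size) / 2.0  # pixel error
    deg_per_px = np.asarray(fov_deg) / np.asarray(image_size)   # small-angle map
    return tuple(-gain * err * deg_per_px)  # sign convention is an assumption

# Example: marker at (900, 300) in a 1280x1024 image with a 30x25 degree FOV.
print(pan_tilt_update((900, 300), (1280, 1024), (30.0, 25.0)))
```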
Observations of Comet 9P/Tempel 1 around the Deep Impact event by the OSIRIS cameras onboard Rosetta
Abstract:
A practical use of personal digital cameras for taking digital photographs in the microsurgical field through an operating microscope is described. This inexpensive and practical method for acquiring microscopic images at the desired magnification combines the advantages of the digital camera and the operating microscope.
Abstract:
Aims. Approach observations with the Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) experiment onboard Rosetta are used to determine the rotation period, the direction of the spin axis, and the state of rotation of comet 67P's nucleus. Methods. Photometric time series of 67P have been acquired by OSIRIS since the post-wake-up commissioning of the payload in March 2014. Fourier analysis and convex shape inversion methods have been applied to the Rosetta data as well as to the available ground-based observations. Results. Evidence is found that the rotation rate of 67P changed significantly near the time of its 2009 perihelion passage, probably due to sublimation-induced torque. We find that the sidereal rotation periods P1 = 12.76129 ± 0.00005 h and P2 = 12.4043 ± 0.0007 h for the apparitions before and after the 2009 perihelion, respectively, provide the best fit to the observations. No signs of multiple periodicity are found in the light curves down to the noise level, which implies that the comet is presently in a simple rotation state around its axis of largest moment of inertia. We derive a prograde rotation model with spin vector J2000 ecliptic coordinates λ = 65° ± 15°, β = +59° ± 15°, corresponding to equatorial coordinates RA = 22°, Dec = +76°. However, we find that the mirror solution, also prograde, at λ = 275° ± 15°, β = +50° ± 15° (or RA = 274°, Dec = +27°), is possible at the same confidence level, owing to the intrinsic ambiguity of the photometric problem for observations performed close to the ecliptic plane.
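As one standard realisation of the Fourier analysis mentioned in the Methods, the sketch below recovers a rotation period from an unevenly sampled synthetic light curve with a Lomb-Scargle periodogram. This is not the OSIRIS team's pipeline; the sampling, amplitudes, and the doubling of the best period (appropriate for a double-peaked light curve from an irregular nucleus) are assumptions for illustration.

```python
# Hedged sketch: rotation period from an unevenly sampled light curve.
import numpy as np
from scipy.signal import lombscargle

# Hypothetical light curve: times in hours, brightness in magnitudes.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 200, 500))
true_period = 12.76                        # hours, for the synthetic signal
mag = 0.2 * np.sin(2 * np.pi * t / (true_period / 2)) + 0.02 * rng.normal(size=500)

periods = np.linspace(5, 20, 4000)         # trial periods in hours
omega = 2 * np.pi / periods                # scipy expects angular frequencies
power = lombscargle(t, mag - mag.mean(), omega)

# A double-peaked light curve shows power at half the rotation period,
# so the recovered best period is doubled.
print("rotation period estimate:", 2 * periods[power.argmax()], "h")
```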
Abstract:
In this work, a method that synchronizes two video sequences is proposed. Unlike previous methods, which require the existence of correspondences between features tracked in the two sequences, and/or that the cameras are static or jointly moving, the proposed approach does not impose any of these constraints. It works when the cameras move independently, even if different features are tracked in the two sequences. The assumptions underlying the proposed strategy are that the intrinsic parameters of the cameras are known and that two rigid objects, with independent motions in the scene, are visible in both sequences. The relative motion between these objects is used as a clue for the synchronization. The extrinsic parameters of the cameras are assumed to be unknown. A new synchronization algorithm for static or jointly moving cameras that see (possibly) different parts of a common rigidly moving object is also proposed. Proof-of-concept experiments that illustrate the performance of these methods are presented, as well as a comparison with a state-of-the-art approach.
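Whatever scalar signal is derived from the relative motion of the two objects, the final alignment reduces to finding the time shift that best matches the two signals. A minimal sketch, assuming such per-frame signals are already available and using plain cross-correlation for integer frame offsets (the paper's estimator may differ):

```python
# Hedged sketch: recover the time offset between two camera-independent
# relative-motion signals by cross-correlation. Signals here are synthetic.
import numpy as np

def time_offset(sig_a, sig_b):
    """Integer frame offset that best aligns sig_b with sig_a."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(a, b, mode="full")
    return int(corr.argmax()) - (len(b) - 1)

# Synthetic relative-rotation-angle signals; sequence B runs 17 frames
# ahead of sequence A.
t = np.arange(300)
angle = np.sin(0.05 * t) + 0.3 * np.sin(0.13 * t)
seq_a, seq_b = angle[:-17], angle[17:]
print("estimated offset:", time_offset(seq_a, seq_b))  # expect 17
```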
Abstract:
We present observations of total cloud cover and cloud type classification results from a sky camera network comprising four stations in Switzerland. In a comprehensive intercomparison study, records of total cloud cover from the sky camera, long-wave radiation observations, Meteosat, ceilometer, and visual observations were compared. Total cloud cover from the sky camera was within ±1 okta of the other methods in 65–85% of cases. The sky camera overestimates cloudiness with respect to the other automatic techniques on average by up to 1.1 ± 2.8 oktas but underestimates it by 0.8 ± 1.9 oktas compared to the human observer. However, the bias depends on the cloudiness and therefore needs to be considered when records from various observational techniques are homogenized. Cloud type classification was conducted using the k-Nearest Neighbor classifier in combination with a set of color and textural features. In addition, a radiative feature was introduced, which improved the discrimination by up to 10%. The performance of the algorithm mainly depends on the atmospheric conditions, site-specific characteristics, the randomness of the selected images, and possible visual misclassifications: the mean success rate was 80–90% when an image contained only a single cloud class but dropped to 50–70% when the test images were selected completely at random and multiple cloud classes occurred in the images.
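A hedged sketch of the classification step: a k-Nearest Neighbor classifier over per-image colour and texture statistics. The feature extractor below is a toy stand-in; the paper's exact feature set and the added radiative feature are not reproduced, and the training data are placeholders.

```python
# Hedged sketch: k-NN cloud type classification over simple image features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def image_features(rgb):
    """Toy feature vector: channel means/stds plus a crude texture proxy."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    grad = np.abs(np.diff(rgb.mean(axis=2), axis=0)).mean()  # texture proxy
    return np.array([r.mean(), g.mean(), b.mean(),
                     r.std(), g.std(), b.std(), grad])

# Placeholders for labelled sky images: features X_train, cloud classes y_train.
X_train = np.random.rand(100, 7)
y_train = np.random.randint(0, 4, 100)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
test_img = np.random.rand(480, 640, 3)     # stand-in for a sky camera image
print("predicted class:", clf.predict(image_features(test_img)[None, :])[0])
```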
Abstract:
CHARACTERIZATION OF THE COUNT RATE PERFORMANCE AND EVALUATION OF THE EFFECTS OF HIGH COUNT RATES ON MODERN GAMMA CAMERAS
Michael Stephen Silosky, B.S.
Supervisory Professor: S. Cheenu Kappadath, Ph.D.
Evaluation of count rate performance (CRP) is an integral component of gamma camera quality assurance, and measurement of system dead time (τ) is important for quantitative SPECT. The CRP of three modern gamma cameras was characterized using established methods (Decay and Dual Source) under a variety of experimental conditions. For the Decay method, input count rate was plotted against observed count rate and fit to the paralyzable detector model (PDM) to estimate τ (Rates method). A novel expression for observed counts as a function of measurement time interval was derived, and the observed counts were fit to this expression to estimate τ (Counts method). Correlation and Bland-Altman analyses were performed to assess agreement in estimates of τ between methods. The dependence of τ on energy window definition and incident energy spectrum was characterized. The Dual Source method was also used to estimate τ; its agreement with the Decay method under identical conditions, along with the effects of total activity and the ratio of source activities, was investigated. Additionally, the effects of count rate on several performance metrics were evaluated. The CRP curves for each system agreed with the PDM at low count rates but deviated substantially at high count rates. Estimates of τ for the paralyzable portion of the CRP curves using the Rates and Counts methods were highly correlated (r = 0.999) but differed slightly (~6%). No significant difference was observed between the highly correlated estimates of τ obtained with the Decay and Dual Source methods under identical experimental conditions (r = 0.996). Estimates of τ increased as a power-law function with decreasing ratio of counts in the photopeak to total counts, and linearly with decreasing spectral effective energy. Dual Source method estimates of τ varied quadratically with the ratio of single-source to combined-source activities and linearly with the total activity used, across a large range. Image uniformity, spatial resolution, and energy resolution degraded linearly with count rate, and image-distorting effects were observed. Guidelines for CRP testing and a possible method for correcting count rate losses in clinical images are proposed.
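The paralyzable detector model referenced above has the standard form m = n·exp(−nτ), where n is the input count rate and m the observed rate. A minimal sketch of estimating τ in the spirit of the Rates method, with hypothetical count rate data:

```python
# Hedged sketch: fit observed vs. input count rate to the paralyzable
# detector model m = n * exp(-n * tau) to estimate dead time tau.
import numpy as np
from scipy.optimize import curve_fit

def paralyzable(n, tau):
    """Observed rate m for input rate n under the paralyzable model."""
    return n * np.exp(-n * tau)

# Hypothetical data: input rates (cps) inferred from source decay, and the
# rates actually observed by the gamma camera.
input_rate = np.array([1e4, 5e4, 1e5, 2e5, 4e5])
observed_rate = np.array([9.96e3, 4.90e4, 9.60e4, 1.84e5, 3.37e5])

(tau_fit,), cov = curve_fit(paralyzable, input_rate, observed_rate, p0=[1e-6])
print(f"estimated dead time: {tau_fit * 1e6:.2f} microseconds")
```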
Abstract:
While modern sampling techniques, such as autonomous underwater vehicles, are increasing our knowledge of the fauna beneath Antarctic sea ice only a few meters thick, greater sampling difficulties mean that little is known about the marine life underneath Antarctic ice shelves more than 100 m thick. In this study, we present underwater images showing the underside of an Antarctic ice shelf covered by aggregated invertebrate communities, most likely cnidarians and isopods. These images, taken at an average depth of 145 m, were obtained with a digital still camera system attached to Weddell seals (Leptonychotes weddellii) foraging just beneath the ice shelf. Our observations indicate that, like the sea floor, ice shelves serve as an important habitat for a remarkable amount of marine invertebrate fauna in Antarctica.