926 results for cameras and camera accessories
Abstract:
Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as light fields, geometric reconstruction, and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects, such as people entering and leaving the scene. The methods listed above have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.
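The core idea of accumulating samples over time to see past moving occluders can be illustrated with a per-pixel temporal median over registered frames. This is a simplified stand-in for the idea, not the paper's actual lag-camera algorithm; the toy frames below are made up.

```python
import numpy as np

def remove_transient_occluders(frames):
    """Per-pixel temporal median over registered frames: a pixel that is
    blocked by a moving occluder in a minority of frames is recovered
    from the frames in which it is visible."""
    return np.median(np.stack(frames), axis=0)

# Toy scene: a constant background, with a dark occluder covering a
# different pixel in each frame.
background = np.ones((2, 2))
f1, f2, f3 = background.copy(), background.copy(), background.copy()
f1[0, 0] = 0.0   # occluder position in frame 1
f2[0, 1] = 0.0   # occluder position in frame 2
f3[1, 0] = 0.0   # occluder position in frame 3
recovered = remove_transient_occluders([f1, f2, f3])
```

Because each pixel is occluded in at most one of the three frames, the median recovers the full background.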
Abstract:
Adding virtual objects to real environments plays an important role in today's computer graphics: typical examples are virtual furniture in a real room and virtual characters in real movies. For a believable appearance, consistent lighting of the virtual objects is required. We present an augmented reality system that displays virtual objects with consistent illumination and shadows in the image of a simple webcam. We use two high dynamic range video cameras with fisheye lenses permanently recording the environment illumination. A sampling algorithm selects a few bright parts in one of the wide-angle images and the corresponding points in the second camera image. The 3D position can then be calculated using epipolar geometry. Finally, the selected point lights are used in a multi-pass algorithm to draw the virtual object with shadows. To validate our approach, we compare the appearance and shadows of the synthetic objects with real objects.
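The 3D-position step, recovering a light's location from its projections in two calibrated views, can be sketched with standard linear (DLT) triangulation. This uses simple pinhole projection matrices rather than the paper's fisheye model, and the camera poses and point below are toy values.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: matched 2D image points."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose and a unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free matches the true point is recovered exactly; with real detections the same least-squares formulation gives the best linear estimate.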
Abstract:
For broadcasting purposes, Mixed Reality, the combination of real and virtual scene content, has become ubiquitous. Mixed Reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for Mixed Reality applications which uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which we consider a core requirement for realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore, we present a way to support placement of virtual content in the scene. The core feature of our system is the incorporation of a time-of-flight (ToF) camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. The camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation, and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible, and provides convenient calibration procedures. Chroma keying is replaced by depth keying, which is performed efficiently on the graphics processing unit (GPU) using an environment model and the current ToF-camera image. Automatic extraction and tracking of dynamic scene content is thus performed, and this information is used for planning and alignment of virtual content. A further valuable feature is that depth maps of the mixed content are available in real time, which makes the approach suitable for future 3DTV productions. This paper gives an overview of the whole system, including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content, and dynamic object tracking for content planning.
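The depth-keying and mutual-occlusion steps can be sketched as per-pixel depth comparisons. This is a minimal CPU sketch of the idea (the paper performs it on the GPU), and the array values and threshold are illustrative, not from the system.

```python
import numpy as np

def depth_key(real_rgb, real_depth, virt_rgb, virt_depth):
    """Mutual occlusion handling: per pixel, show whichever layer
    (real or virtual) is closer to the camera."""
    real_wins = real_depth <= virt_depth
    return np.where(real_wins[..., None], real_rgb, virt_rgb)

def extract_dynamic(depth, env_model, thresh=0.1):
    """Depth keying against a static environment model: pixels that are
    significantly closer than the model are dynamic foreground."""
    return (env_model - depth) > thresh

# Toy 1x2 image: pixel 0 has real content in front of the virtual
# object, pixel 1 has it behind.
real_rgb = np.zeros((1, 2, 3))   # real image (black)
virt_rgb = np.ones((1, 2, 3))    # virtual object (white)
real_d = np.array([[1.0, 3.0]])
virt_d = np.array([[2.0, 2.0]])
composite = depth_key(real_rgb, real_d, virt_rgb, virt_d)
```

The same comparison against the static environment model yields the dynamic-object mask used for tracking and content planning.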
Abstract:
This contribution discusses the effects of camera aperture correction in broadcast video on colour-based keying. Aperture correction is used to 'sharpen' an image and is one element that distinguishes the 'TV look' from the 'film look'. If a very high level of sharpening is applied, as is the case in many TV productions, it significantly shifts the colours around object boundaries with high contrast. This paper discusses these effects and their impact on keying and describes a simple low-pass filter to compensate for them. Tests with colour-based segmentation algorithms show that the proposed compensation is an effective way of decreasing keying artefacts on object boundaries.
Abstract:
When depicting both virtual and physical worlds, the viewer's impression of presence in these worlds is strongly linked to camera motion. Plausible and artist-controlled camera movement can substantially increase scene immersion. While physical camera motion exhibits subtle details of position, rotation, and acceleration, these details are often missing from virtual camera motion. In this work, we analyze camera movement using signal theory. Our system allows us to stylize a smooth user-defined virtual base camera motion by enriching it with plausible details. A key component of our system is a database of videos filmed by physical cameras. These videos are analyzed with a camera-motion estimation algorithm (structure from motion) and labeled manually with a specific style. By considering spectral properties of location, orientation, and acceleration, our solution learns camera motion details. Consequently, an arbitrary virtual base motion, defined in any conventional animation package, can be automatically modified according to a user-selected style. In an animation package, the camera motion base path is typically defined by the user via function curves. Another possibility is to obtain the camera path using a mixed-reality camera in a motion-capture studio. As shown in our experiments, the resulting shots are still fully artist-controlled but appear richer and more physically plausible.
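The enrichment step can be illustrated as transplanting the high-frequency band of a recorded physical track onto a smooth base path. The FFT brick-wall filter, cutoff, and gain below are simplifications standing in for the paper's spectral style learning; the signals are synthetic.

```python
import numpy as np

def highpass_detail(recorded, cutoff=5.0, fs=30.0):
    """Isolate the high-frequency 'handheld' detail of a recorded 1D
    camera coordinate track with an FFT brick-wall high-pass filter."""
    spectrum = np.fft.rfft(recorded)
    freqs = np.fft.rfftfreq(len(recorded), d=1.0 / fs)
    spectrum[freqs < cutoff] = 0.0          # drop DC and low frequencies
    return np.fft.irfft(spectrum, n=len(recorded))

def stylize(base, recorded, gain=1.0, cutoff=5.0, fs=30.0):
    """Enrich a smooth base path with detail taken from a recording."""
    return base + gain * highpass_detail(recorded, cutoff, fs)

# Synthetic example: slow 1 Hz sway plus small 10 Hz jitter, sampled
# at 30 fps for 10 s; only the jitter survives the high-pass.
t = np.arange(300) / 30.0
recorded = np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.sin(2 * np.pi * 10.0 * t)
base = np.linspace(0.0, 1.0, 300)
stylized = stylize(base, recorded)
```

The base trajectory (and hence the framing) stays artist-controlled; only zero-mean high-frequency detail is added on top.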
Abstract:
In recent years, the use of tracking cameras for SLR observations became less important due to the high accuracy of the predicted orbits. However, upcoming new targets, such as satellites in eccentric orbits and space debris objects, require tracking cameras again. In 2013, the interline CCD camera at the Zimmerwald Observatory was replaced with a so-called scientific CMOS camera. This technology promises better performance for this application than all kinds of CCD cameras. After a comparison of the different technologies, the focus is on the integration into the Zimmerwald SLR system.
Abstract:
Efforts are ongoing to decrease the noise of the GRACE gravity field models and hence to come closer to the GRACE baseline. Among the most significant error sources are untreated errors in the observation data and imperfections in the background models. A recent study (Bandikova & Flury, 2014) revealed that the current release of the star camera attitude data (SCA1B RL02) contains noise systematically higher than expected, by a factor of about 3-4. This is due to an incorrect implementation of the algorithms for quaternion combination in the JPL processing routines. Generating improved SCA data requires that valid data from both star camera heads be available, which is not always the case because the Sun and Moon at times blind one camera. In gravity field modeling, the attitude data are needed for the KBR antenna offset correction and to orient the non-gravitational linear accelerations sensed by the accelerometer. Hence, any improvement in the SCA data is expected to be reflected in the gravity field models. In order to quantify the effect on the gravity field, we processed one month of observation data using two different approaches: the celestial mechanics approach (AIUB) and the variational equations approach (ITSG). We show that the noise in the KBR observations and the linear accelerations has effectively decreased. However, the effect on the gravity field on a global scale is hardly evident. We conclude that, at the current level of accuracy, the errors seen in the temporal gravity fields are dominated by errors coming from sources other than the attitude data.
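Combining attitude quaternions from two star camera heads must respect the sign ambiguity of unit quaternions (q and -q encode the same rotation); mishandling such details is the kind of implementation error described above. Below is one standard sign-aligned weighted average, a small-angle approximation shown for illustration only, not the JPL or corrected routine.

```python
import numpy as np

def combine_quaternions(q1, q2, w1=0.5, w2=0.5):
    """Weighted combination of two attitude quaternions (scalar-last).
    Signs are aligned first, since q and -q encode the same rotation;
    the normalized weighted sum is then a valid small-angle average."""
    q1 = np.asarray(q1, float) / np.linalg.norm(q1)
    q2 = np.asarray(q2, float) / np.linalg.norm(q2)
    if np.dot(q1, q2) < 0.0:   # put both on the same hemisphere
        q2 = -q2
    q = w1 * q1 + w2 * q2
    return q / np.linalg.norm(q)

# q and -q represent the identical rotation; a naive average would
# cancel to zero, while the sign-aligned average recovers it.
combined = combine_quaternions([0.0, 0.0, 0.0, 1.0], [0.0, 0.0, 0.0, -1.0])
```

Without the sign alignment, averaging these two inputs would produce the zero vector and the normalization would fail, illustrating why quaternion combination is easy to get wrong.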
Abstract:
We present observations of total cloud cover and cloud type classification results from a sky camera network comprising four stations in Switzerland. In a comprehensive intercomparison study, records of total cloud cover from the sky camera, long-wave radiation observations, Meteosat, ceilometer, and visual observations were compared. Total cloud cover from the sky camera was in 65–85% of cases within ±1 okta with respect to the other methods. The sky camera overestimates cloudiness with respect to the other automatic techniques on average by up to 1.1 ± 2.8 oktas but underestimates it by 0.8 ± 1.9 oktas compared to the human observer. However, the bias depends on the cloudiness and therefore needs to be considered when records from various observational techniques are being homogenized. Cloud type classification was conducted using the k-Nearest Neighbor classifier in combination with a set of color and textural features. In addition, a radiative feature was introduced which improved the discrimination by up to 10%. The performance of the algorithm mainly depends on the atmospheric conditions, site-specific characteristics, the randomness of the selected images, and possible visual misclassifications: The mean success rate was 80–90% when the image only contained a single cloud class but dropped to 50–70% if the test images were completely randomly selected and multiple cloud classes occurred in the images.
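The classification step, a k-Nearest-Neighbour vote over colour, textural, and radiative features, can be sketched as follows. The two toy features (a red/blue colour ratio and a grey-level standard deviation as a texture proxy) and the sample feature values are illustrative stand-ins, not the paper's actual descriptor set.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-Nearest-Neighbour majority vote in feature space."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

def features(rgb_patch):
    """Toy stand-ins for the paper's descriptors: a colour feature
    (mean red / mean blue) and a textural feature (grey-level std)."""
    r = rgb_patch[..., 0].mean()
    b = rgb_patch[..., 2].mean()
    gray = rgb_patch.mean(axis=-1)
    return np.array([r / (b + 1e-6), gray.std()])

# Tiny hand-made training set in (colour ratio, texture) space.
X_train = np.array([[0.30, 0.02], [0.35, 0.03], [0.90, 0.20], [0.95, 0.25]])
y_train = np.array(["clear", "clear", "cumulus", "cumulus"])
pred = knn_predict(X_train, y_train, np.array([0.32, 0.025]), k=3)
```

Adding further features (such as the radiative one described) simply extends the feature vectors; the voting logic is unchanged.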
Abstract:
While modern sampling techniques, such as autonomous underwater vehicles, are increasing our knowledge of the fauna beneath Antarctic sea ice of only a few meters in depth, greater sampling difficulties mean that little is known about the marine life underneath Antarctic ice shelves over 100 m thick. In this study, we present underwater images showing the underside of an Antarctic ice shelf covered by aggregated invertebrate communities, most likely cnidarians and isopods. These images, taken at an average depth of 145 m, were obtained with a digital still camera system attached to Weddell seals Leptonychotes weddellii foraging just beneath the ice shelf. Our observations indicate that, similar to the sea floor, ice shelves serve as an important habitat for a remarkable amount of marine invertebrate fauna in Antarctica.
Abstract:
Particles sinking out of the euphotic zone are important vehicles of carbon export from the surface ocean. Most of the particles form heavier aggregates by coagulating with each other before they sink. We implemented an aggregation model into the biogeochemical model of the Regional Oceanic Modelling System (ROMS) to simulate the distribution of particles in the water column and their downward transport in the Northwest African upwelling region. Accompanying settling chamber, sediment trap, and particle camera measurements provide data for model validation. In situ aggregate settling velocities measured by the settling chamber were around 55 m d**-1. Aggregate sizes recorded by the particle camera hardly exceeded 1 mm. The model is based on a continuous size spectrum of aggregates, characterised by the prognostic aggregate mass and aggregate number concentration. Phytoplankton and detritus make up the aggregation pool, which has an averaged, prognostic, size-dependent sinking speed. Model experiments were performed with dense and porous approximations of aggregates, with varying maximum aggregate size and stickiness, as well as with the inclusion of a disaggregation term. Surface productivity was kept similar in all experiments in order to find the combination of parameters that best reproduces the measured deep-water fluxes. Although the experiments failed to represent surface particle number spectra, in the deep water some of them gave a slope and spectrum range very similar to the particle camera observations. Particle fluxes at the mesotrophic sediment trap site off Cape Blanc (CB) were successfully reproduced by the porous experiment with a disaggregation term when the particle remineralisation rate was 0.2 d**-1. The aggregation-disaggregation model significantly improves the prediction capability of the original biogeochemical model by giving much better flux estimates for both the upper and lower trap.
The results also point to the need for further studies to enhance our knowledge of particle decay and its variation, and of the role that stickiness plays in the distribution of vertical fluxes.
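The two prognostic tracers described (aggregate mass and aggregate number concentration) determine a mean aggregate size, from which a size-dependent sinking speed can be derived. The power law and its parameter values below are made-up placeholders to illustrate the coupling, not the model's actual formulation.

```python
def mean_aggregate_mass(mass_conc, num_conc):
    """Mean mass per aggregate: the ratio of the two prognostic tracers
    (total aggregate mass and aggregate number concentration)."""
    return mass_conc / num_conc

def sinking_speed(mean_mass, w0=50.0, b=0.3):
    """Size-dependent sinking speed as a power law of mean aggregate
    mass; w0 (m/d) and the exponent b are illustrative values only."""
    return w0 * mean_mass ** b

# Coagulation lowers the number concentration at (roughly) constant
# mass, so the mean aggregate grows and sinks faster.
w_before = sinking_speed(mean_aggregate_mass(10.0, 10.0))
w_after = sinking_speed(mean_aggregate_mass(10.0, 5.0))
```

This is the mechanism by which a prognostic size spectrum translates coagulation (and disaggregation) into changes in the vertical flux.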
Abstract:
We compared particle data from a moored video camera system with sediment-trap-derived fluxes at ~1100 m depth in the highly dynamic coastal upwelling system off Cape Blanc, Mauritania. Between spring 2008 and winter 2010, the trap collected settling particles in 9-day intervals, while the camera recorded in-situ particle abundance and size distribution every third day. Particle fluxes were highly variable (40-1200 mg m**-2 d**-1) and followed distinct seasonal patterns with peaks during spring, summer, and fall. The particle flux patterns from the sediment traps correlated with the total particle volume captured by the video camera, which ranged from 1 to 22 mm**3 l**-1. The measured increase in total particle volume during periods of high mass flux appeared to be related more to increases in particle concentration than to increased average particle size. We observed events that had similar particle fluxes but showed clear differences in particle abundance and size distribution, and vice versa. Such observations can only be explained by shifts in the composition of the settling material, with changes in both particle density and chemical composition. For example, the input of wind-blown dust from the Sahara during September 2009 led to the formation of high numbers of comparatively small particles in the water column. This suggests that, besides seasonal changes, the composition of marine particles in one region undergoes episodic changes. The time between the appearance of high dust concentrations in the atmosphere and the increased lithogenic flux in the 1100 m deep trap suggested an average settling rate of 200 m d**-1, indicating a close and fast coupling between dust input and sedimentation of the material.
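The ~200 m d**-1 settling estimate is simply the trap depth divided by the observed lag between the atmospheric dust event and the flux peak. The lag value used below is back-computed from the two reported numbers for illustration; it is not quoted in the text.

```python
def settling_speed(trap_depth_m, lag_days):
    """Average settling speed (m per day) implied by the delay between
    a surface event and its flux signal at a moored trap."""
    return trap_depth_m / lag_days

# 1100 m trap; a lag of 5.5 days reproduces the reported ~200 m/d
# (the 5.5 d lag is inferred from those two numbers, not stated).
speed = settling_speed(1100.0, 5.5)
```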
Abstract:
The ROV operations had three objectives: (1) to check whether the "Cherokee" system is suited for advanced benthological work in the high-latitude Antarctic shelf areas; (2) to support the disturbance experiment by providing immediate visual information; (3) to continue ecological work started in 1989 at the hilltop situated at the northern margin of the Norsel Bank off the 4-Seasons Inlet (Weddell Sea). The "Cherokee" was equipped with 3 video cameras, 2 of which support the operation. A high-resolution Tritech Typhoon camera is used for scientific observations to be recorded. In addition, the ROV has a manipulator, a still camera, lights and strobe, a compass, 2 lasers, a Posidonia transponder, and an obstacle-avoidance sonar. The size of the vehicle is 160 × 90 × 90 cm. In the present configuration without a TMS (tether management system), the deployment has to start with paying out the full cable length, laying it in loops on deck, and connecting the glass fibres at the tether's spool winch. After a final technical check, the vehicle is deployed into the water, actively driven perpendicular to the ship's axis, and floats are fixed to the tether. At a cable length of approx. 50 m, the tether is tightened to the depressor by several cable ties, and both components are lowered towards the sea floor, the vehicle by the thrusters' propulsion and the depressor by the ship's winch. At 5 m intervals, the tether has to be tied to the single-conductor cable. In good weather conditions, the instruments supporting the navigation of the ROV, especially the Posidonia system, allow an operation mode that follows the ship's course if the ship's speed is slow. Together with the lasers, which act as a scale in the images, they also allow a reproducible scientific analysis, since the transect can be plotted in a GIS system. Consequently, the area observed can be easily calculated.
Operation as a predominantly drifting system, especially in areas with near-bottom currents, is also possible; however, the connection of the tether at the rear of the vehicle is unsuitable for such conditions. The recovery of the system corresponds to that of the deployment. Most important is to reach the sea surface at a safe distance perpendicular to the ship's axis, in order not to interfere with the ship's propellers. During this phase, the Posidonia transponder system is of high relevance, although it has to be switched off at a water depth of approx. 40 m. The minimum personnel needed is 4 persons to handle the tether on deck, one person to operate the ship's winch, one pilot and one additional technician for the ROV operation itself, one scientist, and one person on the ship's bridge in addition to one on deck for whale watching when the Posidonia system is in use. The time for the deployment of the ROV until it reaches the sea floor depends on the water depth and consequently on the length of cable to be paid out beforehand and tied to the single-conductor cable. Deployment and recovery at intermediate water depths can each last up to 2 hours. A reasonable time for benthological observations close to the sea floor is 1 to 3 hours but can be extended if scientifically justified. Preliminary results: after a first test station, the ROV was deployed 3 times for observations related to the disturbance experiment. A first attempt to cross the hilltop at the northern margin of the Norsel Bank close to the 4-Seasons Inlet was successful only for the first few hundred metres of transect length. The benthic community was dominated in biomass by the demosponge Cinachyra barbata. Due to the strong current of approx. 1 nm/h, the design of the system, and a presumably more difficult current regime between grounded icebergs and the top of the hilltop, the operation was stopped before the hilltop was reached.
In a second attempt, the hilltop was successfully crossed because the current and wind situation was much more favourable. In contrast to earlier expeditions with the "sprint" ROV, it was the first time that both slopes, the smoother one in the northeast and the steeper one in the southwest, were continuously observed during one cast. A coarse classification of the hilltop fauna shows patches dominated by single taxa: cnidarians, hydrozoans, holothurians, sea urchins, and stalked sponges. Approximately 20% of the north-eastern slope was devastated by grounding icebergs. Here the sediments consisted of large boulders, gravel, or blocks of finer sediment resembling an irregularly ploughed field. On the Norsel Bank, the Cinachyra concentrations were locally associated with high abundances of sea anemones. The total observation time amounted to 11.5 hours, corresponding to a transect length of approximately 6-9 km.
Abstract:
Below are the results of the Iberian lynx survey obtained with camera-trapping between 2000 and 2007 in Sierra Morena. Two aspects of camera-trapping that are very important for its efficiency are also analyzed. The first is the evolution over the years of two efficiency indicators, according to the type of camera-trapping used. The results obtained demonstrate that the most efficient lure is rabbit, though it is the least tested (92 trap-nights), followed by camera-trapping at the most frequent marking places (latrines). Finally, we propose the novel concept of the use area as a spatial reference unit for camera-trapping monitoring of non-radio-marked animals, and discuss its validity.