993 results for "Stereoscopic cameras"


Relevance: 10.00%

Abstract:

Vegetation phenology is an important indicator of climate change and climate variability, and it is strongly connected to biospheric–atmospheric gas exchange. We aimed to evaluate the applicability of phenological information derived from digital imagery for the interpretation of CO2 exchange measurements. For the years 2005–2007 we analyzed the seasonal phenological development of two temperate mixed forests using tower-based imagery from standard RGB cameras. Phenological information was jointly analyzed with gross primary productivity (GPP) derived from net ecosystem exchange data. Automated image analysis provided reliable information on the vegetation developmental stages of beech and ash trees covering all seasons. A phenological index derived from image color values was strongly correlated with GPP, with a significant mean time lag of several days for ash trees and several weeks for beech trees in early summer (May to mid-July). Leaf emergence dates for the dominant tree species partly explained the temporal behaviour of spring GPP but were also masked by local meteorological conditions. We conclude that digital cameras at flux measurement sites not only provide an objective measure of the physiological state of a forest canopy at high temporal and spatial resolutions, but also complement CO2 and water exchange measurements, improving our knowledge of ecosystem processes.
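The abstract does not name the specific color index; in camera-based phenology work a common choice is the green chromatic coordinate, GCC = G / (R + G + B), averaged over a canopy region of interest. A minimal sketch of such an index (the index choice here is an assumption, not necessarily the authors' exact formulation):

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """GCC = G / (R + G + B), averaged over a region of interest.

    rgb: array of shape (H, W, 3) with channels in R, G, B order.
    """
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2)
    total[total == 0] = 1.0  # avoid division by zero in fully dark pixels
    gcc = rgb[:, :, 1] / total
    return float(gcc.mean())

# Sanity checks: a pure-green patch gives GCC = 1.0, a grey patch 1/3.
green = np.zeros((4, 4, 3)); green[:, :, 1] = 255
grey = np.full((4, 4, 3), 128)
print(green_chromatic_coordinate(green))  # 1.0
print(green_chromatic_coordinate(grey))   # ~0.333
```

Tracking this scalar per image over a season yields the kind of greenness time series that can be correlated with GPP.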

Relevance: 10.00%

Abstract:

Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as light fields, geometric reconstruction, and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects such as people entering and leaving the scene. The methods listed above have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.
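The reconstruction step of a lag camera is not detailed in this abstract; one simple way to recover a static scene from samples taken over time, despite transient occluders, is a per-pixel temporal median over the frame stack. A hedged sketch of that idea (illustrative only, not necessarily the paper's actual algorithm):

```python
import numpy as np

def static_background(frames):
    """Per-pixel temporal median over a stack of frames.

    frames: array of shape (T, H, W). An occluder that covers a pixel
    in fewer than half of the frames is voted out by the median.
    """
    return np.median(np.asarray(frames, dtype=float), axis=0)

# Static scene of value 10, with a moving occluder (value 255)
# covering a different pixel in each frame.
frames = np.full((5, 1, 5), 10.0)
for t in range(5):
    frames[t, 0, t] = 255.0
print(static_background(frames))  # every pixel recovered as 10.0
```

The same voting idea extends to samples gathered from a moving camera once frames are registered to a common viewpoint.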

Relevance: 10.00%

Abstract:

Adding virtual objects to real environments plays an important role in today's computer graphics: typical examples are virtual furniture in a real room and virtual characters in real movies. For a believable appearance, consistent lighting of the virtual objects is required. We present an augmented reality system that displays virtual objects with consistent illumination and shadows in the image of a simple webcam. We use two high-dynamic-range video cameras with fisheye lenses permanently recording the environment illumination. A sampling algorithm selects a few bright parts in one of the wide-angle images and the corresponding points in the second camera image. The 3D position can then be calculated using epipolar geometry. Finally, the selected point lights are used in a multi-pass algorithm to draw the virtual object with shadows. To validate our approach, we compare the appearance and shadows of the synthetic objects with real objects.
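Given the two camera centers and the viewing rays through a matched pair of bright points, the 3D light position can be recovered as the midpoint of the shortest segment between the two rays, a standard least-squares triangulation step consistent with the epipolar setup described above. A minimal sketch (the camera geometry in the example is invented for illustration):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1 + t*d1 and c2 + s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Normal equations for minimising |(c1 + t d1) - (c2 + s d2)|^2 over t, s.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t, s = np.linalg.solve(A, b)
    return 0.5 * ((c1 + t * d1) + (c2 + s * d2))

# Two cameras 1 m apart, both observing a point light at (0.5, 0, 2).
light = np.array([0.5, 0.0, 2.0])
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
p = triangulate_midpoint(c1, light - c1, c2, light - c2)
print(p)  # ~[0.5, 0.0, 2.0]
```

With noisy fisheye correspondences the two rays no longer intersect exactly, and the midpoint gives the least-squares compromise.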

Relevance: 10.00%

Abstract:

Having to carry input devices can be inconvenient when interacting with wall-sized, high-resolution tiled displays. Such displays are typically driven by a cluster of computers. Running existing games on a cluster is non-trivial, and the performance attained using software solutions like Chromium is not good enough. This paper presents a touch-free, multi-user, human-computer interface for wall-sized displays that enables completely device-free interaction. The interface is built using 16 cameras and a cluster of computers, and is integrated with the games Quake 3 Arena (Q3A) and Homeworld. The two games were parallelized using two different approaches in order to run with good performance on a 7x4-tile, 21-megapixel display wall. The touch-free interface enables interaction with a latency of 116 ms, of which 81 ms are due to the camera hardware. The rendering performance of the games is compared to that of their sequential counterparts running on the display wall using Chromium. Parallel Q3A's frame rate is an order of magnitude higher than with Chromium. The parallel version of Homeworld performed on par with the sequential version, which did not run at all using Chromium. Informal use of the touch-free interface indicates that it works better for controlling Q3A than Homeworld.

Relevance: 10.00%

Abstract:

The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interaction obeys complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones, under static conditions. The auditory localization performance observed in the present study is in line with that reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.

Relevance: 10.00%

Abstract:

This paper presents different application scenarios for which the registration of sub-sequence reconstructions or multi-camera reconstructions is essential for successful camera motion estimation and 3D reconstruction from video. The registration is achieved by merging unconnected feature point tracks between the reconstructions. One application is drift removal for sequential camera motion estimation of long sequences. The state-of-the-art in drift removal is to apply a RANSAC approach to find unconnected feature point tracks. In this paper an alternative spectral algorithm for pairwise matching of unconnected feature point tracks is used. It is then shown that the algorithms can be combined and applied to novel scenarios where independent camera motion estimations must be registered into a common global coordinate system. In the first scenario multiple moving cameras, which capture the same scene simultaneously, are registered. A second new scenario occurs in situations where the tracking of feature points during sequential camera motion estimation fails completely, e.g., due to large occluding objects in the foreground, and the unconnected tracks of the independent reconstructions must be merged. In the third scenario image sequences of the same scene, which are captured under different illuminations, are registered. Several experiments with challenging real video sequences demonstrate that the presented techniques work in practice.
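Once unconnected feature tracks have been matched between two reconstructions, registering them into a common global coordinate system amounts to estimating a similarity transform (rotation, translation, scale) between corresponding 3D points. A sketch of the classic closed-form solution (Umeyama-style; the abstract does not state which solver the authors use):

```python
import numpy as np

def align_similarity(src, dst):
    """Least-squares s, R, t with dst ~= s * R @ src + t (points as columns)."""
    mu_s = src.mean(axis=1, keepdims=True)
    mu_d = dst.mean(axis=1, keepdims=True)
    xs, xd = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(xd @ xs.T)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (xs ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: recover a known scale, rotation, and translation.
rng = np.random.default_rng(0)
src = rng.standard_normal((3, 10))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = 2.0 * R_true @ src + np.array([[1.0], [2.0], [3.0]])
s, R, t = align_similarity(src, dst)
print(round(s, 6))  # 2.0
```

In practice this estimate would be wrapped in a robust loop (e.g. RANSAC over the matched tracks, as the state of the art described above does) to reject wrong matches.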

Relevance: 10.00%

Abstract:

The staircase is presented as the architectural component that most potently embodies thresholds, boundaries, and passages, owing to its diagonal orientation and its essence as an intermediary zone. Connections are then made between the kinesthetic requirements of traversing a staircase and those of viewing a stereoscopic photograph. From this foundation, the haptic essence of stereoscopic photography is proposed as a uniquely qualified medium through which to view a staircase, and therefore thresholds, boundaries, and passages within architecture. Analyses of stereoviews of staircases in the Palais de Justice in Brussels, the Library of Congress in Washington, and the Palais Garnier (Opéra) in Paris close the essay.

Relevance: 10.00%

Abstract:

This contribution describes the integration of time-of-flight 3D camera systems into the fork-tine tips of an industrial truck. Using the integrated cameras and the evaluation of their images, a driver-assistance system for the handling of load carriers was implemented, which gives the driver of the truck movement recommendations for optimizing the relative position between the fork tines and the load carrier or storage location. In addition to presenting the camera hardware used and its integration on the vehicle, the image-processing workflow is also described.

Relevance: 10.00%

Abstract:

When depicting both virtual and physical worlds, the viewer's impression of presence in these worlds is strongly linked to camera motion. Plausible and artist-controlled camera movement can substantially increase scene immersion. While physical camera motion exhibits subtle details of position, rotation, and acceleration, these details are often missing from virtual camera motion. In this work, we analyze camera movement using signal theory. Our system allows us to stylize a smooth, user-defined virtual base camera motion by enriching it with plausible details. A key component of our system is a database of videos filmed with physical cameras. These videos are analyzed with a camera-motion estimation algorithm (structure-from-motion) and labeled manually with a specific style. By considering spectral properties of location, orientation, and acceleration, our solution learns camera-motion details. Consequently, an arbitrary virtual base motion, defined in any conventional animation package, can be automatically modified according to a user-selected style. In an animation package, the base camera path is typically defined by the user via function curves. Another possibility is to obtain the camera path using a mixed-reality camera in a motion-capture studio. As shown in our experiments, the resulting shots remain fully artist-controlled but appear richer and more physically plausible.
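The spectral idea can be illustrated in one dimension: measure the high-frequency content of a reference signal recorded with a physical camera and add it to a smooth virtual base path. This toy sketch transfers a spectrum directly and is not the paper's learning procedure (the cutoff and signals are invented for illustration):

```python
import numpy as np

def enrich(base, reference, cutoff):
    """Add the high-frequency detail of `reference` to `base`.

    base, reference: 1-D position signals of equal length.
    cutoff: frequency-bin index below which content is treated as the
    path's overall shape rather than style detail.
    """
    n = len(base)
    detail = np.fft.rfft(reference)
    detail[:cutoff] = 0.0            # keep only high-frequency detail
    return base + np.fft.irfft(detail, n=n)

# A smooth linear pan enriched with jitter from a noisy reference signal.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
base = t                              # smooth virtual camera pan
reference = 0.01 * np.random.default_rng(1).standard_normal(256)
rich = enrich(base, reference, cutoff=8)
```

Because only bins above the cutoff are transferred, the enriched path keeps the artist-defined low-frequency shape while gaining handheld-like detail.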

Relevance: 10.00%

Abstract:

Contents:
- Poll: Ideal study abroad location?
- One-on-one: Samantha Weese
- Editorial: Cameras in Campustown
- ISU collides with CERN's discoveries
- Iowa Games takes new direction

Relevance: 10.00%

Abstract:

Contents:
- Eagles finally SOAR
- Teach-in addresses partnership
- Relay for Life unites people of Story County
- Softball falls to Longhorns
- Do cameras curb traffic violations?
- Main Street Cultural District moves toward a new look

Relevance: 10.00%

Abstract:

Introduction: Photography through a microscope is virtually identical to photography through an astronomical telescope. For years the 35mm camera was the camera of choice for microphotography, but we now live in a digital-camera age. We describe a custom, homemade adapter that can fit most cameras and microscopes. [See PDF for complete abstract]

Relevance: 10.00%

Abstract:

This paper reports the results of a research project comparing two virtual collaborative environments as stress-coping environments for real-life situations: one offering first-person visual immersion (first-person interaction), and one in which the user interacts through a sound-kinetic virtual representation of himself (an avatar). Recent developments in coping research propose a shift from a trait-oriented approach to coping toward a more situation-specific treatment. We defined a real-life situation as a target-oriented situation that demands a complex coping-skills inventory of high self-efficacy and internal or external "locus of control" strategies. The participants were 90 normal adults with healthy or impaired coping skills, 25-40 years of age, randomly spread across the groups, with equal group sizes and gender balance within groups. All groups went through two phases. In Phase I (Solo), each participant was assessed individually using a three-stage assessment inspired by the transactional stress theory of Lazarus and the stress inoculation theory of Meichenbaum: a coping-skills measurement over the time course of various hypothetical stressful encounters, performed under two experimental conditions and one control condition. In Condition A, the participant was given a virtual stress-assessment scenario from a first-person perspective (VRFP). In Condition B, the participant was given a virtual stress-assessment scenario with a behaviorally realistic, motion-controlled avatar with sonic feedback (VRSA). In Condition C, the no-treatment condition (NTC), the participant received just an interview. In Phase II, all three groups were mixed and performed the same tasks in pairs.
The results showed that the VRSA group performed notably better in terms of cognitive appraisals, emotions, and attributions than the other two groups in Phase I (VRSA, 92%; VRFP, 85%; NTC, 34%). In Phase II, the difference again favored the VRSA group over the other two. These results indicate that a virtual collaborative environment is a consistent coping environment, tapping two classes of stress: (a) aversive or ambiguous situations, and (b) loss or failure situations, in relation to the stress inoculation theory. In terms of coping behaviors, a distinction is made between self-directed and environment-directed strategies. A great advantage of the virtual collaborative environment with the behaviorally enhanced sound-kinetic avatar is the consideration of team coping intentions at different stages. Even if the aim is to tap transactional processes in real-life situations, it may be better to conduct research using a sound-kinetic, avatar-based collaborative environment than a virtual first-person-perspective scenario alone. The VE consisted of two dual-processor PC systems, a video splitter, a digital camera, and two stereoscopic CRT displays. The system was programmed in C++ with the VRScape Immersive Cluster from VRCO, creating an artificial environment that encodes the user's motion from a video camera targeted at the user's face and from physiological sensors attached to the body.

Relevance: 10.00%

Abstract:

In this investigation, bromine-77 was produced with a medical cyclotron and imaged with gamma cameras. Br-77 emits a 240 keV photon with a half-life of 56 hours. The C-Br bond is stronger than the C-I bond, and bromine does not accumulate in the thyroid. Bromine can be used to label many organic molecules by methods analogous to radioiodination. The only North American source of Br-77 in the 1970s and 1980s was Los Alamos National Laboratory, but it discontinued production in 1989. In this method, a (p,3n) reaction on Br-79 produces Kr-77, which decays with a 1.2-hour half-life to Br-77. A cyclotron-generated 40 MeV proton beam is incident on a nearly saturated NaBr or LiBr solution contained in a copper or titanium target. A cooling chamber through which helium gas flows separates the solution from the cyclotron beam line. Helium gas is also bubbled through the solution to extract Kr-77 gas. The mixture flows through a nitrogen trap, where Kr-77 freezes and is allowed to decay to Br-77. Eight production runs were performed, three with a copper target and five with a titanium target, with yields of 40, 104, 180, 679, 1080, 685, 762, and 118 µCi, respectively. Gamma-ray spectroscopy has shown the product to be very pure; however, corrosion has been a major obstacle, causing the premature retirement of the copper target. Phantom and in vivo rat nuclear images, and an autoradiograph in a rat, are presented. The quality of the nuclear scans is reasonable, and the autoradiograph reveals high isotope uptake in the renal parenchyma, a more moderate but uniform uptake in pulmonary and hepatic tissue, and low soft-tissue uptake. There is no isotope uptake in the brain or the gastric mucosa.
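The Kr-77 → Br-77 step follows standard parent-daughter decay kinetics, so the Br-77 yield from a batch of trapped Kr-77 can be estimated with the two-member Bateman equation, using the half-lives quoted above (1.2 h for Kr-77, 56 h for Br-77). A small illustrative sketch:

```python
import math

T_KR77 = 1.2   # Kr-77 half-life in hours (parent)
T_BR77 = 56.0  # Br-77 half-life in hours (daughter)

def br77_atoms(n0_kr, t):
    """Br-77 atoms at time t (hours) from n0_kr Kr-77 atoms at t = 0,
    via the two-member Bateman equation."""
    lam1 = math.log(2) / T_KR77   # parent decay constant
    lam2 = math.log(2) / T_BR77   # daughter decay constant
    return n0_kr * lam1 / (lam1 - lam2) * (
        math.exp(-lam2 * t) - math.exp(-lam1 * t))

# After 12 hours (ten Kr-77 half-lives) nearly all parent atoms have
# decayed, while only a small fraction of the Br-77 has decayed in turn.
print(br77_atoms(1.0, 12.0))  # roughly 0.88
```

In practice the trap accumulates Kr-77 continuously during extraction, so the actual yield is an integral over the collection history; the single-bolus formula above is the simplest approximation.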

Relevance: 10.00%

Abstract:

Source materials like fine art, over-sized or fragile maps, and delicate artifacts have traditionally been digitized using controlled lighting and high-resolution scanners and camera backs. In addition, the capture of items such as general- and special-collections bound monographs has recently grown, both through consortial efforts like the Internet Archive's Open Content Alliance and locally at the individual-institution level. These projects, in turn, have introduced increasingly high-resolution, consumer-grade digital single-lens reflex cameras ("DSLRs") as a significant part of the general cultural-heritage digital-conversion workflow. Central to the authors' discussion is the fact that both camera backs and DSLRs commonly share the ability to capture native raw file formats. Because these formats include such advantages as access to an image's raw mosaic sensor data, many institutions choose raw for initial capture due to its high bit depth and unprocessed nature. However, to date these same raw formats, so important to many at the point of capture, have yet to be considered "archival" within most published still-imaging standards, if they are considered at all. Throughout many workflows, raw files are deleted after more traditionally "archival" uncompressed TIFF or JPEG 2000 files have been derived downstream from their raw source formats [1][2]. As a result, the authors examine the nature of raw anew and consider the basic questions: Should raw files be retained? What might their role be? Might they in fact form a new archival format space? Included in the discussion is a survey of assorted raw file types and their attributes. Also addressed are various sustainability issues as they pertain to archival formats, with special emphasis on both raw's positive and negative characteristics as they apply to archival practices.
Current common archival workflows are also compared with possible raw-based ones, in the context of each approach's differing levels of usable captured image data, various preservation virtues, and the divergent ideas of strictly fixed renditions versus the potential for improved renditions over time. Special attention is given to the DNG raw format through a detailed inspection of several of its structural components and the roles they play in the format's latest specification. Finally, proprietary raw formats in general, and DNG in particular, are evaluated as possible alternative archival formats for still imaging.