993 results for automatic virtual camera


Relevance: 100.00%

Abstract:

Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a multiple-sparse-camera free-view video system prototype that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets, to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768 x 576 with several moving objects at about 11 fps. (C) 2011 Elsevier Ltd. All rights reserved.
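The view-dependent billboard step can be illustrated with a minimal sketch: for each moving object, pick the real camera whose viewing direction best matches the virtual camera's, and use its image to texture the billboard. The function name and selection rule below are assumptions for illustration, not the paper's actual code.

```python
import numpy as np

def pick_billboard_texture(virtual_dir, camera_dirs):
    """Pick the real camera whose viewing direction is closest to the
    virtual camera's, so its image can texture the object's billboard.
    (Hypothetical helper; the selection rule is a simplification.)"""
    v = virtual_dir / np.linalg.norm(virtual_dir)
    best, best_dot = 0, -np.inf
    for i, d in enumerate(camera_dirs):
        dot = float(np.dot(v, d / np.linalg.norm(d)))
        if dot > best_dot:          # most aligned view wins
            best, best_dot = i, dot
    return best
```

A fuller implementation would blend the two closest views to avoid popping when the virtual camera crosses between real cameras.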

Relevance: 100.00%

Abstract:

Virtual world exploration techniques are used in a wide variety of domains, from graph drawing to robot motion. This paper is dedicated to virtual world exploration techniques that help a human being understand a 3D scene. An improved method of viewpoint quality estimation is presented, together with a new off-line method for automatic 3D scene exploration based on a virtual camera. The automatic exploration method works in two steps. In the first step, a set of "good" viewpoints is computed. The second step uses this set of viewpoints to compute a camera path around the scene. Finally, we define a notion of semantic distance between objects of the scene to improve the approach.
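A common viewpoint-quality measure in this line of work is viewpoint entropy over projected face areas; the sketch below illustrates the first step (ranking candidate viewpoints) under that assumption. It is a hedged illustration, not the paper's exact estimator.

```python
import math

def viewpoint_entropy(projected_areas):
    """Shannon entropy of the faces' projected areas: higher entropy
    means the view shows many faces with balanced areas."""
    total = sum(projected_areas)
    if total == 0:
        return 0.0
    h = 0.0
    for a in projected_areas:
        if a > 0:
            p = a / total
            h -= p * math.log2(p)
    return h

def select_good_viewpoints(candidates, k):
    """Step one of the two-step method: keep the k highest-quality views.
    `candidates` maps viewpoint id -> list of projected face areas."""
    ranked = sorted(candidates,
                    key=lambda v: viewpoint_entropy(candidates[v]),
                    reverse=True)
    return ranked[:k]
```

Step two would then order the surviving viewpoints into a smooth camera path around the scene, e.g. by solving a shortest-tour problem over them.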

Relevance: 100.00%

Abstract:

When depicting both virtual and physical worlds, the viewer's impression of presence in these worlds is strongly linked to camera motion. Plausible and artist-controlled camera movement can substantially increase scene immersion. While physical camera motion exhibits subtle details of position, rotation, and acceleration, these details are often missing for virtual camera motion. In this work, we analyze camera movement using signal theory. Our system allows us to stylize a smooth user-defined virtual base camera motion by enriching it with plausible details. A key component of our system is a database of videos filmed by physical cameras. These videos are analyzed with a camera-motion estimation algorithm (structure-from-motion) and labeled manually with a specific style. By considering spectral properties of location, orientation and acceleration, our solution learns camera motion details. Consequently, an arbitrary virtual base motion, defined in any conventional animation package, can be automatically modified according to a user-selected style. In an animation package the camera motion base path is typically defined by the user via function curves. Another possibility is to obtain the camera path using a mixed-reality camera in a motion-capture studio. As shown in our experiments, the resulting shots are still fully artist-controlled, but appear richer and more physically plausible.
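The core idea of enriching a smooth base path with recorded high-frequency detail can be sketched with a simple spectral split. The fixed cutoff below is a simplification of the style learning the paper describes; the function name and interface are assumptions.

```python
import numpy as np

def stylize_path(base, detail, cutoff):
    """Add the high-frequency part of a recorded physical-camera track
    (`detail`) onto a smooth virtual base path, one coordinate at a time.
    `cutoff` is the frequency-bin index separating gross motion (dropped)
    from fine jitter (kept). A sketch, not the paper's learned model."""
    n = len(base)
    d = np.fft.rfft(detail[:n])
    d[:cutoff] = 0.0                 # discard the detail track's own gross motion
    highfreq = np.fft.irfft(d, n)    # keep only the fine, plausible shake
    return base + highfreq
```

In the paper's setting the detail spectrum is learned per style label rather than taken from a single clip, and location, orientation and acceleration are treated separately.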

Relevance: 100.00%

Abstract:

Robots are increasingly common in a variety of workplaces, providing an array of benefits such as alternative solutions to traditional human labor. While developing fully autonomous robots is the ultimate goal in many robotic applications, the reality is that many situations still require some level of teleoperation to achieve assigned goals, especially when robots are deployed in non-deterministic environments. For instance, teleoperation is commonly used in areas such as search and rescue, bomb disposal and exploration of inaccessible or harsh terrain. This is due to a range of factors, such as robots' limited ability to quickly and reliably navigate unknown environments or provide high-level decision making, especially in time-critical tasks. To provide an adequate solution for such situations, human-in-the-loop control is required. When developing human-in-the-loop control it is important to take advantage of the complementary skill sets that humans and robots offer. For example, robots can perform rapid calculations, provide accurate measurements through hardware such as sensors, and store large amounts of data, while humans provide experience, intuition, risk management and complex decision-making capabilities. Shared autonomy is the concept of building robotic systems that exploit these complementary skill sets to provide a robust and efficient robotic solution. As long as the requirement for human-in-the-loop control exists, Human Machine Interaction (HMI) remains an important research topic, especially the area of User Interface (UI) design. To provide operators with an effective teleoperation system, the interface must be intuitive and dynamic while also achieving a high level of immersion. Recent advancements in virtual and augmented reality hardware are giving rise to innovative HMI systems.
Interactive hardware such as the Microsoft Kinect, Leap Motion, Oculus Rift, Samsung Gear VR and even CAVE Automatic Virtual Environments [1] provides vast improvements over traditional user interface designs, as does the experimental web browser JanusVR [2]. Combined with the introduction of standardized robot frameworks such as ROS and Webots [3], which now support a large number of different robots, this provides an opportunity to develop a universal UI for teleoperation control that improves operator efficiency while reducing teleoperation training. This research introduces the concept of a dynamic virtual workspace for teleoperation of heterogeneous robots in non-deterministic environments that require human-in-the-loop control. The system first identifies the connected robots through the use of kinematic information, then determines their network capabilities such as latency and bandwidth. Given the robot type and network capabilities, the system can then offer the operator the available teleoperation modes, such as pick-and-place control or waypoint navigation, while also allowing the operator to manipulate the virtual workspace layout to display information from onboard cameras or sensors.
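The described mode selection, offering only the teleoperation modes the robot type and link quality can support, can be sketched as follows. The thresholds, mode names and robot-type labels are hypothetical; the thesis does not specify them.

```python
def available_modes(robot_type, latency_ms, bandwidth_mbps):
    """Offer teleoperation modes appropriate to the measured link quality:
    direct control needs a fast link, waypoint navigation tolerates lag.
    Thresholds and labels are illustrative assumptions."""
    modes = ["waypoint_navigation"]              # always safe under lag
    if latency_ms < 150 and bandwidth_mbps > 2.0:
        modes.append("direct_control")           # needs a responsive link
    if robot_type == "manipulator" and latency_ms < 300:
        modes.append("pick_and_place")           # semi-autonomous, lag-tolerant
    return modes
```

A real system would also re-evaluate the offered modes whenever the measured latency or bandwidth changes during operation.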

Relevance: 90.00%

Abstract:

Virtual production is a rapidly growing approach to filmmaking that utilises 3D software, virtual camera systems and motion capture technology to interact visually with a real-time virtual environment. The use of these technologies has continued to increase; however, little has been done to document the various approaches for incorporating this new filmmaking technique into a production. This practice-led research project outlines the development of virtual production in the entertainment industry and explores possible strategies for adopting aspects of this new filmmaking technique into the production of short animated films. The outcome is an improved understanding of possible strategies that could assist producers and directors with the transition into this new filmmaking technique.

Relevance: 90.00%

Abstract:

In daily life, rich experiences evolve in every environmental and social interaction. Because experience has a strong impact on how people behave, scholars in different fields are interested in understanding what constitutes an experience. Yet even if interest in conscious experience is on the increase, there is no consensus on how such experience should be studied. Whatever approach is taken, the subjective and psychologically multidimensional nature of experience should be respected. This study endeavours to understand and evaluate conscious experiences. First I introduce a theoretical approach to psychologically-based and content-oriented experience. In the experiential cycle presented here, classical psychology and orienting-environmental content are connected. This generic approach is applicable to any human-environment interaction. Here I apply the approach to entertainment virtual environments (VEs) such as digital games and develop a framework with the potential for studying experiences in VEs. The development of the methodological framework included subjective and objective data from experiences in the Cave Automatic Virtual Environment (CAVE) and with numerous digital games (N=2,414). The final framework consisted of fifteen factor-analytically formed subcomponents of the sense of presence, involvement and flow. Together, these show the multidimensional experiential profile of VEs. The results present general experiential laws of VEs and show that the interface of a VE is related to (physical) presence, which psychologically means attention, perception and the cognitively evaluated realness and spatiality of the VE. The narrative of the VE elicits (social) presence and involvement and affects emotional outcomes. Psychologically, these outcomes are related to social cognition, motivation and emotion. The mechanics of a VE affect the cognitive evaluations and emotional outcomes related to flow.
In addition, at the very least, user background, prior experience and use context affect the experiential variation. VEs are part of many people's lives, and many different outcomes are related to them, such as enjoyment, learning and addiction, depending on who is making the evaluation. This makes VEs societally important and psychologically fruitful to study. The approach and framework presented here contribute to our understanding of experiences in general and VEs in particular. The research can provide VE developers with a state-of-the-art method (www.eveqgp.fi) that can be utilized whenever new product and service concepts are designed, prototyped and tested.

Relevance: 90.00%

Abstract:

This thesis addresses the reconstruction of a 3D model from several images. The 3D model is built with a hierarchical voxel representation in the form of an octree. A cube enclosing the 3D model is computed from the camera positions. This cube contains the voxels and defines the position of virtual cameras. The 3D model is initialized by a convex hull based on the uniform background colour of the images. This hull carves away the periphery of the 3D model. A weighted cost is then computed to evaluate how well each voxel qualifies as part of the object surface. This cost accounts for the similarity of the pixels from each image associated with the virtual camera. Finally, for each virtual camera, a surface is computed from the cost using the SGM method. SGM takes the neighbourhood into account when computing depth, and this thesis presents a variation of the method that accounts for voxels previously excluded from the model by the initialization step or carved away by another surface. The computed surfaces are then used to carve and finalize the 3D model. This thesis presents a novel combination of steps for creating a 3D model from an existing set of images, or from a sequence of images captured in series, potentially leading to real-time 3D model creation.
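The per-voxel weighted cost can be sketched as a photo-consistency measure: a voxel on the true surface projects to pixels of similar colour in the views that see it. The function and weighting below are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def voxel_cost(colors, weights):
    """Weighted photo-consistency cost of a voxel: low when the pixels it
    projects to in the different views agree in colour. `colors` is an
    (n_views, 3) array of RGB samples; `weights` can down-weight oblique
    or distant views. (Illustrative sketch.)"""
    colors = np.asarray(colors, float)
    w = np.asarray(weights, float)
    mean = np.average(colors, axis=0, weights=w)
    var = np.average((colors - mean) ** 2, axis=0, weights=w)
    return float(var.sum())          # 0 means the views agree perfectly
```

In the thesis's pipeline this cost would feed the SGM surface computation, which regularizes the per-voxel evidence over its neighbourhood.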

Relevance: 80.00%

Abstract:

Commissioned for the It’s Timely exhibition at the Blacktown Arts Centre, Just Dawn is a response to two speeches that former Australian Prime Minister Gough Whitlam delivered in Blacktown in 1972 and 1974. Throughout the video, a series of white words and phrases fade in and out as a virtual camera flies towards an abstract horizon line. The narrative thread of the text is directed towards an unnamed Whitlam through the repeated appearance of the words ‘you said’. As the video progresses, the colours of the animated background slowly brighten to resemble an emerging dawn, and the sound, text and camera movements build in frequency and intensity. As they do so, the once optimistic outlook becomes increasingly unsteady. In these ways, Just Dawn is equal parts homage and lament for the ideological acuity and ambition of Whitlam’s agenda. It explores how Whitlam’s words can become markers for the complexities of both his own specific transformative policies, and the character of the socially progressive movement more broadly.

Relevance: 80.00%

Abstract:

An approach for estimating 3D body pose from multiple, uncalibrated views is proposed. First, a mapping from image features to 2D body joint locations is computed using a statistical framework that yields a set of several body pose hypotheses. The concept of a "virtual camera" is introduced that makes this mapping invariant to translation, image-plane rotation, and scaling of the input. As a consequence, the calibration matrices (intrinsics) of the virtual cameras can be considered completely known, and their poses are known up to a single angular displacement parameter. Given pose hypotheses obtained in the multiple virtual camera views, the recovery of 3D body pose and camera relative orientations is formulated as a stochastic optimization problem. An Expectation-Maximization algorithm is derived that can obtain the locally most likely (self-consistent) combination of body pose hypotheses. Performance of the approach is evaluated with synthetic sequences as well as real video sequences of human motion.
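The "virtual camera" invariance described above amounts to a similarity normalization of the 2D joint locations: removing translation, scale and image-plane rotation before the mapping is applied. The sketch below illustrates that normalization; the convention of aligning the first joint with the +x axis is an assumption for illustration.

```python
import numpy as np

def normalize_joints(joints2d):
    """Map 2D joint locations into a canonical 'virtual camera' frame:
    centre them, scale to unit RMS radius, and rotate so the first joint
    lies on the +x axis, removing translation, scale and image-plane
    rotation. (Alignment convention is an assumption.)"""
    pts = np.asarray(joints2d, float)
    pts = pts - pts.mean(axis=0)                 # remove translation
    rms = np.sqrt((pts ** 2).sum(axis=1).mean())
    if rms > 0:
        pts = pts / rms                          # remove scale
    angle = np.arctan2(pts[0, 1], pts[0, 0])
    c, s = np.cos(-angle), np.sin(-angle)
    r = np.array([[c, -s], [s, c]])              # undo image-plane rotation
    return pts @ r.T
```

After this step the virtual cameras' intrinsics can be treated as known, leaving only the angular displacement the paper's EM formulation recovers.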

Relevance: 80.00%

Abstract:

Introduction: Among the ageing population, falls at home are a major problem (1 in 3 older adults falls at least once a year). To detect falls automatically while respecting privacy, an innovative technology has been developed: intelligent video surveillance. Objective: To explore the perception and receptiveness of older adults regarding the introduction of this new technology in the home. Methodology: Thirty older adults took part in a structured interview (mixed design). A content analysis (qualitative data) and descriptive analyses (quantitative data) were performed and then combined. Results: 93.4% of participants were favourable (or partially favourable) to intelligent video surveillance, and 43.3% would use it for the sense of security and the confidentiality it provides. Conclusion: The living context of older adults influences their perception of and receptiveness to intelligent video surveillance. The next step is to evaluate this technology in various living environments.

Relevance: 80.00%

Abstract:

We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is the ability to synthesize images whose viewing position is significantly far away from the viewing cone of the example images ("view extrapolation"), yet without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of "trilinear tensors" that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, thus demonstrating the ability of representing an object by a relatively small number of model images --- for the purpose of cheap and fast viewers that can run on standard hardware.

Relevance: 80.00%

Abstract:

In this paper we address the problem of inserting virtual content into a video sequence. The method we propose uses image information alone. We perform primitive tracking, camera calibration, real and virtual camera synchronisation and, finally, rendering to insert the virtual content into the real video sequence. To simplify the calibration step we assume that cameras are mounted on a tripod (a common situation in practice). The primitive tracking procedure, which uses lines and circles as primitives, is performed by means of a CART (Classification and Regression Tree). Finally, the virtual and real camera synchronisation and rendering are performed using functions of OpenGL (Open Graphics Library). We have applied the proposed method to sport event scenarios, specifically soccer matches. To illustrate its performance, it has been applied to real HD (High Definition) video sequences. The quality of the proposed method is validated by inserting virtual elements into such HD video sequences.

Relevance: 80.00%

Abstract:

Image overlay projection is a form of augmented reality that allows surgeons to view underlying anatomical structures directly on the patient surface. It improves the intuitiveness of computer-aided surgery by removing the need for sight diversion between the patient and a display screen, and has been reported to assist in 3-D understanding of anatomical structures and the identification of target and critical structures. Challenges in the development of image overlay technologies for surgery remain in the projection setup. Calibration, patient registration, view direction, and projection obstruction remain unsolved limitations of image overlay techniques. In this paper, we propose a novel, portable, and handheld-navigated image overlay device based on miniature laser projection technology that allows images of 3-D patient-specific models to be projected directly onto the organ surface intraoperatively without the need for intrusive hardware around the surgical site. The device can be integrated into a navigation system, thereby exploiting existing patient registration and model generation solutions. The position of the device is tracked by the navigation system's position sensor and used to project geometrically correct images from any position within the workspace of the navigation system. The projector was calibrated using modified camera calibration techniques, and images for projection are rendered using a virtual camera defined by the projector's extrinsic parameters. Verification of the device's projection accuracy found a mean projection error of 1.3 mm. Visibility testing of the projection performed on pig liver tissue found the device suitable for the display of anatomical structures on the organ surface. The feasibility of use within the surgical workflow was assessed during open liver surgery. We show that the device could be quickly and unobtrusively deployed within the sterile environment.

Relevance: 80.00%

Abstract:

Event-based visual servoing is a recently proposed approach that performs the positioning of a robot using visual information only when it is required. Building on the classical image-based visual servoing control law, the scheme proposed in this paper can reduce the processing time at each loop iteration under some specific conditions. The proposed control method enters into action when an event deactivates the classical image-based controller (i.e. when no image is available to perform the tracking of the visual features). A virtual camera is then moved along a straight-line path towards the desired position. The virtual path used to guide the robot improves on the behavior of the previous event-based visual servoing proposal.
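The straight-line virtual path described above can be sketched as simple linear interpolation between the pose where tracking stopped and the desired pose. The function name and the discrete-step interface are assumptions for illustration.

```python
import numpy as np

def virtual_camera_path(p_start, p_goal, steps):
    """When an event disables image-based control (visual features lost),
    drive a virtual camera along a straight line from the pose where
    tracking stopped toward the desired pose. Returns `steps + 1`
    intermediate poses, endpoints included. (Illustrative sketch.)"""
    p0 = np.asarray(p_start, float)
    p1 = np.asarray(p_goal, float)
    return [p0 + (p1 - p0) * k / steps for k in range(steps + 1)]
```

For the rotational part of the pose, an implementation would interpolate orientation separately (e.g. with quaternion slerp) rather than linearly in parameter space.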

Relevance: 80.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08