12 results for Computergraphik


Relevance:

10.00%

Publisher:

Abstract:

Stippling, non-photorealistic rendering, non-photorealistic computer graphics

Relevance:

10.00%

Publisher:

Abstract:

Visualistics, computer science, picture syntax, picture semantics, picture pragmatics, interactive pictures

Relevance:

10.00%

Publisher:

Abstract:

Illustration Watermarks, Image annotation, Virtual data exploration, Interaction techniques

Relevance:

10.00%

Publisher:

Abstract:

This thesis was written during my time as a research assistant in the Technische Informatik (computer engineering) group at the University of Kassel. It presents the design and implementation of a cluster-based distributed scene graph. For the implementation, no new scene graph was developed from scratch; instead, the existing scene graph OpenSceneGraph was used as the basis for the distributed scene graph. In the course of this work, cluster support was integrated into OpenSceneGraph. During this extension, particular care was taken to leave the existing scene graph as unchanged as possible. In addition, the use and integration of external cluster-based software packages was avoided wherever possible. To distribute OpenSceneGraph, a dedicated communication layer based on sockets was developed and integrated into OpenSceneGraph. This communication layer was used to provide sort-first and sort-last based visualization in OpenSceneGraph. Extending OpenSceneGraph with cluster support made it possible to drive arbitrary projection systems such as a CAVE. For driving a CAVE, various input devices as well as tracking were integrated into OpenSceneGraph via VRPN. Because these devices are connected through VRPN, the input devices can also be used in the other cluster operating modes, such as a segmented (tiled) display. The distribution of data across the cluster was kept separate from the core of OpenSceneGraph. Any OpenSceneGraph-based application can therefore be run on a cluster at any time and without extensive modifications. As a result, the developer is not restricted in application development and does not have to distinguish between cluster-based and standalone applications.
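
As a rough illustration of the kind of per-frame synchronization such a socket-based communication layer performs, a minimal sketch follows; the message format, the master/render-node split, and all names below are assumptions for this sketch, not the thesis code.

```python
# Hypothetical sketch of a per-frame sync channel, in the spirit of the
# socket-based communication layer described above; not the thesis code.
import json
import socket
import struct

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def send_msg(sock: socket.socket, payload: dict) -> None:
    """Length-prefixed JSON message, so the receiver knows where a frame ends."""
    data = json.dumps(payload).encode("utf-8")
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_msg(sock: socket.socket) -> dict:
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode("utf-8"))

def broadcast_frame(render_nodes: list, frame: int, view_matrix: list) -> None:
    """Master side of a sort-first setup: every node receives the same camera
    for the current frame and renders only its own screen tile."""
    for sock in render_nodes:
        send_msg(sock, {"frame": frame, "view": view_matrix})
```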

Relevance:

10.00%

Publisher:

Abstract:

“Dual contouring” approaches provide an alternative to the standard Marching Cubes (MC) method for extracting and approximating an isosurface from trivariate data given on a volumetric mesh. These dual approaches solve some of the problems encountered by MC methods. We present a simple method, based on the MC method and the ray intersection technique, to compute isosurface points in the cell interior. One advantage of our method is that it does not require a Hermite interpolation scheme, unlike other dual contouring methods. We perform a complete analysis of all possible configurations and generate a corresponding look-up table. We use the look-up table to optimize the ray-intersection method so that it produces the minimum number of points sufficient to define topologically correct isosurfaces in all possible configurations. The isosurface points are connected using a simple strategy.
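
The following is a minimal sketch of the basic operation behind the ray-intersection step: locating an isosurface crossing inside a cell by sampling the trilinearly interpolated field along a ray and refining the first sign change. The configuration look-up table and the point-connection strategy of the paper are not reproduced here, and the function names are illustrative.

```python
import numpy as np

def trilinear(c: np.ndarray, p: np.ndarray) -> float:
    """Trilinear interpolation of the 2x2x2 corner values c at local point p in [0,1]^3."""
    x, y, z = p
    c00 = c[0, 0, 0] * (1 - x) + c[1, 0, 0] * x
    c10 = c[0, 1, 0] * (1 - x) + c[1, 1, 0] * x
    c01 = c[0, 0, 1] * (1 - x) + c[1, 0, 1] * x
    c11 = c[0, 1, 1] * (1 - x) + c[1, 1, 1] * x
    return (c00 * (1 - y) + c10 * y) * (1 - z) + (c01 * (1 - y) + c11 * y) * z

def ray_isosurface_point(c, origin, direction, iso, steps=32, refine=20):
    """March along the ray inside the cell; refine the first sign change by bisection.
    origin and direction are given in the cell's local [0,1]^3 coordinates."""
    t0, f0 = 0.0, trilinear(c, origin) - iso
    for i in range(1, steps + 1):
        t1 = i / steps
        f1 = trilinear(c, origin + t1 * direction) - iso
        if f0 * f1 <= 0.0:                      # sign change: a crossing lies in (t0, t1]
            for _ in range(refine):
                tm = 0.5 * (t0 + t1)
                fm = trilinear(c, origin + tm * direction) - iso
                if f0 * fm <= 0.0:
                    t1 = tm
                else:
                    t0, f0 = tm, fm
            return origin + t1 * direction
        t0, f0 = t1, f1
    return None                                  # no isosurface crossing along this ray
```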

Relevance:

10.00%

Publisher:

Abstract:

In recent years, the well-known ray tracing algorithm has gained new popularity with the introduction of interactive ray tracing methods. Its high modularity and its ability to produce highly realistic images make ray tracing an attractive alternative to raster graphics hardware. Interactive ray tracing has also proved its potential in the field of Mixed Reality rendering and provides novel methods for the seamless integration of real and virtual content. Actor insertion methods, a subdomain of Mixed Reality closely related to virtual television studio techniques, can use ray tracing to achieve high output quality together with appropriate visual cues such as shadows and reflections at interactive frame rates. In this paper, we show how interactive ray tracing techniques can provide new ways of implementing virtual studio applications.

Relevance:

10.00%

Publisher:

Abstract:

Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as light fields, geometric reconstruction, and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use and contain moving objects, such as people entering and leaving the scene. The methods listed above have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.
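
As a simplified illustration of the underlying idea (not the authors' lag-camera pipeline): once samples taken over space and time are registered to a common viewpoint, a moving occluder covers any given pixel only briefly, so simple temporal statistics can recover the static background and flag the moving object. The sketch below assumes pre-registered frames.

```python
import numpy as np

def recover_background(registered_frames: np.ndarray) -> np.ndarray:
    """registered_frames: (T, H, W, 3) color samples over time, aligned to one viewpoint.
    A per-pixel temporal median suppresses transient (moving) occluders."""
    return np.median(registered_frames, axis=0)

def moving_object_mask(registered_frames: np.ndarray, thresh: float = 25.0) -> np.ndarray:
    """Flag pixels whose latest sample deviates strongly from the temporal background."""
    background = recover_background(registered_frames)
    diff = np.linalg.norm(registered_frames[-1].astype(float) - background, axis=-1)
    return diff > thresh
```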

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes a new compression algorithm for dynamic 3D meshes. In such a sequence of meshes, neighboring vertices have a strong tendency to behave similarly, and the dependency between their locations in two successive frames is very strong, which can be efficiently exploited using a combination of predictive and DCT coders (PDCT). Our strategy gathers mesh vertices with similar motion into clusters, establishes a local coordinate frame (LCF) for each cluster, and encodes the sequence frame by frame, each cluster separately. The vertices of each cluster show little variation over time relative to the LCF. Therefore, the location of each new vertex is well predicted from its location in the previous frame relative to the LCF of its cluster. The differences between the original and the predicted local coordinates are then transformed into the frequency domain using the DCT. The resulting DCT coefficients are quantized and compressed with entropy coding. The original sequence of meshes can be reconstructed from only a few non-zero DCT coefficients without significant loss in visual quality. Experimental results show that our strategy outperforms or comes close to other coders.
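
A condensed sketch of the predictive + DCT step for a single cluster might look as follows; clustering, the construction of the local coordinate frame, and the entropy coder are omitted, and the function names are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

def encode_cluster_frame(curr_local, prev_local, q_step=0.01):
    """curr_local, prev_local: (N, 3) vertex positions of one cluster in its LCF.
    Predict from the previous frame, DCT-transform the residuals, and quantize."""
    residual = curr_local - prev_local                    # temporal prediction error
    coeffs = dct(residual, axis=0, norm="ortho")          # DCT over the vertex dimension
    return np.round(coeffs / q_step).astype(np.int32)     # uniform quantization

def decode_cluster_frame(q_coeffs, prev_local, q_step=0.01):
    """Invert quantization and DCT, then add the prediction back."""
    residual = idct(q_coeffs * q_step, axis=0, norm="ortho")
    return prev_local + residual
```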

Relevance:

10.00%

Publisher:

Abstract:

Image-based Relighting (IBRL) has recently attracted a lot of research interest for its ability to relight real objects or scenes under novel illuminations captured in natural or synthetic environments. Complex lighting effects such as subsurface scattering, interreflection, shadowing, mesostructural self-occlusion, refraction, and other relevant phenomena can be generated using IBRL. The main advantage of image-based graphics is that the rendering time is independent of scene complexity, because rendering is actually a process of manipulating image pixels instead of simulating light transport. The goal of this paper is to provide a complete and systematic overview of the research in Image-based Relighting. We observe that essentially all IBRL techniques can be broadly classified into three categories (Fig. 9), based on how the scene/illumination information is captured: reflectance function-based, basis function-based, and plenoptic function-based. We discuss the characteristics of each of these categories and their representative methods. We also discuss the sampling density and the types of light source(s), which are relevant issues in IBRL.
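
A minimal sketch of the linear formulation that underlies much of image-based relighting (assumed here as a generic illustration, not a method from the survey): given basis images captured under individual lights, a novel illumination described by per-light weights is reproduced as a weighted sum of those images, since light transport is linear in the light intensities.

```python
import numpy as np

def relight(basis_images: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """basis_images: (K, H, W, 3) images under K individual lights;
    weights: (K,) intensities of those lights under the novel illumination.
    Returns the relit image as sum_k weights[k] * basis_images[k]."""
    return np.tensordot(weights, basis_images, axes=1)    # (H, W, 3)
```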

Relevance:

10.00%

Publisher:

Abstract:

Interactive ray tracing of non-trivial scenes is just becoming feasible on a single graphics processing unit (GPU). Recent work in this area focuses on building effective acceleration structures that work well under the constraints of current GPUs. Most approaches target static scenes and only allow navigation in the virtual scene. So far, support for dynamic scenes has not been considered in GPU implementations. We have developed a GPU-based ray tracing system for dynamic scenes consisting of a set of individual objects. Each object may move around independently, but its geometry and topology are static.
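
A common way to handle such scenes, shown here only as a CPU-side sketch and not necessarily the paper's exact GPU scheme, is to keep one static acceleration structure per object and transform each ray into object space with the inverse of the object's current rigid transform.

```python
import numpy as np

def transform_ray(origin, direction, world_from_object: np.ndarray):
    """world_from_object: 4x4 rigid transform (no scaling assumed, so hit
    distances are comparable across objects). Returns the ray in object space."""
    object_from_world = np.linalg.inv(world_from_object)
    o = object_from_world @ np.append(origin, 1.0)
    d = object_from_world[:3, :3] @ direction
    return o[:3], d

def trace_scene(origin, direction, objects):
    """objects: list of (world_from_object, intersect_fn), where intersect_fn(o, d)
    tests the object's static geometry in object space and returns a hit distance or None."""
    best = None
    for world_from_object, intersect_fn in objects:
        o, d = transform_ray(origin, direction, world_from_object)
        t = intersect_fn(o, d)
        if t is not None and (best is None or t < best):
            best = t
    return best
```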

Relevance:

10.00%

Publisher:

Abstract:

We present a new approach to diffuse reflectance estimation for dynamic scenes. Non-parametric image statistics are used to transfer reflectance properties from a static example set to a dynamic image sequence. The approach allows diffuse reflectance estimation for surface materials with inhomogeneous appearance, such as those that commonly occur with patterned or textured clothing. Material editing is also possible by transferring edited reflectance properties. Material reflectance properties are initially estimated from static images of the subject under multiple directional illuminations using photometric stereo. The estimated reflectance, together with the corresponding image under uniform ambient illumination, forms a prior set of reference material observations. Material reflectance properties are then estimated for video sequences of a moving person captured under uniform ambient illumination by matching the observed local image statistics to the reference observations. Results demonstrate that the transfer of reflectance properties enables estimation of the dynamic surface normals and subsequent relighting combined with material editing. This approach overcomes limitations of previous work on material transfer and relighting of dynamic scenes, which was limited to surfaces with regions of homogeneous reflectance. We evaluate our approach for relighting 3D model sequences reconstructed from multiple-view video. Comparison to previous model relighting demonstrates improved reproduction of detailed texture and shape dynamics.
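
For the initial estimation step, classical Lambertian photometric stereo solves for a scaled normal per pixel from intensities under known directional lights; a minimal sketch follows (the paper's full pipeline, including the non-parametric statistics transfer, is not reproduced here).

```python
import numpy as np

def photometric_stereo(images: np.ndarray, light_dirs: np.ndarray):
    """images: (K, H, W) intensities under K known directional lights;
    light_dirs: (K, 3) unit light directions.
    Returns per-pixel diffuse albedo (H, W) and surface normals (H, W, 3)."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                           # stack pixels: (K, H*W)
    # Lambertian model: I = L @ (albedo * n); solve per pixel in the least-squares sense.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W) scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)
    return albedo.reshape(H, W), normals
```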

Relevance:

10.00%

Publisher:

Abstract:

In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as the action representation language in an imitation-based approach to character animation: first, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and the involved objects. The XSAMPL3D action description can then be used for the synthesis of animations in which virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact and human-readable XML format. Thus, XSAMPL3D descriptions are also amenable to manual authoring, e.g. for rapid prototyping of animations when no immersive VR environment is at the animator's disposal. However, when XSAMPL3D descriptions are derived from VR interactions, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes and other hand-object relations during grasping. Such detail would be hard to specify with manual motion authoring techniques alone. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation.
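
Purely for illustration, an action description in this spirit might be assembled as follows; XSAMPL3D's actual schema is not reproduced here, and every element and attribute name in the sketch is invented for the example.

```python
# Hypothetical example only: the element and attribute names below are NOT the
# real XSAMPL3D schema, they merely illustrate a compact XML action description.
import xml.etree.ElementTree as ET

action = ET.Element("manipulation", type="pick-and-place", actor="demonstrator")
ET.SubElement(action, "object", id="cup_01")                       # manipulated object
grasp = ET.SubElement(action, "grasp", hand="right", handShape="cylindrical")
ET.SubElement(grasp, "trajectory", samples="64")                   # recorded hand motion
print(ET.tostring(action, encoding="unicode"))
```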