2 results for In-Shader Rendering

in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)


Relevance:

30.00%

Publisher:

Abstract:

Objective. This study was designed to determine the precision and accuracy of angular measurements using three-dimensional computed tomography (3D-CT) volume rendering by computer systems. Study design. The study population consisted of 28 dried skulls that were scanned with a 64-row multislice CT, and 3D-CT images were generated. Conventional craniometric anatomical landmarks (n = 9) were identified independently in the 3D-CT images by 2 radiologists, twice each, and angular measurements (n = 6) based upon these landmarks were then performed by 3D-CT imaging. Subsequently, physical measurements were made by a third examiner using a Beyond Crysta-C9168 series 900 device. Results. The results demonstrated no statistically significant difference in either the interexaminer or the intraexaminer analysis. The mean difference between the physical and 3D-based angular measurements was -1.18% and -0.89%, respectively, for the 2 examiners, demonstrating high accuracy. Conclusion. Maxillofacial analysis of angular measurements using 3D-CT volume rendering by 64-row multislice CT is established and can be used for orthodontic and dentofacial orthopedic applications.
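As a hedged illustration of the accuracy metric reported in the abstract above (the mean percentage difference between physical and 3D-based angular measurements), the following Python sketch shows one way such a comparison could be computed. The helper name mean_percent_difference and the sample angle values are assumptions for illustration only, not data or code from the study.

```python
import numpy as np

def mean_percent_difference(physical, measured):
    """Mean signed difference between measured and reference (physical) angles,
    expressed as a percentage of the reference values.
    A negative result means the measured angles were, on average, smaller."""
    physical = np.asarray(physical, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.mean((measured - physical) / physical) * 100.0

# Illustrative angles in degrees (hypothetical, not from the study):
physical_angles = np.array([118.2, 125.4, 131.0, 72.6, 89.3, 95.1])
ct_angles_examiner1 = np.array([117.0, 124.1, 129.5, 71.8, 88.2, 93.9])

print(f"Mean difference: {mean_percent_difference(physical_angles, ct_angles_examiner1):.2f}%")
```

A value near zero (e.g., on the order of -1%) would indicate that the 3D-CT-based measurements closely track the physical reference, which is the sense in which the study reports high accuracy.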

Relevance:

30.00%

Publisher:

Abstract:

Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a prototype free-viewpoint video system based on multiple sparse cameras that allows users to control the position and orientation of a virtual camera, enabling observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (the virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets, to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768 x 576 with several moving objects at about 11 fps.
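As a rough, hedged sketch of the billboard-and-mask idea described in the abstract above (not the authors' actual implementation), the following Python/NumPy fragment illustrates two of its ingredients: choosing a source camera in a view-dependent way, and compositing a moving object as a foreground-masked billboard into a rendered view. The function names, the dot-product camera-selection heuristic, and the array layouts are assumptions for illustration.

```python
import numpy as np

def pick_source_camera(virtual_dir, camera_dirs):
    """Pick the source camera whose viewing direction is closest to the
    virtual camera's direction (a simple view-dependent texture choice)."""
    virtual_dir = virtual_dir / np.linalg.norm(virtual_dir)
    dots = [np.dot(virtual_dir, d / np.linalg.norm(d)) for d in camera_dirs]
    return int(np.argmax(dots))

def composite_billboard(background, sprite, mask, top_left):
    """Paste a foreground-masked sprite (the moving object's billboard)
    onto the rendered background at the object's projected 2D location.

    background : float RGB image (H x W x 3) of the projected scene planes
    sprite     : float RGB crop of the object from the chosen source camera
    mask       : binary foreground mask (h x w), 1 inside the object
    top_left   : (row, col) of the billboard's projected position
    """
    y, x = top_left
    h, w = sprite.shape[:2]
    region = background[y:y + h, x:x + w]
    m = mask[..., None].astype(float)  # broadcast mask over color channels
    background[y:y + h, x:x + w] = m * sprite + (1.0 - m) * region
    return background
```

In the same spirit, the inverse of such a mask could be used to blank the moving object out of each projected video stream before the billboard is drawn, so the object appears only once, at its corrected location in the novel view.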