905 results for Interactive Computer Graphics, Maya 3D, Unity 3D.


Relevance:

60.00%

Publisher:

Abstract:

Liu, Yonghuai. Automatic 3D free form shape matching using the graduated assignment algorithm. Pattern Recognition, vol. 38, no. 10, pp. 1615-1631, 2005.

Relevance:

60.00%

Publisher:

Abstract:

Tabletop computers featuring multi-touch input and object tracking are a common platform for research on Tangible User Interfaces (also known as Tangible Interaction). However, such systems are confined to sensing activity on the tabletop surface, disregarding the rich and relatively unexplored interaction canvas above the tabletop. This dissertation contributes tCAD, a 3D modeling tool combining fiducial marker tracking, finger tracking and depth sensing in a single system. It presents the technical details of how these features were integrated and attests to the approach's viability through the design, development and early evaluation of the tCAD application. A key aspect of this work is a description of the interaction techniques enabled by merging tracked objects with direct user input on and above a table surface.
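As a rough illustration of the kind of sensor fusion the abstract refers to, the sketch below (an assumption for illustration, not the tCAD implementation) classifies a depth-tracked fingertip as touching the surface or hovering above it and associates the event with the nearest tracked fiducial marker:

```python
# Assumed sketch of fusing depth sensing with fiducial tracking: not tCAD code,
# just an illustration of distinguishing on-surface from above-surface input.
from dataclasses import dataclass
import math

TOUCH_THRESHOLD_MM = 10.0  # assumed: a fingertip closer than this counts as a touch

@dataclass
class Fiducial:
    marker_id: int
    x: float
    y: float

def classify_finger(finger_xyz, fiducials):
    """Return ('touch' | 'hover', nearest fiducial id) for one tracked fingertip."""
    x, y, z = finger_xyz                      # z = height above the tabletop, in mm
    mode = "touch" if z < TOUCH_THRESHOLD_MM else "hover"
    nearest = min(fiducials, key=lambda f: math.hypot(f.x - x, f.y - y))
    return mode, nearest.marker_id

# Example: a fingertip 45 mm above the table, close to marker 7
print(classify_finger((120.0, 80.0, 45.0),
                      [Fiducial(3, 400.0, 300.0), Fiducial(7, 110.0, 90.0)]))
```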

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a technique for performing analog design synthesis at circuit level, providing feedback to the designer through exploration of the Pareto frontier. A modified simulated annealing algorithm, able to perform crossover with past anchor points when a local minimum is found, is used as the optimization algorithm in the initial synthesis procedure. After all specifications are met, the algorithm searches for the extreme points of the Pareto frontier in order to obtain a non-exhaustive exploration of the Pareto front. Finally, multi-objective particle swarm optimization is used to spread the results and to find a more accurate frontier. Piecewise-linear functions are used as single-objective cost functions to produce a smooth and even convergence of all measurements toward the desired specifications during the composition of the aggregate objective function. To verify the presented technique, two circuits were designed: a Miller amplifier with 96 dB voltage gain, 15.48 MHz unity-gain frequency and a slew rate of 19.2 V/µs with a current supply of 385.15 µA, and a complementary folded cascode with 104.25 dB voltage gain, 18.15 MHz unity-gain frequency and a slew rate of 13.370 MV/µs. These circuits were synthesized using a 0.35 µm technology. The results show that the method quickly reaches good solutions using the modified SA and then provides good Pareto front exploration through its connection to the particle swarm optimization algorithm.
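To make the cost-function composition concrete, here is a minimal sketch of piecewise-linear single-objective costs combined into an aggregate objective. The penalty shape, weights and function names are assumptions for illustration, not the authors' formulation:

```python
# Hedged sketch: piecewise-linear per-specification costs summed into one
# aggregate objective. Breakpoints and weights are illustrative assumptions.
import numpy as np

def piecewise_linear_cost(value, spec, spec_type="min"):
    """Cost in [0, 1]: shrinks as the measurement approaches the specification
    and is exactly 0 once the specification is met (linear in the value/spec ratio)."""
    if spec_type == "min":          # e.g. voltage gain must exceed the spec
        ratio = value / spec
    else:                           # "max": e.g. supply current must stay below the spec
        ratio = spec / max(value, 1e-12)
    return float(np.clip(1.0 - ratio, 0.0, 1.0))

def aggregate_objective(measures, specs, weights=None):
    """Weighted sum of the per-specification piecewise-linear costs."""
    keys = list(specs)
    w = weights or {k: 1.0 for k in keys}
    return sum(w[k] * piecewise_linear_cost(measures[k], *specs[k]) for k in keys)

# Example: Miller amplifier targets taken from the abstract's reported specs
specs = {"gain_db": (96.0, "min"), "ugf_mhz": (15.48, "min"),
         "slew_v_per_us": (19.2, "min"), "supply_ua": (385.15, "max")}
measures = {"gain_db": 80.0, "ugf_mhz": 10.0, "slew_v_per_us": 22.0, "supply_ua": 400.0}
print(aggregate_objective(measures, specs))
```

Each unmet specification contributes a penalty proportional to how far the measurement is from its target, so the aggregate objective decreases smoothly and evenly as all specifications are approached.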

Relevance:

60.00%

Publisher:

Abstract:

Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Nowadays, modern imaging techniques such as 3D surface scanning and radiological methods (computed tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expert reports are given in written form, often enhanced by sketches. However, narrative interpretations can be difficult to present and can be misunderstood, especially in complex courses of action. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya'), and present methods of clearly distinguishing between factual documentation and examiners' interpretation, based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, provide enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image.

Relevance:

60.00%

Publisher:

Abstract:

We propose a new method to automatically refine a facial disparity map obtained with standard cameras under conventional illumination conditions, using a smart combination of traditional computer vision and 3D graphics techniques. Our system takes as input two stereo images acquired with standard (calibrated) cameras and uses dense disparity estimation strategies to obtain a coarse initial disparity map, and SIFT to detect and match several feature points in the subject's face. We then use these points as anchors to modify the disparity in the facial area by building a Delaunay triangulation of their convex hull and interpolating their disparity values inside each triangle. We thus obtain a refined disparity map providing a much more accurate representation of the subject's facial features. This refined facial disparity map may be easily transformed, through the camera calibration parameters, into a depth map to be used, also automatically, to improve the facial mesh of a 3D avatar to match the subject's real facial features.
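The refinement step can be sketched roughly as follows; this is an illustrative reconstruction, not the authors' code. The coarse-disparity source (SGBM), the placeholder image files and the assumption of rectified images so that the horizontal shift of each match approximates its disparity are all assumptions for the example:

```python
# Assumed sketch: SIFT matches act as anchors, their Delaunay triangulation is
# built, and disparity inside each triangle is interpolated from the anchors.
import cv2
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder input files
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# 1) Coarse dense disparity (semi-global block matching as a stand-in).
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
coarse = sgbm.compute(left, right).astype(np.float32) / 16.0

# 2) SIFT feature points matched across the pair (the anchors).
sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_l, des_r)

pts = np.array([kp_l[m.queryIdx].pt for m in matches], dtype=np.float32)
disp = np.array([kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0] for m in matches],
                dtype=np.float32)  # horizontal shift used as anchor disparity

# 3) Piecewise-linear interpolation of anchor disparities inside the Delaunay
#    triangulation of their convex hull; outside the hull keep the coarse map.
tri = Delaunay(pts)
interp = LinearNDInterpolator(tri, disp)
ys, xs = np.mgrid[0:left.shape[0], 0:left.shape[1]]
refined = interp(np.column_stack([xs.ravel(), ys.ravel()])).reshape(left.shape)
refined = np.where(np.isnan(refined), coarse, refined)
```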

Relevance:

60.00%

Publisher:

Abstract:

This work focuses on the construction of the physical part of a virtual character. The development presents the 3D modeling, kinematics and animation techniques used to create virtual characters. An implementation is also included, divided into three parts: modeling the virtual character, creating an inverse kinematics system, and creating animations using that kinematics system. First, a 3D model faithful to the original design is built; second, an inverse kinematics system is developed that accurately solves the positions of the articulated parts that form the virtual character; and third, animations are created with the kinematics system to obtain fluid, refined animations in real time. As a result, an animated 3D component has been obtained that is reusable, extensible and exportable to other virtual environments.
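As a minimal sketch of the kind of inverse kinematics solver described, the following cyclic coordinate descent (CCD) solver for a planar articulated chain is written in Python for illustration rather than in the tool actually used; the joint count, link lengths and convergence tolerance are illustrative assumptions:

```python
# Assumed illustration of inverse kinematics for an articulated chain using
# cyclic coordinate descent (CCD); not the system developed in this work.
import math

def forward_positions(angles, lengths):
    """Joint positions of a planar chain; each angle is relative to the previous link."""
    x = y = theta = 0.0
    pts = [(0.0, 0.0)]
    for a, l in zip(angles, lengths):
        theta += a
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        pts.append((x, y))
    return pts

def ccd_ik(angles, lengths, target, iters=100, tol=1e-3):
    """Rotate each joint in turn so the end effector approaches the target."""
    angles = list(angles)
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            pts = forward_positions(angles, lengths)
            jx, jy = pts[i]
            ex, ey = pts[-1]
            # angle between the joint->effector and joint->target vectors
            a1 = math.atan2(ey - jy, ex - jx)
            a2 = math.atan2(target[1] - jy, target[0] - jx)
            angles[i] += a2 - a1
        ex, ey = forward_positions(angles, lengths)[-1]
        if math.hypot(target[0] - ex, target[1] - ey) < tol:
            break
    return angles

# Example: a 3-link arm reaching for a point
print(ccd_ik([0.3, 0.2, 0.1], [1.0, 1.0, 0.5], target=(1.2, 1.4)))
```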

Relevance:

60.00%

Publisher:

Abstract:

Shading reduces the power output of a photovoltaic (PV) system, so the design engineering of PV systems requires modeling and evaluating shading losses. Some PV systems are affected by complex shading scenes whose resulting PV energy losses are very difficult to evaluate with current modeling tools. Several specialized PV design and simulation packages include the possibility of evaluating shading losses; they generally provide a Graphical User Interface (GUI) through which the user can draw a 3D shading scene and then evaluate its corresponding PV energy losses, but the complexity of the objects these tools can handle is relatively limited. We have created a software solution, 3DPV, which allows evaluating the energy losses induced by complex 3D scenes on PV generators. The 3D objects can be imported from specialized 3D modeling software or from a 3D object library. The shadows cast by the 3D scene on the PV generator are then evaluated directly on the Graphics Processing Unit (GPU). Thanks to the development of GPUs driven by the video game industry, the shadows can be evaluated at a very high spatial resolution, well beyond the PV cell level, in very short calculation times. A PV simulation model then translates the geometrical shading into PV energy output losses. 3DPV has been implemented using WebGL, which allows it to run directly in a Web browser without requiring any local installation by the user; this also makes it possible to take full advantage of resources already available on the Internet, such as 3D object libraries. This contribution describes, step by step, the method that allows 3DPV to evaluate the PV energy losses caused by complex shading, and then illustrates the results of this methodology on several application cases encountered in the design of PV systems. Keywords: 3D, modeling, simulation, GPU, shading, losses, shadow mapping, solar, photovoltaic, PV, WebGL
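The final step, turning geometric shading into PV output losses, can be sketched roughly as below. This is a deliberately simplified, assumed model (no bypass diodes, linear derating, the most shaded cell limiting its series string), not the actual 3DPV simulation model:

```python
# Rough, assumed sketch of translating per-cell shading fractions (as would be
# obtained from a GPU shadow map) into PV energy losses. Real PV models account
# for bypass diodes, irradiance and temperature; this only illustrates the idea.
import numpy as np

def string_output_fraction(cell_shading):
    """Cells in a series string are limited by the most shaded cell (no bypass diode)."""
    return 1.0 - float(np.max(cell_shading))

def generator_losses(shading_map, cells_per_string):
    """shading_map: 1D array of per-cell shading fractions in [0, 1]."""
    strings = np.array_split(np.asarray(shading_map),
                             len(shading_map) // cells_per_string)
    fractions = [string_output_fraction(s) for s in strings]
    output = float(np.mean(fractions))   # strings assumed connected in parallel
    return 1.0 - output                  # fractional energy loss of the generator

# Example: 4 strings of 6 cells, one cell 60% shaded drags its whole string down
shading = np.zeros(24)
shading[3] = 0.6
print(generator_losses(shading, cells_per_string=6))
```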

Relevance:

60.00%

Publisher:

Abstract:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals keep increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms need to be parallel, so that they can run on multiple processors, and computationally scalable. One of the most promising classes of processors nowadays is found in graphics processing units (GPUs): devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computation. In this thesis we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. We show that, by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually used in 3D TV. Using a backward mapping approach with a depth inpainting scheme based on modified median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Second, we propose a background subtraction system based on kernel density estimation. These techniques are well suited to modelling complex scenes with multimodal backgrounds, but have seen little use due to their large computational and memory cost. The proposed system, implemented in real time on a GPU, features dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the positions of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for approximating arbitrarily complex functions using continuous piecewise-linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, which are normally unused for numerical computation. Our proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function that minimizes the approximation error.
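The last contribution, approximating an arbitrary function with a continuous piecewise-linear interpolant, can be illustrated on the CPU as below; on a GPU the same lookup plus linear blending is what the texture filtering units perform in hardware, which is what makes the technique attractive there. The sample count and the test function are assumptions for illustration, not taken from the thesis:

```python
# Assumed CPU illustration of continuous piecewise-linear function approximation.
import numpy as np

def build_pwl_table(f, a, b, n_samples):
    """Sample f uniformly on [a, b]; the table plus linear interpolation
    between neighbouring samples defines the piecewise-linear approximation."""
    xs = np.linspace(a, b, n_samples)
    return xs, f(xs)

def pwl_eval(x, xs, ys):
    """Evaluate the approximation (np.interp performs the linear blending)."""
    return np.interp(x, xs, ys)

# Example: approximate a stand-in for an arbitrarily complex function
f = lambda x: np.exp(-x) * np.sin(5 * x)
xs, ys = build_pwl_table(f, 0.0, 2.0, n_samples=32)

x_test = np.linspace(0.0, 2.0, 10_000)
max_err = np.max(np.abs(f(x_test) - pwl_eval(x_test, xs, ys)))
print(f"max approximation error with 32 uniform samples: {max_err:.2e}")
```

A non-uniform partition that places more samples where the function bends most would reduce this error for the same table size, which is the role of the quasi-optimal domain partition proposed in the thesis.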

Relevance:

60.00%

Publisher:

Abstract:

There are currently several devices that accept gestures on touch surfaces, such as phones, tablets and computers, whose use people quickly become accustomed to and accept as necessary tools in their lives. Likewise, there are applications that handle 3D environments and capture gestures made with the hands, body or head. These two lines have developed largely in isolation, and the reviewed literature shows few studies that combine touch applications with 3D applications controlled by mid-air gestures. This work presents a prototype that enables communication and coordination between two applications: one, developed in Unity and running on Android, that displays documents represented as spheres and is operated by touch; and a second application, also developed in Unity, that handles a 3D environment operated through gestures performed in the air. After some attempts, the interaction between the two applications was achieved by implementing socket communication between the application on the Android device and the 3D application hosted on a computer running Windows 7. Mid-air gestures are captured with the Tracking Tools system developed by OptiTrack, which tracks movements with infrared cameras and markers on the fingers and sends the gesture data to the 3D application; this equipment belongs to the Decoroso Crespo laboratory of the Universidad Politécnica de Madrid. Once the implementation and the interaction between the applications were achieved, usability tests were carried out with nine students of the Máster Universitario en Software y Sistemas of the Universidad Politécnica de Madrid. Each answered a series of questionnaires to obtain results on how usable the prototype is, on the user experience, and on possible improvements. The final part of this document presents the survey results, conclusions and future work.
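A minimal sketch of the socket-based coordination between the two applications is shown below; it is written in Python for illustration, whereas the actual prototype is built in Unity, and the address, port and message fields are assumptions for the example:

```python
# Assumed illustration of the socket link between the touch application and the
# 3D application; message fields and the address are invented for the example.
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5005   # assumed address of the machine running the 3D app

def run_3d_app_listener():
    """3D application side: accept a connection and print incoming touch events."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:
                event = json.loads(line)
                print("3D app received:", event)

def send_touch_event(document_id, x, y):
    """Touch application side: send a 'document selected' event as one JSON line."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        msg = {"type": "select_document", "id": document_id, "x": x, "y": y}
        cli.sendall((json.dumps(msg) + "\n").encode("utf-8"))

# Example: run the listener in the background and send one event to it
threading.Thread(target=run_3d_app_listener, daemon=True).start()
time.sleep(0.2)                   # give the listener time to start
send_touch_event(document_id=42, x=0.31, y=0.77)
time.sleep(0.2)                   # let the listener print before exiting
```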

Relevance:

60.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06