994 results for Multi-GPU Rendering


Relevance:

30.00%

Publisher:

Abstract:

Haptic rendering of complex models is usually prohibitive because it requires a much higher update rate than visual rendering. Previous works have tried to address this issue by introducing local simulation or multi-rate simulation for the two pipelines. Although these works have improved the capacity of the haptic rendering pipeline, they did not consider scenarios containing heterogeneous objects, where rigid and deformable objects coexist and lie close to each other. In this paper, we propose a novel idea to support interactive visuo-haptic rendering of complex heterogeneous models. The idea incorporates different collision detection and response algorithms and switches them on and off seamlessly on the fly as the HIP travels through the scenario. The selection of rendered models is based on the hypothesis of “parallel universes”, where the transition from rendering one group of models to another is completely transparent to users. To facilitate this idea, we propose a procedure to convert the traditional single-universe scenario into a “multiverse” scenario, where the original models are grouped and split across parallel universes according to the rendering requirements of the scenario rather than locality alone. We also propose adding simplified visual objects as background avatars in each parallel universe to visually preserve the original scenario without overly increasing its complexity. We tested the proposed idea in a haptically-enabled needle thoracostomy training environment, and the results demonstrate that our approach substantially accelerates visuo-haptic rendering of complex heterogeneous scenarios.
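
The abstract describes an algorithmic scheme rather than code; the following Python fragment is only a minimal sketch of the "multiverse" switching idea under assumed data structures (the Universe class, its region test and render_step are hypothetical names, not the authors' implementation): each parallel universe bundles a group of models with the collision algorithm suited to it, and the active universe is selected on the fly from the HIP position.

# Illustrative sketch only: not the authors' implementation.
from dataclasses import dataclass, field

@dataclass
class Universe:
    name: str
    region_center: tuple            # rough region of the scenario covered by this universe
    region_radius: float
    collision_algorithm: object     # e.g. a rigid-body or a deformable-body handler
    models: list = field(default_factory=list)
    background_avatars: list = field(default_factory=list)  # simplified visual stand-ins

    def contains(self, hip_pos):
        d2 = sum((p - c) ** 2 for p, c in zip(hip_pos, self.region_center))
        return d2 <= self.region_radius ** 2

def render_step(universes, hip_pos, active):
    # Pick the universe containing the HIP; the switch is transparent to the user.
    for u in universes:
        if u.contains(hip_pos):
            if u is not active:
                active = u                        # swap collision pipeline on the fly
            break
    forces = active.collision_algorithm(active.models, hip_pos)
    # Background avatars of the other universes are drawn visually only,
    # so the full scene still looks intact at no haptic cost.
    return active, forces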

Relevance:

30.00%

Publisher:

Abstract:

The efficient emulation of a many-core architecture is a challenging task: each core could be emulated by a dedicated thread, and such threads would be interleaved on either a single-core or a multi-core processor, but the high number of context switches results in unacceptable performance. To support this kind of application, the computational power of the GPU is exploited to schedule the emulation threads on the GPU cores. This raises a non-trivial divergence issue, since GPU computational power is offered through SIMD processing elements, which are forced to execute the same instruction synchronously on different memory portions. Thus, a new emulation technique is introduced to overcome this limitation: instead of providing a routine for each ISA opcode, the emulator mimics the behavior of the micro-architecture level, where instructions are data that a single routine takes as input. Our new technique has been implemented and compared with the classic emulation approach, in order to investigate the feasibility of a hybrid solution.
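
As an illustration only (not the paper's emulator), the following Python/NumPy sketch mimics the "instructions are data" idea: instead of one routine per ISA opcode, a single step routine evaluates all emulated cores in lockstep and selects each result arithmetically, which is the kind of branch-free control flow that avoids SIMD divergence. The toy ISA, register count and opcode numbering are invented for the example.

# Illustrative sketch of a data-driven, branch-free emulation step.
import numpy as np

N_CORES, N_REGS = 1024, 4
OP_ADD, OP_SUB, OP_MUL = 0, 1, 2

rng = np.random.default_rng(0)
regs = rng.integers(0, 100, size=(N_CORES, N_REGS))          # per-core register file
# one instruction per core: (opcode, dst, src_a, src_b)
instr = np.stack([rng.integers(0, 3, N_CORES),
                  rng.integers(0, N_REGS, N_CORES),
                  rng.integers(0, N_REGS, N_CORES),
                  rng.integers(0, N_REGS, N_CORES)], axis=1)

def step(regs, instr):
    # Single "micro-architectural" routine: results of all ops are computed
    # and the right one is selected per core by the opcode, with no branching.
    op, dst, a, b = instr[:, 0], instr[:, 1], instr[:, 2], instr[:, 3]
    lanes = np.arange(N_CORES)
    va, vb = regs[lanes, a], regs[lanes, b]
    results = np.select([op == OP_ADD, op == OP_SUB, op == OP_MUL],
                        [va + vb, va - vb, va * vb])
    regs[lanes, dst] = results
    return regs

regs = step(regs, instr)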

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with heterogeneous architectures in standard workstations. Heterogeneous architectures represent an appealing alternative to traditional supercomputers because they are based on commodity components fabricated in large quantities; hence their price-performance ratio is unparalleled in the world of high performance computing (HPC). In particular, different aspects related to the performance and power consumption of heterogeneous architectures have been explored. The thesis initially focuses on an efficient implementation of a parallel application whose execution time is dominated by a high number of floating point instructions. It then addresses the central problem of efficient management of power peaks in heterogeneous computing systems. Finally, it discusses a memory-bound problem, where the execution time is dominated by memory latency. Specifically, the following main contributions have been carried out. First, a novel framework for the design and analysis of solar fields for Central Receiver Systems (CRS) has been developed; the implementation, based on a desktop workstation equipped with multiple Graphics Processing Units (GPUs), is motivated by the need for an accurate and fast simulation environment for studying mirror imperfections and non-planar geometries. Secondly, a power-aware scheduling algorithm for heterogeneous CPU-GPU architectures, based on an efficient distribution of the computing workload to the resources, has been realized. The scheduler manages the resources of several computing nodes with a view to reducing the peak power. This contribution has two main outcomes: the approach reduces the supply cost due to high peak power while having a negligible impact on the parallelism of the computing nodes, and, from another point of view, the developed model allows designers to increase the number of cores without increasing the capacity of the power supply unit. Finally, an implementation for efficient graph exploration on reconfigurable architectures is presented; its purpose is to accelerate graph exploration by reducing the number of random memory accesses.
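
As a rough illustration of the power-aware scheduling contribution (an assumed greedy policy, not the thesis' scheduler), the sketch below delays jobs whenever launching them would push the aggregate power of the running jobs above a configurable peak-power cap; the job durations and power figures are invented.

# Illustrative greedy peak-power-capped scheduler (toy model).
import heapq

def schedule(jobs, power_cap):
    # jobs: list of (duration, power); each job is assumed to fit under the cap on its own.
    pending = sorted(jobs, key=lambda j: -j[1])   # consider power-hungry jobs first
    running = []                                  # min-heap of (finish_time, power)
    t, used, plan = 0.0, 0.0, []
    while pending or running:
        launched = True
        while launched:                           # start every job that fits under the cap now
            launched = False
            for i, (dur, p) in enumerate(pending):
                if used + p <= power_cap:
                    heapq.heappush(running, (t + dur, p))
                    used += p
                    plan.append((t, (dur, p)))
                    pending.pop(i)
                    launched = True
                    break
        if running:                               # otherwise jump to the next completion
            t, p = heapq.heappop(running)
            used -= p
    return plan, t

plan, makespan = schedule([(3, 120), (2, 200), (4, 80), (1, 150)], power_cap=300)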

Relevance:

30.00%

Publisher:

Abstract:

Having to carry input devices can be inconvenient when interacting with wall-sized, high-resolution tiled displays. Such displays are typically driven by a cluster of computers. Running existing games on a cluster is non-trivial, and the performance attained using software solutions like Chromium is not good enough. This paper presents a touch-free, multi-user, human-computer interface for wall-sized displays that enables completely device-free interaction. The interface is built using 16 cameras and a cluster of computers, and is integrated with the games Quake 3 Arena (Q3A) and Homeworld. The two games were parallelized using two different approaches in order to run on a 7x4 tile, 21 megapixel display wall with good performance. The touch-free interface enables interaction with a latency of 116 ms, of which 81 ms are due to the camera hardware. The rendering performance of the games is compared to their sequential counterparts running on the display wall using Chromium. Parallel Q3A's framerate is an order of magnitude higher than when using Chromium. The parallel version of Homeworld performed on par with the sequential version, which did not run at all using Chromium. Informal use of the touch-free interface indicates that it works better for controlling Q3A than Homeworld.

Relevance:

30.00%

Publisher:

Abstract:

We present a novel algorithm to reconstruct high-quality images from sampled pixels and gradients in gradient-domain rendering. Our approach extends screened Poisson reconstruction with additional regularization constraints. Our key idea is to exploit local patches in feature images, which contain per-pixel normals, textures, positions, etc., to formulate these constraints. We describe a GPU implementation of our approach that runs in the order of seconds on megapixel images. We demonstrate a significant improvement in image quality over screened Poisson reconstruction under the L1 norm. Because we adapt the regularization constraints to the noise level of the input, our algorithm is consistent and converges to the ground truth.
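
For reference, plain screened Poisson reconstruction (without the feature-patch regularization that is the paper's contribution) can be sketched in a few lines of NumPy: it minimizes alpha*||I - I0||^2 + ||grad I - g||^2 by gradient descent, and a GPU version would map each per-pixel update to one thread. All parameter values below are illustrative.

# Illustrative baseline: screened Poisson reconstruction by gradient descent.
import numpy as np

def dx(a):  return np.roll(a, -1, axis=1) - a     # forward differences (periodic)
def dy(a):  return np.roll(a, -1, axis=0) - a
def dxT(a): return np.roll(a, 1, axis=1) - a      # adjoint operators
def dyT(a): return np.roll(a, 1, axis=0) - a

def screened_poisson(I0, gx, gy, alpha=0.1, step=0.2, iters=500):
    I = I0.copy()
    for _ in range(iters):
        grad = alpha * (I - I0) + dxT(dx(I) - gx) + dyT(dy(I) - gy)
        I -= step * grad
    return I

# toy usage: reconstruct from the image's own gradients plus noisy pixel samples
img = np.random.rand(64, 64)
rec = screened_poisson(img + 0.1 * np.random.randn(64, 64), dx(img), dy(img))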

Relevance:

30.00%

Publisher:

Abstract:

We present an innovative system to encode and transmit textured multi-resolution 3D meshes progressively, without the need to send several texture images, one for each mesh LOD (Level Of Detail). All texture LODs are created from the finest one (associated with the finest mesh), but can be reconstructed progressively from the coarsest one thanks to refinement images computed in the encoding process and transmitted only if needed. This allows us to adjust the LOD/quality of both the 3D mesh and the texture according to the rendering power of the device that will display them and to the network capacity. Additionally, we achieve substantial savings in data transmission by avoiding texture coordinates altogether, since they are generated automatically by an unwrapping scheme agreed upon by both encoder and decoder.
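
The refinement-image mechanism can be pictured with the following toy residual pyramid in Python (an illustrative analogue, not the actual codec): the encoder keeps the coarsest texture plus one refinement image per LOD, and the decoder upsamples and adds residuals only up to the LOD its rendering power or the network allows.

# Illustrative residual pyramid for progressive texture refinement.
import numpy as np

def downsample(img):                        # 2x box filter
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2])

def upsample(img):                          # nearest-neighbour 2x
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def encode(finest, levels):
    # Return (coarsest, [refinement images from coarse to fine]).
    pyramid = [finest]
    for _ in range(levels):
        pyramid.append(downsample(pyramid[-1]))
    residuals = [pyramid[i] - upsample(pyramid[i + 1]) for i in range(levels)]
    return pyramid[-1], residuals[::-1]

def decode(coarsest, residuals, lods_wanted):
    img = coarsest
    for r in residuals[:lods_wanted]:       # stop once the device's LOD is reached
        img = upsample(img) + r
    return img

tex = np.random.rand(256, 256)
coarse, res = encode(tex, levels=3)
assert np.allclose(decode(coarse, res, 3), tex)   # full refinement recovers the finest LOD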

Relevance:

30.00%

Publisher:

Abstract:

Multichannel audio has advanced by leaps and bounds in recent years, not only in playback techniques but also in recording techniques. That is why this project covers both ends: a microphone array, the EigenMike32 from MH Acoustics, and a playback system based on Wave Field Synthesis technology, installed by Iosono at Jade Hochschule Oldenburg. To link these two points of the audio chain, two different encodings are proposed: the direct reproduction of the EigenMike32's horizontal capsule ring, and third-order Ambisonics (High Order Ambisonics, HOA), an encoding technique based on Spherical Harmonics through which the acoustic field is simulated instead of the individual sources. Both were developed in the Matlab environment, supported by the Isophonics script collection called Spatial Audio Matlab Toolbox. To evaluate them, a series of listening tests was carried out in which they were compared with recordings made at the same time with a Dummy Head, assumed to be the method closest to the way we hear. These tests also included recordings and encodings made with a Schoeps Double MS (DMS), which are explained in the project "3D audio rendering through Ambisonics techniques: from multi-microphone recordings (DMS Schoeps) to a WFS system, through Matlab". The tests consisted of a battery of 4 audio excerpts repeated 4 times for each recorded situation (a conversation, a lecture, a street and a university canteen). The results were unexpected: the third-order HOA encoding scored below the "Good" rating, possibly because material produced for a three-dimensional array was reproduced on a two-dimensional one, whereas the encoding that simply extracts the horizontal-plane microphones remained at the "Good" level in all situations. We conclude that HOA should continue to be tested with a deeper knowledge of Spherical Harmonics, while the other, much simpler encoder can be used for situations that are not very demanding in terms of spatiality.
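
To make the HOA encoding concrete, the following Python sketch (an illustrative analogue of the Matlab processing, with normalization conventions deliberately simplified) encodes a mono signal at a given azimuth into horizontal-only third-order Ambisonics channels and decodes it to a loudspeaker ring with a basic sampling decoder.

# Illustrative horizontal-only (circular harmonic) third-order encoding/decoding.
import numpy as np

ORDER = 3

def encode_2d(signal, azimuth):
    # Channels: [W, cos(1a), sin(1a), ..., cos(Na), sin(Na)], unnormalized.
    chans = [signal]
    for m in range(1, ORDER + 1):
        chans.append(np.cos(m * azimuth) * signal)
        chans.append(np.sin(m * azimuth) * signal)
    return np.stack(chans)

def decode_2d(b_format, speaker_azimuths):
    # Basic sampling (projection) decoder for a horizontal loudspeaker ring.
    gains = []
    for az in speaker_azimuths:
        g = [1.0]
        for m in range(1, ORDER + 1):
            g += [np.cos(m * az), np.sin(m * az)]
        gains.append(g)
    gains = np.asarray(gains) / len(speaker_azimuths)
    return gains @ b_format                           # (n_speakers, n_samples)

sig = np.sin(2 * np.pi * 440 * np.arange(0, 0.01, 1 / 48000))
speakers = np.linspace(0, 2 * np.pi, 8, endpoint=False)
feeds = decode_2d(encode_2d(sig, azimuth=np.pi / 4), speakers)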

Relevance:

30.00%

Publisher:

Abstract:

The proliferation of video games and other applications of computer graphics in everyday life demands a much easier way to create animatable virtual human characters. Traditionally, this has been the job of highly skilled artists and animators who painstakingly model, rig and animate their avatars, and usually have to tune them for each application and transmission/rendering platform. The emergence of virtual/mixed reality environments also calls for practical and cost-effective ways to produce custom models of actual people. The purpose of the present dissertation is to bring 3D human scanning closer to the average user. To this end, two different techniques are presented, one passive and one active. The first one is a fully automatic system for generating statically multi-textured avatars of real people captured with several standard cameras. Our system uses a state-of-the-art shape-from-silhouette technique to retrieve the shape of the subject. However, to deal with the lack of detail that is common in the facial region for this kind of technique, which does not handle concavities correctly, our system proposes an approach to improve the quality of that region. This face enhancement technique uses a generic facial model which is transformed according to the specific facial features of the subject. Moreover, the system features a novel technique for generating view-independent texture atlases computed from the original images. This static multi-texturing system yields a seamless texture atlas calculated by combining the color information from several photos. We suppress the color seams due to image misalignment and irregular lighting conditions that multi-texturing approaches typically suffer from, while minimizing the blurring effect introduced by color blending techniques. The second technique is a system to retrieve a fully animatable 3D model of a human using a commercial depth sensor. Unlike other approaches in the current state of the art, our system does not require the user to remain completely still throughout the scanning process, nor does it move the depth sensor around the subject to cover the whole surface. Instead, the depth sensor remains static and the skeleton tracking information is used to compensate for the user's movements during the scanning stage.
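
A minimal shape-from-silhouette (visual hull) carving loop, written here as an assumed NumPy sketch rather than the dissertation's pipeline, also makes clear why facial concavities are problematic: a voxel survives only if it projects inside every silhouette, so concave regions can never be carved away and need the separate face enhancement step.

# Illustrative visual-hull carving over a set of calibrated cameras.
import numpy as np

def visual_hull(voxel_coords, cameras, silhouettes):
    # voxel_coords: (N, 3) world points; cameras: list of 3x4 projection matrices;
    # silhouettes: list of binary masks (H, W). Returns a boolean keep-mask per voxel.
    keep = np.ones(len(voxel_coords), dtype=bool)
    homog = np.hstack([voxel_coords, np.ones((len(voxel_coords), 1))])
    for P, mask in zip(cameras, silhouettes):
        uvw = homog @ P.T                         # project all voxels at once
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hit = np.zeros(len(voxel_coords), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]] > 0
        keep &= hit                               # outside any silhouette -> carve away
    return keep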

Relevance:

30.00%

Publisher:

Abstract:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals keep increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms need to be parallel, so that they can run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPUs). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing the subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques, with color and depth information, to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras commonly used in 3D TV. By using a backward mapping approach with a depth inpainting scheme based on modified median filters, we show that these techniques are adequate for free-viewpoint video applications. In addition, we show that referring depth information to a global reference system is ill-advised and should be avoided. Second, we propose a background subtraction system for moving-object detection based on kernel density estimation techniques. These techniques are very well suited to modelling complex scenes with multimodal backgrounds, but have seen little use due to their large computational and memory complexity. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce the computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for the approximation of arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. Our proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a near-optimal partition of the domain of the function that minimizes the approximation error.
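
The piecewise-linear approximation idea can be illustrated with a small Python sketch (a CPU analogue of a linearly filtered 1D texture fetch; the knot count and the test function are arbitrary choices, not the thesis' setup): an expensive function is sampled at uniformly spaced knots and later evaluated by linear interpolation.

# Illustrative piecewise-linear lookup table, the software analogue of texture filtering.
import numpy as np

def build_lut(f, lo, hi, knots):
    xs = np.linspace(lo, hi, knots)
    return xs, f(xs)

def eval_lut(x, xs, ys):
    # np.interp is the CPU counterpart of a linearly filtered 1D texture fetch.
    return np.interp(x, xs, ys)

f = lambda x: np.exp(-x) * np.sin(5 * x)          # stand-in for a costly function
xs, ys = build_lut(f, 0.0, np.pi, knots=64)
x_test = np.random.uniform(0.0, np.pi, 10000)
err = np.max(np.abs(eval_lut(x_test, xs, ys) - f(x_test)))
print(f"max abs error with 64 uniform knots: {err:.2e}")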

Relevance:

30.00%

Publisher:

Abstract:

A parallel algorithm for image noise removal is proposed. The algorithm is based on the peer group concept and uses a fuzzy metric. An optimization study on the use of the CUDA platform to remove impulsive noise with this algorithm is presented. Moreover, an implementation of the algorithm on multi-core platforms using OpenMP is presented. Performance is evaluated in terms of execution time, and the implementations parallelised on multi-core CPUs, on GPUs and on a combination of both are compared. A performance analysis with large images is conducted in order to determine the number of pixels to allocate to the CPU and the GPU. The observed execution times show that both devices should be given work, with most of it assigned to the GPU. Results show that parallel implementations of denoising filters on GPUs and multi-core CPUs are very advisable and open the door to using such algorithms for real-time processing.
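
A sequential Python sketch of peer-group filtering (illustrative only; the paper's implementation is parallel CUDA/OpenMP code and uses a fuzzy metric, whereas a plain absolute-difference metric is used here) shows the per-pixel work that is parallelised: count the neighbours close enough to form the pixel's peer group and, if there are too few, replace the pixel by the median of its neighbourhood.

# Illustrative sequential peer-group impulse filter for a grayscale image.
import numpy as np

def peer_group_filter(img, threshold=30, min_peers=3):
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            center = window[1, 1]
            peers = np.sum(np.abs(window - center) <= threshold) - 1   # exclude the pixel itself
            if peers < min_peers:                     # too few peers -> treat as impulsive
                out[y, x] = np.median(window)
    return out

noisy = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
clean = peer_group_filter(noisy)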

Relevance:

30.00%

Publisher:

Abstract:

A parallel algorithm to remove impulsive noise in digital images using heterogeneous CPU/GPU computing is proposed. The parallel denoising algorithm is based on the peer group concept and uses a Euclidean metric. In order to determine the number of pixels to be allocated to the multi-core CPU and the GPUs, a performance analysis using large images is presented. A comparison of the parallel implementations on multi-core CPUs, on GPUs and on a combination of both is performed. Performance has been evaluated in terms of execution time and megapixels per second. We present several optimization strategies that are especially effective for the multi-core environment, and demonstrate significant performance improvements. The main advantage of the proposed noise removal methodology is its computational speed, which enables efficient filtering of color images in real-time applications.
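
The CPU/GPU workload split referred to above can be pictured with the following assumed sketch (the fraction, the thread-based stand-in for the GPU launch and the placeholder filter are all illustrative): the image rows are partitioned by a tunable fraction, both parts are filtered concurrently, and the results are stitched back together.

# Illustrative heterogeneous row split between a CPU share and a GPU share.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def filter_rows(block):                     # placeholder for the actual denoiser
    return np.clip(block, 0, 255)

def denoise_heterogeneous(img, gpu_fraction=0.8):
    split = int(img.shape[0] * gpu_fraction)
    gpu_part, cpu_part = img[:split], img[split:]
    with ThreadPoolExecutor(max_workers=2) as pool:
        gpu_future = pool.submit(filter_rows, gpu_part)   # would launch CUDA kernels
        cpu_result = filter_rows(cpu_part)                # OpenMP-style CPU share
    return np.vstack([gpu_future.result(), cpu_result])

img = np.random.randint(0, 256, (1024, 1024))
out = denoise_heterogeneous(img, gpu_fraction=0.8)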

Relevance:

30.00%

Publisher:

Abstract:

In Fourier domain optical coherence tomography (FD-OCT), a large amount of interference data needs to be resampled from the wavelength domain to the wavenumber domain prior to Fourier transformation. We present an approach to optimize this data processing using a graphics processing unit (GPU) and parallel processing algorithms. We demonstrate an increased processing and rendering rate over that previously reported by using GPU paged memory to render the data on the GPU rather than copying it back to the CPU. This avoids unnecessary and slow data transfers, enabling a processing and display rate of well over 524,000 A-scans/s for a single frame. To the best of our knowledge this is the fastest processing demonstrated to date, and the first time that FD-OCT processing and rendering has been demonstrated entirely on a GPU.
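
The core processing step can be sketched on the CPU with NumPy (illustrative only; the paper performs this on the GPU and renders directly from GPU memory): each spectrum acquired on a uniform wavelength grid is resampled onto a uniform wavenumber grid and Fourier transformed to yield one A-scan.

# Illustrative CPU version of the wavelength-to-wavenumber resampling + FFT step.
import numpy as np

def process_frame(spectra, wavelengths):
    # spectra: (n_ascans, n_samples) fringes sampled uniformly in wavelength.
    k = 2 * np.pi / wavelengths                     # non-uniform in wavenumber
    k_uniform = np.linspace(k.min(), k.max(), k.size)
    order = np.argsort(k)                           # np.interp needs ascending abscissae
    resampled = np.array([np.interp(k_uniform, k[order], s[order]) for s in spectra])
    ascans = np.abs(np.fft.fft(resampled, axis=1))  # depth profiles
    return ascans[:, : resampled.shape[1] // 2]     # keep positive depths only

wl = np.linspace(800e-9, 880e-9, 2048)
frame = np.random.rand(512, 2048)
ascans = process_frame(frame, wl)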

Relevance:

30.00%

Publisher:

Abstract:

A new mesoscale simulation model for solids dissolution based on a computationally efficient and versatile digital modelling approach (DigiDiss) is considered and validated against analytical solutions and published experimental data for simple geometries. As the digital model is specifically designed to handle irregular shapes and complex multi-component structures, its use is explored for single crystals (sugars) and clusters. The single crystals and the cluster were first scanned using X-ray microtomography to obtain a digital version of their structures. The digitised particles and clusters were then used as structural input to the digital simulation. The same particles were subsequently dissolved in water, and the dissolution process was recorded by a video camera and analysed, yielding the overall dissolution times and images of particle size and shape during dissolution. The results demonstrate the ability of the simulation method to reproduce the experimental behaviour based on the known chemical and diffusion properties of the constituent phases. The paper discusses how further refinements of the modelling approach will need to include other important effects, such as complex disintegration (e.g. particle ejection) and uncertainties in chemical properties. The nature of the digital modelling approach is well suited to future implementation with high-speed computation using hybrid conventional (CPU) and graphics processor (GPU) systems.
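
A toy cellular-automaton analogue of digital dissolution (an assumption for illustration, not DigiDiss itself) conveys the flavour of the approach: solid voxels exposed to liquid lose mass at a rate proportional to their number of liquid neighbours, and depleted voxels become liquid; the rate constant and grid are invented.

# Illustrative voxel dissolution toy model (not DigiDiss).
import numpy as np

def dissolve(mass, rate=0.05, steps=100):
    # mass: 3D array, >0 for solid voxels, 0 for liquid.
    history = []
    for _ in range(steps):
        solid = mass > 0
        liquid_neighbours = np.zeros_like(mass)
        for axis in (0, 1, 2):
            for shift in (1, -1):
                liquid_neighbours += np.roll(~solid, shift, axis=axis)
        mass = np.maximum(mass - rate * liquid_neighbours * solid, 0.0)
        history.append(mass.sum())          # total undissolved mass over time
    return mass, history

grid = np.zeros((32, 32, 32))
grid[8:24, 8:24, 8:24] = 1.0                # a cubic "crystal" immersed in liquid
final, curve = dissolve(grid)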

Relevance:

30.00%

Publisher:

Abstract:

Femtosecond laser microfabrication has emerged over the last decade as a flexible 3D technology in photonics. Numerical simulations provide important insight into spatial and temporal beam and pulse shaping during the course of extremely intricate nonlinear propagation (see e.g. [1,2]). The electromagnetics of such propagation is typically described by the generalized Nonlinear Schrödinger Equation (NLSE) coupled with a Drude model for the plasma [3]. In this paper we consider a multi-threaded parallel numerical solution for a specific model which describes femtosecond laser pulse propagation in transparent media [4, 5]; however, our approach can be extended to similar models. The numerical code is implemented on an NVIDIA Graphics Processing Unit (GPU), which provides an efficient hardware platform for multi-threaded computing. We compare the performance of the parallel code described below, implemented for the GPU using the CUDA programming interface [3], with the serial CPU version used in our previous papers [4,5]. © 2011 IEEE.
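
For orientation, a basic 1D split-step Fourier solver for a cubic NLSE (a far simpler model than the generalized NLSE coupled with the Drude plasma term used in the paper; all parameters are illustrative) shows the per-step work that maps naturally onto GPU threads: an FFT, a pointwise nonlinear phase, and an inverse FFT.

# Illustrative 1D split-step Fourier solver for a cubic NLSE.
import numpy as np

def ssfm(u0, dz, steps, beta2=-1.0, gamma=1.0, t_window=20.0):
    n = u0.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=t_window / n)    # angular frequencies
    half_disp = np.exp(0.5j * beta2 * w**2 * dz / 2)     # half dispersion step
    u = u0.astype(complex)
    for _ in range(steps):
        u = np.fft.ifft(half_disp * np.fft.fft(u))       # D/2
        u = u * np.exp(1j * gamma * np.abs(u)**2 * dz)   # full nonlinear step
        u = np.fft.ifft(half_disp * np.fft.fft(u))       # D/2
    return u

t = np.linspace(-10, 10, 1024, endpoint=False)
u0 = 1.0 / np.cosh(t)                                    # fundamental soliton profile
u = ssfm(u0, dz=0.01, steps=1000)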

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08