4 results for Multi-GPU Rendering

in Deakin Research Online - Australia


Relevance: 30.00%

Abstract:

Results generated by simulation of computer systems are often presented as a multi-dimensional data set, where the number of dimensions may exceed 4 when enough system parameters are modelled. This paper describes a visualization system intended to assist in understanding how the different values of the system parameters relate to one another and how they affect system behavior.

The system is applied to data that cannot be represented using a mesh or isosurface representation, and in general can only be represented as a cloud of points. The use of stereoscopic rendering and rapid interaction with the data are compared with regard to their value in providing insight into the nature of the data.

A number of techniques are implemented for displaying projections of the data set with up to 7 dimensions, and for allowing intuitive manipulation of the remaining dimensions. In this way the effect of changes in one variable in the presence of a number of others can be explored.

The use of these techniques, when applied to data from computer system simulation, results in an intuitive understanding of the effects of the system parameters on system behavior.
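As an illustration of the projection idea above, here is a minimal sketch of reducing an N-dimensional point cloud to three displayed axes while the remaining dimensions are held near user-chosen slider values; the function, tolerance, and sample data are hypothetical and not taken from the paper:

```python
def project_points(points, display_axes, slider_values, tol=0.5):
    """Project an N-D point cloud onto three chosen axes.

    A point is kept only when every non-displayed coordinate lies
    within `tol` of the corresponding slider value, mimicking
    interactive manipulation of the hidden dimensions.
    """
    projected = []
    for p in points:
        hidden_ok = all(
            abs(p[axis] - slider_values[axis]) <= tol
            for axis in range(len(p))
            if axis not in display_axes
        )
        if hidden_ok:
            projected.append(tuple(p[a] for a in display_axes))
    return projected

# Toy 5-D simulation results: (cache_size, assoc, block, latency, miss_rate)
cloud = [
    (1.0, 2.0, 3.0, 4.0, 0.10),
    (1.0, 2.0, 3.0, 9.0, 0.05),
    (2.0, 2.0, 3.0, 4.0, 0.08),
]
# Display axes 0, 1, 4; hold hidden axes 2 and 3 near (3.0, 4.0).
view = project_points(cloud, display_axes=(0, 1, 4),
                      slider_values=(0, 0, 3.0, 4.0, 0))
```

Sweeping the slider values for the hidden axes then lets a user explore the effect of one parameter in the presence of the others, as the abstract describes.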

Relevance: 30.00%

Abstract:

This work presents a multi-point haptic platform that employs two Phantom Omni haptic devices. A gripper attachment connects to both devices and enables multi-point haptic grasping in virtual environments. In contrast to more complex approaches, this setup benefits from low cost, reliability, and ease of programming, while being capable of independently rendering forces to each of the user's fingertips. The ability to grasp with multiple points potentially lends itself to applications such as virtual training, telesurgery, and telemanipulation.
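Independent per-fingertip forces like those described above are commonly computed with penalty-based rendering; the following is a minimal sketch assuming a simple spring model against a spherical object (the function, stiffness value, and positions are illustrative, not the platform's actual implementation):

```python
import math

def fingertip_force(tip_pos, center, radius, stiffness=800.0):
    """Penalty-based contact force for one haptic interaction point.

    If the fingertip penetrates the sphere, push it back along the
    surface normal with a spring force proportional to penetration.
    """
    d = [t - c for t, c in zip(tip_pos, center)]
    dist = math.sqrt(sum(x * x for x in d))
    penetration = radius - dist
    if penetration <= 0 or dist == 0:
        return (0.0, 0.0, 0.0)  # fingertip not in contact
    normal = [x / dist for x in d]
    return tuple(stiffness * penetration * n for n in normal)

# Two fingertips (one per device) gripping a 5 cm sphere at the origin:
thumb = fingertip_force((0.045, 0.0, 0.0), (0.0, 0.0, 0.0), 0.05)
index = fingertip_force((-0.045, 0.0, 0.0), (0.0, 0.0, 0.0), 0.05)
```

Because each device receives only its own fingertip's force, the two Omnis can render opposing forces during a grasp without any coupling in the force computation itself.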

Relevance: 30.00%

Abstract:

CHAI3D is a widely adopted haptic SDK in the community because it is open source and supports devices from different vendors. In many cases, CHAI3D and its bundled demos are used for benchmarking haptic collision-detection and rendering algorithms. However, CHAI3D is designed for off-the-shelf single-point haptic devices only, and it does not natively support customised multi-point haptic devices. In this paper, we aim to extend the existing CHAI3D framework and provide a standardised routine to support customised single- and multi-point haptic devices. Our extension addresses two issues: intra-device communication and inter-device communication. Accordingly, it includes an HIP wrapper layer to concurrently handle the multiple HIPs of a single device, and a communication layer to concurrently handle the position, orientation, and force calculations of multiple haptic devices. Our extension runs on top of a custom-built 8-channel device controller, although other off-the-shelf controllers can also be integrated easily. It complies with the CHAI3D design framework and provides advanced inter-device communication capabilities for multi-device operation. With straightforward conversion routines, existing CHAI3D demos can be adapted into multi-point demos that support real-time parallel collision detection and force rendering.
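A minimal sketch of how the two layers described above might fit together: a per-device wrapper owning several interaction points, and a communication layer that updates every HIP of every device in one pass. All class names, signatures, and the toy force law are hypothetical and do not reflect the actual CHAI3D extension (which is C++):

```python
class HIPWrapper:
    """Hypothetical wrapper owning all interaction points (HIPs) of one device."""
    def __init__(self, device_id, num_hips):
        self.device_id = device_id
        self.positions = [(0.0, 0.0, 0.0)] * num_hips  # read from controller
        self.forces = [(0.0, 0.0, 0.0)] * num_hips     # written back each frame

class CommunicationLayer:
    """Hypothetical layer coordinating several wrapped devices per frame."""
    def __init__(self):
        self.devices = {}

    def register(self, wrapper):
        self.devices[wrapper.device_id] = wrapper

    def step(self, force_fn):
        # One pass computes forces for every HIP of every registered device,
        # mirroring parallel collision/force handling across devices.
        for dev in self.devices.values():
            dev.forces = [force_fn(p) for p in dev.positions]

# Two 2-HIP grippers driven through a single communication layer:
layer = CommunicationLayer()
for dev_id in ("left", "right"):
    layer.register(HIPWrapper(dev_id, num_hips=2))

layer.devices["left"].positions = [(0.0, 0.0, 0.01), (0.0, 0.0, -0.01)]
# Toy force law: push each HIP back toward the z = 0 plane.
layer.step(lambda p: (0.0, 0.0, -500.0 * p[2]))
```

Separating the per-device wrapper from the cross-device layer is what lets a single-point demo become multi-point: the demo's force callback stays unchanged, and the layers fan it out over every HIP.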

Relevance: 30.00%

Abstract:

Haptic rendering of complex models is usually prohibitive because it requires a much higher update rate than visual rendering. Previous works have tried to solve this issue by introducing local simulation or multi-rate simulation for the two pipelines. Although these works have improved the capacity of the haptic rendering pipeline, they did not consider scenarios with heterogeneous objects, where rigid and deformable objects coexist close to each other. In this paper, we propose a novel idea to support interactive visuo-haptic rendering of complex heterogeneous models. The idea incorporates different collision detection and response algorithms and switches them seamlessly on and off on the fly as the HIP travels through the scenario. The selection of rendered models is based on the hypothesis of “parallel universes”, where the transition from rendering one group of models to another is completely transparent to users. To facilitate this idea, we propose a procedure to convert the traditional single-universe scenario into a “multiverse” scenario, where the original models are grouped and split into parallel universes depending on the rendering requirements of the scenario rather than just locality. We also propose adding simplified visual objects as background avatars in each parallel universe, visually maintaining the original scenario without overly increasing its complexity. We tested the proposed idea in a haptically enabled needle thoracostomy training environment, and the results demonstrate that our idea substantially accelerates visuo-haptic rendering of complex heterogeneous scenario objects.
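A minimal sketch of the universe-switching idea, assuming each parallel universe is triggered by an axis-aligned region that the HIP can enter; all names, regions, and model groups below are illustrative, not taken from the paper:

```python
def select_universe(hip_pos, universes, fallback):
    """Pick the parallel universe whose trigger region contains the HIP.

    `universes` maps a name to (box_min, box_max, model_group); exactly
    one group is rendered at a time, so the switch stays transparent
    to the user as the HIP travels through the scenario.
    """
    for name, (lo, hi, models) in universes.items():
        if all(l <= x <= h for l, x, h in zip(lo, hip_pos, hi)):
            return name, models
    return fallback, universes[fallback][2]

# Toy needle-insertion layout: a deformable-skin universe above the chest
# wall (z <= 0) and a rigid-rib universe below it, each carrying simplified
# avatars of the other universe's models as visual background.
universes = {
    "skin": ((-1.0, -1.0, -1.0), (1.0, 1.0, 0.0),
             ["deformable_skin", "rib_avatar"]),
    "rib":  ((-1.0, -1.0, 0.0), (1.0, 1.0, 1.0),
             ["rigid_rib", "skin_avatar"]),
}
active, models = select_universe((0.2, 0.1, 0.5), universes, "skin")
```

Since only the active universe's full-fidelity models are fed to collision detection, the haptic loop pays for one group at a time while the background avatars keep the visual scene intact.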