937 results for Virtual Performance
Abstract:
Virtual assembly environment (VAE) technology has great potential to benefit manufacturing applications in industry. Usability is an important aspect of a VAE. This paper presents a usability evaluation of a multi-sensory VAE developed by the authors. The evaluation addresses three usability attributes: (a) efficiency of use; (b) user satisfaction; and (c) reliability, measured by task completion times (TCTs), questionnaires, and human performance error rates (HPERs), respectively. A peg-in-a-hole task and a Sener electronic box assembly task were used in experiments with sixteen participants. The outcomes showed that introducing 3D auditory and/or visual feedback could improve usability, and that integrated feedback (visual plus auditory) offered better usability than either feedback channel used in isolation. Most participants preferred the integrated feedback to a single channel (visual or auditory) or no feedback. The participants' comments showed that unrealistic or inappropriate feedback harmed usability and quickly frustrated them. The possible reasons behind these outcomes are also analysed. © 2007 ACADEMY PUBLISHER.
Abstract:
An important characteristic of virtual assembly is interaction. Traditional direct manipulation in virtual assembly relies on dynamic collision detection, which is very time-consuming and can even be infeasible in a desktop virtual assembly environment. Feature matching is a critical process in harmonious virtual assembly, and is the premise of assembly constraint sensing. This paper puts forward an active object-based feature-matching perception mechanism and a feature-matching interactive computing process, both of which free direct manipulation in virtual assembly from collision detection. They also help the virtual environment to understand user intention and improve interaction performance. Experimental results show that this perception mechanism enables users to achieve real-time direct manipulation in a desktop virtual environment.
Abstract:
In this note, I propose two extensions to the Java virtual machine (or VM) to allow dynamic languages such as Dylan, Scheme and Smalltalk to be efficiently implemented on the VM. These extensions do not affect the performance of pure Java programs on the machine. The first extension allows for efficient encoding of dynamic data; the second allows for efficient encoding of language-specific computational elements.
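The note leaves the encodings abstract at this point, but the first extension is in the spirit of the low-bit tagging schemes used by many dynamic-language runtimes to distinguish immediate values from heap references. A minimal sketch of such a scheme (illustrative only; the tag layout below is an assumption, not the note's actual design):

```python
# Illustrative low-bit tagging of dynamic data, as commonly used in
# dynamic-language runtimes. Hypothetical layout: the lowest bit of a
# machine word distinguishes small integers (odd) from references (even).

TAG_BITS = 1
INT_TAG = 1


def box_int(n: int) -> int:
    """Encode a small integer as a tagged word."""
    return (n << TAG_BITS) | INT_TAG


def is_int(word: int) -> bool:
    """True if the word carries the small-integer tag."""
    return word & INT_TAG == INT_TAG


def unbox_int(word: int) -> int:
    """Recover the integer payload from a tagged word."""
    return word >> TAG_BITS
```

The attraction for languages like Scheme or Smalltalk is that common values (small integers, characters) never touch the heap, while even-valued words remain valid, aligned references.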
Abstract:
As an animator and practice-based researcher with a background in games development, I am interested in technological change in the video game medium, with a focus on the tools and technologies that drive game character animation and interactive story. In particular, I am concerned with the issue of ‘user agency’, or the ability of the end user to affect story development—a key quality of the gaming experience and essential to the aesthetics of gaming, which is defined in large measure by its interactive elements. In this paper I consider the unique qualities of the video game as an artistic medium and the impact that these qualities have on the production of animated virtual character performances. I discuss the somewhat oppositional nature of animated character performances found in games from recent years, which range from inactive to active—in other words, low to high agency. Where procedural techniques (based on coded rules of movement) are used to model dynamic character performances, the user has the ability to interactively affect characters in real-time within the larger sphere of the game. This game play creates a high degree of user agency. However, it lacks the aesthetic nuances of the more crafted sections of games: the short cut-scenes, or narrative interludes where entire acted performances are mapped onto game characters (often via performance capture) and constructed into relatively cinematic representations. While visually spectacular, cut-scenes involve minimal interactivity, so user agency is low. Contemporary games typically float between these two distinct methods of animation, from a focus on user agency and dynamically responsive animation to a focus on animated character performance in sections where the user is a passive participant. We tend to think of the majority of action in games as taking place via playable figures: an avatar or central character that represents a player.
However, there is another realm of characters that also partake in actions ranging from significant to incidental: non-playable characters, or NPCs, which populate action sequences where game play takes place as well as cut-scenes that unfold without much or any interaction on the part of the player. NPCs are the equivalent of supporting roles, bit characters, or extras in the world of cinema. Minor NPCs may simply be background characters or enemies to defeat, but many NPCs are crucial to the overall game story. It is my argument that, thus far, no game has successfully utilized the full potential of these characters to contribute toward the development of interactive, high performance action. In particular, a type of NPC that I have identified as ‘pivotal’—those constituting the supporting cast of a video game—are essential to the telling of a game story, particularly in genres that focus on story and characters: adventure games, action games, and role-playing games. A game story can be defined as the entirety of the narrative, told through non-interactive cut-scenes as well as interactive sections of play, and the development of more complex stories in games clearly impacts the animation of NPCs. I argue that NPCs in games must be capable of acting with emotion throughout a game—in the cut-scenes, which are tightly controlled, but also in sections of game play, where player agency can potentially alter the story in real-time. When the animated performance of NPCs and user agency are not continuous throughout the game, the implication is that game stories may be primarily told through short movies within games, making it more difficult to define video game animation as a distinct artistic medium.
Abstract:
This paper describes an experiment developed to study the performance of animated virtual agent cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces. Both experiments measured the efficiency of agent cues by analyzing participant responses, by gaze and by touch respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues, and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the shortest response times, with slightly smaller differences between conditions. Responses to the fully animated agent were 17% and 20% faster than to the 2-image and 1-image cues, respectively.
These results inform techniques aimed at engaging users’ attention in complex scenes such as computer games and digital transactions within public or social interaction contexts by demonstrating the benefits of dynamic gaze and head cueing directly on the users’ eye movements and touch responses.
Abstract:
© 2015 IEEE. Virtual reality applications aim to provide real-time graphics running at high refresh rates. However, there are many situations in which this is not possible due to simulation or rendering issues. When running at low frame rates, several aspects of the user experience are affected. For example, each frame is displayed for an extended period of time, causing a high-persistence image artifact. The effect of this artifact is that movement may lose continuity, and the image jumps from one frame to another. In this paper, we discuss our initial exploration of the effects of high-persistence frames caused by low refresh rates and compare them to high frame rates and to a technique we developed to mitigate the effects of low frame rates. In this technique, the low frame rate simulation images are displayed with low persistence by blanking out the display during the extra time each image would otherwise be displayed. In order to isolate the visual effects, we constructed a simulator for low- and high-persistence displays that does not affect input latency. A controlled user study comparing the three conditions for the tasks of 3D selection and navigation was conducted. Results indicate that the low-persistence display technique may not negatively impact user experience or performance as compared to the high-persistence case. Directions for future work on the use of low-persistence displays for low frame rate situations are discussed.
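The blanking technique above can be sketched with simple arithmetic; the refresh and frame rates below are assumed values for illustration, not the ones used in the study:

```python
# Back-of-envelope sketch of the low-persistence idea: at a low simulation
# frame rate, each frame spans several display refreshes. Lighting the frame
# for only one refresh and blanking the rest shortens its persistence.
# The rates below are assumptions for illustration.

display_hz = 90                                   # display refresh rate
sim_fps = 15                                      # low simulation frame rate
refreshes_per_frame = display_hz // sim_fps       # refreshes per sim frame

# High persistence: the frame stays lit for its whole slot.
high_persistence_ms = refreshes_per_frame * 1000 / display_hz

# Low persistence: light the frame for a single refresh, blank the others.
low_persistence_ms = 1 * 1000 / display_hz
```

With these assumed numbers the lit interval drops from roughly 67 ms to roughly 11 ms per frame, while the simulation update rate is unchanged.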
Abstract:
Presented is a study that expands the body of knowledge on the effect of in-cycle speed fluctuations on performance of small engines. It uses the methods developed previously by Callahan, et al. (1) to examine a variety of two-stroke engines and one four-stroke engine. The two-stroke engines were: a high performance single-cylinder, a low performance single-cylinder, a high performance multi-cylinder, and a medium performance multi-cylinder. The four-stroke engine was a high performance single-cylinder unit. Each engine was modeled in Virtual Engines, which is a fully detailed one-dimensional thermodynamic engine simulator. Measured or predicted in-cycle speed data were input into the engine models. Predicted performance changes due to drivetrain effects are shown in each case, and conclusions are drawn from those results. The simulations for the high performance single-cylinder two-stroke engine predicted significant in-cycle crankshaft speed fluctuation amplitudes and significant changes in performance when the fluctuations were input into the engine model. This was validated experimentally on a firing test engine based on a Yamaha YZ250. The four-stroke engine showed significant changes in predicted performance compared to the prediction with zero speed fluctuation assumed in the model. Measured speed fluctuations from a firing Yamaha YZ400F engine were applied to the simulation in addition to data from a simple free mass model. Both methods predicted similar fluctuation profiles and changes in performance. It is shown that the gear reduction between the crankshaft and clutch allowed for this similar behavior. The multi-cylinder, high performance two-stroke engine also showed significant changes in performance, in this case depending on the firing configuration. The low output two-stroke engine simulation showed only a negligible change in performance in spite of high amplitude speed fluctuations. This was due to its flat torque versus speed characteristic. 
The medium performance multi-cylinder two-stroke engine also showed only a negligible change in performance, in this case due to a relatively high inertia rotating assembly and multiple cylinder firing events within the revolution. These smoothed the net torque pulsations and reduced the amplitude of the speed fluctuation itself.
Abstract:
BACKGROUND:
Tissue MicroArrays (TMAs) are a valuable platform for tissue-based translational research and the discovery of tissue biomarkers. The digitised TMA slides, or TMA virtual slides, are ultra-large digital images that can contain several hundred samples. The processing of such slides is time-consuming, bottlenecking a potentially high-throughput platform.
METHODS:
A High Performance Computing (HPC) platform for the rapid analysis of TMA virtual slides is presented in this study. Using an HP high-performance cluster and a centralised dynamic load-balancing approach, the simultaneous analysis of multiple tissue cores was established. This was evaluated on Non-Small Cell Lung Cancer TMAs for complex analysis of tissue pattern and immunohistochemical positivity.
RESULTS:
The automated processing of a single TMA virtual slide containing 230 patient samples can be sped up significantly, by a factor of approximately 22, bringing the analysis time down to one minute. Over 90 TMAs can also be analysed simultaneously, greatly accelerating multiplex biomarker experiments.
CONCLUSIONS:
The methodologies developed in this paper provide, for the first time, a genuine high-throughput analysis platform for TMA biomarker discovery that will significantly enhance the reliability and speed of biomarker research. This will have widespread implications in translational tissue-based research.
Abstract:
We propose simple models to predict the performance degradation of disk requests due to storage device contention in consolidated virtualized environments. Model parameters can be deduced from measurements obtained inside Virtual Machines (VMs) from a system where a single VM accesses a remote storage server. The parameterized model can then be used to predict the effect of storage contention when multiple VMs are consolidated on the same server. We first propose a trace-driven approach that evaluates a queueing network with fair share scheduling using simulation. The model parameters consider Virtual Machine Monitor level disk access optimizations and rely on a calibration technique. We further present a measurement-based approach that allows a distinct characterization of read/write performance attributes. In particular, we define simple linear prediction models for I/O request mean response times, throughputs and read/write mixes, as well as a simulation model for predicting response time distributions. We found our models to be effective in predicting such quantities across a range of synthetic and emulated application workloads.
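As a sketch of what such a linear prediction model looks like in practice (the measurements below are synthetic, not the paper's data), mean response time can be fitted as a linear function of the number of consolidated VMs:

```python
# Minimal sketch of a linear model for mean I/O response time versus the
# number of consolidated VMs. The data points are synthetic illustrations,
# not measurements from the paper.

def fit_line(xs, ys):
    """Ordinary least squares fit for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b


# Mean disk response time (ms) measured with 1..4 consolidated VMs (synthetic).
vms = [1, 2, 3, 4]
resp_ms = [2.1, 3.9, 6.2, 8.0]

a, b = fit_line(vms, resp_ms)
pred_5 = a + b * 5  # extrapolated mean response time with 5 VMs
```

In the paper's setting the predictors also include the read/write mix; this sketch keeps a single predictor to show the shape of the model.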
A Theoretical and Experimental Study of Resonance in a High Performance Engine Intake System: Part 1
Abstract:
The unsteady gas dynamic phenomena in engine intake systems of the type found in racecars have been examined. In particular, the resonant tuning effects, including cylinder-to-cylinder power variations, which can occur as a result of the interaction between an engine and its airbox have been considered. Frequency analysis of the output from a Virtual 4-Stroke 1D engine simulation was used to characterise the forcing function applied by an engine to an airbox. A separate computational frequency sweeping technique, which employed the CFD package FLUENT, was used to determine the natural frequencies of virtual airboxes in isolation from an engine. Using this technique, an airbox with a natural frequency at 75 Hz was designed for a Yamaha R6 4-cylinder motorcycle engine. The existence of an airbox natural frequency at 75 Hz was subsequently confirmed by an experimental frequency sweeping technique carried out on the engine test bed. A coupled 1D/3D analysis which employed the engine simulation package Virtual 4-Stroke and the CFD package FLUENT, was used to model the combined engine and airbox system. The coupled 1D/3D analysis predicted a 75 Hz resonance of the airbox at an engine speed of 9000 rpm. This frequency was the induction frequency for a single cylinder. An airbox was fabricated and tested on the engine. Static pressure was recorded at a grid of points in the airbox as the engine was swept through a speed range of 3000 to 10000 rpm. The measured engine speed corresponding to resonance in the airbox agreed well with the predicted values. There was also good correlation between the amplitude and phase of the pressure traces recorded within the airbox and the 1D/3D predictions.
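The airbox natural frequencies above were obtained from CFD frequency sweeps; as a rough order-of-magnitude cross-check, the classical Helmholtz resonator approximation with invented geometry yields a comparable figure:

```python
import math

# Helmholtz resonator approximation for an airbox natural frequency.
# All geometry values below are invented for illustration; the study itself
# determined natural frequencies with CFD frequency sweeps, not this formula.

c = 343.0   # speed of sound in air, m/s
A = 0.01    # inlet (neck) cross-sectional area, m^2
L = 0.1     # effective neck length, m
V = 0.053   # airbox volume, m^3 (53 litres)

# f = (c / 2*pi) * sqrt(A / (V * L)) -- close to 75 Hz for these values
f_hz = (c / (2 * math.pi)) * math.sqrt(A / (V * L))
```

The lumped-element formula ignores the internal geometry and engine coupling that the 1D/3D analysis captures, so it serves only as a sanity check on the frequency range.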
Abstract:
OBJECTIVES: We assessed the effectiveness of transfer of training (ToT) from VR laparoscopic simulation training in two studies; in the second study, we also assessed the transfer effectiveness ratio (TER). ToT is a detectable performance improvement between equivalent groups, and TER is the observed percentage performance difference between two matched groups carrying out the same task, with one group pretrained on VR simulation. Concordance between simulated and in-vivo procedure performance was also assessed. DESIGN: Prospective, randomized, and blinded. PARTICIPANTS: In Study 1, experienced laparoscopic surgeons (n = 195), and in Study 2, laparoscopic novices (n = 30) were randomized either to train on VR simulation before completing an equivalent real-world task or to complete the real-world task only. RESULTS: Experienced laparoscopic surgeons and novices who trained on the simulator performed significantly better than their controls, thus demonstrating ToT. Their performance showed a TER between 7% and 42% from the virtual to the real tasks. Simulation training had its greatest impact on procedural error reduction in both studies (32%-42%). The correlation observed between the VR and real-world task performance was r > 0.96 (Study 2). CONCLUSIONS: VR simulation training offers a powerful and effective platform for training safer skills.
Abstract:
A novel Networks-on-Chip (NoC) router architecture designed for FPGA-based implementation with configurable Virtual Channels (VCs) is presented. Each pipeline stage of the proposed architecture has been optimized so that low packet propagation latency and reduced hardware overhead can be achieved. The proposed architecture enables high-performance, cost-effective VC NoC-based on-chip system interconnects to be deployed on FPGAs.
Abstract:
We propose a trace-driven approach to predict the performance degradation of disk request response times due to storage device contention in consolidated virtualized environments. Our performance model evaluates a queueing network with fair share scheduling using trace-driven simulation. The model parameters can be deduced from measurements obtained inside Virtual Machines (VMs) from a system where a single VM accesses a remote storage server. The parameterized model can then be used to predict the effect of storage contention when multiple VMs are consolidated on the same virtualized server. The model parameter estimation relies on a search technique that tries to estimate the splitting and merging of blocks at the Virtual Machine Monitor (VMM) level in the case of multiple competing VMs. Simulation experiments based on traces of the Postmark and FFSB disk benchmarks show that our model is able to accurately predict the impact of workload consolidation on VM disk I/O response times.
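A toy version of trace-driven simulation under fair-share scheduling can be sketched as follows; the round-robin granularity and the trace format here are assumptions, far simpler than the paper's VMM-level model:

```python
from collections import deque


def simulate_fair_share(traces, service_time=1.0):
    """Toy trace-driven simulation of a storage device shared by several VMs
    under round-robin (fair-share) scheduling.

    `traces` holds per-VM lists of request arrival times (a hypothetical
    trace format, not the paper's schema). Each round, the device serves at
    most one queued request per VM. Returns per-VM mean response times.
    """
    pending = [deque(sorted(t)) for t in traces]   # not yet arrived
    queued = [deque() for _ in traces]             # arrived, awaiting service
    resp = [[] for _ in traces]
    clock = 0.0
    while any(pending) or any(queued):
        for p, q in zip(pending, queued):          # admit arrivals so far
            while p and p[0] <= clock:
                q.append(p.popleft())
        served = False
        for i, q in enumerate(queued):             # one request per VM/round
            if q:
                clock += service_time
                resp[i].append(clock - q.popleft())
                served = True
        if not served:                             # device idle: jump ahead
            clock = min(p[0] for p in pending if p)
    return [sum(r) / len(r) if r else 0.0 for r in resp]
```

Calibrating the service time and adding the block splitting/merging behaviour described above is what turns a toy like this into a usable contention predictor.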
A Theoretical and Experimental Study of Resonance in a High Performance Engine Intake System: Part 2
Abstract:
The unsteady gas dynamic phenomena in a racecar airbox have been examined, and resonant tuning effects have been considered. A coupled 1D/3D analysis, using the engine simulation package Virtual 4-Stroke and the CFD package FLUENT, was used to model the engine and airbox. The models were experimentally validated. An airbox was designed with a natural frequency in the region of 75 Hz. A coupled 1D/3D analysis of the airbox and a Yamaha R6 4-cylinder engine predicted resonance at the single-cylinder induction frequency: 75 Hz at an engine speed of 9000 rpm.