900 results for Advanced virtual reality system
Abstract:
This paper presents a system for 3-D reconstruction of a patient-specific surface model from calibrated X-ray images. Our system requires two X-ray images of a patient, one acquired from the anterior-posterior direction and the other from the axial direction. A custom-designed cage is used to calibrate both images. Starting from bone contours that are interactively identified in the X-ray images, the system constructs a patient-specific surface model of the proximal femur using a statistical-model-based 2D/3D reconstruction algorithm. We present the design of the system and its validation on 25 bones, for which an average reconstruction error of 0.95 mm was observed.
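The abstract does not detail the reconstruction step; the following is a minimal, hypothetical sketch of the general idea behind statistical-model-based 2D/3D reconstruction: PCA shape coefficients are optimized so that the projected model matches the identified contours. All names, the projection model and the assumption of point correspondences are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative placeholders for a learned statistical shape model of the femur:
# a mean shape (N x 3) and K principal components (K x N x 3).
N, K = 500, 10
mean_shape = np.random.rand(N, 3)
modes = np.random.rand(K, N, 3) * 0.01

def project(points_3d, P):
    """Project 3D points with a 3x4 calibrated camera matrix P (from the cage calibration)."""
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    uvw = homog @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

def residuals(coeffs, P_ap, P_ax, contour_ap, contour_ax):
    """Distance between projected model points and the identified contours.

    For simplicity this assumes one contour sample per model point, i.e.
    correspondences are already established.
    """
    shape = mean_shape + np.tensordot(coeffs, modes, axes=1)
    err_ap = project(shape, P_ap) - contour_ap   # anterior-posterior view
    err_ax = project(shape, P_ax) - contour_ax   # axial view
    return np.concatenate([err_ap.ravel(), err_ax.ravel()])

# P_ap, P_ax, contour_ap and contour_ax would come from the cage calibration
# and the interactive contour extraction; the fit itself would then be e.g.:
# coeffs = least_squares(residuals, np.zeros(K),
#                        args=(P_ap, P_ax, contour_ap, contour_ax)).x
```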
Abstract:
This paper presents preliminary results on the use of virtual characters, penile plethysmography and gaze-behaviour dynamics to assess deviant sexual preferences. Pedophile patients' responses are compared to those of non-deviant subjects while immersed with virtual characters depicting relevant sexual features.
Abstract:
The use of virtual reality as a tool in the area of spatial cognition raises the question of the quality of learning transfer from a virtual to a real environment. It is first necessary to determine, with healthy subjects, the cognitive aids that improve the quality of transfer and the conditions required, especially since virtual reality can be used as an effective tool in cognitive rehabilitation. The purpose of this study was to investigate the influence of the exploration mode of the virtual environment (Passive vs. Active), according to Route complexity (Simple vs. Complex), on the quality of spatial knowledge transfer in three spatial tasks. Ninety subjects (45 men and 45 women) participated. Spatial learning was evaluated by Wayfinding, Sketch-mapping and Picture classification tasks in the context of the Bordeaux district. In the Wayfinding task, results indicated that active learning in a Virtual Environment (VE) increased performance compared to the passive learning condition, irrespective of the route complexity factor. In the Sketch-mapping task, active learning in a VE helped the subjects transfer their spatial knowledge from the VE to reality, but only when the route was complex. In the Picture classification task, active learning in a VE when the route was complex did not help the subjects transfer their spatial knowledge. These results are explained in terms of knowledge levels and frame/strategy of reference [SW75, PL81, TH82].
Abstract:
This article begins with some recent considerations about real-time music, inspired by the latest contribution of the French composer Philippe Manoury. Then, through the case study of the scenic performance La Traversée de la nuit, we analyse some perspectives for designing an Informed Virtual Environment dedicated to the live-show artistic domain.
Abstract:
Mixed Reality (MR) aims to link virtual entities with the real world and has many applications, for instance in the military and medical domains [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application needs to render the virtual part accurately at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering the virtual entities. A suitable system architecture should minimize these delays to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to specify, simulate and formally validate the time constraints of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The obtained elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such defined components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of the time constraints. These automata may also be used to generate source-code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example. A realistic case study is also developed; it is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
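MIRELA's syntax and the generated UPPAAL automata are not shown in the abstract. As a rough illustration of the kind of end-to-end latency constraint such models verify, the following hypothetical sketch computes a naive worst-case latency for a sensor, fusion and rendering chain and checks it against a budget. The stage names, periods and execution times are invented for the example; a timed-automata analysis explores the behaviour exhaustively rather than summing bounds.

```python
# Illustrative only: a worst-case end-to-end latency estimate for a simple
# sensor -> fusion -> rendering chain in an augmented scene.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    period_ms: float   # how often the stage is activated
    wcet_ms: float     # worst-case execution time

def worst_case_latency(stages):
    """Pessimistic bound: data just misses each activation, then the stage runs for its WCET."""
    return sum(s.period_ms + s.wcet_ms for s in stages)

pipeline = [
    Stage("tracker", period_ms=10.0, wcet_ms=2.0),   # hypothetical sensor rate
    Stage("fusion",  period_ms=10.0, wcet_ms=3.0),
    Stage("render",  period_ms=16.7, wcet_ms=8.0),
]

budget_ms = 50.0  # example real-time requirement
latency = worst_case_latency(pipeline)
print(f"worst-case latency {latency:.1f} ms, budget {budget_ms} ms:",
      "OK" if latency <= budget_ms else "violated")
```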
Abstract:
The characteristics of moving sound sources have strong implications for the listener's distance perception and estimation of velocity. Modifications of typical sound emissions, as currently occurring due to the trend towards electromobility, have an impact on pedestrians' safety in road traffic. Investigations of the relevant cues for velocity and distance perception of moving sound sources are therefore of interest not only to the psychoacoustic community, but also for several applications such as virtual reality, noise pollution and safety aspects of road traffic. This article describes a series of psychoacoustic experiments in this field. Dichotic and diotic stimuli from a set of real-life recordings of a passing passenger car and a motorcycle were presented to test subjects, who were asked to determine the velocity of the object and its minimal distance from the listener. The results of these experiments show that the estimated velocity is strongly linked to the object's distance. Furthermore, binaural cues were shown to contribute significantly to the perception of velocity. In a further experiment it was shown that, independently of the type of vehicle, the main parameter for distance determination is the maximum sound pressure level at the listener's position. The article suggests a system architecture for the adequate consideration of moving sound sources in virtual auditory environments. Virtual environments can thus be used to investigate the influence of new vehicle powertrain concepts, and their related sound emissions, on pedestrians' ability to estimate the distance and velocity of moving objects.
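As an illustration of the reported finding that the maximum sound pressure level at the listener's position dominates distance judgments, the following sketch computes the level of an idealized point source (free-field, 1/r spreading) during a pass-by; the speed, distance and reference level are invented values, not data from the experiments.

```python
import numpy as np

# Idealized pass-by of a point source: the maximum level at the listener
# occurs at the closest point of approach, which is why the maximum SPL
# can act as the dominant distance cue reported above.
v = 50 / 3.6            # vehicle speed: 50 km/h in m/s (assumed)
d_min = 7.5             # minimal lateral distance to the listener in m (assumed)
L_ref = 75.0            # assumed level in dB at 1 m from the source

t = np.linspace(-5, 5, 1001)            # time relative to the pass-by, s
r = np.sqrt((v * t) ** 2 + d_min ** 2)  # source-listener distance over time
L = L_ref - 20 * np.log10(r / 1.0)      # spherical spreading loss

print(f"max SPL {L.max():.1f} dB at t = {t[np.argmax(L)]:.2f} s (closest approach)")
```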
Abstract:
When depicting both virtual and physical worlds, the viewer's impression of presence in these worlds is strongly linked to camera motion. Plausible and artist-controlled camera movement can substantially increase scene immersion. While physical camera motion exhibits subtle details of position, rotation, and acceleration, these details are often missing from virtual camera motion. In this work, we analyze camera movement using signal theory. Our system allows us to stylize a smooth, user-defined virtual base camera motion by enriching it with plausible details. A key component of our system is a database of videos filmed by physical cameras. These videos are analyzed with a camera-motion estimation algorithm (structure-from-motion) and labeled manually with a specific style. By considering spectral properties of location, orientation and acceleration, our solution learns camera-motion details. Consequently, an arbitrary virtual base motion, defined in any conventional animation package, can be automatically modified according to a user-selected style. In an animation package the camera-motion base path is typically defined by the user via function curves; alternatively, the camera path can be obtained with a mixed-reality camera in a motion-capture studio. As shown in our experiments, the resulting shots are still fully artist-controlled, but appear richer and more physically plausible.
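The paper's spectral learning approach is not specified here. As a toy illustration of the underlying idea, the following sketch extracts the high-frequency detail of a recorded camera coordinate track and adds a scaled copy of it to a smooth virtual base path. The signals and the scaling factor are invented; the actual system learns style-dependent spectral properties of location, orientation and acceleration.

```python
import numpy as np

def highpass_detail(signal, cutoff_bins=5):
    """Keep only the high-frequency part of a 1D camera coordinate track."""
    spectrum = np.fft.rfft(signal)
    spectrum[:cutoff_bins] = 0.0          # discard the smooth, low-frequency motion
    return np.fft.irfft(spectrum, n=len(signal))

rng = np.random.default_rng(0)
n = 512
t = np.linspace(0, 1, n)

# Hypothetical recorded physical camera track: smooth move plus hand-held jitter.
recorded = np.sin(2 * np.pi * t) + 0.02 * rng.standard_normal(n)
# Smooth user-defined virtual base motion, e.g. from an animation package.
base = np.linspace(0.0, 3.0, n)

detail = highpass_detail(recorded)        # "style" detail taken from the footage
stylized = base + 0.5 * detail            # enriched virtual camera path

print(f"added detail RMS: {np.sqrt(np.mean(detail ** 2)):.4f}")
```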
Abstract:
Non-verbal communication (NVC) is considered to represent more than 90 percent of everyday communication. In virtual worlds, this important aspect of interaction between virtual humans (VHs) is largely neglected. This paper presents a user-test study of the impact of automatically generated, graphics-based NVC expression on dialog quality: first, we wanted to compare impassive and emotional facial-expression simulation in terms of their impact on chatting; second, we wanted to see whether people like chatting within a 3D graphical environment. Our model only proposes facial expressions and head movements induced from spontaneous chatting between VHs. Only subtle facial expressions are used as non-verbal cues, i.e. those related to the emotional model. Motion-capture animations of hand gestures, such as cleaning glasses, were used at random to make the virtual human lively. After briefly introducing the technical architecture of the 3D-chatting system, we focus on two aspects of chatting through VHs. First, what is the influence of facial expressions induced from the text dialog? For this purpose, we exploited a previously developed emotion engine that extracts emotional content from a text and depicts it on a virtual character [GAS11]. Second, as our goal was not the automatic generation of text, we compared the impact of non-verbal cues in conversation with a chatbot and with a human operator using a Wizard-of-Oz approach. Among the main results, the within-group study, involving 40 subjects, suggests that subtle facial expressions significantly affect not only the quality of experience but also dialog understanding.
Abstract:
In order to display a homogeneous image using multiple projectors, differences in the projected intensities must be compensated. In this paper, we present novel approaches that combine and extend existing techniques for edge blending and luminance harmonization to achieve detailed luminance control. Furthermore, we apply techniques for improving the contrast ratio of multi-segmented displays to the black-offset correction as well. We also present a simple scheme that involves the displayed content in the correction process to dynamically improve contrast in brighter images. In addition, we present a metric to evaluate the different methods and their influence on visual quality.
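As a simple illustration of edge blending (one ingredient of the luminance control described above, not the paper's specific scheme), the following sketch computes complementary attenuation ramps for the overlap region of two projectors while taking a display gamma into account; the overlap width and gamma value are assumptions.

```python
import numpy as np

def blend_ramp(width, gamma=2.2):
    """Smooth attenuation ramp across a projector overlap region.

    A cosine-shaped ramp so that the two projectors' linearized contributions
    sum to one across the overlap; the 1/gamma exponent maps the weights back
    to the non-linear intensity domain actually sent to the projector.
    """
    x = np.linspace(0.0, 1.0, width)
    w = 0.5 * (1.0 + np.cos(np.pi * x))   # 1 at the inner edge, 0 at the outer edge
    return w ** (1.0 / gamma)

left_edge = blend_ramp(200)               # right border of the left projector
right_edge = blend_ramp(200)[::-1]        # left border of the right projector

# In linear light the two ramps are complementary:
linear_sum = left_edge ** 2.2 + right_edge ** 2.2
print(f"max deviation from 1 in linear light: {np.abs(linear_sum - 1).max():.2e}")
```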
Abstract:
Three-dimensional (3D) immersive virtual worlds have been touted as being capable of facilitating highly interactive, engaging, multimodal learning experiences. Much of the evidence gathered to support these claims has been anecdotal but the potential that these environments hold to solve traditional problems in online and technology-mediated education—primarily learner isolation and student disengagement—has resulted in considerable investments in virtual world platforms like Second Life, OpenSimulator, and Open Wonderland by both professors and institutions. To justify this ongoing and sustained investment, institutions and proponents of simulated learning environments must assemble a robust body of evidence that illustrates the most effective use of this powerful learning tool. In this authoritative collection, a team of international experts outline the emerging trends and developments in the use of 3D virtual worlds for teaching and learning. They explore aspects of learner interaction with virtual worlds, such as user wayfinding in Second Life, communication modes and perceived presence, and accessibility issues for elderly or disabled learners. They also examine advanced technologies that hold potential for the enhancement of learner immersion and discuss best practices in the design and implementation of virtual world-based learning interventions and tasks. By evaluating and documenting different methods, approaches, and strategies, the contributors to Learning in Virtual Worlds offer important information and insight to both scholars and practitioners in the field. AU Press is an open access publisher and the book is available for free in PDF format as well as for purchase on our website: http://bit.ly/1W4yTRA
Abstract:
Minimally invasive surgery (MIS) techniques have become a standard in many surgical sub-specialties, due to their many benefits for patients. However, this shift in paradigm implies that surgeons must acquire a completely different set of skills from those normally attributed to open surgery. Training and assessment of these skills has become a major concern in surgical learning programmes, especially considering the social demand for better-prepared professionals and for a decrease in medical errors. Therefore, much effort is being put into the definition of structured MIS learning programmes, in which practice with real patients in the operating room (OR) can be delayed until the resident can attest to a minimum level of psychomotor competence. To this end, skills laboratories are being introduced in hospitals and training centres where residents may practise and be assessed on their psychomotor skills. Technological advances in the fields of tracking technologies and virtual reality (VR) have enabled the creation of new learning systems such as VR simulators or enhanced box trainers. These systems offer a wide range of tasks, as well as the capability of registering objective data on the trainees' performance. Validation studies give proof of their usefulness; however, the levels of evidence reported are in many cases low. More importantly, there is still no clear consensus on topics such as the optimal metrics to be used to assess competence, the validity of VR simulation, the portability of tracking technologies into real surgeries (for advanced assessment), or the degree to which the skills measured and acquired in laboratory environments transfer to the OR.
The purpose of this PhD thesis is to design and validate a conceptual framework for the definition and validation of MIS assessment environments, based on a three-pillared model defining three main stages: pedagogical (tasks and metrics to employ), technological (metric-acquisition technologies) and analytical (interpretation of competence based on the metrics). To this end, a practical implementation of the framework is presented, focused on (1) a video-based tracking system and (2) the determination of surgical competence based on motion-related data from the laparoscopic instruments.
The pedagogical stage led to the design and implementation of a set of basic tasks for MIS psychomotor skills assessment, as well as the definition of motion analysis parameters (MAPs) to measure performance on those tasks. Validation yielded good construct results for parameters such as time, path length, depth, average speed, average acceleration, economy of area and economy of volume. Additionally, face-validation results showed positive acceptance on behalf of the experts, residents and novices.
For the technological stage, the EVA Tracking System is introduced. EVA provides a solution for tracking laparoscopic instruments from the analysis of the monoscopic endoscopic video image. Accuracy tests for the system yielded an average RMSE of 16.33 pp for 2D tracking of the instrument in the image and of 13 mm for 3D spatial tracking. A validation experiment was conducted using one of the tasks and the most relevant MAPs. Construct validation showed significant differences for time, path length, depth, average speed, average acceleration, economy of area and economy of volume, especially between novices and residents/experts. More importantly, concurrent validation with the TrEndo® Tracking System presented high correlation values (>0.7) for 8 of the 9 MAPs proposed.
Finally, the analytical stage compared the performance of three supervised classification strategies in the determination of surgical competence based on motion-related information. The three classifiers were based on linear (linear discriminant analysis, LDA), non-linear (support vector machines, SVM) and fuzzy (adaptive neuro-fuzzy inference systems, ANFIS) approaches. SVM showed slightly better performance than the other two classifiers: on average, accuracy for LDA, SVM and ANFIS was 71.7%, 78.2% and 71%, respectively. However, no statistically significant differences were found between the three.
Overall, this PhD thesis corroborates the postulated research hypotheses regarding the definition of MIS assessment systems, the use of endoscopic video analysis as the main source of information, and the relevance of motion analysis in the determination of surgical competence. New research fields in the training and assessment of MIS surgeons can be built on these foundations, contributing to the definition of structured and objective learning programmes that guarantee the accreditation of well-prepared professionals and promote patient safety in the OR.
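As an illustration of the analytical stage's classifier comparison (not the thesis data or its exact pipeline, and omitting ANFIS, for which there is no standard scikit-learn implementation), the following sketch cross-validates LDA and an SVM on synthetic stand-ins for the motion analysis parameters:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for motion analysis parameters (time, path length, depth,
# average speed/acceleration, economy of area/volume) of novices vs. experts.
rng = np.random.default_rng(0)
n_per_group, n_maps = 30, 7
novices = rng.normal(loc=1.0, scale=0.4, size=(n_per_group, n_maps))
experts = rng.normal(loc=0.6, scale=0.3, size=(n_per_group, n_maps))
X = np.vstack([novices, experts])
y = np.array([0] * n_per_group + [1] * n_per_group)   # 0 = novice, 1 = expert

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf")))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy {acc:.2f}")
```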
Abstract:
This final-year project aims to provide a detailed view of the recording and reproduction systems and technologies used for 3D audio applications and virtual reality environments, analysing the different alternatives available, their operation, features, technical details and fields of application. As a starting point, psychoacoustic theory and the localization of sound sources in space are studied, as the basis for the study of 3D audio systems. Both sound spatialization in a real space and virtual spatialization (the simulation of sound-source localization by signal processing) are covered; these involve acoustic and psychoacoustic phenomena such as the ITD (the interaural time difference between the acoustic signals reaching the two ears), the ILD (the interaural level difference between those signals) and spatial localization through a further set of binaural mechanisms. After this overview of psychoacoustic theory and sound spatialization, the existing recording and reproduction methods for 3D audio are analysed in detail. Specifically, the project examines the stereo system, characterized by sound positioning using two channels; the binaural system, characterized by reconstructing sound fields using HRTFs; multichannel systems, detailing many of the existing alternatives and configurations; the Ambiophonics system, characterized by crosstalk-cancellation filtering; the Ambisonics system, with its various formats and encoding and decoding techniques; and the Wavefield Synthesis system, characterized by recreating sound scenes in large spaces.
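As a small illustration of the ITD cue mentioned above, the following sketch evaluates the classic Woodworth spherical-head approximation, ITD(θ) = (a/c)(θ + sin θ); the head radius and azimuths are typical textbook values, not results from this project.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Classic spherical-head approximation of the interaural time difference.

    ITD(theta) = (a / c) * (theta + sin(theta)), with theta the source azimuth
    in radians, a the head radius in metres and c the speed of sound in m/s.
    """
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:3d} deg -> ITD {woodworth_itd(az) * 1e6:6.1f} microseconds")
```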
Abstract:
Upper limb function impairment is one of the most common sequelae of central nervous system injury, especially in stroke patients and when spinal cord injury produces tetraplegia. Conventional assessment methods cannot provide objective evaluation of patient performance and the effectiveness of therapies. The most common assessment tools are based on rating scales, which are inefficient when measuring small changes and can yield subjective bias. In this study, we designed an inertial sensor-based monitoring system composed of five sensors to measure and analyze the complex upper-limb movements that are common in activities of daily living. We developed a kinematic model with nine degrees of freedom to analyze upper limb and head movements in three dimensions. The system was then validated against a commercial optoelectronic system. These findings suggest that an inertial sensor-based motion tracking system can be used with patients who have upper limb impairment, through data integration with a virtual reality-based neurorehabilitation system.
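The abstract does not specify the nine-degree-of-freedom kinematic model. As an illustrative sketch of the kind of forward kinematics such a model entails, the following code computes the wrist position of a simplified arm (three-DOF shoulder plus one-DOF elbow) from joint angles such as those derived from inertial sensor orientations; segment lengths, the rotation convention and the joint layout are assumptions for the example.

```python
import numpy as np

def rot_x(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(a):
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

def wrist_position(shoulder_zyx, elbow_flexion, upper_arm=0.30, forearm=0.25):
    """Forward kinematics of a simplified arm: 3-DOF shoulder + 1-DOF elbow.

    shoulder_zyx: intrinsic Z-Y-X shoulder angles in radians (as an IMU
    orientation might provide); elbow_flexion: elbow angle in radians.
    Returns the wrist position relative to the shoulder; segment lengths in m.
    """
    R_shoulder = rot_z(shoulder_zyx[0]) @ rot_y(shoulder_zyx[1]) @ rot_x(shoulder_zyx[2])
    elbow = R_shoulder @ np.array([0.0, 0.0, -upper_arm])   # upper arm hangs downwards
    R_elbow = R_shoulder @ rot_x(elbow_flexion)              # flexion about the local x axis
    return elbow + R_elbow @ np.array([0.0, 0.0, -forearm])

# Neutral shoulder, elbow flexed to 90 degrees:
print(wrist_position(np.radians([0.0, 0.0, 0.0]), np.radians(90.0)))
```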
Abstract:
This article presents an interactive Java software platform that enables any user to easily create advanced virtual laboratories (VLs) for Robotics. This novel tool provides both support for developing applications with a full 3D interactive graphical interface and a complete functional framework for modelling and simulating arbitrary serial-link manipulators. In addition, its software architecture includes a large number of functionalities as high-level tools, with the advantage of allowing any user to easily develop complex interactive robotic simulations with a minimum of programming. In order to show the features of the platform, the article describes, step by step, the implementation methodology of a complete VL for Robotics education using the presented approach. Finally, some educational results about the experience of implementing this approach are reported.
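The platform itself is a Java tool whose API is not given in the abstract. Purely as an illustration of the serial-link manipulator modelling it supports (sketched here in Python rather than the platform's Java classes), the following code chains standard Denavit-Hartenberg transforms for a hypothetical planar two-link arm:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_table):
    """Chain the link transforms of a serial-link manipulator."""
    T = np.eye(4)
    for q, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(q, d, a, alpha)
    return T

# Hypothetical planar 2R arm: link lengths 0.5 m and 0.3 m, all twists zero.
dh_table = [(0.0, 0.5, 0.0), (0.0, 0.3, 0.0)]
T = forward_kinematics(np.radians([30.0, 45.0]), dh_table)
print("end-effector position:", np.round(T[:3, 3], 3))
```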