22 results for Dimensional Modeling and Virtual Reality
at Universidad Politécnica de Madrid
Abstract:
While a number of virtual data gloves have been used in stroke rehabilitation, there is little evidence about their use in spinal cord injury (SCI). A pilot clinical experience with nine SCI subjects was performed comparing two groups: one carried out virtual rehabilitation training based on a data glove, the CyberTouch, combined with traditional rehabilitation for 30 minutes a day, twice a week, over two weeks, while the other received only conventional rehabilitation. Furthermore, two functional indexes were developed in order to assess the patients' performance in the sessions: normalized trajectory length and repeatability. While differences between groups were not statistically significant, the data-glove group seemed to obtain better results in the muscle balance and functional parameters, and in the dexterity, coordination and fine-grip tests. Regarding the indexes we implemented, every patient showed an improvement in at least one of them, either along the Y-axis or the Z-axis trajectory. This study may be a step towards new treatments and objective measures that provide more accurate data about a patient's evolution, allowing clinicians to develop rehabilitation treatments adapted to the abilities and needs of each patient.
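The abstract names the two indexes but not their formulas. A minimal sketch of plausible definitions, assuming normalized trajectory length is path length divided by the straight-line endpoint distance and repeatability is the mean deviation of repeated trials from their mean trajectory (the paper's exact formulations may differ):

```python
import math

def normalized_trajectory_length(points):
    """Path length divided by the straight-line distance between endpoints.
    A value near 1 indicates a direct, efficient movement."""
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return path / direct

def repeatability(trials):
    """Mean point-wise deviation of repeated trajectories from their mean
    trajectory (trials resampled to the same number of points beforehand).
    Lower values mean more repeatable movements."""
    n = len(trials[0])
    dims = len(trials[0][0])
    mean_traj = [tuple(sum(t[i][k] for t in trials) / len(trials)
                       for k in range(dims)) for i in range(n)]
    devs = [math.dist(t[i], mean_traj[i]) for t in trials for i in range(n)]
    return sum(devs) / len(devs)
```

For a perfectly straight movement the first index is exactly 1, and identical repeated trials give a repeatability of 0.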
Abstract:
This paper presents a methodology for incorporating Virtual Reality developments into the teaching of manufacturing processes, namely machining processes on numerically controlled machine tools. The paper shows how teaching practice can be supplemented with virtual machine tools whose operation is similar to that of the 'real' machines, while eliminating the risks of use both for users and for the machines.
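A virtual machine tool of this kind ultimately interprets the same part programs as the real controller. As an illustrative sketch only (not the paper's system), a toy interpreter for G00/G01 linear moves shows the idea of driving a simulated tool from standard G-code:

```python
def run_gcode(lines):
    """Minimal virtual CNC: interpret G00/G01 linear moves and return the
    tool path as a list of (x, y, z) positions. Real controllers also
    handle arcs, feed rates, tool offsets, etc.; this is only a teaching
    sketch of the interpretation step."""
    x = y = z = 0.0
    path = [(x, y, z)]
    for line in lines:
        words = line.upper().split()
        if not words or words[0] not in ("G00", "G01"):
            continue  # ignore codes unsupported by this sketch
        for w in words[1:]:
            axis, value = w[0], float(w[1:])
            if axis == "X": x = value
            elif axis == "Y": y = value
            elif axis == "Z": z = value
        path.append((x, y, z))
    return path
```

In a virtual workshop the resulting path would drive the 3D model of the tool, so students can detect collisions or programming errors without risk to themselves or the machine.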
Abstract:
This paper is concerned with the low-dimensional structure of optimal streaks in a wedge-flow boundary layer, which have recently been shown to consist of a unique (up to a constant factor) three-dimensional streamwise-evolving mode, known as the most unstable streaky mode. Optimal streaks exhibit a still unexplored approximate self-similarity (not associated with the boundary-layer self-similarity): the streamwise velocity, rescaled with its maximum, remains almost independent of both the spanwise wavenumber and the streamwise coordinate, whereas the remaining two velocity components do not satisfy this property. The approximate self-similar behavior is analyzed here and exploited to further simplify the description of optimal streaks. In particular, it is shown that streaks can be approximately described in terms of the streamwise evolution of the scalar amplitudes of just three one-dimensional modes, which provide the wall-normal profiles of the streamwise velocity and two combinations of the cross-flow velocity components; the scalar amplitudes obey a singular system of three ordinary differential equations (involving only two degrees of freedom) that approximates well the streamwise evolution of general streaks.
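The abstract does not reproduce the reduced system; purely schematically, a singular linear ODE system of the kind described (three amplitudes, only two effective degrees of freedom) takes the form

```latex
\mathsf{E}(x)\,\frac{\mathrm{d}\mathbf{a}}{\mathrm{d}x} = \mathsf{A}(x)\,\mathbf{a},
\qquad \mathbf{a} = (a_1, a_2, a_3)^{\top},
\qquad \operatorname{rank}\mathsf{E} = 2,
```

where the rank deficiency of \(\mathsf{E}\) imposes one algebraic constraint among the three amplitudes, leaving two degrees of freedom; the actual matrices must be obtained from the mode decomposition derived in the paper.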
Abstract:
We present a novel framework for the encoding latency analysis of arbitrary multiview video coding prediction structures. This framework avoids the need to consider a specific encoder architecture by assuming unlimited processing capacity on the multiview encoder. Under this assumption, only the influence of the prediction structure and the processing times has to be considered, and the encoding latency is computed systematically by means of a graph model. The results obtained with this model are valid for a multiview encoder with sufficient processing capacity and serve as a lower bound otherwise. Furthermore, with the objective of low-latency encoder design with a low penalty on rate-distortion performance, the graph model allows us to identify the prediction relationships that add the most encoding latency. Experimental results for JMVM prediction structures illustrate how low-latency prediction structures with a low rate-distortion penalty can be derived in a systematic manner using the new model.
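Under the unlimited-capacity assumption, the latency of each frame reduces to a longest-path computation over the prediction dependency graph. A minimal sketch of that idea (the frame names and uniform processing times are illustrative, not taken from the paper):

```python
from functools import lru_cache

def encoding_latency(deps, proc_time):
    """Longest-path latency over a prediction dependency DAG.
    `deps` maps each frame to the frames it predicts from; `proc_time`
    maps each frame to its processing time. With unlimited capacity a
    frame can start as soon as all its references are encoded, so its
    latency is its own processing time plus the largest latency among
    its references; the structure's latency is the maximum over frames."""
    @lru_cache(maxsize=None)
    def lat(frame):
        return proc_time[frame] + max((lat(d) for d in deps[frame]), default=0)
    return max(lat(f) for f in deps)
```

For example, with an I-frame, a P-frame referencing it and a B-frame referencing both, all with unit processing time, the chain I0 -> P1 -> B2 yields a latency of 3 units; removing the P1 reference from B2 would drop it to 2, which is exactly the kind of high-latency relationship the graph model exposes.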
Abstract:
Minimally invasive surgery (MIS) techniques have become a standard in many surgical sub-specialties, owing to their many benefits for patients. However, this paradigm shift implies that surgeons must acquire a completely different set of skills from those required in open surgery. Training and assessment of these skills has become a major concern in surgical learning programmes, especially given the social demand for better-prepared professionals and for a reduction in medical errors.
Therefore, much effort is being put into the definition of structured MIS learning programmes, where practice with real patients in the operating room (OR) is delayed until the resident can attest to a minimum level of psychomotor competence. To this end, skills laboratories are being introduced in hospitals and training centres where residents may practice and be assessed on their psychomotor skills. Technological advances in tracking technologies and virtual reality (VR) have enabled new learning systems such as VR simulators or enhanced box trainers. These systems offer a wide range of training and assessment tasks, as well as the capability of registering objective data on trainees' performance. Validation studies attest to their usefulness; however, the levels of evidence reported are in many cases low. More importantly, there is still no clear consensus on topics such as the optimal metrics for assessing competence, the validity of VR simulation, the portability of tracking technologies into real surgeries (for advanced assessment), or the degree to which skills measured in laboratory environments transfer to the OR.
The purpose of this PhD is to design and validate a conceptual framework for the definition and validation of MIS assessment environments, based on a three-pillared model defining three main stages: pedagogical (tasks and metrics to employ), technological (metric-acquisition technologies) and analytical (interpretation of competence based on metrics). To this end, a practical implementation of the framework is presented, focused on (1) a video-based instrument-tracking system and (2) the determination of surgical competence based on the laparoscopic instruments' motion-related data.
The pedagogical stage led to the design and implementation of a set of basic tasks for MIS psychomotor skills assessment, as well as the definition of motion analysis parameters (MAPs) to measure performance on those tasks. Construct validation yielded good results for parameters such as time, path length, depth, average speed, average acceleration, economy of area and economy of volume. Additionally, face validation showed positive acceptance on behalf of the experts, residents and novices. For the technological stage, the EVA Tracking System is introduced: a solution for tracking laparoscopic instruments from the analysis of the monoscopic endoscopic video image. Accuracy tests yielded an average RMSE of 16.33 pp for 2D tracking of the instrument on the image and of 13 mm for 3D spatial tracking. A validation experiment conducted with one of the tasks and the most relevant MAPs showed significant construct differences for time, path length, depth, average speed, average acceleration, economy of area and economy of volume, especially between novices and residents/experts. More importantly, concurrent validation with the TrEndo® Tracking System presented high correlation values (>0.7) for 8 of the 9 MAPs proposed. Finally, the analytical stage compared the performance of three supervised classification strategies in the determination of surgical competence based on motion-related information: linear (linear discriminant analysis, LDA), non-linear (support vector machines, SVM) and fuzzy (adaptive neuro-fuzzy inference systems, ANFIS) approaches. SVM showed slightly better performance than the other two classifiers: on average, accuracy for LDA, SVM and ANFIS was 71.7%, 78.2% and 71%, respectively. However, no statistically significant differences were found between the three.
Overall, this PhD corroborates the research hypotheses regarding the definition of MIS assessment systems, the use of endoscopic video analysis as the main source of information, and the relevance of motion analysis in the determination of surgical competence. New research fields in the training and assessment of MIS surgeons can be built on these foundations, contributing to the definition of structured and objective learning programmes that guarantee the accreditation of well-prepared professionals and promote patient safety in the OR.
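The thesis does not spell out its MAP formulas in the abstract. As a hedged sketch using common definitions from the motion-analysis literature (the exact formulations, in particular economy of volume, may differ from the thesis'):

```python
import math

def motion_analysis_parameters(samples, dt):
    """A few motion analysis parameters (MAPs) for a 3D instrument-tip
    trajectory. `samples` is a list of (x, y, z) tip positions sampled
    every `dt` seconds. Definitions here are common choices from the
    literature, not necessarily the thesis' own."""
    path = sum(math.dist(a, b) for a, b in zip(samples, samples[1:]))
    time = dt * (len(samples) - 1)
    # economy of volume: side of the working-space bounding box relative
    # to the path length (higher = more economical use of space)
    spans = [max(p[k] for p in samples) - min(p[k] for p in samples)
             for k in range(3)]
    volume = spans[0] * spans[1] * spans[2]
    return {
        "time": time,
        "path_length": path,
        "average_speed": path / time,
        "economy_of_volume": volume ** (1 / 3) / path if path else 0.0,
    }
```

Parameters like these are what the construct validation compares across novices, residents and experts, and what the LDA/SVM/ANFIS classifiers take as input features.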
Abstract:
BioMet®Tools is a set of software applications developed for the biometrical characterization of voice in fields such as voice-quality evaluation in laryngology, speech therapy and rehabilitation, singing-voice education, forensic voice analysis in court, emotion detection in voice, and secure access to facilities and services. It was initially conceived as plain research code to estimate the glottal source from voice and obtain the biomechanical parameters of the vocal folds from the spectral density of that estimate. This code grew into what is now the Glottex®Engine package (G®E). Further demands from users in the medical and forensic fields prompted the development of different Graphical User Interfaces (GUIs) to encapsulate user interaction with the G®E. This required the personalized design of different GUIs handling the same G®E, which saved development costs and time. The development model is described in detail, leading to commercial production and distribution. Case studies from its application to laryngology and speech therapy are given and discussed.
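The architecture described, several field-specific front-ends wrapping one shared analysis engine, can be sketched as follows. The class names and the `analyze` result are illustrative placeholders, not the real Glottex®Engine API:

```python
class GlottalEngine:
    """Stand-in for a shared analysis core (the real G®E estimates the
    glottal source and the vocal-fold biomechanical parameters)."""
    def analyze(self, voice_samples):
        # placeholder result; a real engine returns biomechanical parameters
        return {"n_samples": len(voice_samples)}

class ClinicalGUI:
    """Field-specific front-end: each GUI wraps the same engine, so only
    the presentation layer is redeveloped per user community."""
    def __init__(self, engine):
        self.engine = engine
    def report(self, voice_samples):
        result = self.engine.analyze(voice_samples)
        return f"Clinical report over {result['n_samples']} samples"

class ForensicGUI:
    def __init__(self, engine):
        self.engine = engine
    def report(self, voice_samples):
        result = self.engine.analyze(voice_samples)
        return f"Forensic comparison over {result['n_samples']} samples"
```

Because both GUIs delegate to the single engine instance, improvements to the analysis code reach every user community at once, which is the cost- and time-saving point made above.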
Abstract:
This final-year project aims to provide a detailed view of the recording and reproduction systems and technologies used in 3D audio applications and virtual reality environments, analysing the existing alternatives, their operation, features, technical details and scopes of application. As a starting point, psychoacoustic theory and the localization of sound sources in space are studied, as the basis for the study of 3D audio systems. Both sound spatialization in a real space and virtual spatialization (simulation, through signal processing, of the localization of sound sources) are covered. These involve acoustic and psychoacoustic phenomena such as the interaural time difference (ITD), the difference in arrival time of an acoustic signal between the two ears; the interaural level difference (ILD), the difference in intensity or amplitude between the signals reaching the two ears; and spatial localization through a further set of binaural mechanisms. After this overview of psychoacoustic theory and sound spatialization, the existing recording and reproduction methods for 3D audio are analysed in detail. Specifically, the project examines the stereo system, characterized by sound positioning using two channels; the binaural system, characterized by reconstructing sound fields by means of HRTFs; multichannel systems, detailing many of the existing alternatives and configurations; the Ambiophonics system, characterized by crosstalk-cancellation filters; the Ambisonics system, with its various formats and encoding and decoding techniques; and the Wave Field Synthesis system, characterized by recreating sound environments in large spaces.
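The ITD mentioned above has a well-known closed-form approximation, Woodworth's spherical-head model, which a sketch can compute directly (the head radius and speed of sound below are typical textbook values):

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference: ITD = (r/c) * (theta + sin(theta)) for a source at
    azimuth theta (0 deg = straight ahead, 90 deg = directly to one
    side), head radius r in metres and speed of sound c in m/s."""
    theta = math.radians(azimuth_deg)
    return head_radius / c * (theta + math.sin(theta))
```

For a source straight ahead the ITD is zero; for a source directly to one side the model gives roughly 0.65 ms, consistent with the commonly cited maximum ITD for an adult head.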
Abstract:
The deployment of nodes in Wireless Sensor Networks (WSNs) is one of the biggest challenges in this field, as it involves distributing a large number of embedded systems to fulfill a specific application. The connectivity of WSNs is difficult to estimate due to the irregularity of the physical environment, and it affects WSN designers' decisions on deploying sensor nodes. Therefore, in this paper a new method is proposed to enhance the efficiency and accuracy of ZigBee propagation simulation in indoor environments. The method consists of two steps: automatic 3D indoor reconstruction and 3D ray-tracing-based radio simulation. The automatic 3D indoor reconstruction employs an unattended image-classification algorithm and an image-vectorization algorithm to build the environment database accurately, which also significantly reduces the time and effort spent on non-radio propagation issues. The 3D ray tracing is developed using a kd-tree space-division algorithm and a modified polar-sweep algorithm, which accelerates the search for rays over the entire space. A signal propagation model is proposed for the ray-tracing engine that considers both the materials of obstacles and the impact of positions along the radio ray path. Three different WSN deployments are realized in the indoor environment of an office and the results are verified to be accurate. Experimental results also indicate that the proposed method is efficient in its pre-simulation strategy and 3D ray-searching scheme, and is suitable for different indoor environments.
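A material-aware propagation model of the kind described can be sketched with a log-distance path-loss law plus per-material wall attenuation. The reference loss, path-loss exponent and wall values below are illustrative defaults, not the paper's calibrated model:

```python
import math

def received_power_dbm(tx_power_dbm, distance_m, walls,
                       pl0_db=40.0, n=2.0):
    """Log-distance path-loss sketch for a 2.4 GHz ZigBee link: loss at
    1 m (pl0_db) plus 10*n*log10(d) distance decay, plus an attenuation
    per obstacle material that the direct ray crosses. A full ray tracer
    would also account for reflected and diffracted paths."""
    wall_loss_db = {"drywall": 3.0, "brick": 8.0, "concrete": 12.0}
    path_loss = pl0_db + 10 * n * math.log10(distance_m)
    path_loss += sum(wall_loss_db[w] for w in walls)
    return tx_power_dbm - path_loss
```

Given the walls each traced ray intersects (the output of the kd-tree/polar-sweep search), a model like this yields the received power used to estimate node connectivity before deployment.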
Abstract:
In this paper, mathematical programming models are presented for setting the number of services on a specified system of bus lines, intended to serve the high demand levels that may arise from the disruption of rapid transit services or during massive events. By means of these models, two basic quantities can be determined: a) the number of bus units assigned to each line, and b) the number of services that should be assigned to those units. In these models, passenger flow assignment to lines can be considered of the system-optimum type, in the sense that the assignment of units and of services is carried out by minimizing a linear combination of operating costs and total travel time of users. The models consider the delays buses experience as a consequence of passenger boarding and alighting and of queueing at stations, as well as the delays passengers experience waiting at the stations. For the case of a congested, strategy-based, user-optimal passenger assignment model with strict capacities on the bus lines, the use of the method of successive averages is shown.
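At a toy scale, the unit-assignment objective (a linear combination of operating cost and total travel time) can be illustrated by brute-force enumeration. This is only a sketch of the objective structure; the paper's models are proper mathematical programs, and the travel-time functions below are hypothetical inputs:

```python
from itertools import product

def assign_buses(lines, fleet, cost_per_unit, value_of_time):
    """Enumerate all ways to split `fleet` buses among the lines and pick
    the split minimizing  cost_per_unit * units  +  value_of_time * total
    travel time.  `lines` maps a line name to a function giving total
    passenger travel time as a function of buses assigned to it."""
    names = list(lines)
    best = None
    for units in product(range(fleet + 1), repeat=len(names)):
        if sum(units) != fleet:
            continue  # use the whole fleet in this sketch
        objective = cost_per_unit * sum(units) + value_of_time * sum(
            lines[name](u) for name, u in zip(names, units))
        if best is None or objective < best[0]:
            best = (objective, dict(zip(names, units)))
    return best
```

With congestion-style travel times such as `demand / (units + 1)`, the enumeration sends more buses to the line with higher demand, which is the qualitative behavior the optimization models formalize at network scale.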
Abstract:
The Atomic Physics Group at the Institute of Nuclear Fusion (DENIM) in Spain has accumulated experience over the years in developing a collection of computational models and tools for determining relevant microscopic properties of, mainly, ICF and laser-produced plasmas in a variety of conditions. In this work, several applications of those models are presented.
Abstract:
The mechanisms of growth of a circular void by plastic deformation were studied by means of molecular dynamics in two dimensions (2D). While previous molecular dynamics (MD) simulations in three dimensions (3D) have been limited to small voids (up to ≈10 nm in radius), this strategy allows us to study the behavior of voids of up to 100 nm in radius. MD simulations showed that plastic deformation was triggered by the nucleation of dislocations at the atomic steps of the void surface over the whole range of void sizes studied. The yield stress, defined as the stress necessary to nucleate stable dislocations, decreased with temperature, but the void growth rate was not very sensitive to this parameter. Simulations under uniaxial tension, uniaxial deformation and biaxial deformation showed that the void growth rate increased very rapidly with multiaxiality but did not depend on the initial void radius. These results were compared with previous 3D MD and 2D dislocation dynamics simulations to establish a map of mechanisms and size effects for plastic void growth in crystalline solids.
Abstract:
Ripple-based controls can strongly reduce the required output capacitance in PowerSoC converters thanks to a very fast dynamic response. Unfortunately, these controls are prone to sub-harmonic oscillations, and several parameters affect the stability of these systems. This paper derives and validates a simulation-based modeling and stability analysis of a closed-loop V²Ic control applied to a 5 MHz buck converter, using discrete modeling and Floquet theory to predict stability. This allows the derivation of sensitivity analyses for designing robust systems. The work is extended to different V² architectures using the same methodology.
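The Floquet criterion used here reduces to an eigenvalue test on the monodromy matrix, the product of the per-phase state-transition matrices over one switching period. A generic 2x2 sketch of that test (the per-phase maps are abstract inputs; the paper derives them from the discrete model of the V²Ic-controlled buck):

```python
import cmath

def spectral_radius_2x2(m):
    """Largest eigenvalue magnitude of a 2x2 matrix [[a, b], [c, d]],
    via the characteristic polynomial."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

def matmul_2x2(p, q):
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_stable(phase_maps):
    """Floquet stability of a periodically switched linear system: the
    monodromy matrix must have all eigenvalues strictly inside the unit
    circle. An eigenvalue crossing the unit circle at -1 is the classic
    signature of the sub-harmonic oscillation mentioned above."""
    monodromy = [[1.0, 0.0], [0.0, 1.0]]
    for m in phase_maps:
        monodromy = matmul_2x2(m, monodromy)
    return spectral_radius_2x2(monodromy) < 1.0
```

Sweeping a design parameter and repeating this test is one way to obtain the kind of sensitivity analysis the paper uses for robust design.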
Abstract:
The SESAR (Single European Sky ATM Research) program is an ambitious research and development initiative to design the future European air traffic management (ATM) system. The study of the behavior of ATM systems using agent-based modeling and simulation tools can help the development of new methods to improve their performance. This paper presents an overview of existing agent-based approaches in air transportation (paying special attention to the challenges that exist for the design of future ATM systems) and, subsequently, describes a new agent-based approach that we proposed in the CASSIOPEIA project, which was developed according to the goals of the SESAR program. In our approach, we use agent models for different ATM stakeholders and, in contrast to previous work, our solution models new collaborative decision processes for flow traffic management, uses an intermediate level of abstraction (useful for simulations at larger scales), and was designed to be a practical tool (open and reusable) for the development of different ATM studies. It was successfully applied in three studies related to the design of future ATM systems in Europe.
Abstract:
Background: In recent years, Spain has implemented a number of air quality control measures that are expected to lead to a future reduction in fine particle concentrations and an ensuing positive impact on public health. Objectives: We aimed to assess the impact on mortality attributable to a reduction in fine particle levels in Spain in 2014 relative to the estimated level for 2007. Methods: To estimate exposure, we constructed fine particle distribution models for Spain for 2007 (reference scenario) and 2014 (projected scenario) with a spatial resolution of 16x16 km². In a second step, we used the concentration-response functions proposed by cohort studies carried out in Europe (European Study of Cohorts for Air Pollution Effects and the Rome longitudinal cohort) and North America (American Cancer Society cohort, Harvard Six Cities study and Canadian national cohort) to calculate the number of attributable annual deaths corresponding to all causes, all non-accidental causes, ischemic heart disease and lung cancer among persons aged over 25 years (2005-2007 mortality rate data). We examined the effect of the Spanish demographic shift in our analysis using 2007 and 2012 population figures. Results: Our model suggested a mean overall reduction in fine particle levels of 1 µg/m³ by 2014. Taking into account 2007 population data, between 8 and 15 all-cause deaths per 100,000 population could be postponed annually by the expected reduction in fine particle levels. For specific subgroups, estimates varied from 10 to 30 deaths for all non-accidental causes, from 1 to 5 for lung cancer, and from 2 to 6 for ischemic heart disease. The expected burden of preventable mortality would be even higher in the future due to Spanish population growth: taking into account the population older than 30 years in 2012, the absolute mortality impact estimate would increase by approximately 18%.
Conclusions: Effective implementation of air quality measures in Spain, in a short-term projection scenario, would yield an appreciable decline in fine particle concentrations, and this, in turn, would lead to notable health-related benefits. Recent European cohort studies strengthen the evidence of an association between long-term exposure to fine particles and health effects, and could enhance health impact quantification in Europe. Air quality models can contribute to improved assessment of air pollution health impact estimates, particularly in study areas without air pollution monitoring data.
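The attributable-deaths calculation behind studies of this kind follows a standard health-impact-assessment recipe: scale the cohort relative risk to the concentration change, convert it to an attributable fraction, and apply it to baseline mortality. The numbers in the sketch are illustrative, not the paper's estimates:

```python
def attributable_deaths(baseline_rate_per100k, population, rr_per_unit,
                        delta_concentration):
    """Standard health-impact calculation.
    rr_per_unit: relative risk per 1 µg/m³ from a cohort study;
    delta_concentration: concentration change in µg/m³.
    RR is scaled to the change, turned into an attributable fraction
    AF = (RR - 1) / RR, and applied to the baseline deaths."""
    rr = rr_per_unit ** delta_concentration
    attributable_fraction = (rr - 1.0) / rr
    baseline_deaths = baseline_rate_per100k / 100_000 * population
    return attributable_fraction * baseline_deaths
```

Running this with different cohort-derived relative risks (European vs. North American) is what produces the ranges reported above, since each cohort proposes a different concentration-response function.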
Abstract:
This article presents the first musculoskeletal model and simulation of an upper brachial plexus injury. From this model it is possible to analyse forces and ranges of movement in order to develop a robotic exoskeleton to improve rehabilitation. The software currently available for musculoskeletal modeling is varied, and most packages have advanced features for the proper analysis and study of motion simulations. While the more powerful packages are usually expensive, there are free and open-source alternatives offering different tools to perform animations and simulations and to obtain forces and moments of inertia. Among them, Musculoskeletal Modeling Software was selected to construct a model of the upper limb with 7 degrees of freedom and 10 muscles. These muscles are important for two of the movements simulated in this article, which are part of the post-surgery rehabilitation protocol. We produced different movement animations using an inertial measurement unit to capture real data from movements made by a human being. We also simulated the forces produced in elbow flexion-extension and arm abduction-adduction for a healthy subject and for a subject with an upper brachial plexus injury in a postoperative state, to compare the force capable of being produced in each case.
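To give a feel for the kind of force analysis such a model performs, here is a deliberately simplified static version of the elbow-flexion case: the net torque the flexors must balance to hold the forearm and a hand-held load. The segment parameters are illustrative, and a real musculoskeletal model distributes this torque over the individual muscles rather than lumping them:

```python
import math

def elbow_flexor_torque(forearm_mass, forearm_length, load_mass,
                        angle_deg, g=9.81):
    """Static torque (N·m) about the elbow when the forearm is held at
    `angle_deg` above the horizontal with a load in the hand. The forearm
    is modeled as a uniform rod (center of mass at half its length)."""
    lever = math.cos(math.radians(angle_deg)) * forearm_length
    return g * (forearm_mass * lever / 2 + load_mass * lever)
```

The torque is largest with the forearm horizontal and vanishes with the forearm vertical; comparing the torque demand against the reduced muscle capacity of an injured subject is the spirit of the healthy-vs-postoperative comparison described above.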