963 results for Advanced virtual reality system


Relevance:

100.00%

Publisher:

Abstract:

Patients with amnestic mild cognitive impairment are at high risk for developing Alzheimer's disease. Besides episodic memory dysfunction, they show deficits in accessing the contextual knowledge that further specifies a general spatial navigation or executive function (EF) action-planning task. Virtual reality (VR) environments have already been used successfully in cognitive rehabilitation and show increasing potential for use in neuropsychological evaluation, allowing greater ecological validity while being more engaging and user friendly. In our study we employed the in-house virtual action planning museum (VAP-M) platform and a sample of 25 MCI patients and 25 controls, in order to investigate deficits in spatial navigation, prospective memory, and executive function. In addition, we used the morphology of late components in event-related potential (ERP) responses as a marker of cognitive dysfunction. The related measurements were fed to a common classification scheme, facilitating a direct comparison of both approaches. Our results indicate that both the VAP-M and the ERP averages were able to differentiate between healthy elders and patients with amnestic mild cognitive impairment, and agree with the findings of the virtual action planning supermarket (VAP-S). The sensitivity (specificity) was 100% (98%) for the VAP-M data and 87% (90%) for the ERP responses. Considering that ERPs have proven to advance the early detection and diagnosis of "presymptomatic AD," the suggested VAP-M platform appears to be an appealing alternative.
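As a rough illustration of how sensitivity and specificity figures like those above are typically obtained from a common classification scheme (a minimal sketch, not the authors' pipeline; the feature matrices, the linear SVM and the leave-one-out protocol are assumptions):

```python
# Hypothetical sketch: leave-one-out evaluation of a classifier on combined
# VAP-M and ERP features, reporting sensitivity and specificity.
# Feature matrices and the linear SVM are illustrative assumptions.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
vap_m_features = rng.normal(size=(50, 6))   # 25 patients + 25 controls, 6 task metrics (made up)
erp_features = rng.normal(size=(50, 4))     # late-component ERP measures (made up)
X = np.hstack([vap_m_features, erp_features])
y = np.array([1] * 25 + [0] * 25)           # 1 = MCI patient, 0 = healthy control

pred = np.empty_like(y)
for train, test in LeaveOneOut().split(X):
    clf = SVC(kernel="linear").fit(X[train], y[train])
    pred[test] = clf.predict(X[test])

tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```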

Relevance:

100.00%

Publisher:

Abstract:

Humans possess a highly developed sensitivity to facial features. This sensitivity is also deployed towards non-human beings and inanimate objects such as cars. In the present study we aimed to investigate whether car design has a bearing on the behaviour of pedestrians. Methods: An immersive virtual reality environment with a zebra crossing was used to determine a) whether the minimum accepted distance for crossing the street is larger for cars with a dominant appearance than for cars with a friendly appearance (Block 1) and b) whether the speed of dominant cars is overestimated compared to friendly cars (Block 2). In Block 1, the participant's task was to cross the road in front of an approaching car at the latest possible moment. The points in time when entering and leaving the street were measured. In Block 2, participants were asked to estimate the speed of each passing car. An independent sample rated the dominant cars as more dominant, angry and hostile than the friendly cars. Results: None of the predictions regarding car design was confirmed. Instead, there was an effect of starting position: from the centre island, participants entered the road significantly later (smaller accepted distance) and left the road later than when starting from the pavement. Consistently, the speed of the cars was estimated to be significantly lower when standing on the centre island than on the pavement. When the visual size of the cars was entered as a factor (instead of dominance), we found that participants started to cross the road significantly later in front of small cars than in front of big cars, and that the speed of smaller cars was overestimated compared to big cars (size-speed bias). Conclusions: Car size and starting position, not car design, seem to influence road-crossing behaviour.
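For readers unfamiliar with the gap-acceptance measure used in Block 1, the following is a minimal sketch of one conventional way to compute the accepted distance from the crossing-onset time and a constant car speed; the numbers, function name and constant-speed assumption are illustrative, not the study's protocol.

```python
# Illustration of the gap-acceptance measure in Block 1: the accepted distance is
# the distance of the approaching car at the moment the pedestrian steps onto the
# road. The speed and times below are made up for this example.
def accepted_distance(car_speed_mps, time_to_arrival_s, crossing_onset_s):
    """Distance between car and crossing line when the pedestrian enters the road."""
    remaining_time = time_to_arrival_s - crossing_onset_s
    return car_speed_mps * remaining_time

# A car 5.0 s away travelling at 13.9 m/s (~50 km/h); the pedestrian steps off
# the kerb 3.2 s into the trial, so the accepted gap is ~25 m.
print(accepted_distance(13.9, 5.0, 3.2))
```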

Relevance:

100.00%

Publisher:

Abstract:

This study assesses the skill of several climate field reconstruction (CFR) techniques in reconstructing past precipitation over continental Europe and the Mediterranean at seasonal time scales over the last two millennia from proxy records. A number of pseudoproxy experiments are performed within the virtual reality of a regional paleoclimate simulation at 45 km resolution to analyse different aspects of reconstruction skill. Canonical Correlation Analysis (CCA), two versions of an Analog Method (AM) and Bayesian hierarchical modelling (BHM) are applied to reconstruct precipitation from a synthetic network of pseudoproxies that are contaminated with various types of noise. The skill of the derived reconstructions is assessed through comparison with the precipitation simulated by the regional climate model. Unlike BHM, CCA systematically underestimates the variance. The AM can be adjusted to overcome this shortcoming, presenting an intermediate behaviour between the two aforementioned techniques. However, a trade-off between reconstruction-target correlation and reconstructed variance is a drawback of all CFR techniques. CCA (BHM) presents the largest (lowest) skill in preserving the temporal evolution, whereas the AM can be tuned to reproduce better correlation at the expense of losing variance. While BHM has been shown to perform well for temperature, it relies heavily on prescribed spatial correlation lengths; this assumption is valid for temperature but hardly warranted for precipitation. In general, none of the methods outperforms the others. All experiments agree that a dense and regularly distributed proxy network is required to reconstruct precipitation accurately, reflecting its high spatial and temporal variability. This is especially true in summer, when localised convective precipitation events lead to particularly short de-correlation distances around the proxy locations.
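A compact, hedged sketch of the pseudoproxy workflow described above (the noise level, network size and the use of scikit-learn's CCA are assumptions, not the study's configuration): contaminate model-derived pseudoproxies with white noise, calibrate a CCA reconstruction, and score correlation and reconstructed variance against the withheld model field.

```python
# Minimal pseudoproxy sketch: build a field with a few large-scale modes, sample
# noisy pseudoproxies from it, reconstruct with CCA, and score correlation and the
# ratio of reconstructed to target variance. All sizes/noise levels are assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n_years, n_modes, n_proxies, n_gridcells = 1000, 5, 30, 200
modes = rng.normal(size=(n_years, n_modes))               # large-scale variability
loadings = rng.normal(size=(n_modes, n_gridcells))
field = modes @ loadings + 0.5 * rng.normal(size=(n_years, n_gridcells))

proxy_cells = rng.choice(n_gridcells, n_proxies, replace=False)
proxies = field[:, proxy_cells] + rng.normal(size=(n_years, n_proxies))  # white-noise contamination

calib, valid = slice(0, 500), slice(500, None)            # calibration / validation split
cca = CCA(n_components=n_modes).fit(proxies[calib], field[calib])
recon = cca.predict(proxies[valid])

corr = np.mean([np.corrcoef(recon[:, j], field[valid, j])[0, 1] for j in range(n_gridcells)])
var_ratio = recon.var(axis=0).mean() / field[valid].var(axis=0).mean()
print(f"mean grid-cell correlation: {corr:.2f}, variance ratio: {var_ratio:.2f}")
```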

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Crossing a street can be a very difficult task for older pedestrians. With increased age and potential cognitive decline, older people base the decision to cross a street primarily on the vehicles' distance, and not on their speed. Furthermore, older pedestrians tend to overestimate their own walking speed and fail to adapt it to the traffic conditions. Pedestrians' behavior is often tested using virtual reality, which has the advantage of being safe and cost-effective while allowing standardized test conditions. METHODS: This paper describes an observational study with older and younger adults. Street crossing behavior was investigated in 18 healthy younger and 18 older subjects using a virtual reality setting. The aim of the study was to measure behavioral data (such as eye and head movements) and to assess how the two age groups differ in terms of the number of safe street crossings, virtual crashes, and missed street crossing opportunities. Street crossing behavior and eye and head movements of older and younger subjects were compared with non-parametric tests. RESULTS: The results showed that younger pedestrians behaved in a more secure manner while crossing a street than older people. The analysis of eye and head movements revealed that older people looked more at the ground and less at the other side of the street they intended to cross to. CONCLUSIONS: The less secure street crossing behavior found in older pedestrians could be explained by their reduced cognitive and visual abilities, which, in turn, result in difficulties in the decision-making process, especially under time pressure. For both groups, decisions to cross a street are based on the distance of the oncoming cars rather than on their speed. Older pedestrians look more at their feet, probably because they need more time to plan precise stepping movements, and, in turn, pay less attention to the traffic. These findings might help to establish guidelines for improving senior pedestrians' safety, in terms of speed limits, road design, and combined physical-cognitive training.
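The group comparisons mentioned above rely on non-parametric tests; as a generic illustration with made-up counts (not the study's data), a Mann-Whitney U test on the number of safe crossings per subject could be run like this:

```python
# Illustrative non-parametric group comparison (made-up counts, not the study's
# data): compare the number of safe crossings per subject between age groups
# with a Mann-Whitney U test.
from scipy import stats

safe_crossings_younger = [9, 8, 10, 9, 7, 9, 10, 8, 9, 10, 8, 9, 10, 9, 8, 9, 10, 9]
safe_crossings_older   = [6, 7, 5, 8, 6, 7, 6, 5, 7, 6, 8, 6, 5, 7, 6, 7, 6, 5]

u_stat, p_value = stats.mannwhitneyu(safe_crossings_younger, safe_crossings_older,
                                     alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```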

Relevance:

100.00%

Publisher:

Abstract:

The Advanced Land Observation System (ALOS) Phased-Array Synthetic-Aperture Radar (PALSAR) is an L-band frequency (1.27 GHz) radar capable of continental-scale interferometric observations of ice sheet motion. Here, we show that PALSAR data yield excellent measurements of ice motion compared to C-band (5.6 GHz) radar data because of greater temporal coherence over snow and firn. We compare PALSAR velocities from year 2006 in Pine Island Bay, West Antarctica, with those spanning the years 1974 to 2007. Between 1996 and 2007, Pine Island Glacier sped up 42% and ungrounded over most of its ice plain. Smith Glacier accelerated 83% and ungrounded as well. Their largest speed-ups were recorded in 2007. Thwaites Glacier is not accelerating but is widening with time, and its eastern ice shelf doubled its speed. Total ice discharge from these glaciers increased 30% in 12 yr, and the net mass loss increased 170%, from 39 ± 15 Gt/yr to 105 ± 27 Gt/yr. Longer-term velocity changes suggest only a moderate loss in the 1970s. As the glaciers unground into the deeper, smoother beds inland, the mass loss from this region will grow considerably larger in the years to come.
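The quoted 170% figure follows directly from the reported mass-loss values; a quick consistency check with the rounded numbers from the abstract:

```python
# Consistency check of the reported net mass-loss increase, using the rounded
# values quoted in the abstract (39 Gt/yr rising to 105 Gt/yr over the 12-year period).
loss_start, loss_end = 39.0, 105.0                # Gt/yr
relative_increase = (loss_end - loss_start) / loss_start
print(f"{relative_increase:.0%}")                 # ~169%, i.e. the ~170% quoted
```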

Relevance:

100.00%

Publisher:

Abstract:

Cultural content on the Web is available in various domains (cultural objects, datasets, geospatial data, moving images, scholarly texts and visual resources), concerns various topics, is written in different languages, is targeted at both laymen and experts, and is provided by different communities (libraries, archives, museums and the information industry) and individuals (Figure 1). The integration of information technologies and cultural heritage content on the Web is expected to have an impact on everyday life from the point of view of institutions, communities and individuals. In particular, collaborative environments can recreate 3D navigable worlds that offer new insights into our cultural heritage (Chan 2007). However, the main barrier remains finding and relating cultural heritage information, both for the end-users of cultural content and for the organisations and communities managing and producing it. In this paper, we explore several visualisation techniques for supporting cultural interfaces, where the role of metadata is essential for supporting search and communication among end-users (Figure 2). A conceptual framework was developed to integrate the data, purpose, technology, impact, and form components of a collaborative environment. Our preliminary results show that collaborative environments can help with cultural heritage information sharing and communication tasks because of the way in which they provide a visual context to end-users. They can be regarded as distributed virtual reality systems that offer graphically realised, potentially infinite, digital information landscapes. Moreover, collaborative environments also provide a new way of interaction between an end-user and a cultural heritage dataset. Finally, the visualisation of a dataset's metadata plays an important role in helping end-users in their search for heritage content on the Web.

Relevance:

100.00%

Publisher:

Abstract:

Purpose: Surgical simulators are currently essential within any laparoscopic training program because they provide a low-stakes, reproducible and reliable environment in which to acquire basic skills. The purpose of this study is to determine the training learning curve based on different metrics corresponding to five tasks included in the SINERGIA laparoscopic virtual reality simulator. Methods: Thirty medical students without surgical experience participated in the study. Five tasks of SINERGIA were included: Coordination, Navigation, Navigation and touch, Accurate grasping and Coordinated pulling. Each participant was trained in SINERGIA. This training consisted of eight sessions (R1–R8) of the five mentioned tasks and was carried out on two consecutive days, with four sessions per day. A statistical analysis was performed, and the results of R1, R4 and R8 were compared pair-wise with the Wilcoxon signed-rank test. Significance was considered at a P value <0.005. Results: In total, 84.38% of the metrics provided by SINERGIA and included in this study show significant differences when comparing R1 and R8. Metrics mostly improve in the first part of the training (75.00% of metrics when R1 and R4 are compared vs. 37.50% when R4 and R8 are compared). In the tasks Coordination and Navigation and touch, all metrics improve. On the other hand, Navigation improves in only 60% of the analyzed metrics. Most learning curves show an improvement, with better results in the fulfilment of the different tasks. Conclusions: The learning curves of the metrics that assess the basic psychomotor laparoscopic skills acquired in the SINERGIA virtual reality simulator show a faster learning rate during the first part of the training. Nevertheless, eight repetitions of the tasks are not enough to acquire all the psychomotor skills that can be trained in SINERGIA. Therefore, and based on these results together with previous work, SINERGIA could be used as a training tool within a properly designed training program.
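As an illustration of the pair-wise session comparison described above (synthetic scores, not actual SINERGIA metrics), the Wilcoxon signed-rank tests between repetitions R1, R4 and R8 could be run as follows:

```python
# Illustrative pair-wise comparison of a single task metric across sessions
# R1, R4 and R8 with the Wilcoxon signed-rank test (synthetic values, not
# actual SINERGIA metrics).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
r1 = rng.normal(60, 8, 30)                 # e.g. task completion time (s) for 30 students
r4 = r1 - rng.normal(15, 4, 30)            # most of the improvement early in training
r8 = r4 - rng.normal(4, 3, 30)             # smaller improvement later on

for label, a, b in [("R1 vs R4", r1, r4), ("R4 vs R8", r4, r8), ("R1 vs R8", r1, r8)]:
    stat, p = stats.wilcoxon(a, b)
    print(f"{label}: W = {stat:.1f}, p = {p:.4f}")
```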

Relevance:

100.00%

Publisher:

Abstract:

The Hispanic Rite is the liturgy celebrated by the Christians of the Iberian Peninsula before the imposition of the Roman Rite in the mid-eleventh century. As in other early Christian liturgies, music was at the core of the Hispanic Rite. This music, known as Mozarabic Chant, is one of the richest musical repertoires of the Middle Ages. Currently, a research project is underway involving the restoration of the sound of the Hispanic Rite using acoustic virtual reality techniques. The project aims to auralize the Mozarabic Chant in its primitive environment, that is, taking into account the acoustic characteristics of the pre-Romanesque churches in their original state. For this purpose, anechoic recordings were made of a number of musical pieces representative of the Mozarabic Chant repertoire. In total, eight (8) musical pieces were recorded, each of them interpreted by six (6) different singers. The recordings were made using a spherical array composed of 32 microphones. This paper describes the most relevant aspects of the recorded musical material, the technical specifications and installation details of the recording equipment, the data processing, and a summary of the results.

Relevance:

100.00%

Publisher:

Abstract:

Immersion and interaction have been identified as key factors influencing the quality of experience in stereoscopic video systems. An experimental prototype designed to explore the influence of these factors in 3D video applications is described here. The focus is on the real-time algorithm for inserting new 3D models into the original video streams. Using this algorithm, our prototype aims to explore a new interaction paradigm, similar to the augmented reality approach, with 3D video applications.
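The abstract does not detail the insertion step; the following is a rough sketch of one common compositing approach (plain alpha blending per view with NumPy), offered only as an assumption-laden illustration rather than the prototype's actual algorithm.

```python
# Rough compositing sketch (not the prototype's algorithm): alpha-blend a rendered
# 3D model into each view of a stereoscopic frame pair. Frames and renders are
# assumed to be already aligned and expressed as float arrays in [0, 1].
import numpy as np

def composite(frame_rgb, render_rgb, render_alpha):
    """Blend the rendered model over the video frame using the render's alpha."""
    alpha = render_alpha[..., None]                       # H x W x 1
    return render_rgb * alpha + frame_rgb * (1.0 - alpha)

h, w = 480, 640
left_frame, right_frame = np.zeros((h, w, 3)), np.zeros((h, w, 3))
left_render, right_render = np.ones((h, w, 3)), np.ones((h, w, 3))
mask = np.zeros((h, w)); mask[200:280, 300:380] = 1.0     # pixels covered by the model

left_out = composite(left_frame, left_render, mask)
right_out = composite(right_frame, right_render, mask)    # slightly shifted in a real stereo pair
```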

Relevance:

100.00%

Publisher:

Abstract:

Multi-projector display systems have become very popular in recent years for use in a wide range of applications such as virtual reality systems, simulators and data visualization, since these applications usually need to represent their data at very high resolution over a large surface. Such systems are cheap for the resolutions they can provide, can be configured to project onto practically any kind of surface, whatever its shape, and are easily scalable. However, in order to produce a seamless image with no geometric or colour discontinuities, they require a precise adjustment. This thesis analyses in detail all the problems that have to be faced when designing and calibrating a projection system of this kind, and proposes a calibration methodology with a series of optimizations that make the adjustment of these systems simpler and faster. The results of applying this methodology to the graphics output of a training simulator are presented and discussed.
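One of the adjustments such displays require is photometric blending in the projector overlap regions; the sketch below shows a generic linear edge-blending ramp with gamma compensation, purely as an illustrative assumption and not the calibration methodology proposed in the thesis.

```python
# Generic edge-blending sketch (not the thesis's calibration method): build
# complementary alpha ramps for two horizontally overlapping projectors so that
# the blended intensities sum to one across the overlap region.
import numpy as np

width, overlap, gamma = 1920, 200, 2.2
ramp = np.linspace(0.0, 1.0, overlap)

left_mask = np.ones(width)
left_mask[-overlap:] = 1.0 - ramp                  # fade the left projector out on its right edge
right_mask = np.ones(width)
right_mask[:overlap] = ramp                        # fade the right projector in on its left edge

# Pre-compensate for the projector response so the ramps are linear in light output.
left_mask_out = left_mask ** (1.0 / gamma)
right_mask_out = right_mask ** (1.0 / gamma)
```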

Relevance:

100.00%

Publisher:

Abstract:

“Polymer crystallization is therefore assumed, and in theories often described, to be a multi-step process with many influencing aspects. Because of the chain structure, it is easy to understand that a process which is thermodynamically forced to increase local ordering but is geometrically hindered cannot proceed into a final equilibrium state. As a result, non-equilibrium structures with different characteristics are usually formed, which depend on temperature, pressure, shearing and other parameters.” These words, recently written by Professor Bernhard Wunderlich, one of the most prominent researchers in polymer physics over the last decades, capture the leitmotiv of this thesis. The crystallization mechanism of polymers is still under debate in the polymer physics community, and most experimental findings are still explained by invoking the LH theory. This classical theory, formulated by Lauritzen and Hoffman (LH), is a generalization of the crystallization theory for small molecules from the vapor phase. Even though it describes many experimental observations satisfactorily, it is far from explaining the complex phenomenon of polymer crystallization. The theory was first devised at the National Bureau of Standards in the early 1970s and underwent several major reformulations throughout the 1980s to fit the experimental findings. Thus, crystallization regime III was introduced, which allows the creation of molecular niches on the crystal surface and gave rise to the paradigm proposed by Sadler et al. to explain the results of neutron scattering, droplet and rapid-quenching experiments. Above all, the great success of the theory lies in its ability to explain the inverse dependence of the molecular folding size on the supercooling, the latter defined as the temperature interval between the equilibrium temperature and the crystallization temperature.

The specific problem addressed in this thesis is the study of the ordering processes of polyolefins with different degrees of branching by means of numerical simulations. The copolymers studied in this work are considered model materials of high molecular homogeneity from the point of view of both the size and the branching distributions of the polymer chain. These polyolefins were selected because of the great experimental interest in understanding how the physical properties of the materials change depending on the type and amount of comonomer used, and because a vast amount of experimental information exists for them, which is essential when creating a virtual reality such as a simulation. The experience of the Biophym group is that simulation results should always have a more or less close experimental counterpart, and this argument is used throughout this report. Empirically, it is well known that the physical properties of polyolefins ultimately depend on the type and amount of branching present in the polymeric material. However, as explained, there are no adequate theoretical models that explain the underlying mechanisms of the effects of the branches.

This report is extensive due to the complexity of the topic. It begins with a broad introduction to the basic concepts of macromolecular physics that are relevant to understanding the rest of the document. The concepts of a flexible macromolecule, distributions and moments, and its behaviour in solution and in the melt are defined, along with the corresponding characteristic parameters. Special emphasis is placed on the concept of entanglement, considered a key item when dealing with macromolecules longer than the critical entanglement length. The introduction finishes with a review of the state of the art in the simulation of crystallization processes. The second chapter describes in detail the methodology used in each group of cases. In the first results chapter, the simulation studies in dilute solution for linear and branched single-chain systems are discussed. This simplest case clearly depends on the chosen torsion potential, as discussed throughout the text; the formation of the "baby nuclei" proposed by Muthukumar seems to be a consequence of the torsion potential, since it favours the most stable torsional states. Therefore, other commonly used torsion potentials are analysed, and the crystallization results obtained are discussed accordingly. Then, in a second results chapter, linear and branched long-chain alkane molecules in a melt are studied by atomistic simulations as a polyethylene-like model. Despite their great detail, atomistic simulations do not fully capture the experimental effects observed in supercooled melts in the stage prior to the ordered state. For this reason, the third and fourth results chapters discuss short- and long-chain systems using two coarse-grained models (CG-PVA and CG-PE); the CG-PE model was developed during the thesis. The use of coarse-grained models ensures greater computational efficiency with respect to atomistic models and is sufficient to capture the phenomena at the scales relevant for crystallization. In all of these studies, the evolution of the ordering and melting processes is followed in isothermal and non-isothermal relaxation simulations. From the simulation models, different physical properties have been evaluated, such as the ordered-segment (stem) length, the crystallinity and the melting/crystallization temperatures, which allows a comparison with experimental results. It is clearly shown that branching delays and hinders the ordering of the polymer chain, so the ordered crystalline regions decrease in size as branching increases. As a general conclusion, there seems to be a tendency to form locally ordered structures that grow as blocks to fill the crystallization space that can be reached at a given temperature and time scale. Finally, it should be noted that the observed effects are in agreement with other theoretical/simulation and experimental results discussed throughout this report. A summary is given in a chapter of conclusions and future research lines that open up as a consequence of this work. It should be mentioned that the pace of the research has increased markedly in the last year, in part because of the notable advantages obtained from the coarse-grained methodology, which, despite being very important for this report, does not easily translate into publishable work; this justifies that a large part of the results are still in the publication phase.
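Among the quantities evaluated from the simulations is the degree of local chain ordering; as a generic, hedged illustration (not the thesis's exact metric), a nematic-style P2 order parameter over bond vectors can be computed like this:

```python
# Generic illustration (not the thesis's exact metric): estimate local chain
# ordering from bond vectors via the nematic order parameter
# P2 = (3 <cos^2 theta> - 1) / 2, with theta measured against the director.
import numpy as np

def p2_order_parameter(bond_vectors):
    """bond_vectors: (N, 3) array of consecutive-bead difference vectors."""
    u = bond_vectors / np.linalg.norm(bond_vectors, axis=1, keepdims=True)
    q = 1.5 * (u.T @ u) / len(u) - 0.5 * np.eye(3)     # ordering tensor
    eigvals, eigvecs = np.linalg.eigh(q)
    director = eigvecs[:, -1]                          # eigenvector of the largest eigenvalue
    cos_theta = u @ director
    return 0.5 * (3.0 * np.mean(cos_theta ** 2) - 1.0)

rng = np.random.default_rng(3)
print(p2_order_parameter(np.tile([0.0, 0.0, 1.0], (100, 1))))   # fully ordered stems: ~1.0
print(p2_order_parameter(rng.normal(size=(5000, 3))))            # isotropic melt: ~0.0
```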

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, technological development in the field of intelligent transportation systems (ITS) has made it possible to equip vehicles with various advanced driver assistance systems (ADAS), improving the experience and safety of the passengers, especially the driver. Most of these systems are designed to warn the driver about risk situations, such as involuntary lane departure or the proximity of obstacles on the road. However, other ADAS go a step further and are able to cooperate with the driver in controlling the vehicle, or even relieve the driver of some tedious tasks. This last group includes the electronic stability program (ESP), the anti-lock braking system (ABS), cruise control (CC) and the more recent assisted parking systems. Following this line of development, the next step is the removal of the human driver, developing systems capable of driving a vehicle autonomously and with better performance than a human driver.

First of all, this dissertation presents a control architecture for vehicle automation. It is made up of several hardware and software components, grouped according to their main function. The design of this architecture builds on previous work by the AUTOPIA Program, although it introduces notable improvements regarding the efficiency, robustness and scalability of the system. Particularly noteworthy is the development of a localization algorithm based on particle swarms, whose performance is similar to that of well-known particle filters. It is conceived as a method for filtering and fusing the information obtained from the different sensors on board the vehicle, including a GPS (Global Positioning System) receiver, inertial measurement units (IMU) and data taken directly from the sensors installed by the manufacturer, such as wheel speed and steering-wheel position. With this method, the localization problem, which is essential for the development of autonomous driving systems, is properly solved.

The work also studies the feasibility of applying learning and adaptation techniques to the design of controllers for the vehicle, a very time-consuming task when done by hand. As a starting point, Q-learning, a reinforcement learning method, is used to generate a lateral fuzzy controller without any prior knowledge. Subsequently, an on-line tuning method is presented for adapting the longitudinal control to unpredictable disturbances of the environment, such as changes in road slope, wheel friction or the occupants' weight.

Finally, the results obtained during an autonomous driving experiment on real roads are presented. The experiment was carried out in June 2012, driving from San Lorenzo de El Escorial to the facilities of the Center for Automation and Robotics (CAR) in Arganda del Rey. The main goal of this demonstration was to validate the operation, robustness and capability of the proposed architecture when facing the autonomous driving problem under conditions much closer to reality than those achievable on closed test tracks.
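The localization algorithm is described only at a high level; as a hedged sketch of the particle-filter-style fusion it is compared to (illustrative only, not the AUTOPIA swarm-based implementation), GPS fixes and odometry could be combined as follows:

```python
# Hedged sketch of particle-filter-style localization fusing odometry and GPS
# (illustrative only; not the AUTOPIA swarm-based implementation).
import numpy as np

rng = np.random.default_rng(4)
n_particles = 500
particles = rng.normal([0.0, 0.0], 1.0, size=(n_particles, 2))   # candidate (x, y) positions
weights = np.full(n_particles, 1.0 / n_particles)

def predict(particles, speed, heading, dt, motion_noise=0.1):
    """Propagate particles with wheel-speed/steering odometry plus noise."""
    step = np.array([speed * np.cos(heading), speed * np.sin(heading)]) * dt
    return particles + step + rng.normal(0.0, motion_noise, particles.shape)

def update(particles, weights, gps_xy, gps_sigma=2.0):
    """Reweight particles by the GPS likelihood, then resample."""
    d2 = np.sum((particles - gps_xy) ** 2, axis=1)
    w = weights * np.exp(-0.5 * d2 / gps_sigma ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = predict(particles, speed=10.0, heading=0.3, dt=0.1)
particles, weights = update(particles, weights, gps_xy=np.array([1.0, 0.3]))
print("estimated position:", particles.mean(axis=0))
```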

Relevance:

100.00%

Publisher:

Abstract:

Virtual reality (VR) techniques for understanding data and drawing conclusions from it in an intuitive way are being used by the scientific community. However, these techniques are not used frequently for analyzing large amounts of data in the life sciences, particularly in genomics, due to the high complexity of the data (the curse of dimensionality). Nevertheless, new approaches that bring out the really important characteristics of the data raise the possibility of constructing VR spaces to visually understand its intrinsic nature. The benefits of representing high-dimensional data in three-dimensional spaces by means of dimensionality reduction and transformation techniques, complemented with a strong component of interaction methods, are well known. Thus, a novel framework designed to help visualize and interact with data about diseases is presented. In this paper, the framework is applied to the Van't Veer breast cancer dataset, while oncologists from La Paz Hospital (Madrid) interact with the obtained results. That is to say, a first attempt to generate a visually tangible model of breast cancer disease in order to support the experience of oncologists is presented.
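As a minimal stand-in for the dimensionality-reduction step that produces a navigable 3D space (PCA is used here purely for illustration; the framework's actual transformation is not specified in the abstract):

```python
# Minimal stand-in for the dimensionality-reduction step: project high-dimensional
# gene-expression profiles into a 3D space suitable for a VR scene. PCA and the
# synthetic "patients x genes" matrix are illustrative assumptions only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
expression = rng.normal(size=(78, 5000))          # made-up sizes, not the Van't Veer data

coords_3d = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(expression))
print(coords_3d.shape)                            # (78, 3): one 3D point per patient
```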

Relevance:

100.00%

Publisher:

Abstract:

The use of data mining techniques for the gene-profile discovery of diseases such as cancer is becoming common in many research projects. These techniques do not usually analyze in depth the relationships between genes across the different manifestations of the disease in patients. This kind of analysis takes a considerable amount of time and is not always the focus of the research; however, it is crucial for generating personalized treatments to fight the disease. Thus, this research focuses on finding a mechanism for gene-profile analysis that can be used by medical and biology experts. Results: In this research, the MedVir framework is proposed. It is an intuitive mechanism based on the visualization of medical data such as gene profiles, patients, clinical data, etc. MedVir, which is based on an Evolutionary Optimization technique, is a Dimensionality Reduction (DR) approach that presents the data in a three-dimensional space. Furthermore, thanks to Virtual Reality technology, MedVir allows experts to interact with the data and tailor the analysis to their own experience and knowledge.
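MedVir's projection is driven by an evolutionary optimization technique whose details the abstract does not give; the toy (1+1) evolution strategy below only conveys the generic idea of evolving a linear 3D projection that preserves pairwise distances, and is not the MedVir algorithm.

```python
# Toy (1+1) evolution strategy that evolves a linear 3D projection while trying to
# preserve pairwise distances (stress). Generic idea only; not the MedVir algorithm.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(6)
data = rng.normal(size=(60, 40))                 # made-up "patients x features" matrix
target_d = pdist(data)                           # pairwise distances to preserve

def stress(projection):
    return np.sum((pdist(data @ projection) - target_d) ** 2)

proj = rng.normal(size=(40, 3))
best = stress(proj)
for _ in range(2000):                            # mutate and keep the child only if it improves
    child = proj + rng.normal(scale=0.05, size=proj.shape)
    s = stress(child)
    if s < best:
        proj, best = child, s

embedding_3d = data @ proj                       # 3D coordinates to place in the VR scene
```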

Relevance:

100.00%

Publisher:

Abstract:

Acoustic virtual reality technologies offer a very appropriate tool for reconstructing the intangible heritage of the sound of historical enclosures. This work is part of a research project whose objective is the virtual restoration of the sound of the Old Hispanic Rite, consisting of the auralization of the Mozarabic Chant in a series of pre-Romanesque churches of the Iberian Peninsula. This paper presents the most relevant results of the auralizations carried out for the church of Santa María de Melque. For this purpose, a virtual acoustic model of the church was built according to the archaeological documentation of the original state of the building, anechoic recordings of a number of pieces from the early Mozarabic Chant repertoire were made, and the auralizations corresponding to different liturgical configurations of the Old Hispanic Rite were produced.
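Auralization essentially amounts to convolving the anechoic (dry) recordings with impulse responses obtained from the virtual acoustic model; the following is a minimal, self-contained sketch with synthetic signals (the real workflow would use the recorded chant and the model-derived impulse responses).

```python
# Minimal auralization sketch: convolve a dry (anechoic) signal with a room impulse
# response (RIR). Both signals are synthesized here so the example is self-contained;
# in practice the dry signal is the anechoic chant recording and the RIR comes from
# the virtual acoustic model of the church.
import numpy as np
from scipy.signal import fftconvolve

sr = 48000
t = np.arange(2 * sr) / sr
dry = np.sin(2 * np.pi * 220 * t) * np.exp(-t)             # stand-in for a sung note

rng = np.random.default_rng(7)
rir_t = np.arange(int(1.5 * sr)) / sr
rir = rng.normal(size=rir_t.size) * np.exp(-3.0 * rir_t)   # toy RIR with an exponential decay

auralized = fftconvolve(dry, rir)                          # "wet" auralized signal
auralized /= np.max(np.abs(auralized))                     # normalize to avoid clipping
```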