924 results for simulation, virtual reality, opengl, library injection


Relevance:

100.00%

Publisher:

Abstract:

The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interaction obeys complex rules and that multisensory conflicts may disrupt the participant's acceptance of the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones under static conditions. The auditory localization performance observed in the present study is in line with that reported under real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.

Relevance:

100.00%

Publisher:

Abstract:

Mixed reality, the combination of real and virtual scene content, has become ubiquitous in broadcasting. Mixed reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for mixed reality applications which uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which we consider a core requirement for realism and a convincing visual perception, alongside the correct alignment of the two modalities and correct occlusion handling. Furthermore, we present a way to support the placement of virtual content in the scene. The core feature of our system is the incorporation of a time-of-flight (ToF) camera. This device delivers real-time depth images of the environment at a reasonable resolution and quality. The camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile and flexible, and provides convenient calibration procedures. Chroma keying is replaced by depth keying, which is performed efficiently on the graphics processing unit (GPU) using the environment model and the current ToF-camera image. Automatic extraction and tracking of dynamic scene content is thereby performed, and this information is used for planning and alignment of virtual content. A further valuable feature is that depth maps of the mixed content are available in real time, which makes the approach suitable for future 3DTV productions. The paper gives an overview of the whole system, including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content and dynamic object tracking for content planning.
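The depth-keying step the abstract describes, comparing the live ToF depth image against a static environment model so that chroma keying becomes unnecessary, can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's GPU implementation; the tolerance value and the toy depths are assumptions:

```python
import numpy as np

def depth_key(live_depth, background_depth, tolerance=0.05):
    # Foreground = pixels measurably closer to the camera than the
    # static environment model built from ToF depth images.
    return live_depth < (background_depth - tolerance)

# Toy 2x2 scene (depths in metres): a static wall at 3.0 m,
# with a dynamic object appearing at 1.5 m in one pixel.
background = np.full((2, 2), 3.0)
live = background.copy()
live[0, 0] = 1.5
foreground = depth_key(live, background)

# Mutual occlusion between real and virtual content: per pixel,
# the nearer surface wins. A virtual object placed at 2.0 m is
# hidden wherever real content is closer to the camera.
virtual_depth = np.full((2, 2), 2.0)
virtual_visible = virtual_depth < live
```

On a GPU this same per-pixel comparison runs in a fragment shader, which is what makes the keying real-time.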

Relevance:

100.00%

Publisher:

Abstract:

This paper presents two studies pertaining to the use of virtual characters in the clinical forensic rehabilitation of sex offenders. The first study concerns the validation of the perceived age of virtual characters designed to simulate the primary and secondary sexual characteristics of typical adult and child individuals. The second study puts these virtual characters to use in comparing a group of sex offenders and a group of non-deviant individuals on their sexual arousal responses as recorded in virtual immersion. Finally, two clinical vignettes illustrating the use of made-to-measure virtual characters to more closely fit sexual preferences are presented in the Discussion.

Relevance:

100.00%

Publisher:

Abstract:

This article describes a series of experiments carried out to measure the sense of presence in auditory virtual environments. Within the study, a comparison is drawn between self-created signals and signals created by the surrounding environment. Furthermore, it is investigated whether the room characteristics of the simulated environment affect the perception of presence during vocalization or when listening to speech. Finally, the experiments give information about the influence of background signals on the sense of presence. In the experiments, subjects rated the degree of perceived presence in an auditory virtual environment on a perceptual scale. The parameters with the greatest influence on the perception of presence are identified, as are those of minor influence. The results show that, on the one hand, an external speaker has more influence on the sense of presence than an adequate presentation of one’s own voice. On the other hand, both room reflections and adequately presented background signals significantly increase the perceived presence in the virtual environment.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as an action representation language in an imitation-based approach to character animation: first, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and the involved objects. The XSAMPL3D action description can then be used for the synthesis of animations where virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact and human-readable XML format. Thus, XSAMPL3D descriptions are also amenable to manual authoring, e.g. for rapid prototyping of animations when no immersive VR environment is at the animator's disposal. However, when XSAMPL3D descriptions are derived from VR interactions, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes and other hand-object relations during grasping. Such detail would be hard to specify with manual motion-authoring techniques alone. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation.

Relevance:

100.00%

Publisher:

Abstract:

This article begins with some recent considerations about real-time music, inspired by the latest contribution of the French composer Philippe Manoury. Then, through the case study of the scenic performance La Traversée de la nuit, we analyse some perspectives for designing an Informed Virtual Environment dedicated to the live-performance artistic domain.

Relevance:

100.00%

Publisher:

Abstract:

The characteristics of moving sound sources have strong implications for the listener's distance perception and the estimation of velocity. Modifications of typical sound emissions, as currently occurring due to the trend towards electromobility, have an impact on pedestrians' safety in road traffic. Thus, investigations of the relevant cues for velocity and distance perception of moving sound sources are of interest not only to the psychoacoustic community but also for several applications, such as virtual reality, noise pollution and safety aspects of road traffic. This article describes a series of psychoacoustic experiments in this field. Dichotic and diotic stimuli from a set of real-life recordings of a passing passenger car and a motorcycle were presented to test subjects, who in turn were asked to determine the velocity of the object and its minimal distance from the listener. The results of these psychoacoustic experiments show that the estimated velocity is strongly linked to the object's distance. Furthermore, it could be shown that binaural cues contribute significantly to the perception of velocity. In a further experiment, it was shown that, independently of the type of vehicle, the main parameter for distance determination is the maximum sound pressure level at the listener's position. The article suggests a system architecture for the adequate consideration of moving sound sources in virtual auditory environments. Virtual environments can thus be used to investigate the influence of new vehicle powertrain concepts and the related sound emissions on pedestrians' ability to estimate the distance and velocity of moving objects.
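The reported role of the maximum sound pressure level as the dominant distance cue can be illustrated with the free-field point-source law, under the simplifying assumption of pure inverse-distance attenuation (-6 dB per doubling of distance); the reference level and distances below are invented for illustration:

```python
import math

def level_at_distance(level_ref_db, d_ref, d):
    # Free-field point-source attenuation: -6 dB per doubling of distance.
    return level_ref_db - 20.0 * math.log10(d / d_ref)

def distance_from_level(level_ref_db, d_ref, level_db):
    # Inverting the relation: the distance estimate a listener relying
    # on peak level alone would produce.
    return d_ref * 10 ** ((level_ref_db - level_db) / 20.0)

# Hypothetical pass-by: a vehicle emitting 70 dB SPL at 1 m reference
# distance.  The maximum level at the listener occurs at the closest
# point of approach, here assumed to be 5 m.
closest = 5.0  # metres
peak_db = level_at_distance(70.0, 1.0, closest)
```

A quieter powertrain lowers `level_ref_db`, which under this model makes the same pass-by sound like a more distant one, illustrating the safety concern raised in the abstract.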

Relevance:

100.00%

Publisher:

Abstract:

When depicting both virtual and physical worlds, the viewer's impression of presence in these worlds is strongly linked to camera motion. Plausible and artist-controlled camera movement can substantially increase scene immersion. While physical camera motion exhibits subtle details of position, rotation, and acceleration, these details are often missing for virtual camera motion. In this work, we analyze camera movement using signal theory. Our system allows us to stylize a smooth, user-defined virtual base camera motion by enriching it with plausible details. A key component of our system is a database of videos filmed by physical cameras. These videos are analyzed with a camera-motion estimation algorithm (structure-from-motion) and labeled manually with a specific style. By considering spectral properties of location, orientation and acceleration, our solution learns camera-motion details. Consequently, an arbitrary virtual base motion, defined in any conventional animation package, can be automatically modified according to a user-selected style. In an animation package the camera-motion base path is typically defined by the user via function curves. Another possibility is to obtain the camera path using a mixed reality camera in a motion-capture studio. As shown in our experiments, the resulting shots are still fully artist-controlled, but appear richer and more physically plausible.
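The core idea, learning spectral detail from physically filmed footage and adding it to a smooth virtual base path, can be sketched in one dimension. This is a toy illustration, not the authors' system: the "style" here is synthesized noise standing in for the video database, and shaping white noise by the style's magnitude spectrum is an assumed simplification of their learning step:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.linspace(0.0, 1.0, n)

# Smooth user-defined base motion (one coordinate of the camera path).
base = np.sin(2 * np.pi * t)

# Stand-in for a physical handheld recording from the style database.
handheld = base + 0.05 * rng.standard_normal(n)

# Spectral signature of the style = magnitude spectrum of its residual
# (the detail a physical camera adds on top of the intended motion).
residual = handheld - base
style_mag = np.abs(np.fft.rfft(residual))

# Shape white noise with the style spectrum and add it to the base path:
# same energy per frequency band as the style, but new random phase.
noise_spec = np.fft.rfft(rng.standard_normal(n))
unit_phase = noise_spec / np.maximum(np.abs(noise_spec), 1e-12)
detail = np.fft.irfft(unit_phase * style_mag, n)
stylized = base + detail

deviation = np.max(np.abs(stylized - base))
```

The stylized path follows the artist's base trajectory but carries detail with the same spectral character as the recorded style.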

Relevance:

100.00%

Publisher:

Abstract:

Spatial tracking is one of the most challenging and important parts of Mixed Reality environments. Many applications, especially in the domain of Augmented Reality, rely on the fusion of several tracking systems in order to optimize overall performance. While the topic of spatial tracking sensor fusion has already seen considerable interest, most results only deal with the integration of carefully arranged setups as opposed to dynamic sensor-fusion setups. A crucial prerequisite for correct sensor fusion is the temporal alignment of the tracking data from the individual sensors. However, the tracking sensors typically encountered in Mixed Reality applications are generally not synchronized. We present a general method to calibrate the temporal offset between different sensors using Time Delay Estimation, which can be used to perform on-line temporal calibration. By applying Time Delay Estimation to the tracking data, we show that the temporal offset between generic Mixed Reality spatial tracking sensors can be calibrated. To show the correctness and feasibility of this approach, we examined different variations of our method and evaluated various combinations of tracking sensors. We furthermore integrated this time-synchronization method into our UBITRACK Mixed Reality tracking framework to provide facilities for calibration and real-time data alignment.
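Time Delay Estimation between two unsynchronized streams is classically done by locating the peak of their cross-correlation; a minimal sketch under that assumption (the signals below are synthetic stand-ins for tracking data, not UBITRACK output):

```python
import numpy as np

def estimate_delay(signal_a, signal_b):
    # Offset (in samples) of signal_b relative to signal_a, taken from
    # the peak of the full cross-correlation of the mean-free signals.
    corr = np.correlate(signal_b - signal_b.mean(),
                        signal_a - signal_a.mean(), mode="full")
    return int(np.argmax(corr)) - (len(signal_a) - 1)

# Two unsynchronized "tracking sensors" observing the same motion,
# the second one lagging the first by 7 samples.
t = np.linspace(0, 4 * np.pi, 400)
motion = np.sin(t) + 0.3 * np.sin(3 * t)
lag = 7
sensor_a = motion[lag:]
sensor_b = motion[:-lag]
offset = estimate_delay(sensor_a, sensor_b)
```

Once `offset` is known, one stream can be shifted (or its timestamps corrected) before the fusion step, which is the calibration the abstract describes.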

Relevance:

100.00%

Publisher:

Abstract:

Three-dimensional (3D) immersive virtual worlds have been touted as being capable of facilitating highly interactive, engaging, multimodal learning experiences. Much of the evidence gathered to support these claims has been anecdotal but the potential that these environments hold to solve traditional problems in online and technology-mediated education—primarily learner isolation and student disengagement—has resulted in considerable investments in virtual world platforms like Second Life, OpenSimulator, and Open Wonderland by both professors and institutions. To justify this ongoing and sustained investment, institutions and proponents of simulated learning environments must assemble a robust body of evidence that illustrates the most effective use of this powerful learning tool. In this authoritative collection, a team of international experts outline the emerging trends and developments in the use of 3D virtual worlds for teaching and learning. They explore aspects of learner interaction with virtual worlds, such as user wayfinding in Second Life, communication modes and perceived presence, and accessibility issues for elderly or disabled learners. They also examine advanced technologies that hold potential for the enhancement of learner immersion and discuss best practices in the design and implementation of virtual world-based learning interventions and tasks. By evaluating and documenting different methods, approaches, and strategies, the contributors to Learning in Virtual Worlds offer important information and insight to both scholars and practitioners in the field. AU Press is an open access publisher and the book is available for free in PDF format as well as for purchase on our website: http://bit.ly/1W4yTRA

Relevance:

100.00%

Publisher:

Abstract:

This study aims at assessing the skill of several climate field reconstruction (CFR) techniques in reconstructing past precipitation over continental Europe and the Mediterranean at seasonal time scales over the last two millennia from proxy records. A number of pseudoproxy experiments are performed within the virtual reality of a regional paleoclimate simulation at 45 km resolution to analyse different aspects of reconstruction skill. Canonical Correlation Analysis (CCA), two versions of an Analog Method (AM) and Bayesian hierarchical modeling (BHM) are applied to reconstruct precipitation from a synthetic network of pseudoproxies that are contaminated with various types of noise. The skill of the derived reconstructions is assessed through comparison with the precipitation simulated by the regional climate model. Unlike BHM, CCA systematically underestimates the variance. The AM can be adjusted to overcome this shortcoming, presenting an intermediate behaviour between the two aforementioned techniques. However, a trade-off between reconstruction-target correlation and reconstructed variance is a drawback of all CFR techniques. CCA (BHM) presents the largest (lowest) skill in preserving the temporal evolution, whereas the AM can be tuned to improve correlation at the expense of losing variance. While BHM has been shown to perform well for temperature, it relies heavily on prescribed spatial correlation lengths. This assumption is valid for temperature but hardly warranted for precipitation. In general, none of the methods outperforms the others. All experiments agree that a dense and regularly distributed proxy network is required to reconstruct precipitation accurately, reflecting its high spatial and temporal variability. This is especially true in summer, when localised convective precipitation events cause a particularly short de-correlation distance from the proxy location.
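The variance underestimation reported for regression-based techniques such as CCA can be reproduced in miniature with a single noisy pseudoproxy: calibrating a least-squares transfer function and reconstructing from the proxy alone shrinks the variance by roughly the signal-to-total-variance ratio. This is a toy stand-in for the actual pseudoproxy experiments, with invented series:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
target = rng.standard_normal(n)                # "true" precipitation series
proxy = target + rng.standard_normal(n)        # pseudoproxy with white noise

# Calibrate a least-squares transfer function (a one-dimensional stand-in
# for CCA's regression step) and reconstruct from the proxy alone.
slope, intercept = np.polyfit(proxy, target, 1)
recon = slope * proxy + intercept

# With equal signal and noise variance the regression slope is ~0.5,
# so the reconstruction recovers only ~half of the target variance.
var_ratio = recon.var() / target.var()
```

The regression deliberately damps the noisy part of the proxy, which is exactly why correlation-optimal reconstructions lose variance: the trade-off the abstract describes.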

Relevance:

100.00%

Publisher:

Abstract:

Minimally invasive surgery (MIS) techniques are today becoming established as an alternative to traditional surgery, owing to their numerous benefits for patients. This paradigm shift implies that surgeons must learn a set of skills different from those required in open surgery. Training and assessment of these skills has become one of the main concerns in surgical training programmes, largely due to the pressure of a society that demands well-prepared surgeons and a reduction in the number of medical errors. Special attention is therefore being paid to the definition of new programmes that allow the training and assessment of psychomotor skills in safe environments before new surgeons operate on real patients. To this end, hospitals and training centres are gradually incorporating training facilities where residents can practise and learn without risk. It is increasingly common for these laboratories to have virtual simulators or physical simulators capable of registering the movements of each resident's instruments. These simulators offer a wide variety of training and assessment tasks, as well as the possibility of obtaining objective information about the exercises. The validation studies carried out demonstrate their usefulness; even so, the levels of evidence reported are often insufficient. More importantly, there is no clear consensus when it comes to defining which metrics are most useful for characterising surgical skill.

The aim of this doctoral thesis is to design and validate a conceptual framework for the definition and validation of environments for skills assessment in MIS, based on a three-stage model: pedagogical (tasks and metrics to employ), technological (metric-acquisition technologies) and analytical (interpretation of competence based on the metrics). To this end, the practical implementation of an environment is described, based on (1) an instrument-tracking system founded on the analysis of the laparoscopic video; and (2) the determination of skill from instrument-motion metrics. For the pedagogical stage, a set of tasks for the assessment of basic psychomotor skills was designed and implemented, together with a series of motion metrics. Construct validation carried out on them showed good results for time, path length, depth, average speed, average acceleration, economy of area and economy of volume. Additionally, the face-validation results were generally positive in all the groups considered (novices, residents, experts). For the technological stage, the EVA Tracking System was introduced, a solution for tracking surgical instruments based on the analysis of the endoscopic video. The accuracy of the system was assessed at 16.33 pp RMS for 2D tracking of the tool in the image, and at 13 mm RMS for its spatial tracking. Construct validation with one of the assessment tasks showed good results for time, path length, depth, average speed, average acceleration, economy of area and economy of volume. Concurrent validation with the TrEndo® Tracking System presented high correlation values for 8 of the 9 metrics analysed.

Finally, for the analytical stage, the behaviour of three supervised classifiers in automatically determining surgical skill from instrument-motion information was compared, based on linear (linear discriminant analysis, LDA), non-linear (support vector machines, SVM) and fuzzy (adaptive neuro-fuzzy inference systems, ANFIS) approaches. The results show that, on average, SVM performs slightly better: 78.2% versus the 71% and 71.7% obtained by ANFIS and LDA respectively. However, the statistical differences measured between the three were not shown to be significant. Overall, this doctoral thesis corroborates the research hypotheses postulated regarding the definition of skills-assessment systems for minimally invasive surgery, the usefulness of video analysis as a source of information, and the importance of instrument-motion information in characterising surgical skill. On these foundations, new fields of research can be opened that contribute to the definition of structured and objective training programmes, capable of guaranteeing the accreditation of well-prepared surgeons and promoting patient safety in the operating room.

Abstract

Minimally invasive surgery (MIS) techniques have become a standard in many surgical sub-specialties, due to their many benefits for patients. However, this shift in paradigm implies that surgeons must acquire a completely different set of skills from those normally attributed to open surgery. Training and assessment of these skills has become a major concern in surgical learning programmes, especially considering the social demand for better-prepared professionals and for the decrease of medical errors.
Therefore, much effort is being put into the definition of structured MIS learning programmes, where practice with real patients in the operating room (OR) can be delayed until the resident can attest to a minimum level of psychomotor competence. To this end, skills' laboratory settings are being introduced in hospitals and training centres where residents may practice and be assessed on their psychomotor skills. Technological advances in the field of tracking technologies and virtual reality (VR) have enabled the creation of new learning systems such as VR simulators or enhanced box trainers. These systems offer a wide range of tasks, as well as the capability of registering objective data on the trainees' performance. Validation studies give proof of their usefulness; however, the levels of evidence reported are in many cases low. More importantly, there is still no clear consensus on topics such as the optimal metrics that must be used to assess competence, the validity of VR simulation, the portability of tracking technologies into real surgeries (for advanced assessment) or the degree to which the skills measured and obtained in laboratory environments transfer to the OR. The purpose of this PhD is to design and validate a conceptual framework for the definition and validation of MIS assessment environments based on a three-pillared model defining three main stages: pedagogical (tasks and metrics to employ), technological (metric acquisition technologies) and analytical (interpretation of competence based on metrics). To this end, a practical implementation of the framework is presented, focused on (1) a video-based tracking system and (2) the determination of surgical competence based on the laparoscopic instruments' motion-related data.
The pedagogical stage’s results led to the design and implementation of a set of basic tasks for MIS psychomotor skills’ assessment, as well as the definition of motion analysis parameters (MAPs) to measure performance on said tasks. Validation yielded good construct results for parameters such as time, path length, depth, average speed, average acceleration, economy of area and economy of volume. Additionally, face validation results showed positive acceptance on behalf of the experts, residents and novices. For the technological stage the EVA Tracking System is introduced. EVA provides a solution for tracking laparoscopic instruments from the analysis of the monoscopic video image. Accuracy tests for the system are presented, which yielded an average RMSE of 16.33 pp for 2D tracking of the instrument on the image and of 13 mm for 3D spatial tracking. A validation experiment was conducted using one of the tasks and the most relevant MAPs. Construct validation showed significant differences for time, path length, depth, average speed, average acceleration, economy of area and economy of volume; especially between novices and residents/experts. More importantly, concurrent validation with the TrEndo® Tracking System presented high correlation values (>0.7) for 8 of the 9 MAPs proposed. Finally, the analytical stage allowed comparing the performance of three different supervised classification strategies in the determination of surgical competence based on motion-related information. The three classifiers were based on linear (linear discriminant analysis, LDA), non-linear (support vector machines, SVM) and fuzzy (adaptive neuro fuzzy inference systems, ANFIS) approaches. Results for SVM show slightly better performance than the other two classifiers: on average, accuracy for LDA, SVM and ANFIS was of 71.7%, 78.2% and 71% respectively. However, when confronted, no statistical significance was found between any of the three.
Overall, this PhD corroborates the investigated research hypotheses regarding the definition of MIS assessment systems, the use of endoscopic video analysis as the main source of information and the relevance of motion analysis in the determination of surgical competence. New research fields in the training and assessment of MIS surgeons can be proposed based on these foundations, in order to contribute to the definition of structured and objective learning programmes that guarantee the accreditation of well-prepared professionals and the promotion of patient safety in the OR.
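Of the three classifiers compared in the analytical stage, the linear one (LDA) is simple enough to sketch from scratch. Below is a minimal two-class Fisher discriminant applied to hypothetical motion metrics; the class means, spreads and metric choice are invented for illustration and are not the thesis data:

```python
import numpy as np

def fit_lda(X0, X1):
    # Two-class Fisher discriminant: w = Sw^{-1} (mu1 - mu0),
    # with the decision threshold at the midpoint of the projected means.
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, mu1 - mu0)
    threshold = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, threshold

def predict(X, w, threshold):
    # 1 = "expert", 0 = "novice".
    return (X @ w > threshold).astype(int)

rng = np.random.default_rng(1)
# Hypothetical MAPs per trial: [completion time (s), path length (mm)].
novices = rng.normal([120.0, 900.0], [20.0, 120.0], size=(40, 2))
experts = rng.normal([70.0, 600.0], [15.0, 90.0], size=(40, 2))

w, thr = fit_lda(novices, experts)
correct = np.sum(predict(novices, w, thr) == 0) + \
          np.sum(predict(experts, w, thr) == 1)
accuracy = correct / 80.0
```

An SVM or ANFIS classifier would replace the projection-and-threshold step with a non-linear or fuzzy decision rule over the same motion metrics.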

Relevance:

100.00%

Publisher:

Abstract:

“Polymer crystallization is therefore assumed, and is often described in theories, as a multi-step process with many physico-chemical and structural aspects influencing it. Given the very structure of the chain, it is easy to understand that a process which is thermodynamically driven to increase its local ordering is geometrically hindered and therefore cannot be brought to a final equilibrium state. As a result, non-equilibrium structures are usually formed, with different characteristics depending on the temperature, pressure, shear and other physico-chemical parameters of the system.” These words, recently pronounced by Professor Bernhard Wunderlich, one of the most relevant physical chemists to have addressed the study of the physical state of macromolecules in recent decades, anticipate what is in some way made explicit in this report and constitute the leitmotiv of this thesis work. The mechanism of polymer crystallization is still under debate in the polymer-physics community, and most experimental approaches are explained through the LH theory. This classical theory, due to Lauritzen and Hoffman (LH) and a generalization of the theory of crystallization of a small molecule from the vapour phase, satisfactorily describes many experimental observations, although it is far from explaining the complex phenomenon of polymer crystallization. In fact, the original formulation of this theory at the National Bureau of Standards, at the beginning of the 1970s, underwent several important reformulations throughout the 1980s, seeking its adaptation to the experimental findings.

Thus was born crystallization regime III, which makes possible the creation of molecular niches on the surface and which gave rise to the paradigm offered by Sadler et al. to account for the results obtained by neutron scattering and by other techniques such as droplet or rapid-quenching experiments. Above all, the great success of the theory lies in its explanation of the inverse dependence between the size of the molecular fold and the supercooling, the latter defined as the temperature interval between the equilibrium temperature and the crystallization temperature. The specific problem addressed in this thesis is the study of the ordering processes of polyolefins with different degrees of branching by means of numerical simulations. The copolymers studied in this thesis are considered model materials of great molecular homogeneity from the point of view of the distribution of sizes and of branches along the polymer chain. These polyolefins were chosen because of the great experimental interest in knowing how the physical properties of the materials change depending on the type and amount of comonomer used. They are, moreover, models for which a vast amount of experimental information exists, something that is always a concern when creating a virtual reality such as a simulation. The experience of the Biophym group is that simulation results must always have a more or less close experimental counterpart, and that argument is used throughout this report. Empirically, it is very well known that the physical properties of polyolefins ultimately depend on the type and amount of branching present in the polymeric material. However, as explained, there are no adequate theoretical models that account for the underlying mechanisms of the effects of the branches. The report of this work is extensive because of the complexity of the subject.

It begins with an extensive introduction to the basic concepts of a macromolecule that are relevant to understanding the content of the rest of the report. The concepts of flexible macromolecule, distributions and moments, and their behaviour in solution and in the melt, together with the corresponding characteristic parameters, are defined. Special emphasis is placed on the concept of entanglement, considered key when dealing with macromolecules longer than the critical entanglement length. This introduction ends with a review of the state of the art in the simulation of crystallization processes. A second chapter of the work sets out in detail the methodology used in each group of cases. In the first chapter of results, the simulation studies in dilute solution for linear and branched single-chain systems are discussed. This simplest case clearly depends on the torsion potential chosen, as discussed throughout the text. The formation of the "baby" nuclei proposed by Muthukumar appears to be a consequence of the torsion potential, since it favours the most stable torsional states. The analysis of other potentials that are equally used is therefore proposed, and the results obtained on crystallization are discussed accordingly. Next, in a second chapter of results, linear and branched long-chain alkane molecules in a melt are studied by atomistic simulations as a model of polyethylene. The atomistic results, despite their great detail, do not fully capture the experimental effects observed in supercooled melts in the stage prior to the ordered state. For this reason, chapters 3 and 4 of the results discuss systems of short and long chains using two coarse-grained models (CG-PVA and CG-PE). The CG-PE model was developed during this thesis.

The use of coarse-grained models guarantees greater computational efficiency with respect to atomistic models, and they are sufficient to show the phenomena at the scale relevant to crystallization. In all the studies mentioned, the evolution of the ordering and melting processes is followed in isothermal and non-isothermal relaxation simulations. From the simulation models, different physical properties have been evaluated, such as the ordered-segment length, the crystallinity, melting/crystallization temperatures, etc., which allows a comparison with experimental results. It is clearly shown that branched systems delay and hinder the ordering of the polymer chain, and therefore the ordered crystalline regions shrink as the branches grow. As a general conclusion, there appears to be a tendency towards the formation of locally ordered structures that grow as blocks to fill the crystallization space that can be reached at a given temperature and on a given time scale. Finally, it should be noted that the observed effects are in agreement with other theoretical/simulation and experimental results discussed throughout this report. A summary is given in a chapter of conclusions and of future lines of research that open up as a consequence of this work. It should be mentioned that the pace of research has accelerated notably in the final year of the work, partly owing to the notable advantages obtained from the use of the coarse-grained methodology which, despite being very important for this report, does not easily translate into publishable papers. All of this explains why a large part of the results are in the process of publication.

Abstract

“Polymer crystallization is therefore assumed, and in theories often described, to be a multi-step process with many influencing aspects.
Because of the chain structure, it is easy to understand that a process which is thermodynamically forced to increase local ordering but is geometrically hindered cannot proceed into a final equilibrium state. As a result, nonequilibrium structures with different characteristics are usually formed, which depend on temperature, pressure, shearing and other parameters”. These words, recently written by Professor Bernhard Wunderlich, one of the most prominent researchers in polymer physics, capture the leitmotiv of this thesis. The crystallization mechanism of polymers is still under debate in the physics community, and most of the experimental findings are still explained by invoking the LH theory. This classical theory, initially formulated by Lauritzen and Hoffman (LH), is in fact a generalization of the crystallization theory for small molecules from the vapor phase. Even though it describes many experimental observations satisfactorily, it is far from explaining the complex phenomenon of polymer crystallization. The theory was first devised in the early 70s at the National Bureau of Standards and was subsequently reformulated during the 80s to fit the experimental findings. Thus, crystallization regime III was introduced into the theory in order to explain the results found in neutron scattering, droplet, and quenching experiments. This concept defines the roughness of the crystallization surface, leading to the paradigm proposed by Sadler et al. The great success of this theory is its ability to explain the inverse dependence of the molecular folding size on the supercooling, the latter defined as the temperature interval between the equilibrium temperature and the crystallization temperature. The main scope of this thesis is the study of ordering processes in polyolefins with different degrees of branching by means of computer simulations.
The copolymers studied in this work are materials of high molecular homogeneity in terms of both the size and branching distributions of the polymer chain. These polyolefins were selected because of the great interest in understanding their structure–property relationships. It is important to note that a vast amount of experimental data is available for these materials, which is essential for building a simulation as a faithful virtual counterpart of reality. The Biophym research group has wide experience in correlating simulation data with experimental results, an idea that remains central throughout this work. Empirically, it is well known that the physical properties of polyolefins depend on the type and amount of branches present in the polymeric material. However, no suitable models exist to explain the underlying mechanisms associated with branching. This report is extensive because of the complexity of the topic under study. It begins with a general introduction to the basic concepts of macromolecular physics, which is needed to understand the rest of the document. Among the concepts defined in this section are the flexibility of macromolecules, size distributions and moments, and the behavior in solution and in the melt along with the corresponding characteristic parameters. Special emphasis is placed on the concept of "entanglement", a key item when dealing with macromolecules whose size exceeds the critical entanglement length. The introduction finishes with a review of the state of the art in the simulation of crystallization processes. The second chapter of the thesis describes in detail the computational methodology used in each study. In the first results section, we discuss simulation studies in dilute solution for linear and short-chain-branched single-chain models. This simplest case clearly depends on the selected torsion potential, as discussed throughout the text.
For example, the formation of the baby nuclei proposed by Muthukumar seems to result from the effects of the torsion potential. Thus, we propose the analysis of other torsion potentials also used by other research groups, and the results obtained for the crystallization processes are discussed accordingly. Then, in a second results section, we study linear and branched long-chain alkane molecules in a melt by atomistic simulations as a polyethylene-like model. In spite of the great detail given by atomistic simulations, they are not able to fully capture the experimental facts observed in supercooled melts, in particular the pre-ordered states. For this reason, we discuss short- and long-chain systems using two coarse-grained models (CG-PVA and CG-PE) in sections 3 and 4 of chapter 2. The CG-PE model was developed during the thesis. The use of coarse-grained models ensures greater computational efficiency with respect to atomistic models and is sufficient to capture the phenomena at the scale relevant to crystallization. In all the analyses we follow the evolution of the ordering and melting processes in both isothermal and non-isothermal simulations. During this thesis we have obtained different physical properties such as stem length, crystallinity, and melting/crystallization temperatures. We show that branches in the chains delay crystallization and hinder the ordering of the polymer chain; therefore, crystalline regions decrease in size as branching increases. As a general conclusion, macromolecular systems appear to show a tendency to form ordered structures that can grow locally as blocks, occupying the crystallization space attainable at a given temperature and time scale. Finally, it should be noted that the observed effects are consistent with other theoretical/simulation and experimental results. A summary is provided in the conclusions chapter along with the future research lines that open as a result of this report.
It should be mentioned that the research work has accelerated markedly in the last year, partly because of the remarkable benefits obtained from the coarse-grained methodology, which, despite being very important for this thesis, is not easily publishable by itself. All this justifies that most of the results are still in the publication phase.
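The stem-length measure mentioned above can be illustrated with a minimal sketch: given bead coordinates of a coarse-grained chain (as in the CG-PVA/CG-PE models), an ordered stem can be detected as a run of nearly parallel consecutive bonds. The function name, the cosine threshold, and the toy chain below are illustrative assumptions, not the actual analysis code used in the thesis.

```python
import numpy as np

def stem_lengths(positions, cos_threshold=0.95):
    """Find ordered-stem lengths along one coarse-grained chain.

    A 'stem' is a run of consecutive bonds whose successive bond
    vectors stay nearly parallel (cosine above cos_threshold).
    positions: (N, 3) array of bead coordinates along the chain.
    Returns a list of stem lengths, counted in bonds.
    """
    bonds = np.diff(positions, axis=0)
    bonds /= np.linalg.norm(bonds, axis=1, keepdims=True)
    # cosine between each pair of consecutive bond vectors
    cosines = np.einsum("ij,ij->i", bonds[:-1], bonds[1:])
    stems, run = [], 1
    for c in cosines:
        if c > cos_threshold:
            run += 1
        else:
            if run > 1:
                stems.append(run)
            run = 1
    if run > 1:
        stems.append(run)
    return stems

# Toy chain: 10 collinear beads (one long ordered stem), then a sharp
# kink followed by 3 beads in a perpendicular direction.
straight = np.array([[i, 0.0, 0.0] for i in range(10)])
kinked = np.array([[9.0, j, 0.0] for j in range(1, 4)])
chain = np.vstack([straight, kinked])
print(stem_lengths(chain))  # prints [9, 3]
```

From such per-chain stem lists, a crystallinity-like fraction could then be estimated as the number of bonds in stems divided by the total number of bonds.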

Abstract:

Virtual reality (VR) techniques are being used by the scientific community to understand data and draw conclusions from it in an accessible way. However, these techniques are not used frequently for analyzing large amounts of data in the life sciences, particularly in genomics, due to the high complexity of the data (the curse of dimensionality). Nevertheless, new approaches that bring out the truly important characteristics of the data open up the possibility of constructing VR spaces for visually understanding its intrinsic nature. The benefits of representing high-dimensional data in three-dimensional spaces by means of dimensionality reduction and transformation techniques, complemented with a strong component of interaction methods, are well known. Thus, a novel framework designed to help visualize and interact with data about diseases is presented. In this paper, the framework is applied to the Van't Veer breast cancer dataset, while oncologists from La Paz Hospital (Madrid) interact with the obtained results. That is to say, a first attempt to generate a visually tangible model of breast cancer disease in order to support the experience of oncologists is presented.
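As a minimal illustration of the dimensionality-reduction step described above, the following sketch projects a high-dimensional data matrix onto its first three principal components with NumPy. PCA is assumed here as the reduction technique; the framework in the paper may use a different transformation, and the toy matrix merely stands in for a gene-expression dataset.

```python
import numpy as np

def project_to_3d(X):
    """Project high-dimensional samples onto their first three
    principal components, a common first step before placing data
    points in a navigable three-dimensional (VR) space.

    X: (n_samples, n_features) matrix, e.g. gene expression values.
    Returns an (n_samples, 3) array of 3-D coordinates.
    """
    Xc = X - X.mean(axis=0)            # center each feature
    # SVD of the centered data; rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:3].T               # coordinates on the top 3 axes

rng = np.random.default_rng(0)
# toy stand-in for an expression matrix: 50 samples x 200 genes
X = rng.normal(size=(50, 200))
coords = project_to_3d(X)
print(coords.shape)  # prints (50, 3)
```

The resulting 3-D coordinates can then be handed to whatever rendering and interaction layer the VR environment provides.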

Abstract:

The use of data mining techniques for discovering the gene profiles of diseases such as cancer is becoming common in many research projects. These techniques do not usually analyze in depth the relationships between genes across the different manifestations of the disease in individual patients. This kind of analysis takes a considerable amount of time and is not always the focus of the research. However, it is crucial for generating personalized treatments to fight the disease. Thus, this research focuses on finding a mechanism for gene profile analysis that can be used by medical and biological experts. Results: In this research, the MedVir framework is proposed. It is an intuitive mechanism based on the visualization of medical data such as gene profiles, patients, and clinical data. MedVir, which is based on an evolutionary optimization technique, is a dimensionality reduction (DR) approach that presents the data in a three-dimensional space. Furthermore, thanks to virtual reality technology, MedVir allows the expert to interact with the data in order to tailor it to the expert's experience and knowledge.
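The combination of evolutionary optimization and dimensionality reduction can be sketched as follows. This is an illustrative (1+1) evolution strategy that evolves a linear projection to 3-D by minimizing the distortion of pairwise distances (a stress measure); it is not the actual MedVir algorithm, whose details are not given in the abstract, and all names and parameters are assumptions.

```python
import numpy as np

def evolve_projection(X, dim=3, generations=200, seed=0):
    """Evolve a linear map X -> dim dimensions with a (1+1) evolution
    strategy, minimizing the distortion of pairwise distances.
    Illustrates 'evolutionary optimization for dimensionality
    reduction'; not the actual MedVir algorithm.
    """
    rng = np.random.default_rng(seed)
    n, f = X.shape

    def pairdist(Y):
        # full matrix of Euclidean distances between rows of Y
        diff = Y[:, None, :] - Y[None, :, :]
        return np.sqrt((diff ** 2).sum(-1))

    target = pairdist(X)

    def stress(W):
        # squared mismatch between projected and original distances
        return ((pairdist(X @ W) - target) ** 2).sum()

    W = rng.normal(scale=0.1, size=(f, dim))
    best = stress(W)
    for _ in range(generations):
        child = W + rng.normal(scale=0.05, size=W.shape)  # mutate
        s = stress(child)
        if s < best:                                      # select
            W, best = child, s
    return X @ W, best

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 10))     # toy stand-in for gene profiles
Y, final_stress = evolve_projection(X)
print(Y.shape)  # prints (20, 3)
```

Because the (1+1) strategy only ever accepts improvements, the best stress is non-increasing with the number of generations, so longer runs can only tighten the embedding.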