947 results for Genotype - Environment interaction
Abstract:
The interaction of a comet with the solar wind passes through various stages as the comet's activity varies along its orbit. For a comet like 67P/Churyumov–Gerasimenko, the target comet of ESA's Rosetta mission, the features include the formation of a Mach cone, the bow shock, and, close to perihelion, even a diamagnetic cavity. There are different approaches to simulating this complex interplay between the solar wind and the comet's extended neutral gas coma, including magnetohydrodynamic (MHD) and hybrid-type models. The former treats the plasma as fluids (one fluid in basic single-fluid MHD), while the latter treats the ions as individual particles under the influence of the local electric and magnetic fields. The electrons are treated as a charge-neutralizing fluid in both cases. Given the different approaches, the two models yield different results, in particular for a low-production-rate comet. In this paper we show that these differences can be reduced by using a multifluid instead of a single-fluid MHD model and by increasing the resolution of the hybrid model. We show that some major features obtained with a hybrid-type approach, such as the gyration of the cometary heavy ions and the formation of the Mach cone, can be partially reproduced with the multifluid-type model.
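The gyration of cometary heavy ions mentioned above happens on a scale set by the pickup-ion gyroradius, which is why hybrid models (which resolve it) and fluid models can disagree at low production rates. A minimal sketch of that scale, using illustrative solar wind values rather than anything from the paper:

```python
# Gyroradius of a cometary pickup ion (e.g. a water-group ion) in the
# solar wind. Illustrative values only; the paper's simulations compute
# the fields self-consistently.
def gyroradius(mass_kg, charge_c, speed_ms, b_t):
    """r_g = m * v_perp / (|q| * B)"""
    return mass_kg * speed_ms / (abs(charge_c) * b_t)

AMU = 1.660539e-27   # kg
E   = 1.602177e-19   # C

# An 18 amu ion picked up at ~400 km/s in a ~5 nT interplanetary field:
r = gyroradius(18 * AMU, E, 400e3, 5e-9)
print(f"{r / 1e3:.0f} km")  # on the order of 1.5e4 km, far larger than a weak coma
```

Because this gyroradius dwarfs the interaction region of a weakly outgassing comet, a single-fluid description that averages over the gyration misses features such as the Mach cone unless the ion populations are separated, as in the multifluid approach.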
Abstract:
The Jovian moon Europa hosts a thin neutral gas atmosphere, which is tightly coupled to Jupiter's magnetosphere. Magnetospheric ions impacting the surface sputter off neutral atoms, which, upon ionization, carry currents that modify the magnetic field around the moon. The magnetic field in the plasma is also affected by Europa's induced magnetic field. In this paper we investigate the environment of Europa using our multifluid MHD model and focus on the effects introduced by both the magnetospheric and the pickup ion populations. The model self-consistently derives the electron temperature that governs the electron impact ionization process, which is the major source of ionization in this environment. The resulting magnetic field is compared to measurements performed by the Galileo magnetometer, the bulk properties of the modeled thermal plasma population are compared to the Galileo Plasma Subsystem observations, and the modeled surface precipitation fluxes are compared to Galileo Ultraviolet Spectrometer observations. The model shows good agreement with the measured magnetic field and reproduces the basic features of the plasma interaction observed at the moon for both the E4 and the E26 flybys of the Galileo spacecraft. The simulation also produces perturbations asymmetric about the flow direction that account for observed asymmetries.
Abstract:
Periodic comets move around the Sun on elliptical orbits. As such, comet 67P/Churyumov-Gerasimenko (hereafter 67P) spends a portion of its time in the inner solar system, where it is exposed to increased solar insolation. Given the change in heliocentric distance (in the case of 67P, from aphelion at 5.68 AU to perihelion at ~1.24 AU), the comet's activity, i.e. the production of neutral gas and dust, undergoes significant variations. As a consequence, during the inbound portion of the orbit, the mass loading of the solar wind increases and extends to larger spatial scales. This paper investigates how this interaction changes the character of the plasma environment of the comet by means of multifluid MHD simulations. The multifluid MHD model is capable of separating the dynamics of the solar wind ions and the pick-up ions created through photoionization and electron impact ionization in the coma of the comet. We show how two of the major boundaries, the bow shock and the diamagnetic cavity, form and develop as the comet moves through the inner solar system. For 67P this process is reversed on the outbound portion of the orbit, although most likely shifted in time with respect to the perihelion passage. The model presented herein is able to reproduce some of the key features previously accessible only to particle-based models that take full account of the ions' gyration. The results shown herein are in reasonable agreement with these hybrid-type kinetic simulations.
Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have been intrigued for a long time by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties.
In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or the cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is known only to the seller.
Indeed, without the possibility to make repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, and this reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information.
In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to crucially depend on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
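The connection to the Core mentioned above can be made concrete in the simplest setting. For a three-player game with transferable utility (without externalities, so a characteristic function rather than the thesis's partition functions), the Bondareva-Shapley theorem reduces Core nonemptiness to a handful of balancedness inequalities. A sketch of that standard result, not of the thesis's own machinery:

```python
from itertools import combinations

def core_nonempty_3(v):
    """Core nonemptiness for a 3-player TU game, given as a dict mapping
    frozensets of players {1, 2, 3} to coalition worth. Checks the
    balancedness inequalities for the minimal balanced collections at n=3."""
    N = frozenset({1, 2, 3})
    singles = sum(v[frozenset({i})] for i in N)
    pairs = sum(v[frozenset(c)] for c in combinations(N, 2))
    if singles > v[N]:
        return False
    for i in N:                       # collections {i}, {j, k}
        if v[frozenset({i})] + v[N - {i}] > v[N]:
            return False
    return pairs / 2 <= v[N]          # collection of the three pairs, weight 1/2

# Majority game: any pair can secure the whole surplus -> empty Core,
# the textbook case where binding agreements fail to stabilize.
maj = {frozenset(s): (1.0 if len(s) >= 2 else 0.0)
       for n in (1, 2, 3) for s in combinations({1, 2, 3}, n)}
print(core_nonempty_3(maj))  # False
```

An emptied Core of this kind is exactly the situation where, per the thesis, the prospects of efficient agreement hinge on renegotiation and on the structure of free-riding incentives.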
Abstract:
The sputtering efficiency of refractory elements from Mercury's surface under H+ and He++ solar wind ion bombardment, and their contribution to the exosphere, are studied for various solar wind conditions. A 3D solar wind-planetary interaction hybrid model is used to evaluate precipitation maps of the sputter agents on Mercury's surface. Assuming a global mineralogical surface composition, the related sputter yields are calculated by means of the 2013 SRIM code and are coupled with a 3D exosphere model. Because of Mercury's magnetic field, under quiet and nominal solar wind conditions the plasma can only precipitate around the polar areas, while during extreme solar events (fast solar wind, coronal mass ejections, interplanetary magnetic clouds) the solar wind plasma has access to the entire dayside. In that case the release of particles from the planet's surface can result in an exosphere density increase of more than one order of magnitude. The corresponding escape rates are also about an order of magnitude higher. Moreover, the presence of He++ ions in the precipitating solar plasma flow also enhances the release of sputtered elements from the surface into the exosphere. A comparison of our model results with MESSENGER observations of sputtered Mg and Ca in the exosphere shows reasonable quantitative agreement.
Abstract:
This study addressed two purposes: (1) to determine the effect of person-environment fit on the psychological well-being of psychiatric aides and (2) to determine what role the coping resources of social support and control have in the above relationship. Two hundred and ten psychiatric aides working in a state hospital in Texas responded to a questionnaire pertaining to these issues. Person-environment fit, as a measure of occupational stress, was assessed through a modified version of the Work Environment Scale (WES). The WES subscales used in this study were: involvement, autonomy, job pressure, job clarity, and physical comfort. Psychological well-being was measured with the General Well-Being Schedule, which was developed by the National Center for Health Statistics. Co-worker and supervisor support were measured through the WES and, finally, control was assessed through Rotter's Locus of Control Scale. The results of this study were as follows: (1) all person-environment (p-e) fit dimensions appeared to have linear relationships with psychological well-being; (2) the p-e fit - well-being relationship did not appear to be confounded by demographic factors; (3) all p-e fit dimensions were significantly related to well-being except for autonomy; (4) p-e fit was more strongly related to well-being than the environmental measure alone; (5) supervisor support and non-work-related support were found to have additive effects on the relationship between p-e fit and well-being; however, no interaction or buffering effects were observed; (6) locus of control was found to have additive effects in the prediction of well-being and showed interactive effects with work pressure, involvement and physical comfort; and (7) the testing of the overall study model, which included many of the components mentioned above, yielded an R^2 = .27. Implications of these findings are discussed, future research suggested and applications proposed.
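The additive-versus-buffering distinction tested above corresponds to a moderated (hierarchical) regression: well-being is regressed on the fit measure and the coping resource, and a buffering effect appears as an R^2 increment from their product term. A minimal sketch of that test structure, using synthetic data; the variable names are hypothetical stand-ins, not the study's instruments:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 210                                   # sample size matching the study
fit     = rng.normal(size=n)              # person-environment fit score
control = rng.normal(size=n)              # locus-of-control score
# Synthetic outcome with main effects of each and a small interaction:
wellbeing = (0.4 * fit + 0.3 * control + 0.2 * fit * control
             + rng.normal(scale=0.5, size=n))

# Step 1: additive model; step 2: add the product (buffering) term.
X_add = np.column_stack([np.ones(n), fit, control])
X_int = np.column_stack([X_add, fit * control])

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_add, r2_int = r_squared(X_add, wellbeing), r_squared(X_int, wellbeing)
print(round(r2_add, 3), round(r2_int, 3))
# The increment r2_int - r2_add is the evidence for a buffering effect.
```

In the study's terms, finding (5) corresponds to a negligible increment for the support variables, while finding (6) corresponds to a meaningful increment for locus of control with several fit dimensions.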
Abstract:
Introduction: Gene expression is an important process whereby the genotype controls an individual cell's phenotype. However, even genetically identical cells display a variety of phenotypes, which may be attributed to differences in their environment. Yet, even after controlling for these two factors, individual phenotypes still diverge due to noisy gene expression. Synthetic gene expression systems allow investigators to isolate, control, and measure the effects of noise on cell phenotypes. I used mathematical and computational methods to design, study, and predict the behavior of synthetic gene expression systems in S. cerevisiae, which were affected by noise.
Methods: I created probabilistic biochemical reaction models from known behaviors of the tetR and rtTA genes, gene products, and their gene architectures. I then simplified these models to account for essential behaviors of gene expression systems. Finally, I used these models to predict behaviors of modified gene expression systems, which were experimentally verified.
Results: Cell growth, which is often ignored when formulating chemical kinetics models, was essential for understanding gene expression behavior. Models incorporating growth effects were used to explain unexpected reductions in gene expression noise, design a set of gene expression systems with "linear" dose-responses, and quantify the speed with which cells explored their fitness landscapes due to noisy gene expression.
Conclusions: Models incorporating noisy gene expression and cell division were necessary to design, understand, and predict the behaviors of synthetic gene expression systems. The methods and models developed here will allow investigators to more efficiently design new gene expression systems, and infer gene expression properties of TetR-based systems.
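The role of growth highlighted in the Results can be seen in the simplest stochastic expression model: a Gillespie simulation of a birth-death process in which the loss rate lumps together degradation and dilution by cell growth. This is a generic illustration of noisy gene expression, not the dissertation's TetR/rtTA models, and the rate values are arbitrary:

```python
import random

def gillespie_expression(k_prod, k_loss, t_end, seed=1):
    """Exact stochastic simulation of the birth-death process
    0 -> x at rate k_prod, x -> x-1 at rate k_loss * x, where
    k_loss combines degradation and dilution by cell growth."""
    rng = random.Random(seed)
    t, x, samples = 0.0, 0, []
    while t < t_end:
        total = k_prod + k_loss * x          # total event rate
        t += rng.expovariate(total)          # time to next reaction
        if rng.random() * total < k_prod:
            x += 1                           # production event
        else:
            x -= 1                           # degradation/dilution event
        samples.append(x)
    return samples

# The steady-state mean is ~ k_prod / k_loss; for this minimal model the
# stationary distribution is Poisson, so variance ~ mean (Fano factor ~ 1).
s = gillespie_expression(k_prod=50.0, k_loss=1.0, t_end=200.0)
mean = sum(s[len(s) // 2:]) / len(s[len(s) // 2:])
print(round(mean, 1))
```

Faster growth (larger effective k_loss) lowers the mean but also reshapes the noise, which is one intuition for why growth effects cannot be ignored when predicting expression noise.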
Abstract:
My dissertation focuses on developing methods for detecting gene-gene/environment interactions and imprinting effects for human complex diseases and quantitative traits. It includes three sections: (1) generalizing the Natural and Orthogonal InterAction (NOIA) model, a coding technique originally developed for gene-gene (GxG) interaction, and extending it to reduced models; (2) developing a novel statistical approach that allows for modeling gene-environment (GxE) interactions influencing disease risk; and (3) developing a statistical approach for modeling genetic variants displaying parent-of-origin effects (POEs), such as imprinting. In the past decade, genetic researchers have identified a large number of causal variants for human genetic diseases and traits by single-locus analysis, and interaction has now become a central topic in the search for the complex networks of multiple genes and environmental exposures contributing to an outcome. Epistasis, also known as gene-gene interaction, is the departure from additive genetic effects of several genes on a trait, meaning that the same alleles of one gene can display different genetic effects under different genetic backgrounds. In this study, we propose to implement the NOIA model for association studies with interaction for human complex traits and diseases. We compare the performance of the new statistical models we developed and the usual functional model by both simulation study and real data analysis. Both simulation and real data analysis revealed higher power of the NOIA GxG interaction model for detecting both main genetic effects and interaction effects. Through application to a melanoma dataset, we confirmed the previously identified significant regions for melanoma risk at 15q13.1, 16q24.3 and 9p21.3. We also identified potential interactions with these significant regions that contribute to melanoma risk.
Based on the NOIA model, we developed a novel statistical approach that allows us to model effects from a genetic factor and a binary environmental exposure that jointly influence disease risk. Both simulation and real data analyses revealed higher power of the NOIA model for detecting both main genetic effects and interaction effects for both quantitative and binary traits. We also found that estimates of the parameters from logistic regression for binary traits are no longer statistically uncorrelated under the alternative model when there is an association. Applying our novel approach to a lung cancer dataset, we confirmed four SNPs in the 5p15 and 15q25 regions to be significantly associated with lung cancer risk in the Caucasian population: rs2736100, rs402710, rs16969968 and rs8034191. We also validated that rs16969968 and rs8034191 in the 15q25 region interact significantly with smoking in the Caucasian population. Our approach identified a potential interaction of SNP rs2256543 in 6p21 with smoking in contributing to lung cancer risk. Genetic imprinting is the best-known cause of parent-of-origin effects (POEs), whereby a gene is differentially expressed depending on the parental origin of the same alleles. Genetic imprinting affects several human disorders, including diabetes, breast cancer, alcoholism, and obesity, and has been shown to be important for normal embryonic development in mammals. Traditional association approaches ignore this important genetic phenomenon. In this study, we propose a NOIA framework for single-locus association studies that estimates both main allelic effects and POEs. We develop statistical (Stat-POE) and functional (Func-POE) models, and demonstrate conditions for orthogonality of the Stat-POE model. We conducted simulations for both quantitative and qualitative traits to evaluate the performance of the statistical and functional models with different levels of POEs.
Our results showed that the newly proposed Stat-POE model, which ensures orthogonality of variance components if Hardy-Weinberg Equilibrium (HWE) holds or the minor and major allele frequencies are equal, had greater power for detecting the main allelic additive effect than the Func-POE model, which codes according to allelic substitutions, for both quantitative and qualitative traits. The power for detecting the POE was the same for the Stat-POE and Func-POE models under HWE for quantitative traits.
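The orthogonality property central to the NOIA framework can be illustrated at a single biallelic locus without the POE extension: under the published NOIA statistical parameterization, the additive and dominance covariates have zero mean and zero covariance under the genotype frequencies, for arbitrary frequencies, not only under HWE. A sketch of that check (coding taken from the single-locus NOIA statistical model of Alvarez-Castro and Carlborg; this is an illustration, not the dissertation's Stat-POE model):

```python
def noia_statistical_codes(p11, p12, p22):
    """Additive and dominance covariates for genotypes (11, 12, 22)
    in the NOIA statistical model."""
    mu = p12 + 2 * p22                       # mean gene content
    a = (-mu, 1 - mu, 2 - mu)                # mean-centered additive coding
    denom = p11 + p22 - (p11 - p22) ** 2
    d = (-2 * p12 * p22 / denom,             # dominance coding chosen so that
          4 * p11 * p22 / denom,             # E[d] = 0 and E[a * d] = 0 under
         -2 * p11 * p12 / denom)             # the genotype frequencies
    return a, d

freqs = (0.5, 0.2, 0.3)                      # arbitrary, non-HWE frequencies
a, d = noia_statistical_codes(*freqs)
ea  = sum(f * ai for f, ai in zip(freqs, a))
ed  = sum(f * di for f, di in zip(freqs, d))
ead = sum(f * ai * di for f, ai, di in zip(freqs, a, d))
print(abs(ea) < 1e-12, abs(ed) < 1e-12, abs(ead) < 1e-12)  # all True
```

Orthogonality of the covariates is what makes the variance components separable, which is why the Stat-POE model (whose POE covariate needs the extra HWE or equal-frequency condition stated above) gains power over the functional coding.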
Abstract:
A maritime construction is usually a slender line in the ocean. It is usual to see just its narrow surface strip and not to analyse the large amount of submerged material it is supporting. Without doubt, it is the ground to which a notable load is transmitted, in an environment subjected to the periodic, alternating stresses and dynamic forces which the sea's media constitute. Both outer and inner maritime constructions work in a complex fashion. A granular solid (breakwater) breathes with the incident wave flow, dissipating part of the wave energy between its gaps. The backflow tries to extract the different items from the solid block, setting a balance between effective and neutral tensions that follows Terzaghi's principle. On some occasions, fluidification of the armour layer has caused the breakwater to collapse (Sines, Portugal, February 1978). On others, siphoning or liquefaction of the sand supporting monoliths (vertical breakwaters) led them to destruction or collapse (New Barcelona Harbour Mouth, Spain, November 2001). This is why the ground-force-structure interaction is a complicated analysis, with joint design tools still in an incipient state. The purpose of this article is to describe two singular failures in inner maritime constructions in Spain deriving from ground problems (Malaga, July 2004 and Barcelona, January 2007). They occurred recently and their causes are the subject of reflection and analysis.
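Terzaghi's principle invoked above states that the total stress in a saturated soil splits into the effective stress carried by the grain skeleton and the neutral (pore-water) pressure: sigma = sigma' + u. A minimal sketch for a submerged sand bed, with rounded illustrative unit weights rather than site data:

```python
# Effective vertical stress at depth z (m) below the seabed, water depth h (m).
# Terzaghi: sigma_total = sigma_effective + pore_pressure (sigma = sigma' + u)
GAMMA_W   = 10.0   # kN/m^3, unit weight of seawater (rounded)
GAMMA_SAT = 20.0   # kN/m^3, saturated unit weight of sand (assumed)

def effective_stress(z, h):
    sigma_total = GAMMA_W * h + GAMMA_SAT * z   # kPa, weight of water + soil
    u = GAMMA_W * (h + z)                       # kPa, hydrostatic pore pressure
    return sigma_total - u                      # sigma' = buoyant weight * z

print(effective_stress(z=5.0, h=10.0))  # 50.0 kPa
```

Liquefaction of the kind cited for the vertical-breakwater failures corresponds to wave-induced excess pore pressure cancelling this effective stress, at which point the sand loses its shear strength.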
Abstract:
Cultural content on the Web is available in various domains (cultural objects, datasets, geospatial data, moving images, scholarly texts and visual resources), concerns various topics, is written in different languages, is targeted at both laymen and experts, and is provided by different communities (libraries, archives, museums and the information industry) and individuals (Figure 1). The integration of information technologies and cultural heritage content on the Web is expected to have an impact on everyday life from the point of view of institutions, communities and individuals. In particular, collaborative environments can recreate 3D navigable worlds that can offer new insights into our cultural heritage (Chan 2007). However, the main barrier is to find and relate cultural heritage information, both for end-users of cultural contents and for the organisations and communities managing and producing them. In this paper, we explore several visualisation techniques for supporting cultural interfaces, where the role of metadata is essential for supporting search and communication among end-users (Figure 2). A conceptual framework was developed to integrate the data, purpose, technology, impact, and form components of a collaborative environment. Our preliminary results show that collaborative environments can help with cultural heritage information sharing and communication tasks because of the way in which they provide a visual context to end-users. They can be regarded as distributed virtual reality systems that offer graphically realised, potentially infinite, digital information landscapes. Moreover, collaborative environments also provide a new way of interaction between an end-user and a cultural heritage dataset. Finally, the visualisation of a dataset's metadata plays an important role in helping end-users in their search for heritage contents on the Web.
Abstract:
Interaction with smart objects can be accomplished with different technologies, such as tangible interfaces or touch computing, among others. Some of them require the object to be specially designed to be 'smart', and others are limited in the variety and complexity of the possible actions. This paper describes a user-smart object interaction model and prototype based on the well-known event-condition-action (ECA) paradigm, which can work, to a degree, independently of the intelligence embedded in the smart object. It has been designed for mobile devices, which act as mediators between users and smart objects, and it provides an intuitive means for personalizing an object's behavior. When the user is close to an object, the object publishes its 'event & action' capabilities to the user's device. The user may accept the object's module offering, which enables him to configure and control that object, and also its actions with respect to other elements of the environment or the virtual world. The modular ECA interaction model facilitates the integration of different types of objects in a smart space, giving the user full control of their capabilities and facilitating creative mash-ups that build customized functionalities combining physical and virtual actions.
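The event-condition-action pattern described above can be sketched as a tiny rule engine: when an event arrives, every rule registered for that event evaluates its condition against the current context and, if it holds, fires its action. The names below are hypothetical, and this ignores the paper's mobile-mediator and module-discovery machinery:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class EcaRule:
    event: str                              # "E": the triggering event
    condition: Callable[[dict], bool]       # "C": predicate over the context
    action: Callable[[dict], None]          # "A": effect to execute

@dataclass
class SmartObject:
    name: str
    rules: list = field(default_factory=list)

    def add_rule(self, rule: EcaRule):      # how a user personalizes behaviour
        self.rules.append(rule)

    def publish(self, event: str, context: dict):
        for r in self.rules:
            if r.event == event and r.condition(context):
                r.action(context)

log = []
lamp = SmartObject("lamp")
lamp.add_rule(EcaRule(
    event="user_nearby",
    condition=lambda ctx: ctx["lux"] < 50,   # only fire when it is dark
    action=lambda ctx: log.append("lamp_on"),
))
lamp.publish("user_nearby", {"lux": 20})     # condition holds -> action fires
lamp.publish("user_nearby", {"lux": 500})    # condition fails -> no action
print(log)  # ['lamp_on']
```

Because rules are just data, the same mechanism lets one object's events trigger actions on other physical or virtual elements, which is the mash-up idea the paper describes.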
Abstract:
At present, many countries allow citizens or entities to interact with the government outside the telematic environment through a legal representative who is granted powers of representation. However, if the interaction takes place through the Internet, only primitive mechanisms of representation are available, mainly based on non-dynamic offline processes that do not enable quick and easy identity delegation. This paper proposes a system of dynamic identity delegation between two generic entities that can solve the problem of delegated access to the telematic services provided by public authorities. The solution herein is based on the generation of a delegation token created from a proxy certificate, which allows the delegating entity to delegate identity to another on the basis of a subset of its attributes as delegator, while also establishing in the delegation token itself restrictions on the services accessible to the delegated entity and the validity period of the delegation. Further, the paper presents the mechanisms needed to revoke a delegation token and to check whether a delegation token has been revoked. Implications for theory and practice and suggestions for future research are discussed.
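The delegation-token idea can be sketched as a structure binding a subset of the delegator's attributes, a list of permitted services, and a validity window, checked against a revocation list at use time. This is a schematic illustration only: the paper's scheme builds on X.509 proxy certificates and cryptographic signatures, which are omitted here, and all names are hypothetical:

```python
from dataclasses import dataclass
import time

@dataclass(frozen=True)
class DelegationToken:
    delegator: str
    delegate: str
    attributes: frozenset      # subset of the delegator's attributes
    services: frozenset        # services the delegate may access
    not_before: float          # validity period of the delegation
    not_after: float

revoked: set = set()           # revocation list published by the issuer

def authorize(token, service, now=None):
    """Accept a delegated request only if the token is unrevoked,
    within its validity window, and covers the requested service."""
    now = time.time() if now is None else now
    if id(token) in revoked:   # schematic check; a real scheme would
        return False           # look up a serial number or hash
    if not (token.not_before <= now <= token.not_after):
        return False
    return service in token.services

tok = DelegationToken("alice", "bob",
                      frozenset({"citizen_id"}), frozenset({"tax_filing"}),
                      not_before=0.0, not_after=2.0e9)
print(authorize(tok, "tax_filing", now=1.0e9),        # True: delegated service
      authorize(tok, "car_registration", now=1.0e9))  # False: out of scope
```

Restricting the token to a service subset and a validity period is what keeps a compromised or stale delegation from granting the delegate the delegator's full identity.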
Abstract:
Inertial sensors (accelerometers and gyroscopes) have been gradually embedded in the devices that people use in their daily lives thanks to their miniaturization. Nowadays all smartphones have at least an embedded accelerometer and magnetometer, with the most up-to-date ones also containing gyroscopes and barometers. This, together with the steadily growing penetration of smartphones, has made possible the design of systems that rely on the information gathered by wearable sensors (in the future contained in smart textiles) or inertial sensors embedded in a smartphone. The role of these sensors has become key to the development of context-aware and ambient-intelligence applications. Some examples are the monitoring of rehabilitation exercises, the provision of information related to the place that the user is visiting, or the interaction with objects by gesture recognition.
The work of this thesis explores the extent to which this kind of sensor can support activity recognition and pedestrian tracking, both of which have proven essential for these applications. Regarding the recognition of the activity that a user performs, the use of sensors embedded in a smartphone (proximity and light sensors, gyroscopes, magnetometers and accelerometers) has been explored. The activities that are detected belong to the group known as ‘atomic’ activities (e.g. walking at different paces, running, standing), that is, activities or movements that are part of more complex activities such as doing the dishes or commuting. Simple, well-known classifiers that can run embedded in a smartphone have been tested, such as Naïve Bayes, Decision Tables and Decision Trees. In addition, another aim is to estimate the on-body position in which the user is carrying the mobile phone. The objective is not only to choose a classifier that has been trained with data from the corresponding position in order to enhance the classification, but also to trigger actions. Finally, the performance of the different classifiers is analysed, taking into consideration different features and numbers of sensors; the computational and memory load of the classifiers is also measured. On the other hand, an algorithm based on step counting has been proposed, where the acceleration information is provided by an accelerometer placed on the foot. The aim is to detect the activity that the user is performing together with an approximate estimate of the distance covered. The step-counting strategy is based on detecting minima and their corresponding maxima. Although the counting strategy is not novel (it uses time windows and amplitude thresholds to prevent under- or overestimation), no user-specific information is required. The field of indoor pedestrian tracking is crucial due to the lack of a localization standard for such environments.
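As a rough illustration, the peak-and-valley counting strategy described above can be sketched as follows; the sampling rate, window length and amplitude threshold are illustrative assumptions, not the values used in the thesis:

```python
def count_steps(accel_magnitude, fs=50.0, min_step_interval=0.3, min_amplitude=2.0):
    """Count steps in a 1-D acceleration-magnitude signal (m/s^2).

    fs: sampling rate in Hz; min_step_interval: minimum time between
    steps (s); min_amplitude: minimum valley-to-peak swing for a step.
    All three are illustrative defaults, not user-specific parameters.
    """
    min_gap = int(min_step_interval * fs)   # samples between counted peaks
    steps = 0
    last_peak = -min_gap                    # allow a step at the very start
    for i in range(1, len(accel_magnitude) - 1):
        a_prev, a, a_next = accel_magnitude[i - 1:i + 2]
        # candidate step: a local maximum, far enough from the last one
        if a > a_prev and a >= a_next and i - last_peak >= min_gap:
            # look for the preceding local minimum within the time window
            window = accel_magnitude[max(0, i - min_gap):i]
            if a - min(window) >= min_amplitude:
                steps += 1
                last_peak = i
    return steps
```

With a foot-mounted accelerometer the same peak train also gives a coarse distance estimate once multiplied by an average stride length.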
A loosely-coupled, centralized Extended Kalman Filter has been proposed to perform the fusion of inertial and position measurements. Zero-velocity updates are applied whenever the foot is detected to be placed on the ground. The results have been obtained in indoor environments using a triangulation algorithm based on received signal strength (RSS) measurements, and outdoors using GPS. Finally, some applications have been designed to test the usefulness of the work. The first one, called the ‘Activity Monitor’, aims to prevent sedentary behaviours and to help modify habits in order to achieve desired activity-level objectives. Two versions of the application have been implemented. The first one uses the activity estimation based on the step-counting algorithm, which has been integrated in an OSGi mobile framework acquiring the data from a Bluetooth accelerometer placed on the foot of the individual. The second one uses activity classifiers embedded in an Android smartphone. On the other hand, the design of a ‘Travel Logbook’ has been planned. The input of this application is the information provided by the activity and localization modules, external databases (e.g. pictures, points of interest, weather) and mobile embedded and virtual sensors (agenda, camera, etc.). The aim is to detect important events in the journey and gather the information necessary to store it as a journal page.
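For illustration, a heavily simplified loosely-coupled filter of this kind can be sketched with a 2-D position/velocity state. The noise values are arbitrary placeholders and stance-phase detection is assumed to happen elsewhere, so this shows only the update structure, not the thesis implementation:

```python
import numpy as np

class PVFilter:
    """Toy 2-D position/velocity Kalman filter with position fixes and ZUPTs."""

    def __init__(self, dt, accel_noise=0.5, pos_noise=2.0):
        self.dt = dt
        self.x = np.zeros(4)                        # [px, py, vx, vy]
        self.P = np.eye(4)                          # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt            # constant-velocity model
        q = accel_noise ** 2
        self.Q = q * np.diag([dt**3 / 3, dt**3 / 3, dt, dt])  # process noise
        self.R_pos = pos_noise ** 2 * np.eye(2)     # RSS/GPS fix noise
        self.R_zupt = 1e-4 * np.eye(2)              # "velocity is ~zero"

    def predict(self, accel_xy):
        # strapdown-style propagation: integrate the measured acceleration
        self.x = self.F @ self.x
        self.x[2:] += np.asarray(accel_xy, dtype=float) * self.dt
        self.P = self.F @ self.P @ self.F.T + self.Q

    def _update(self, z, H, R):
        y = z - H @ self.x                          # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

    def update_position(self, pos_xy):
        # loosely coupled: the RSS/GPS subsystem supplies a finished position fix
        H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
        self._update(np.asarray(pos_xy, dtype=float), H, self.R_pos)

    def zero_velocity_update(self):
        # stance phase detected: pseudo-measure the velocity as zero
        H = np.zeros((2, 4)); H[0, 2] = H[1, 3] = 1.0
        self._update(np.zeros(2), H, self.R_zupt)
```

Each zero-velocity update resets the velocity error accumulated during the swing phase, which is what keeps a foot-mounted dead-reckoning solution from drifting unboundedly between position fixes.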
Resumo:
The design and development of spoken interaction systems has been a thoroughly studied research area over the last decades. The aim is to obtain systems able to interact with human agents with a high degree of naturalness and efficiency, allowing users to carry out the actions they desire using speech, as it is the most natural means of communication between humans. To achieve that degree of naturalness, it is not enough to endow systems with the ability to accurately understand the user’s utterances and to react to them properly, even considering the information provided by the user in his or her previous interactions. The system also has to be aware of the evolution of the conditions under which the interaction takes place, in order to act in the most coherent way possible at each moment. Consequently, one of the most important features of the system is that it has to be context-aware. This context awareness can be reflected in the modification of the system’s behaviour according to the current situation of the interaction. For instance, the system should decide which action to carry out, or the way to perform it, depending on the user that requests it, on the way that the user addresses the system, on the characteristics of the environment in which the interaction takes place, and so on. In other words, the system has to adapt its behaviour to these evolving elements of the interaction. Moreover, that adaptation has to be carried out, if possible, in such a way that the user: i) does not perceive that the system has to make any additional effort, or devote interaction time to tasks other than carrying out the requested actions, and ii) does not have to provide the system with any additional information to carry out the adaptation, which would imply a less efficient interaction, since users would have to devote several interactions solely to allowing the system to adapt.
In state-of-the-art spoken dialogue systems, researchers have proposed several disparate strategies to adapt the elements of the system to different conditions of the interaction (such as the acoustic characteristics of a specific user’s speech, the actions previously requested, and so on). Nevertheless, to our knowledge there is no consensus on the procedures to carry out this adaptation. The approaches are to an extent unrelated to one another, in the sense that each one considers different pieces of information, and that information is treated differently depending on the adaptation carried out. In this regard, the main contributions of this Thesis are the following: Definition of a contextualization framework. We propose a unified approach that can cover any strategy to adapt the behaviour of a dialogue system to the conditions of the interaction (i.e. the context). In our theoretical definition of the contextualization framework we consider the system’s context as all the sources of variability present at any time of the interaction, whether related to the environment in which the interaction takes place or to the human agent that addresses the system at each moment. Our proposal relies on three aspects that any contextualization approach should fulfill: plasticity (i.e. the system has to be able to modify its behaviour in the most proactive way, taking into account the conditions under which the interaction takes place), adaptivity (i.e. the system also has to be able to consider the most appropriate sources of information at each moment, both environmental and user- and dialogue-dependent, to adapt effectively to the aforementioned conditions), and transparency (i.e. the system has to carry out the contextualization-related tasks in such a way that the user neither perceives them nor has to make any effort to provide the system with the information it needs to perform that contextualization).
Additionally, we include a generality aspect in our proposed framework: the main features of the framework should be easy to adopt in any dialogue system, regardless of the solution proposed to manage the dialogue. Once we define the theoretical basis of our contextualization framework, we propose two case studies of its application in a spoken dialogue system. We focus on two aspects of the interaction: the contextualization of the speech recognition models, and the incorporation of user-specific information into the dialogue flow. One of the modules of a dialogue system most amenable to contextualization is the speech recognition system. This module makes use of several models to produce a recognition hypothesis from the user’s speech signal. Generally speaking, a recognition system considers two types of models: an acoustic one (that models each of the phonemes that the recognition system has to consider) and a linguistic one (that models the sequences of words that make sense for the system). In this work we contextualize the language model of the recognition system in such a way that it takes into account the information provided by the user both in his or her current utterance and in the previous ones. These utterances convey information that can help the system recognize the next utterance. The contextualization approach that we propose consists of a dynamic adaptation of the language model used by the recognition system. We carry out this adaptation by means of a linear interpolation between several models. Instead of training the best interpolation weights, we make them dependent on the conditions of the dialogue.
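The idea can be illustrated with a toy sketch: each component model assigns a probability to a word, and the interpolation weights are recomputed from dialogue-dependent reliability scores rather than trained offline. The unigram tables and reliability values below are purely illustrative:

```python
def interpolation_weights(reliability):
    """Normalize non-negative reliability scores into mixture weights."""
    total = sum(reliability)
    if total == 0:
        return [1.0 / len(reliability)] * len(reliability)  # uniform fallback
    return [r / total for r in reliability]

def interpolated_prob(word, models, weights):
    """P(word) = sum_i w_i * P_i(word) over the component models."""
    return sum(w * model.get(word, 0.0) for model, w in zip(models, weights))

# Toy unigram "language models": a generic one and one tied to the
# current dialogue state (e.g. the user has been talking about music).
generic = {"play": 0.02, "music": 0.03, "stop": 0.02}
state_specific = {"play": 0.20, "music": 0.15}

# The semantic context looks reliable, so the state-specific model
# receives a higher reliability score and thus a larger weight.
w = interpolation_weights([1.0, 3.0])       # -> [0.25, 0.75]
p = interpolated_prob("play", [generic, state_specific], w)
```

In a real recognizer the components would be full n-gram or neural models and the reliability scores would come from confidence measures on the recognized semantic concepts and dialogue acts, but the weighting step itself has this shape.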
In our approach, the system itself obtains these weights as a function of the reliability of the different pieces of information available, such as the semantic concepts extracted from the user’s utterance, the actions that he or she wants to carry out, the information provided in previous interactions, and so on. One of the aspects most frequently addressed in Human-Computer Interaction research is the inclusion of user-specific characteristics in the information structures managed by the system. The idea is to take into account the features that make each user different from the others in order to offer each particular user different services (or the same service, but in a different way). We can consider this approach a user-dependent contextualization of the system. In our work we propose the definition of a user model that contains all the information about each user that could potentially be useful to the system at a given moment of the interaction. In particular, we analyze the actions that each user carries out throughout his or her interactions. The objective is to determine which of these actions become that user’s preferences. We represent the specific information of each user as a feature vector, where each characteristic that the system takes into account has an associated confidence score. With these elements, we propose a probabilistic definition of a user preference: the action whose likelihood of being addressed by the user is greater than that of the rest of the actions. To include the user-dependent information in the dialogue flow, we modify the information structures on which the dialogue manager relies to retrieve information needed to resolve the actions addressed by the user.
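A minimal sketch of this preference definition might look as follows; the smoothing constant and the dominance margin are illustrative assumptions rather than the exact criterion used in the Thesis:

```python
from collections import Counter

def action_likelihoods(action_history, smoothing=1.0):
    """Smoothed relative frequency of each action in the user's history."""
    counts = Counter(action_history)
    total = len(action_history) + smoothing * len(counts)
    return {action: (c + smoothing) / total for action, c in counts.items()}

def preferences(action_history, margin=1.5):
    """Actions whose likelihood dominates every other action by `margin`x."""
    probs = action_likelihoods(action_history)
    prefs = []
    for action, p in probs.items():
        others = [q for other, q in probs.items() if other != action]
        if not others or p > margin * max(others):
            prefs.append(action)
    return prefs

# A user who almost always asks for the news:
history = ["news"] * 8 + ["weather"] * 2 + ["music"]
# preferences(history) -> ["news"]
```

The confidence score attached to each feature in the user model plays a role analogous to the smoothed likelihood here; the dialogue manager can then pre-fill the corresponding slot with the preferred action instead of asking the user for it.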
Usage preferences thus become another source of contextual information considered by the system towards a more efficient interaction, since this new information source helps to reduce the system’s need to ask users for additional information, thereby reducing the number of turns needed to carry out a specific action. To test the benefits of the proposed contextualization framework, we carry out an evaluation of the two aforementioned strategies. We gather several performance metrics, both objective and subjective, that allow us to compare the improvements of a contextualized system against the baseline one. We also gather the users’ opinions regarding their perception of the behaviour of the system and its degree of adaptation to the specific features of each interaction.
Resumo:
Background: Cognitive skills training for minimally invasive surgery has traditionally relied upon diverse tools, such as seminars or lectures. Web technologies for e-learning have been adopted to provide ubiquitous training and to serve as structured repositories for the vast number of laparoscopic video sources available. However, these technologies fail to offer features such as formative and summative evaluation, guided learning, or collaborative interaction between users. Methodology: The "TELMA" environment is presented as a new technology-enhanced learning platform that enhances the user's experience through a four-pillared architecture: (1) an authoring tool for the creation of didactic contents; (2) a learning content and knowledge management system that incorporates a modular and scalable system to capture, catalogue, search, and retrieve multimedia content; (3) an evaluation module that provides learning feedback to users; and (4) a professional network for collaborative learning between users. Face validation of the environment and of the authoring tool is presented. Results: Face validation of TELMA reveals the positive perception of surgeons regarding its implementation and their willingness to use it as a cognitive skills training tool. Preliminary validation data also reflect the importance of providing an easy-to-use, functional authoring tool to create didactic content. Conclusion: The TELMA environment is currently installed and used at the Jesús Usón Minimally Invasive Surgery Centre and several other Spanish hospitals. Face validation results confirm the acceptance and usefulness of this new minimally invasive surgery training environment.