Abstract:
B cells mediate immune responses via the secretion of antibody and interactions with other immune cell populations through antigen presentation, costimulation, and cytokine secretion. Although B cells are primarily believed to promote immune responses using the mechanisms described above, some unique regulatory B cell populations that negatively influence inflammation have also been described. Among these is a rare interleukin (IL)-10-producing B lymphocyte subset termed “B10 cells.” B cell-derived IL-10 can inhibit various arms of the immune system, including polarization of Th1/Th2 cell subsets, antigen presentation and cytokine production by monocytes and macrophages, and activation of regulatory T cells. Further studies in numerous autoimmune and inflammatory models of disease have confirmed the ability of B10 cells to negatively regulate inflammation in an IL-10-dependent manner. Although IL-10 is indispensable to the effector functions of B10 cells, how this specialized B cell population is selected in vivo to produce IL-10 is unknown. Some studies have demonstrated a link between B cell receptor (BCR)-derived signals and the acquisition of IL-10 competence. Additionally, whether antigen-BCR interactions are required for B cell IL-10 production during homeostasis as well as active immune responses is a matter of debate. Therefore, the goal of this thesis is to determine the importance of antigen-driven signals during B10 cell development in vivo and during B10 cell-mediated immunosuppression.
Chapter 3 of the dissertation explored the BCR repertoire of spleen and peritoneal cavity B10 cells using single-cell sequencing to lay the foundation for studies to understand the full range of antigens that may be involved in B10 cell selection. In both the spleen and peritoneal cavity B10 cells studied, BCR gene utilization was diverse, and the expressed BCR transcripts were largely unmutated. Thus, B10 cells are likely capable of responding to a wide range of foreign and self-antigens in vivo.
Studies in Chapter 4 determined the predominant antigens that drive B cell IL-10 secretion during homeostasis. A novel in vitro B cell expansion system was used to isolate B cells actively expressing IL-10 in vivo and probe the reactivities of their secreted monoclonal antibodies. B10 cells were found to produce polyreactive antibodies that bound multiple self-antigens. Therefore, in the absence of overarching active immune responses, B cell IL-10 is secreted following interactions with self-antigens.
Chapter 5 of this dissertation investigated whether foreign antigens are capable of driving B10 cell expansion and effector activity during an active immune response. In a model of contact-induced hypersensitivity, in vitro B cell expansion was again used to isolate antigen-specific B10 clones, which were required for optimal immunosuppression.
The studies described in this dissertation shed light on the relative contributions of BCR-derived signals during B10 cell development and effector function. Furthermore, these investigations demonstrate that B10 cells respond to both foreign and self-antigens, which has important implications for the potential manipulation of B10 cells for human therapy. Therefore, B10 cells represent a polyreactive B cell population that provides antigen-specific regulation of immune responses via the production of IL-10.
Abstract:
Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interaction and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with a particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions but also from the sheer size of the information. The focus of this thesis is to provide statistical models that are scalable to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.
Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification for the Soufrière Hills volcano on the island of Montserrat. Chapter 3 discusses another massive-data problem, in which the number of observations of a function is large; an exact algorithm that is linear in time is developed for the problem of interpolating methylation levels. Chapters 4 and 5 both concern robust inference for the models. Chapter 4 proposes a new robustness criterion for parameter estimation, and several inference procedures are shown to satisfy it. Chapter 5 develops a new prior that satisfies additional criteria and is therefore proposed for use in practice.
Abstract:
The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in U.S. Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. In Chapters 1 and 2 we present general background information relevant to the development of probabilistic models applied to predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.
The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique, and model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to previously published values. Results indicated that a novel hazard function definition incorporating both an ambient pressure scaling term and individually fitted compartment exponent scaling terms performed best.
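As background for the hazard-function variations explored here: probabilistic decompression models of this kind typically express the probability of decompression sickness through an integrated instantaneous risk. A generic, hedged sketch of that survival form (not the exact definitions fitted in this project) is

    P(\mathrm{DCS}) = 1 - \exp\!\left(-\int_{0}^{\infty} r(t)\,dt\right),
    \qquad
    r(t) \propto \max\!\left(\frac{P_{\mathrm{tis}}(t) - P_{\mathrm{amb}}(t)}{P_{\mathrm{amb}}(t)},\ 0\right),

where P_tis is a compartment tissue tension and P_amb the ambient pressure; an ambient pressure scaling term of the kind described above enters through the denominator of r(t).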
We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine whether predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that the inclusion of delays might improve correlation between model predictions and observed data. Model selection techniques identified two models as having the best overall performance, but comparison to the best-performing model without delay, and model selection using our best-identified no-delay pharmacokinetic model, both indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.
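A minimal sketch of such a delay mechanism, assuming a single compartment with time constant \tau_c and a fixed transfer lag \tau_d (the notation is illustrative, not the dissertation's), is

    \frac{dP_{\mathrm{tis}}(t)}{dt} = \frac{P_{\mathrm{amb}}(t - \tau_d) - P_{\mathrm{tis}}(t)}{\tau_c},

so the compartment responds to the ambient-pressure history shifted by the fitted discrete delay \tau_d.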
Our final investigation explored parameter bounding techniques to identify parameter regions for which statistical model failure will not occur. When a model predicts a no probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms, statistical model failure occurs. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur and identify the boundaries of the region using a root bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization for future investigations.
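To make the root bounding idea concrete, the following toy Python sketch bisects on a scalar model parameter to locate the boundary of the region where the integrated risk for an exposure known to produce symptoms remains positive; the risk function, parameter, and tolerance are illustrative assumptions, not the dissertation's models.

    import numpy as np

    def integrated_risk(theta, exposure_pressures, dt=1.0):
        # Toy instantaneous risk: positive only where supersaturation
        # exceeds a threshold controlled by the parameter theta.
        risk = np.maximum(exposure_pressures - theta, 0.0)
        return np.sum(risk) * dt

    def failure_boundary(exposure_pressures, lo, hi, tol=1e-6):
        # Bisect on theta to find where integrated_risk first reaches zero,
        # i.e. the edge of the region beyond which the model assigns zero
        # probability to an exposure known to produce DCS.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if integrated_risk(mid, exposure_pressures) > 0.0:
                lo = mid  # still inside the non-failure region
            else:
                hi = mid  # model failure: zero predicted risk
        return 0.5 * (lo + hi)

    # Example: supersaturation profile of a toy exposure.
    profile = np.array([0.2, 0.5, 0.8, 0.4, 0.1])
    theta_star = failure_boundary(profile, lo=0.0, hi=2.0)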
Abstract:
This review summarizes evidence of dysregulated reward circuitry function in a range of neurodevelopmental and psychiatric disorders and genetic syndromes. First, the contribution of identifying a core mechanistic process across disparate disorders to disease classification is discussed, followed by a review of the neurobiology of reward circuitry. We next consider preclinical animal models and clinical evidence of reward-pathway dysfunction in a range of disorders, including psychiatric disorders (i.e., substance-use disorders, affective disorders, eating disorders, and obsessive-compulsive disorder), neurodevelopmental disorders (i.e., schizophrenia, attention-deficit/hyperactivity disorder, autism spectrum disorders, Tourette's syndrome, and conduct disorder/oppositional defiant disorder), and genetic syndromes (i.e., Fragile X syndrome, Prader-Willi syndrome, Williams syndrome, Angelman syndrome, and Rett syndrome). We also provide brief overviews of effective psychopharmacologic agents that act on the dopamine system in these disorders. The review concludes with methodological considerations for future research designed to more clearly probe reward-circuitry dysfunction, with the ultimate goal of improved intervention strategies.
Abstract:
Improving the representation of the hydrological cycle in Atmospheric General Circulation Models (AGCMs) is one of the main challenges in modeling the Earth's climate system. One way to evaluate model performance is to simulate the transport of water isotopes. Among those available, tritium (HTO) is an extremely valuable tracer, because its content in the different reservoirs involved in the water cycle (stratosphere, troposphere, ocean) varies by orders of magnitude. Previous work incorporated natural tritium into LMDZ-iso, a version of the LMDZ general circulation model enhanced by water isotope diagnostics. Here, for the first time, the anthropogenic tritium injected by each of the atmospheric nuclear-bomb tests between 1945 and 1980 has been estimated and implemented in the model; this creates an opportunity to evaluate certain aspects of LMDZ over several decades by following the bomb-tritium transient signal through the hydrological cycle. Simulations of tritium in water vapor and precipitation for the period 1950-2008, with both natural and anthropogenic components, are presented in this study. LMDZ-iso satisfactorily reproduces the general shape of the temporal evolution of tritium. However, LMDZ-iso simulates too high a bomb-tritium peak, followed by too strong a decrease of tritium in precipitation. Overly diffusive vertical advection in AGCMs crucially affects the residence time of tritium in the stratosphere. This insight into model performance demonstrates that the implementation of tritium in an AGCM provides a new and valuable test of modeled atmospheric transport, complementing water stable isotope modeling.
Abstract:
Stroke is a leading cause of death and permanent disability worldwide, affecting millions of individuals. Traditional clinical scores for assessment of stroke-related impairments are inherently subjective and limited by inter-rater and intra-rater reliability, as well as floor and ceiling effects. In contrast, robotic technologies provide objective, highly repeatable tools for quantification of neurological impairments following stroke. KINARM is an exoskeleton robotic device that provides objective, reliable tools for assessment of sensorimotor, proprioceptive, and cognitive brain function by means of a battery of behavioral tasks. As such, KINARM is particularly useful for assessment of neurological impairments following stroke. This thesis introduces a computational framework for assessment of neurological impairments using the data provided by KINARM, by pursuing two main objectives. The first is to investigate how robotic measurements can be used to estimate current and future abilities to perform daily activities in subjects with stroke. We are able to predict clinical scores related to activities of daily living at present and future time points using a set of robotic biomarkers. The findings of this analysis provide a proof of principle that robotic evaluation can be an effective tool for clinical decision support and target-based rehabilitation therapy. The second main objective is to address the emerging problem of long assessment times, which can potentially lead to fatigue when assessing subjects with stroke. To address this issue, we examine two time-reduction strategies. The first strategy focuses on task selection, whereby KINARM tasks are arranged in a hierarchical structure so that an earlier task in the assessment procedure can be used to decide whether or not subsequent tasks should be performed. The second strategy focuses on reducing the time of the two longest individual KINARM tasks. Both strategies are shown to provide significant time savings, ranging from 30% to 90% using task selection and 50% using individual task reductions, thereby establishing a framework for reducing assessment time on a broader set of KINARM tasks. Overall, the findings of this thesis establish an improved platform for diagnosis and prognosis of stroke using robot-based biomarkers.
Abstract:
Choanoflagellates are the closest single-celled relatives of animals and provide fascinating insights into developmental processes in animals. Two species, the choanoflagellates Monosiga brevicollis and Salpingoeca rosetta, are emerging as promising model organisms for revealing the evolutionary origin of key animal innovations. In this review, we highlight how choanoflagellates are used to study the origin of multicellularity in animals. Newly available genomic resources and functional techniques provide important insights into the function of choanoflagellate pre- and postsynaptic proteins, cell-cell adhesion and signaling molecules, and the evolution of animal filopodia, and thus underscore the relevance of choanoflagellate models for research in evolutionary biology, neurobiology, and cell biology.
Abstract:
In establishing the reliability of performance-related design methods for concrete, which are relevant for resistance against chloride-induced corrosion, long-term experience of local materials and practices and detailed knowledge of the ambient and local micro-climate are critical. Furthermore, in the development of analytical models for performance-based design, calibration against test data representative of actual conditions in practice is required. To this end, the current study presents results from full-scale concrete pier-stems under long-term exposure to a marine environment, with work focussing on XS2 (below mid-tide level), in which the concrete is regarded as fully saturated, and XS3 (tidal, splash and spray), in which the concrete is in an unsaturated condition. These exposures represent the zones where concrete structures are most susceptible to ionic ingress and deterioration. Chloride profiles and chloride transport behaviour are studied using both an empirical model (the erfc function) and a physical model (ClinConc). The time dependency of the surface chloride concentration (Cs) and the apparent diffusivity (Da) was established for the empirical model, whereas in the ClinConc model (originally based on saturated concrete) two new environmental factors were introduced for the XS3 exposure zone. Although XS3 is considered a single environmental exposure zone according to BS EN 206-1:2013, the work has highlighted that even within this zone significant changes in chloride ingress are evident. This study aims to update the parameters of both models for predicting the long-term transport behaviour of concrete subjected to environmental exposure classes XS2 and XS3.
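For reference, the empirical erfc model named above is the standard error-function solution of Fick's second law for chloride ingress. A minimal Python sketch, with illustrative (assumed) parameter values rather than the study's fitted ones, is:

    import numpy as np
    from scipy.special import erfc

    def chloride_profile(x_mm, t_years, Cs, Da_mm2_per_year):
        # C(x, t) = Cs * erfc(x / (2 * sqrt(Da * t))):
        # Cs is the surface chloride concentration and Da the apparent
        # diffusivity; in the study both are treated as time-dependent
        # fitted quantities.
        return Cs * erfc(x_mm / (2.0 * np.sqrt(Da_mm2_per_year * t_years)))

    # Illustrative call: chloride content over a 50 mm cover after 20 years,
    # with assumed Cs = 0.45 (% by mass of binder) and Da = 25 mm^2/year.
    depths = np.linspace(0.0, 50.0, 11)
    profile = chloride_profile(depths, t_years=20.0, Cs=0.45,
                               Da_mm2_per_year=25.0)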
Abstract:
This thesis consists of three essays on optimal fiscal and monetary policy. In the first essay, I study the joint determination of optimal fiscal and monetary policy in a New Keynesian framework with frictional labor markets, money, and distortionary labor income taxes. I find that when workers' bargaining power is low, the Ramsey-optimal policy calls for a significantly higher optimal annual inflation rate, above 9.5%, which is also highly volatile, above 7.4%. The Ramsey government uses inflation to induce efficient fluctuations in labor markets, despite the fact that price changes are costly and despite the presence of time-varying labor taxation. The quantitative results clearly show that the planner relies more heavily on inflation, not on taxes, to smooth distortions in the economy over the business cycle. Indeed, there is a very clear trade-off between the optimal inflation rate and its volatility on the one hand and the optimal income tax rate and its variability on the other. The lower the degree of price rigidity, the higher the optimal inflation rate and inflation volatility, and the lower the optimal income tax rate and income tax volatility. For a degree of price rigidity ten times smaller, the optimal inflation rate and its volatility rise remarkably, to more than 58% and 10%, respectively, and the optimal income tax rate and its volatility decline dramatically. These results are of great importance given that in frictional labor market models without fiscal policy and money, or in New Keynesian frameworks even with a rich array of real and nominal rigidities and a tiny degree of price rigidity, price stability appears to be the central goal of optimal monetary policy. In the absence of fiscal policy and money demand, the optimal inflation rate falls very close to zero, with roughly 97 percent lower volatility, consistent with the literature. In the second essay, I show that the quantitative results imply that workers' bargaining power and the welfare costs of monetary rules are negatively related: the lower the workers' bargaining power, the greater the welfare costs of monetary policy rules. However, in striking contrast to the literature, rules that respond to output and to labor market tightness entail considerably lower welfare costs than the inflation-targeting rule; this is especially the case for the rule that responds to labor market tightness. Welfare costs also fall remarkably as the size of the output coefficient in the monetary rules increases. My results indicate that when workers' bargaining power is raised to the Hosios level or above, the welfare costs of all three monetary rules fall significantly, and responding to output or to labor market tightness no longer yields lower welfare costs than the inflation-targeting rule, in line with the existing literature.
In the third essay, I first show that the Friedman rule is not optimal in a monetary model with a cash-in-advance constraint on firms when the government has access to distortionary consumption taxes to finance its spending. I then argue that, in the presence of these distortionary taxes, the Friedman rule is optimal if we assume a model with raw and efficient labor in which only raw labor is subject to the cash-in-advance constraint and the utility function is homothetic in the two types of labor and separable in consumption. When the production function exhibits constant returns to scale, the Friedman rule is optimal even when the wage rates differ, in contrast to the cash-credit goods model, in which the prices of the two goods are the same. If the production function exhibits increasing or decreasing returns to scale, the wage rates must be equal for the Friedman rule to be optimal.
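For reference, the Friedman rule discussed in this essay is the prescription of a zero net nominal interest rate,

    i_t = 0,

which, via the Fisher relation 1 + i_t = (1 + r_t)(1 + \pi_t), implies steady-state deflation at the real rate of interest (\pi \approx -r) and so eliminates the opportunity cost of holding money.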
Abstract:
This work presents a computational code, called MOMENTS, developed for use in process control to determine a characteristic transfer function of industrial units when radiotracer techniques are applied to study the unit's performance. The methodology is based on measuring the residence time distribution (RTD) function and calculating the first and second temporal moments of the tracer data obtained by two NaI scintillation detectors positioned to register the complete tracer movement inside the unit. A nonlinear regression technique was used to fit various mathematical models, and a statistical test was used to select the best result for the transfer function. Using the MOMENTS code, twelve different models can be fitted to a curve to calculate technical parameters of the unit.
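As a concrete illustration of the moment calculation underlying such a code, here is a hedged Python sketch (the function name and the trapezoidal quadrature are assumptions for illustration, not MOMENTS itself):

    import numpy as np

    def temporal_moments(t, c):
        # First and second temporal moments of a tracer response curve c(t):
        # returns the mean residence time (first normalized moment) and the
        # variance about the mean (second central moment), computed by
        # trapezoidal integration, as in classical RTD analysis.
        area = np.trapz(c, t)
        e = c / area                           # normalized RTD, E(t)
        tau = np.trapz(t * e, t)               # mean residence time
        var = np.trapz((t - tau) ** 2 * e, t)  # variance about the mean
        return tau, var

    # With inlet and outlet detector curves, the unit's own moments follow
    # from the additivity of moments under convolution:
    # tau_unit = tau_out - tau_in; var_unit = var_out - var_in.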
Abstract:
This project comprises five studies, with emphasis on the development of a new cardiovascular diagnostic approach for assessing the oxygen level of the myocardium and its microvascular function. By combining an oxygenation-sensitive (OS) cardiovascular magnetic resonance (CMR) sequence with breathing maneuvers and arterial blood gas analyses, a non-invasive procedure can be used to induce a vasoactive response and assess the oxygenation reserve, a key measure of vascular function. The number of prescribed cardiac diagnostic tests, as well as interventions, is expanding rapidly. Non-invasive imaging and tests are often performed before invasive procedures are used. Cardiac imaging can assess the presence or absence of coronary stenoses, an important economic factor in our healthcare system. Non-invasive imaging techniques provide accurate information for identifying the presence and location of perfusion deficits in patients presenting with symptoms of myocardial ischemia. Nevertheless, several current techniques require radiation, contrast agents or tracers, as well as pharmacological or physical stress protocols. CMR imaging can identify a significant coronary stenosis without radiation. New trends in CMR aim to develop diagnostic techniques that require no pharmacological stress agents or contrast agents. The main objective of this project was to develop and test a new diagnostic technique for assessing coronary vascular function using OS-CMR combined with breathing maneuvers as a vasoactive stimulus. The secondary objectives were to use OS-CMR to assess myocardial oxygenation and the coronary response in the presence of altered arterial blood gases. The vascular response to breathing maneuvers was first validated in an animal model, then applied in two healthy volunteers and finally in a population of patients with cardiovascular disease. In the animal model, the breathing maneuvers induced a significant change in invasively measured coronary blood flow, and it was shown that in the presence of a hemodynamically significant coronary stenosis, OS-CMR could detect a myocardial oxygen deficit. In healthy humans, in comparison with adenosine (the standard agent for inducing coronary vasodilation), the breathing maneuvers induced a stronger oxygenation response in healthy myocardium. Finally, we used the breathing maneuvers in a group of patients with coronary artery disease, whose myocardium is altered by coronary stenosis, which accordingly modifies their oxygenation response. We then assessed the effects of arterial blood gases on myocardial oxygenation: the results show that the coronary response to a breath-hold stimulus is attenuated during hyperoxia, causing, when supplemental oxygen is given, a global reduction in coronary blood flow and an oxygenation deficit in the animal model with a stenosis.
In conclusion, this work has improved our understanding of new diagnostic techniques in cardiovascular imaging. Furthermore, we have demonstrated that the combination of breathing maneuvers and OS-CMR imaging can provide a non-invasive and cost-effective method for assessing regional and global coronary vascular function.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08