23 results for Stochastic Differential Equations, Parameter Estimation, Maximum Likelihood, Simulation, Moments
at Université de Lausanne, Switzerland
Abstract:
Biochemical systems are commonly modelled by systems of ordinary differential equations (ODEs). A particular class of such models, called S-systems, has recently gained popularity in biochemical system modelling. The parameters of an S-system are usually estimated from time-course profiles; however, finding these estimates is a difficult computational problem. Moreover, although several methods have recently been proposed to solve this problem for ideal profiles, relatively little progress has been reported for noisy profiles. We describe a special feature of a Newton-flow optimisation problem associated with S-system parameter estimation. This feature enables us to reduce the search space significantly, and it also lends itself to parameter estimation for noisy data. We illustrate the applicability of our method by applying it to noisy time-course data synthetically produced from previously published 4- and 30-dimensional S-systems. In addition, we propose an extension of the method that allows the detection of network topologies for small S-systems. In summary, we introduce a new method for estimating S-system parameters from time-course profiles, show that its performance compares favourably with competing methods on ideal profiles, and demonstrate that it also allows the determination of parameters from noisy profiles.
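For readers unfamiliar with the model class, an S-system represents each state variable by a difference of two power-law terms. The sketch below simulates a small, hypothetical two-variable S-system with made-up rate constants and kinetic orders; it is illustrative only and not the 4- or 30-dimensional systems used in the study.

```python
# Minimal, illustrative S-system simulation (hypothetical parameters).
# dX_i/dt = alpha_i * prod_j X_j^g_ij  -  beta_i * prod_j X_j^h_ij
import numpy as np
from scipy.integrate import solve_ivp

alpha = np.array([2.0, 1.5])           # production rate constants (assumed)
beta = np.array([1.0, 1.0])            # degradation rate constants (assumed)
g = np.array([[0.0, -0.5],             # kinetic orders of the production terms
              [0.8,  0.0]])
h = np.array([[0.6,  0.0],             # kinetic orders of the degradation terms
              [0.0,  0.7]])

def s_system(t, x):
    prod = alpha * np.prod(x ** g, axis=1)   # alpha_i * prod_j x_j^g_ij
    deg = beta * np.prod(x ** h, axis=1)     # beta_i  * prod_j x_j^h_ij
    return prod - deg

sol = solve_ivp(s_system, (0.0, 10.0), y0=[0.5, 1.2],
                t_eval=np.linspace(0.0, 10.0, 50))
print(sol.y[:, -1])   # concentrations of X1, X2 at the end of the simulation
```

In a parameter-estimation setting the unknowns are the rate constants and kinetic orders (alpha, beta, g, h), which is why the inverse problem is high-dimensional even for small networks.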
Abstract:
We extend PML theory to account for information on the conditional moments up to order four, but without assuming a parametric model, in order to avoid the risk of misspecifying the conditional distribution. The key statistical tool is the quartic exponential family, which allows us to generalize the PML2 and QGPML1 methods proposed in Gourieroux et al. (1984) to PML4 and QGPML2 methods, respectively. An asymptotic theory is developed. The key numerical tool is the Gauss-Freud integration scheme, which addresses a computational problem previously raised in several fields. Simulation exercises demonstrate the feasibility and robustness of the methods. [Authors]
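As a point of reference, the quartic exponential family mentioned above can be written schematically as a density whose logarithm is a fourth-degree polynomial; the exact parametrization used in the paper may differ, so the display below is only a sketch:

\[
f(y \mid a_1, a_2, a_3, a_4) \;=\; \exp\!\Big( a_1 y + a_2 y^2 + a_3 y^3 + a_4 y^4 - \psi(a_1, a_2, a_3, a_4) \Big), \qquad a_4 < 0,
\]

where \(\psi\) is the log-normalizing constant and the condition \(a_4 < 0\) guarantees integrability. Matching the first four conditional moments then amounts to choosing \((a_1, \dots, a_4)\) as functions of the conditioning variables.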
Abstract:
Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that, if the data follow the model, asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as the influence function of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performance of the WMLs based on each of them is studied. In a great variety of situations the WML substantially improves on the initial estimator, both in terms of finite-sample mean squared error and in terms of bias under contamination. Moreover, the performance of the WML is rather stable under a change of the MDE, even when the MDEs behave very differently. Two examples of application of the WML to real data are considered; in both, the need for a robust estimator is clear, as the maximum likelihood estimator is badly corrupted by the presence of a few outliers. The procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
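To make the two-phase idea concrete, here is a minimal sketch assuming a Poisson model, with the sample median used as a crude stand-in for the robust initial (minimum disparity) estimator and a simple probability-based rule for downweighting; the actual weighting scheme in the paper is adaptive and more refined.

```python
# Two-phase robust estimation sketch for a Poisson model (illustrative only).
import numpy as np
from scipy.stats import poisson

def weighted_mle_poisson(x, cutoff=1e-3):
    # Phase 1: crude robust initial estimate (stand-in for an MDE).
    lam0 = np.median(x)
    # Identify outliers: observations that are very unlikely under the
    # initial fit receive weight 0, all others weight 1.
    w = (poisson.pmf(x, lam0) > cutoff).astype(float)
    # Phase 2: weighted maximum likelihood; for the Poisson this is the
    # weighted mean of the retained observations.
    return np.sum(w * x) / np.sum(w)

data = np.array([1, 2, 0, 3, 2, 1, 2, 25, 30])  # two gross outliers
print(np.mean(data))                 # plain MLE, pulled up by the outliers
print(weighted_mle_poisson(data))    # weighted MLE, close to the bulk of the data
```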
Abstract:
Part I of this series of articles focused on the construction of graphical probabilistic inference procedures, at various levels of detail, for assessing the evidential value of gunshot residue (GSR) particle evidence. The proposed models, in the form of Bayesian networks, address the issues of background presence of GSR particles, analytical performance (i.e., the efficiency of evidence searching and analysis procedures) and contamination. The use and practical implementation of Bayesian networks for case pre-assessment is also discussed. This paper, Part II, concentrates on Bayesian parameter estimation. This topic complements Part I in that it offers means for producing estimates usable for the numerical specification of the proposed probabilistic graphical models. Bayesian estimation procedures receive primary attention because they allow the scientist to combine his or her prior knowledge about the problem of interest with newly acquired experimental data. The present paper also considers further topics such as the sensitivity of the likelihood ratio to uncertainty in parameters and the study of likelihood ratio values obtained for members of particular populations (e.g., individuals with or without exposure to GSR).
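As a purely hypothetical illustration of the kind of Bayesian parameter estimation involved (not the specific models of the paper), the proportion of individuals in a relevant population carrying background GSR particles could be estimated with a Beta-Binomial conjugate update:

```python
# Beta-Binomial update for a background-presence probability (hypothetical numbers).
from scipy.stats import beta

a_prior, b_prior = 1.0, 9.0   # prior belief: background presence is fairly rare (assumed)
n, k = 120, 7                 # hypothetical survey: 7 of 120 sampled individuals carry particles

a_post, b_post = a_prior + k, b_prior + (n - k)
posterior = beta(a_post, b_post)
print(posterior.mean())           # posterior mean of the proportion
print(posterior.interval(0.95))   # 95% credible interval
```

The resulting posterior can then be used to numerically specify the corresponding node of a Bayesian network.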
Abstract:
In Quantitative Microbial Risk Assessment, it is vital to understand how the lag times of individual cells are distributed over a bacterial population. Such distributions can be used to predict the time by which, in a growth-supporting environment, a few pathogenic cells can multiply to a concentration level causing poisoning. We model the lag time of a single cell, inoculated into a new environment, by the delay of the growth function characterizing the generated subpopulation. We introduce an easy-to-implement procedure, based on the method of moments, to estimate the parameters of the distribution of single-cell lag times. The advantage of the method is especially apparent in cases where the initial number of cells is small and random, and the culture is detectable only in the exponential growth phase.
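As a schematic illustration of a moment-based fit (the paper's specific distributional family and estimating equations are not reproduced here), single-cell lag times could be modelled by a gamma distribution whose shape and scale follow directly from the sample mean and variance:

```python
# Method-of-moments fit of a gamma distribution to lag-time data (illustrative).
import numpy as np

lag_times = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 2.5])  # hypothetical lag times (hours)

m = lag_times.mean()
v = lag_times.var(ddof=1)

shape = m ** 2 / v      # matching E[T] = shape * scale
scale = v / m           # and  Var[T] = shape * scale^2
print(shape, scale)
```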
Abstract:
As a rigorous combination of probability theory and graph theory, Bayesian networks currently enjoy widespread interest as a means for studying factors that affect the coherent evaluation of scientific evidence in forensic science. Paper I of this series intends to contribute to the discussion of Bayesian networks as a framework that is helpful for both illustrating and implementing statistical procedures commonly employed for the study of uncertainties (e.g., the estimation of unknown quantities). While the respective statistical procedures are widely described in the literature, the primary aim of this paper is to offer an essentially non-technical introduction to how interested readers may use these analytical approaches, with the help of Bayesian networks, for processing their own forensic science data. Attention is mainly drawn to the structure and underlying rationale of a series of basic and context-independent network fragments that users may incorporate as building blocks while constructing larger inference models. As an example of how this may be done, the proposed concepts will be used in a second paper (Part II) for specifying graphical probability networks whose purpose is to assist forensic scientists in the evaluation of scientific evidence encountered in the context of forensic document examination (i.e., results of the analysis of black toners present on printed or copied documents).
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as past claims. There are two main sources of variability in the claims development process: the variability of the speed with which claims are settled, and the variability of claim severity between accident years. Large changes in these processes generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator that, first, identifies and quantifies these two influences and, second, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines the Dirichlet-Multinomial conjugate family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). The second model makes it possible to express the variability of the reporting speed and of the development of claim severity as a function of the parameters of the two distributions mentioned above: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested on simulated data and then on real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data include different developments and specificities. The thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model exhibits positive correlation between past and future claims payments, which suggests the Chain-Ladder method is appropriate for claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expected future payments and therefore high claims reserve estimates. Negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation where claims are reported rapidly and few claims remain expected subsequently. The extreme case arises when all claims are reported at the same time, leading to expected future payments that are either zero or equal to the aggregated amount of the ultimate paid claims. In this latter case, the Chain-Ladder method is not recommended.
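Since the Chain-Ladder method is used as the benchmark throughout, here is a minimal sketch of the deterministic Chain-Ladder computation on a small, entirely hypothetical cumulative run-off triangle; it does not reproduce the PDM or NBDM models of the thesis.

```python
# Deterministic Chain-Ladder on a cumulative run-off triangle (hypothetical data).
import numpy as np

# Rows: accident years; columns: development years; NaN = not yet observed.
C = np.array([
    [100.0, 160.0, 190.0, 200.0],
    [110.0, 170.0, 205.0, np.nan],
    [120.0, 185.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])

n = C.shape[1]
factors = []
for j in range(n - 1):
    obs = ~np.isnan(C[:, j + 1])               # accident years with both columns observed
    factors.append(C[obs, j + 1].sum() / C[obs, j].sum())

# Project the lower-right part of the triangle with the development factors.
proj = C.copy()
for j in range(n - 1):
    missing = np.isnan(proj[:, j + 1])
    proj[missing, j + 1] = proj[missing, j] * factors[j]

ultimates = proj[:, -1]
reserves = ultimates - np.array([row[~np.isnan(row)][-1] for row in C])
print(factors)
print(reserves)
```

In the thesis framework, whether such a projection is appropriate is governed by the relation between the Dirichlet parameter and the Gamma shape parameter described above.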
Abstract:
BACKGROUND: In vitro aggregating brain cell cultures containing all types of brain cells have been shown to be useful for neurotoxicological investigations. The cultures are used for the detection of nervous system-specific effects of compounds by measuring multiple endpoints, including changes in enzyme activities. Concentration-dependent neurotoxicity is determined at several time points. METHODS: A Markov model was set up to describe the dynamics of brain cell populations exposed to potentially neurotoxic compounds. Brain cells were assumed to be either in a healthy or a stressed state, with only stressed cells being susceptible to cell death. Cells may switch between these states or die, with concentration-dependent transition rates. Since cell numbers were not directly measurable, intracellular lactate dehydrogenase (LDH) activity was used as a surrogate. Assuming that changes in cell numbers are proportional to changes in intracellular LDH activity, stochastic enzyme activity models were derived. Maximum likelihood and least squares regression techniques were applied to estimate the transition rates. Likelihood ratio tests were performed to test hypotheses about the transition rates. Simulation studies were used to investigate the performance of the transition rate estimators and to analyze the error rates of the likelihood ratio tests. The stochastic time-concentration activity model was applied to intracellular LDH activity measurements after 7 and 14 days of continuous exposure to propofol. The model describes transitions from healthy to stressed cells and from stressed cells to death. RESULTS: The model predicted that propofol would affect stressed cells more than healthy cells. Increasing the propofol concentration from 10 to 100 μM reduced the mean waiting time for transition to the stressed state by 50%, from 14 to 7 days, whereas the mean duration to cellular death decreased more dramatically, from 2.7 days to 6.5 hours. CONCLUSION: The proposed stochastic modeling approach can be used to discriminate between different biological hypotheses regarding the effect of a compound on the transition rates. The effects of different compounds on the transition rate estimates can be quantitatively compared. Data can also be extrapolated to late measurement time points, to investigate whether costly and time-consuming long-term experiments could be eliminated.
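The healthy, stressed, dead structure can be written as a three-state continuous-time Markov chain. The sketch below uses the reported mean waiting times as illustrative rates (rate = 1 / mean waiting time) and computes state occupancy probabilities over time; it is not the authors' estimation code.

```python
# Three-state Markov sketch: healthy -> stressed -> dead (illustrative rates).
import numpy as np
from scipy.linalg import expm

mean_wait_to_stressed = 7.0          # days (illustrative, high-concentration case)
mean_wait_to_death = 6.5 / 24.0      # days (6.5 hours)

k_hs = 1.0 / mean_wait_to_stressed   # healthy -> stressed rate
k_sd = 1.0 / mean_wait_to_death      # stressed -> dead rate

# Generator matrix; states ordered (healthy, stressed, dead).
Q = np.array([
    [-k_hs,  k_hs,  0.0],
    [ 0.0,  -k_sd,  k_sd],
    [ 0.0,   0.0,   0.0],
])

p0 = np.array([1.0, 0.0, 0.0])       # all cells start healthy
for t in (1.0, 7.0, 14.0):           # days
    print(t, p0 @ expm(Q * t))       # P(healthy), P(stressed), P(dead) at time t
```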
Abstract:
Despite their limited proliferation capacity, regulatory T cells (T(regs)) constitute a population maintained over the entire lifetime of a human organism. The means by which T(regs) sustain a stable pool in vivo are controversial. Using a mathematical model, we address this issue by evaluating several biological scenarios of the origins and the proliferation capacity of two subsets of T(regs): precursor CD4(+)CD25(+)CD45RO(-) and mature CD4(+)CD25(+)CD45RO(+) cells. The lifelong dynamics of T(regs) are described by a set of ordinary differential equations, driven by a stochastic process representing the major immune reactions involving these cells. The model dynamics are validated using data from human donors of different ages. Analysis of the data led to the identification of two properties of the dynamics: (1) the equilibrium in the CD4(+)CD25(+)FoxP3(+) T(regs) population is maintained over both precursor and mature T(regs) pools together, and (2) the ratio between precursor and mature T(regs) is inverted in the early years of adulthood. Then, using the model, we identified three biologically relevant scenarios that have the above properties: (1) the unique source of mature T(regs) is the antigen-driven differentiation of precursors that acquire the mature profile in the periphery, and the proliferation of T(regs) is essential for the development and the maintenance of the pool; or there exist other sources of mature T(regs), such as (2) a homeostatic density-dependent regulation or (3) thymus- or effector-derived T(regs), and in both of these cases antigen-induced proliferation is not necessary for the development of a stable pool of T(regs). This is the first time that a mathematical model built to describe the in vivo dynamics of regulatory T cells has been validated using human data. The application of this model provides an invaluable tool for estimating the number of regulatory T cells as a function of time in the blood of patients who have received a solid organ transplant or who suffer from an autoimmune disease.
Abstract:
Better understanding of stromatolites and microbial mats is an important topic in biogeosciences, as it helps in studying the early forms of life on Earth, provides clues regarding the ecology of microbial communities and the contribution of microorganisms to biomineralization, and even lays some foundations for research in exobiology. Modelling, on the other hand, is a powerful tool used in the natural sciences for the theoretical study of various phenomena. Models are usually built on a system of differential equations, and results are obtained by solving that system. Available software for implementing models includes mathematical solvers and general simulation packages. The main objective of this thesis is to develop models and software that help, through simulation, to understand the functioning of stromatolites and microbial mats. The software was developed in C++ from scratch for maximum performance and flexibility, which makes it possible to build models far more specific and appropriate to the phenomena being modelled than general-purpose software would allow. First, we studied stromatolite growth and morphology. We built a three-dimensional model based on diffusion-limited aggregation. The model was implemented in two C++ applications: a simulation engine, which can run a batch of simulations and produce result files, and a visualization tool, which allows the results to be analysed in three dimensions. After verifying that our model can indeed reproduce the growth and morphology of several types of stromatolites, we introduced a sedimentation process as an external factor. This led to interesting results and supported the hypothesis that stromatolite morphology may be the result of external factors as much as internal ones. This is important because stromatolite classification is usually based on morphology, which implicitly assumes that a stromatolite's shape depends on internal factors only (i.e., the microbial mat); our findings contradict this assumption. Second, we investigated the functional aspects of microbial mats in more depth. We built a two-dimensional reaction-diffusion model based on discrete simulation. The model was implemented in a C++ application that allows simulations to be configured and run. We could then compare simulation results with real-world data and verify that the model can indeed mimic the behaviour of some microbial mats. This allowed us to propose and test hypotheses about the functioning of certain microbial mats and to better understand aspects such as element dynamics, in particular sulfur and oxygen. In conclusion, this work produced software dedicated to the simulation of microbial mats from both a morphological and a functional point of view, following two different approaches, one holistic and the other more analytical. The software is free and released under the GPL (General Public License).
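The growth model of the first part is based on diffusion-limited aggregation (DLA). The thesis software is a dedicated 3D C++ implementation; the snippet below is only a toy 2D version in Python, with arbitrary grid size and walker count, meant to illustrate the basic mechanism of particles random-walking until they stick to the growing aggregate.

```python
# Toy 2D diffusion-limited aggregation on a substrate (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
W, H = 60, 40                       # grid width and height (arbitrary)
grid = np.zeros((H, W), dtype=bool)
grid[0, :] = True                   # the substrate: the aggregate grows upward from row 0

steps = ((1, 0), (-1, 0), (0, 1), (0, -1))

def has_stuck_neighbour(y, x):
    for dy, dx in steps:
        ny, nx = y + dy, (x + dx) % W          # periodic in x
        if 0 <= ny < H and grid[ny, nx]:
            return True
    return False

for _ in range(600):                           # number of walkers (arbitrary)
    top = np.max(np.nonzero(grid.any(axis=1))[0])   # highest occupied row so far
    y, x = min(top + 3, H - 1), int(rng.integers(W))  # release just above the aggregate
    while True:
        dy, dx = steps[rng.integers(4)]
        y, x = min(max(y + dy, 0), H - 1), (x + dx) % W
        if has_stuck_neighbour(y, x):
            grid[y, x] = True                  # the particle sticks to the aggregate
            break

print(grid.sum(), "cells aggregated")
```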
Abstract:
Gene duplication and neofunctionalization are known to be important processes in the evolution of phenotypic complexity. They account for important evolutionary novelties that confer ecological adaptation, such as the major histocompatibility complex (MHC), a multigene family crucial to the vertebrate immune system. In birds, two MHC class II β (MHCIIβ) exon 3 lineages have been recently characterized, and two hypotheses for the evolutionary history of MHCIIβ lineages were proposed. These lineages could have arisen either by 1) an ancient duplication and subsequent divergence of one paralog or by 2) recent parallel duplications followed by functional convergence. Here, we compiled a data set consisting of 63 MHCIIβ exon 3 sequences from six avian orders to distinguish between these hypotheses and to understand the role of selection in the divergent evolution of the two avian MHCIIβ lineages. Based on phylogenetic reconstructions and simulations, we show that a unique duplication event preceding the major avian radiations gave rise to two ancestral MHCIIβ lineages that were each likely lost once later during avian evolution. Maximum likelihood estimation shows that following the ancestral duplication, positive selection drove a radical shift from basic to acidic amino acid composition of a protein domain facing the α-chain in the MHCII α β-heterodimer. Structural analyses of the MHCII α β-heterodimer highlight that three of these residues are potentially involved in direct interactions with the α-chain, suggesting that the shift following duplication may have been accompanied by coevolution of the interacting α- and β-chains. These results provide new insights into the long-term evolutionary relationships among avian MHC genes and open interesting perspectives for comparative and population genomic studies of avian MHC evolution.
Abstract:
To date, state-of-the-art seismic material parameter estimates from multi-component sea-bed seismic data are based on the assumption that the sea-bed consists of a fully elastic half-space. In reality, however, the shallow sea-bed generally consists of soft, unconsolidated sediments characterized by strong to very strong seismic attenuation. To explore the potential implications, we apply a state-of-the-art elastic decomposition algorithm to synthetic data for a range of canonical sea-bed models consisting of a viscoelastic half-space of varying attenuation. We find that in the presence of strong seismic attenuation, as quantified by Q-values of 10 or less, significant errors arise in the conventional elastic estimation of seismic properties. Tests on synthetic data indicate that these errors can be largely avoided by accounting for the inherent attenuation of the seafloor when estimating the seismic parameters. This can be achieved by replacing the real-valued expressions for the elastic moduli in the governing equations of the parameter estimation by their complex-valued viscoelastic equivalents. The practical application of our parameter estimation procedure yields realistic estimates of the elastic seismic material properties of the shallow sea-bed, while the corresponding Q-estimates appear to be biased towards values that are too low, particularly for S-waves. Given that the estimation of inelastic material parameters is notoriously difficult, particularly in the immediate vicinity of the sea-bed, this is expected to be of interest and importance for civil and ocean engineering purposes.
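One common way to express the substitution of complex-valued viscoelastic moduli, under the usual low-loss approximation (used here as an assumption, since the paper's exact convention is not reproduced), is M* ≈ M(1 + i/Q). The snippet below applies it to hypothetical soft-sediment properties:

```python
# Replace real elastic moduli by complex viscoelastic equivalents (low-loss approximation).
import numpy as np

rho = 1700.0            # bulk density, kg/m^3 (hypothetical soft sediment)
vp, vs = 1550.0, 250.0  # elastic P- and S-wave velocities, m/s (hypothetical)
Qp, Qs = 15.0, 8.0      # quality factors (hypothetical, strong attenuation)

mu = rho * vs**2                      # real-valued shear modulus
lam = rho * vp**2 - 2.0 * mu          # real-valued Lame parameter

mu_v = mu * (1.0 + 1j / Qs)                    # complex shear modulus
m_p_v = (lam + 2.0 * mu) * (1.0 + 1j / Qp)     # complex P-wave modulus

vp_complex = np.sqrt(m_p_v / rho)     # complex velocities used in the viscoelastic
vs_complex = np.sqrt(mu_v / rho)      # governing equations in place of vp and vs
print(vp_complex, vs_complex)
```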
Abstract:
Aims: Plasma concentrations of imatinib differ widely between patients despite the same dosage, owing to large inter-individual variability in pharmacokinetic (PK) parameters. As the drug concentration at the end of the dosage interval (Cmin) correlates with treatment response and tolerability, monitoring of Cmin is suggested for therapeutic drug monitoring (TDM) of imatinib. Owing to logistic difficulties, however, random sampling during the dosage interval is often performed in clinical practice, rendering the results uninformative about Cmin values. Objectives: (I) To extrapolate randomly measured imatinib concentrations to the more informative Cmin using classical Bayesian forecasting. (II) To extend the classical Bayesian method to account for correlation between PK parameters. (III) To evaluate the predictive performance of both methods. Methods: 31 paired blood samples (random and trough levels) were obtained from 19 cancer patients under imatinib treatment. Two Bayesian maximum a posteriori (MAP) methods were implemented: (A) a classical method ignoring correlation between PK parameters, and (B) an extended one accounting for correlation. Both methods were applied to estimate individual PK parameters, conditional on the random observations and covariate-adjusted priors from a population PK model. The PK parameter estimates were used to calculate trough levels. Relative prediction errors (PE) were analyzed to evaluate accuracy (one-sample t-test) and to compare precision between the methods (F-test to compare variances). Results: Both Bayesian MAP methods allowed unbiased predictions of individual Cmin compared with observations: (A) -7% mean PE (95% CI -18 to 4%, p = 0.15) and (B) -4% mean PE (95% CI -18 to 10%, p = 0.69). Relative standard deviations of actual observations from predictions were 22% (A) and 30% (B), i.e. comparable to the reported intra-individual variability. Precision was not improved by taking correlation between PK parameters into account (p = 0.22). Conclusion: Clinical interpretation of randomly measured imatinib concentrations can be assisted by Bayesian extrapolation to the most likely Cmin. Classical Bayesian estimation can be applied for TDM without the need to include correlation between PK parameters. Both methods could be adapted in the future to evaluate other individual pharmacokinetic measures correlated with clinical outcomes, such as the area under the curve (AUC).
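To illustrate what a classical Bayesian MAP extrapolation to Cmin can look like, here is a minimal sketch assuming a one-compartment model with first-order absorption at steady state, lognormal priors on clearance and volume, and a single randomly timed observation. All values (population parameters, dose, variabilities) are placeholders, not the published imatinib population PK model.

```python
# Sketch of Bayesian MAP extrapolation from a random level to Cmin (illustrative only).
import numpy as np
from scipy.optimize import minimize

dose, tau, F, ka = 400.0, 24.0, 1.0, 0.6   # mg, h, -, 1/h (placeholders)
CL_pop, V_pop = 14.0, 250.0                # population clearance (L/h) and volume (L), placeholders
omega_cl, omega_v = 0.35, 0.30             # lognormal inter-individual SDs (placeholders)
sigma = 0.25                               # proportional residual error (placeholder)

def conc_ss(t, CL, V):
    """Steady-state concentration, one-compartment model with first-order absorption."""
    ke = CL / V
    a = np.exp(-ke * t) / (1.0 - np.exp(-ke * tau))
    b = np.exp(-ka * t) / (1.0 - np.exp(-ka * tau))
    return F * dose * ka / (V * (ka - ke)) * (a - b)

t_obs, c_obs = 6.0, 1.9                    # random sample 6 h post-dose, 1.9 mg/L (hypothetical)

def neg_log_posterior(eta):
    CL, V = CL_pop * np.exp(eta[0]), V_pop * np.exp(eta[1])
    pred = conc_ss(t_obs, CL, V)
    loglik = -0.5 * ((c_obs - pred) / (sigma * pred)) ** 2      # proportional error model
    logprior = -0.5 * (eta[0] / omega_cl) ** 2 - 0.5 * (eta[1] / omega_v) ** 2
    return -(loglik + logprior)

eta_map = minimize(neg_log_posterior, x0=[0.0, 0.0], method="Nelder-Mead").x
CL_i, V_i = CL_pop * np.exp(eta_map[0]), V_pop * np.exp(eta_map[1])
print("Predicted Cmin:", conc_ss(tau, CL_i, V_i))   # trough at the end of the dosage interval
```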