982 results for Gel Dosimetry, Monte Carlo Modelling
Abstract:
OpenMI is a widely used standard allowing the exchange of data between integrated models, which has mostly been applied to dynamic, deterministic models. Within the FP7 UncertWeb project we are developing mechanisms and tools to support the management of uncertainty in environmental models. In this paper we explore the integration of the UncertWeb framework with OpenMI, to assess the issues that arise when propagating uncertainty in OpenMI model compositions, and the degree of integration possible with UncertWeb tools. In particular we develop an uncertainty-enabled model for a simple Lotka-Volterra system with an interface conforming to the OpenMI standard, exploring uncertainty in the initial predator and prey levels, and in the parameters of the model equations. We use the Elicitator tool developed within UncertWeb to identify the initial-condition uncertainties, and show how these can be integrated, using UncertML, with simple Monte Carlo propagation mechanisms. The mediators we develop for OpenMI models are generic and produce standard Web services that expose the OpenMI models to a Web-based framework. We discuss what further work is needed to allow a more complete system to be developed and show how this might be used practically.
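To make the propagation step concrete, the sketch below (Python, not part of the UncertWeb/OpenMI tooling) draws Monte Carlo samples for uncertain initial predator and prey levels and model parameters and pushes them through a simple Lotka-Volterra integration; all distributions, parameter values and the Euler step size are assumptions chosen purely for illustration.

```python
import numpy as np

def lotka_volterra(prey0, pred0, alpha, beta, delta, gamma, dt=0.01, steps=1000):
    """Integrate a Lotka-Volterra system with forward Euler (crude, but fine for illustration)."""
    prey, pred = prey0, pred0
    for _ in range(steps):
        dprey = alpha * prey - beta * prey * pred
        dpred = delta * prey * pred - gamma * pred
        prey += dt * dprey
        pred += dt * dpred
    return prey, pred

rng = np.random.default_rng(42)
n_samples = 500  # Monte Carlo sample size (assumed)

# Elicited uncertainty on the initial conditions, here assumed lognormal
prey0 = rng.lognormal(mean=np.log(10.0), sigma=0.1, size=n_samples)
pred0 = rng.lognormal(mean=np.log(5.0), sigma=0.2, size=n_samples)

# Uncertain model parameters (illustrative normal distributions)
alpha = rng.normal(1.1, 0.05, n_samples)
beta = rng.normal(0.4, 0.02, n_samples)
delta = rng.normal(0.1, 0.01, n_samples)
gamma = rng.normal(0.4, 0.02, n_samples)

final = np.array([lotka_volterra(p, q, a, b, d, g)
                  for p, q, a, b, d, g in zip(prey0, pred0, alpha, beta, delta, gamma)])

lo, hi = np.percentile(final[:, 0], [2.5, 97.5])
print("prey at end of run: mean %.2f, 95%% interval (%.2f, %.2f)"
      % (final[:, 0].mean(), lo, hi))
```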
Abstract:
Quantitative Structure-Activity Relationship (QSAR) modelling has been applied extensively in predicting the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, the acute and chronic toxicities of DBPs have been widely used in health risk assessment of DBPs. These toxicities are correlated with molecular properties, which are in turn correlated with molecular descriptors. The primary goals of this thesis are: (1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modelling and analysis; (2) to validate the models using internal and external cross-validation techniques; and (3) to quantify the model uncertainties through the Taylor method and Monte Carlo simulation. One of the most important ways to predict molecular properties such as ELUMO is QSAR analysis. In this study, the number of chlorine atoms (NCl) and the number of carbon atoms (NC), as well as the energy of the highest occupied molecular orbital (EHOMO), are used as molecular descriptors. There are typically three approaches used in QSAR model development: (1) Linear or Multi-linear Regression (MLR); (2) Partial Least Squares (PLS); and (3) Principal Component Regression (PCR). In QSAR analysis, a very critical step is model validation, after QSAR models are established and before applying them to toxicity prediction. The DBPs studied comprise five chemical classes, including chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups (i.e., chloro-alkane and aromatic compounds with a nitro or cyano group) of DBP chemicals to three types of organisms (e.g., fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature. The results show that: (1) QSAR models to predict molecular properties built by MLR, PLS or PCR can be used either to select valid data points or to eliminate outliers; (2) the Leave-One-Out cross-validation procedure by itself is not enough to give a reliable representation of the predictive ability of the QSAR models; however, Leave-Many-Out/k-fold cross-validation and external validation can be applied together to achieve more reliable results; (3) ELUMO is shown to correlate strongly with NCl for several classes of DBPs; and (4) according to the uncertainty analysis using the Taylor method, the uncertainty of the QSAR models stems mostly from NCl for all DBP classes.
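The following sketch illustrates the kind of MLR-based QSAR fit with leave-one-out and k-fold cross-validation described above, using scikit-learn on a synthetic descriptor set; the descriptor values, the generated ELUMO response and all model settings are assumptions for illustration, not the thesis's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, KFold, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical descriptor matrix: columns are N_Cl, N_C, E_HOMO (illustrative values)
n = 30
N_Cl = rng.integers(1, 6, n)
N_C = rng.integers(1, 8, n)
E_HOMO = rng.normal(-9.5, 0.5, n)
X = np.column_stack([N_Cl, N_C, E_HOMO])

# Synthetic E_LUMO response generated so that N_Cl dominates, mimicking the reported trend
E_LUMO = -0.35 * N_Cl + 0.05 * N_C + 0.02 * E_HOMO + rng.normal(0, 0.05, n)

model = LinearRegression()

# Internal validation: leave-one-out and 5-fold cross-validation
loo_mse = -cross_val_score(model, X, E_LUMO, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
kfold_mse = -cross_val_score(model, X, E_LUMO,
                             cv=KFold(n_splits=5, shuffle=True, random_state=0),
                             scoring="neg_mean_squared_error").mean()

model.fit(X, E_LUMO)
print("coefficients (N_Cl, N_C, E_HOMO):", model.coef_)
print("LOO MSE: %.4f, 5-fold MSE: %.4f" % (loo_mse, kfold_mse))
```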
Abstract:
Survival models deal with the modelling of time-to-event data. In certain situations, a share of the population can no longer be subject to the occurrence of the event. In this context, cure fraction models emerged. Among the models that incorporate a cured fraction, one of the best known is the promotion time model. In the present study we discuss hypothesis testing in the promotion time model with a Weibull distribution for the failure times of susceptible individuals. Hypothesis testing in this model may be performed based on the likelihood ratio, gradient, score or Wald statistics. The critical values are obtained from asymptotic approximations, which may result in size distortions in finite samples. This study proposes bootstrap corrections to the aforementioned tests and a bootstrap Bartlett correction to the likelihood ratio statistic in the Weibull promotion time model. Using Monte Carlo simulations we compare the finite-sample performance of the proposed corrections with that of the usual tests. The numerical evidence favours the corrected tests. An empirical application is presented at the end of the work.
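A minimal sketch of a parametric bootstrap correction of this kind is given below, assuming a Weibull promotion time model with right censoring and a likelihood ratio test of an exponential (shape = 1) null; the data-generating values, sample size and number of bootstrap replicates are all illustrative, and this is a generic bootstrap test rather than the exact corrections proposed in the study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(1)

def simulate(n, theta, shape, scale, cens_time=3.0):
    """Simulate from the promotion time cure model with Weibull latent times."""
    times = np.full(n, cens_time)
    status = np.zeros(n, dtype=bool)          # True = event observed
    causes = rng.poisson(theta, n)            # latent number of competing causes
    for i in range(n):
        if causes[i] > 0:
            t = scale * rng.weibull(shape, causes[i]).min()
            if t < cens_time:
                times[i], status[i] = t, True
    return times, status

def negloglik(params, times, status):
    """Negative log-likelihood of the Weibull promotion time model (log-parameters)."""
    theta, shape, scale = np.exp(params)
    z = (times / scale) ** shape
    F = -np.expm1(-z)                                      # Weibull CDF
    loglik = -theta * np.sum(F)                            # improper-survival contribution
    loglik += np.sum(np.log(theta * shape / scale)
                     + (shape - 1.0) * np.log(times[status] / scale) - z[status])
    return -loglik

def lr_stat(times, status):
    """LR statistic for H0: shape = 1 (exponential susceptible times)."""
    full = minimize(negloglik, np.zeros(3), args=(times, status), method="Nelder-Mead")
    restr = minimize(lambda p: negloglik(np.array([p[0], 0.0, p[1]]), times, status),
                     np.zeros(2), method="Nelder-Mead")
    return 2.0 * (restr.fun - full.fun), restr.x

# Data generated under H0 so that the size of the test can be checked
times, status = simulate(200, theta=1.5, shape=1.0, scale=1.0)
lr_obs, restr_hat = lr_stat(times, status)

# Parametric bootstrap: resimulate under the fitted null model and recompute the statistic
theta0, scale0 = np.exp(restr_hat)
boot = np.array([lr_stat(*simulate(200, theta0, 1.0, scale0))[0] for _ in range(200)])

print("LR = %.3f, asymptotic p = %.3f, bootstrap p = %.3f"
      % (lr_obs, chi2.sf(lr_obs, df=1), np.mean(boot >= lr_obs)))
```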
Abstract:
The primary objective is to investigate the main factors contributing to GMS expenditure on pharmaceutical prescribing and to project this expenditure to 2026. This study is located in the pharmacoeconomic cost-containment and projections literature. The thesis has five main aims: 1. To determine the main factors contributing to GMS expenditure on pharmaceutical prescribing. 2. To develop a model to project GMS prescribing expenditure in five-year intervals to 2026, using 2006 Central Statistics Office (CSO) Census data and 2007 Health Service Executive-Primary Care Reimbursement Service (HSE-PCRS) sample data. 3. To develop a model to project GMS prescribing expenditure in five-year intervals to 2026, using 2012 HSE-PCRS population data, incorporating cost containment measures, and 2011 CSO Census data. 4. To investigate the impact of demographic factors and the pharmacology of drugs (Anatomical Therapeutic Chemical (ATC) classification) on GMS expenditure. 5. To explore the consequences of GMS policy changes on prescribing expenditure and behaviour between 2008 and 2014. The thesis is centred around three published articles and spans the end of a booming Irish economy in 2007, a recession from 2008 to 2013, and the beginning of a recovery in 2014. The literature identified a number of factors influencing pharmaceutical expenditure, including population growth, population ageing, changes in drug utilisation and drug therapies, age, gender and location. The literature also identified the methods previously used in predictive modelling; consequently, a Monte Carlo Simulation (MCS) model was used to simulate projected expenditure to 2026, and Ordinary Least Squares (OLS) regression was used to determine the demographic and pharmacological factors influencing prescribing expenditure. The study commences against a backdrop of growing GMS prescribing costs, which rose from €250 million in 1998 to over €1 billion by 2007. Using sample 2007 HSE-PCRS prescribing data (n=192,000) and CSO population data from 2008, Conway et al. (2014) estimated that GMS prescribing expenditure could rise to €2 billion by 2026. The cogency of these findings was affected by the global economic crisis of 2008, which resulted in a sharp contraction in the Irish economy and mounting fiscal deficits, leading to Ireland's entry into a bailout programme. The sustainability of funding community drug schemes, such as the GMS, came under the spotlight of the EU, IMF and ECB (Troika), who set stringent targets for reducing drug costs as conditions of the bailout programme. Cost containment measures included: the introduction of income eligibility limits for GP visit cards and medical cards for those aged 70 and over, the introduction of co-payments for prescription items, and reductions in wholesale mark-up and pharmacy dispensing fees. Projections for GMS expenditure were re-evaluated using 2012 HSE-PCRS prescribing population data and CSO population data based on Census 2011. Taking into account both cost containment measures and revised population predictions, GMS expenditure is estimated to increase by 64%, from €1.1 billion in 2016 to €1.8 billion by 2026 (Conway-Lenihan and Woods, 2015). In the final paper, a cross-sectional study was carried out on the HSE-PCRS population prescribing database (n=1.63 million claimants) to investigate the impact of demographic factors, and the pharmacology of the drugs, on GMS prescribing expenditure.
Those aged over 75 (β = 1.195) and cardiovascular prescribing (β = 1.193) were the greatest contributors to annual GMS prescribing costs. Respiratory drugs (Montelukast) recorded the highest proportion and expenditure for GMS claimants under the age of 15. Drugs prescribed for the nervous system (Escitalopram, Olanzapine and Pregabalin) were highest for those between 16 and 64 years, while cardiovascular drugs (statins) were highest for those aged over 65. Prescribing for females is more expensive than for males, with more items prescribed across the four ATC groups, except among children under 11 (Conway-Lenihan et al., 2016). This research indicates that growth in the proportion of elderly claimants and the associated levels of cardiovascular prescribing, particularly for statins, will present difficulties for Ireland in terms of cost containment. Whilst policies aimed at cost containment (co-payment charges, generic substitution, reference pricing, adjustments to GMS eligibility) can be used to curtail expenditure, health promotion programmes and educational interventions should be given equal emphasis. Policies intended to affect physicians' prescribing behaviour, including guidelines, information (about prices and less expensive alternatives), feedback, and budgetary restrictions, could also yield savings.
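A stylised Monte Carlo simulation of this type of expenditure projection is sketched below; the age bands, claimant numbers, per-claimant costs and their assumed variability are purely illustrative placeholders, not HSE-PCRS or CSO figures.

```python
import numpy as np

rng = np.random.default_rng(2026)
n_sims = 10_000  # Monte Carlo replications (assumed)

# Hypothetical age bands with projected claimant numbers and current mean annual
# cost per claimant (all figures invented for illustration, not HSE-PCRS data)
age_bands = ["0-15", "16-64", "65-74", "75+"]
projected_claimants = np.array([350_000, 900_000, 280_000, 220_000])
mean_cost = np.array([180.0, 450.0, 1_100.0, 1_600.0])   # euro per claimant per year
cost_cv = 0.15                                            # assumed coefficient of variation

# Sample uncertain per-claimant costs for each band and aggregate to a total
sampled_cost = rng.normal(mean_cost, cost_cv * mean_cost, size=(n_sims, len(age_bands)))
totals = (sampled_cost * projected_claimants).sum(axis=1)

low, high = np.percentile(totals, [2.5, 97.5])
print("Projected expenditure: €%.2fbn (95%% interval €%.2fbn-€%.2fbn)"
      % (totals.mean() / 1e9, low / 1e9, high / 1e9))
```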
Abstract:
Incremental models are statistical models that were initially developed in the field of marketing. They involve two groups, a control group and a treatment group, both compared with respect to a binary response variable (the possible responses are "yes" or "no"). The purpose of these models is to detect the effect of the treatment on the individuals under study. Since these individuals are not all customers, we call them "prospects". This effect can be negative, null or positive depending on the characteristics of the individuals making up the different groups. The objective of this thesis is to compare incremental models from a Bayesian and a frequentist point of view. The incremental models used in practice are those of Lo (2002) and Lai (2004). They were originally formulated from a frequentist point of view. In this thesis, the Bayesian approach is therefore used and compared with the frequentist approach. Simulations are carried out on data generated with logistic regressions. The parameters of these regressions are then estimated with Monte Carlo simulations in the Bayesian approach and compared with those obtained in the frequentist approach. The estimation of the parameters has a direct influence on the model's ability to correctly predict the treatment effect on individuals. We consider three prior distributions for the Bayesian estimation of the parameters, chosen so that the priors are non-informative. The three priors used are the transformed beta distribution, the Cauchy distribution and the normal distribution. Over the course of the study, we observe that the Bayesian methods have a real positive impact on the targeting of individuals in small samples.
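As a minimal sketch of the Bayesian side of such a comparison, the code below fits a logistic regression with a treatment indicator and a treatment-covariate interaction (one common way such incremental models are parameterised) using a random-walk Metropolis Monte Carlo sampler under a normal prior, one of the three priors considered; the simulated data, prior scale and sampler settings are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated "prospects": one covariate x, a treatment indicator, and their interaction
n = 300
x = rng.normal(size=n)
treat = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), x, treat, treat * x])
true_beta = np.array([-0.5, 0.8, 0.4, -0.6])        # assumed ground truth
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)

def log_post(beta, prior_sd=10.0):
    """Log-posterior with a weakly informative normal prior on each coefficient."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    logprior = -0.5 * np.sum((beta / prior_sd) ** 2)
    return loglik + logprior

# Random-walk Metropolis sampler (a basic Monte Carlo scheme chosen for self-containment)
n_iter, step = 20_000, 0.15
beta = np.zeros(4)
samples = np.empty((n_iter, 4))
current = log_post(beta)
for i in range(n_iter):
    prop = beta + step * rng.normal(size=4)
    cand = log_post(prop)
    if np.log(rng.uniform()) < cand - current:
        beta, current = prop, cand
    samples[i] = beta

post = samples[5_000:]                      # discard burn-in
print("posterior means:", post.mean(axis=0))
print("treatment main effect, 95% credible interval:",
      np.percentile(post[:, 2], [2.5, 97.5]))
```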
Abstract:
Loess is the most important collapsible soil; possibly the only engineering soil in which real collapse occurs. A real collapse involves a diminution in volume: an open metastable packing is reduced to a more closely packed, more stable structure. Metastability is at the heart of the collapsible soils problem. To envisage and to model the collapse process in a metastable medium, knowledge is required about the nature and shape of the particles, the types of packings they assume (real and ideal), and the nature of the collapse process - a packing transition upon a change to the effective stress in a medium of double porosity. Particle packing science has made little progress in the geoscience disciplines since the initial packing paradigms set by Graton and Fraser (1935), but it is relatively well established in soft matter physics. The collapse process can be represented by mathematical modelling of packing, including Monte Carlo simulations, but relating representation to process remains difficult. This paper revisits the problem of sudden packing transition from a micro-physico-mechanical viewpoint (i.e. collapse in terms of structure-based effective stress). This cross-disciplinary approach allows a generalisation about collapsible soils to be made: it suggests that loess is the only truly collapsible soil, because only loess is so totally influenced by the packing essence of the formation process.
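As a toy illustration of a Monte Carlo packing representation (not a model of loess collapse itself), the sketch below builds an "open" metastable packing of equal discs by random sequential addition and compares its packing fraction with the densest ordered packing; the disc radius and number of insertion attempts are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def rsa_packing_fraction(radius=0.02, attempts=20_000):
    """Random sequential addition of equal discs in a unit square.

    The jammed RSA state is an 'open' metastable packing; its area fraction
    is far below the densest ordered packing, which is the kind of gap a
    collapse (packing transition) closes.
    """
    centres = []
    for _ in range(attempts):
        c = rng.uniform(radius, 1.0 - radius, 2)
        if all(np.hypot(*(c - o)) >= 2.0 * radius for o in centres):
            centres.append(c)
    return len(centres) * np.pi * radius ** 2

phi_open = rsa_packing_fraction()
phi_dense = np.pi / (2.0 * np.sqrt(3.0))     # hexagonal close packing of discs
print("metastable (RSA) packing fraction: %.3f" % phi_open)
print("densest ordered packing fraction:  %.3f" % phi_dense)
print("volume reduction on 'collapse':    %.1f%%" % (100 * (1 - phi_open / phi_dense)))
```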
Abstract:
In deregulated power markets it is necessary to have an appropriate transmission pricing methodology that also takes into account congestion and reliability, in order to ensure economically viable, equitable and congestion-free power transfer capability with high reliability and security. This thesis presents the results of research on the development of a Decision Making Framework (DMF) of concepts, data-analytic and modelling methods for Reliability-benefit Reflective Optimal cost evaluation for the calculation of transmission cost in composite power systems, using probabilistic methods. The methodology within the DMF devised and reported in this thesis utilises a full AC Newton-Raphson load flow and a Monte Carlo approach to determine reliability indices, which are then used in the proposed Meta-Analytical Probabilistic Approach (MAPA) for the evaluation and calculation of the Reliability-benefit Reflective Optimal Transmission Cost (ROTC) of a transmission system. The DMF includes methods for allocating embedded transmission line costs among transmission transactions, accounting for line capacity use, as well as congestion costing that can be used for pricing, applying the Power Transfer Distribution Factor (PTDF) method as well as Bialek's method; together these form a series of methods and procedures, explained in detail in the thesis, for the proposed MAPA for ROTC. The MAPA utilises bus data, generator data, line data, reliability data and Customer Damage Function (CDF) data for congestion, transmission and reliability costing studies, using the proposed application of PTDF and other established methods, which are then compared, analysed and selected according to area/state requirements and integrated to develop the ROTC. Case studies involving standard 7-bus, IEEE 30-bus and 146-bus Indian utility test systems are conducted and reported in the relevant sections of the dissertation. There is close correlation between the results obtained through the proposed application of the PTDF method and those from Bialek's method and different MW-Mile methods. The novel contributions of this research are: first, the application of the PTDF method developed for determining transmission and congestion costing, which is compared with other proven methods; the viability of the developed method is explained in the methodology, discussion and conclusion chapters. Second, the development of a comprehensive DMF which helps decision makers to analyse and select costing approaches according to their requirements, since in the DMF all the costing approaches have been integrated to achieve the ROTC. Third, the composite methodology for calculating the ROTC has been formed into suites of algorithms and MATLAB programs for each part of the DMF, which are further described in the methodology section. Finally, the dissertation concludes with suggestions for future work.
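A minimal sketch of a DC-approximation PTDF calculation of the kind applied here is shown below for an assumed 3-bus network; the topology, reactances and the example transaction are illustrative and are not taken from the 7-bus, IEEE 30-bus or Indian utility test systems.

```python
import numpy as np

# DC Power Transfer Distribution Factors (PTDF) for a small illustrative network
lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.2)]   # (from_bus, to_bus, reactance in p.u.)
n_bus, ref = 3, 0                                  # bus 0 is the slack/reference bus

n_line = len(lines)
Bbus = np.zeros((n_bus, n_bus))
Bf = np.zeros((n_line, n_bus))
for l, (i, j, x) in enumerate(lines):
    b = 1.0 / x
    Bbus[i, i] += b
    Bbus[j, j] += b
    Bbus[i, j] -= b
    Bbus[j, i] -= b
    Bf[l, i] = b
    Bf[l, j] = -b

# Remove the reference bus, invert the reduced susceptance matrix, and pad back
keep = [k for k in range(n_bus) if k != ref]
X = np.zeros((n_bus, n_bus))
X[np.ix_(keep, keep)] = np.linalg.inv(Bbus[np.ix_(keep, keep)])

ptdf = Bf @ X          # ptdf[l, i]: change in flow on line l per MW injected at bus i
print("PTDF matrix (rows = lines, columns = injection buses):")
print(np.round(ptdf, 3))

# Example: flow impact of a 100 MW transaction from bus 2 to the reference bus
injection = np.zeros(n_bus)
injection[2] = 100.0
print("line flows for a 100 MW transaction at bus 2:", np.round(ptdf @ injection, 1))
```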
Abstract:
Understanding how virus strains offer protection against closely related emerging strains is vital for creating effective vaccines. For many viruses, including Foot-and-Mouth Disease Virus (FMDV) and the Influenza virus, where multiple serotypes often co-circulate, in vitro testing of large numbers of vaccines can be infeasible. Therefore the development of an in silico predictor of cross-protection between strains is important to help optimise vaccine choice. Vaccines will offer cross-protection against closely related strains, but not against those that are antigenically distinct. To be able to predict cross-protection we must understand the antigenic variability within a virus serotype and within distinct lineages of a virus, and identify the antigenic residues and evolutionary changes that cause the variability. In this thesis we present a family of sparse hierarchical Bayesian models for detecting relevant antigenic sites in virus evolution (SABRE), as well as an extended version of the method, the extended SABRE (eSABRE) method, which better takes into account the data collection process. The SABRE methods are a family of sparse Bayesian hierarchical models that use spike and slab priors to identify sites in the viral protein which are important for the neutralisation of the virus. We demonstrate how the SABRE methods can be used to identify antigenic residues within different serotypes, and show how the SABRE method outperforms established methods, namely mixed-effects models based on forward variable selection or L1 regularisation, on both synthetic and viral datasets. In addition we test a number of different versions of the SABRE method, comparing conjugate and semi-conjugate prior specifications and an alternative to the spike and slab prior: the binary mask model. We also propose novel proposal mechanisms for the Markov chain Monte Carlo (MCMC) simulations, which improve mixing and convergence over the established component-wise Gibbs sampler. The SABRE method is then applied to datasets from FMDV and the Influenza virus in order to identify a number of known antigenic residues and to provide hypotheses about other potentially antigenic residues. We also demonstrate how the SABRE methods can be used to create accurate predictions of the important evolutionary changes of the FMDV serotypes. We then provide an extended version of the SABRE method, the eSABRE method, based on a latent variable model. The eSABRE method further takes into account the structure of the datasets for FMDV and the Influenza virus through the latent variable model and improves the modelling of the error. We show how the eSABRE method outperforms the SABRE methods in simulation studies and propose a new information criterion for selecting the random-effects factors that should be included in the eSABRE method: the block integrated Widely Applicable Information Criterion (biWAIC). We demonstrate how biWAIC performs comparably to two other methods for selecting the random-effects factors, and combine it with the eSABRE method to apply it to two large Influenza datasets. Inference in these large datasets is computationally infeasible with the SABRE methods, but as a result of the improved structure of the likelihood, the eSABRE method offers a computational improvement, allowing it to be used on these datasets. The results of the eSABRE method show that it can be used in a fully automatic manner to identify a large number of antigenic residues on a variety of the antigenic sites of two Influenza serotypes, as well as making predictions of a number of nearby sites that may also be antigenic and are worthy of further experimental investigation.
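To illustrate the core spike-and-slab idea behind the SABRE family (though not the full hierarchical model, its random effects or the eSABRE latent-variable extension), the sketch below runs a simple Gibbs sampler for variable selection in a Gaussian linear model with fixed noise and slab variances; the simulated data and all hyperparameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated data: 20 candidate residues/covariates, only 3 truly relevant (assumed)
n, p = 100, 20
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[[2, 7, 15]] = [1.0, -0.8, 0.6]
sigma2 = 0.25
y = X @ true_beta + rng.normal(0.0, np.sqrt(sigma2), n)

tau2, prior_incl = 1.0, 0.2       # slab variance and prior inclusion probability (assumed)
n_iter, burn = 4_000, 1_000

beta = np.zeros(p)
gamma = np.zeros(p, dtype=bool)
incl_count = np.zeros(p)

for it in range(n_iter):
    for j in range(p):
        xj = X[:, j]
        # Residual excluding covariate j
        r = y - X @ beta + xj * beta[j]
        v = 1.0 / (xj @ xj / sigma2 + 1.0 / tau2)
        mu = v * (xj @ r) / sigma2
        # Posterior odds of inclusion: prior odds times the marginal likelihood ratio
        log_odds = (np.log(prior_incl / (1.0 - prior_incl))
                    + 0.5 * np.log(v / tau2) + 0.5 * mu ** 2 / v)
        prob = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30.0, 30.0)))
        if rng.uniform() < prob:
            gamma[j] = True
            beta[j] = rng.normal(mu, np.sqrt(v))
        else:
            gamma[j] = False
            beta[j] = 0.0
    if it >= burn:
        incl_count += gamma

print("posterior inclusion probabilities:")
print(np.round(incl_count / (n_iter - burn), 2))
```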
Abstract:
This document analyses the determinants of the intermediation margin for the Colombian financial system between 1989 and 2003. Using a dynamic estimation of the effects generated by specific variables of activity, taxes and market structure, it tracks the financial intermediation margin over a period marked by both liberalisation and crisis.
Abstract:
This work assessed the environmental impacts of the production and use of 1 MJ of hydrous ethanol (E100) in Brazil in prospective scenarios (2020-2030), considering the deployment of technologies currently under development and better agricultural practices. The life cycle assessment technique was employed, using the CML method for the life cycle impact assessment and the Monte Carlo method for the uncertainty analysis. Abiotic depletion, global warming, human toxicity, ecotoxicity, photochemical oxidation, acidification, and eutrophication were the environmental impact categories analyzed. Results indicate that the proposed improvements (especially no-till farming, scenarios s2 and s4) would lead to environmental benefits in prospective scenarios compared to current ethanol production (scenario s0). Combined first- and second-generation ethanol production (scenarios s3 and s4) would require less agricultural land but would not perform better than the projected first-generation ethanol, although the uncertainties are relatively high. The best use of 1 ha of sugar cane was also assessed, considering the displacement of conventional products by ethanol and electricity. No-till practices combined with the production of first-generation ethanol and electricity (scenario s2) would lead to the largest mitigation effects for global warming and abiotic depletion. For the remaining categories, emissions would not be mitigated by the utilization of the sugar cane products. However, this conclusion is sensitive to the displaced electricity sources.
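A small sketch of the Monte Carlo uncertainty analysis step is given below: lognormal uncertainty on an assumed inventory of greenhouse gas emissions per MJ of ethanol is propagated to a global warming score using IPCC-style characterisation factors; the inventory values and uncertainty ranges are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(11)
n_runs = 10_000   # Monte Carlo runs (assumed)

# Hypothetical life-cycle inventory for 1 MJ of ethanol: emissions (kg/MJ) modelled
# as lognormal to reflect inventory uncertainty (values are illustrative only)
co2 = rng.lognormal(mean=np.log(0.020), sigma=0.10, size=n_runs)   # fossil CO2
ch4 = rng.lognormal(mean=np.log(4e-5),  sigma=0.30, size=n_runs)
n2o = rng.lognormal(mean=np.log(1e-5),  sigma=0.40, size=n_runs)   # mainly from fertiliser

# Characterisation to global warming potential (kg CO2-eq per kg, IPCC-style factors)
gwp = co2 * 1.0 + ch4 * 28.0 + n2o * 265.0

low, med, high = np.percentile(gwp, [2.5, 50, 97.5])
print("GWP of 1 MJ ethanol: median %.4f kg CO2-eq (95%% interval %.4f-%.4f)"
      % (med, low, high))
```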
Abstract:
Often in biomedical research we deal with continuous (clustered) proportion responses, ranging between zero and one, that quantify the disease status of the cluster units. Interestingly, the study population might also consist of relatively disease-free as well as highly diseased subjects, contributing proportion values in the closed interval [0, 1]. Regression on a variety of parametric densities with support lying in (0, 1), such as beta regression, can assess important covariate effects; however, these densities are deemed inappropriate in the presence of zeros and/or ones. To evade this, we introduce a class of general proportion densities, and further augment the probabilities of zero and one to this general proportion density, controlling for the clustering. Our approach is Bayesian and presents a computationally convenient framework amenable to available freeware. Bayesian case-deletion influence diagnostics based on q-divergence measures are automatic from the Markov chain Monte Carlo output. The methodology is illustrated using both simulation studies and an application to a real dataset from a clinical periodontology study.
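A toy sketch of the zero-one augmented proportion density idea is given below: a mixture that places point masses at 0 and 1 and a beta density on (0, 1); the mixing probabilities and beta parameters are assumed for illustration, and the full clustered Bayesian model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_augmented_beta(n, p0, p1, a, b):
    """Draw from a zero-one augmented beta: P(Y=0)=p0, P(Y=1)=p1,
    and Y ~ Beta(a, b) with probability 1 - p0 - p1."""
    u = rng.uniform(size=n)
    y = rng.beta(a, b, size=n)
    y[u < p0] = 0.0
    y[(u >= p0) & (u < p0 + p1)] = 1.0
    return y

# Illustrative parameters: 15% disease-free units, 5% fully diseased units
y = sample_augmented_beta(5_000, p0=0.15, p1=0.05, a=2.0, b=5.0)

# The mean decomposes as p1 * 1 + (1 - p0 - p1) * a / (a + b)
print("empirical mean:   %.3f" % y.mean())
print("theoretical mean: %.3f" % (0.05 + 0.80 * 2.0 / 7.0))
```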