975 results for markov chains monte carlo methods
Abstract:
Background: The Lescol Intervention Prevention Study (LIPS) was a multinational randomized controlled trial that showed a 47% reduction in the relative risk of cardiac death and a 22% reduction in major adverse cardiac events (MACEs) from the routine use of fluvastatin, compared with controls, in patients undergoing percutaneous coronary intervention (PCI, defined as angioplasty with or without stents). In this study, MACEs included cardiac death, nonfatal myocardial infarction, and subsequent PCI and coronary artery bypass graft. Diabetes was the greatest risk factor for MACEs. Objective: This study estimated the cost-effectiveness of fluvastatin when used for secondary prevention of MACEs after PCI in people with diabetes. Methods: A post hoc subgroup analysis of patients with diabetes from the LIPS was used to estimate the effectiveness of fluvastatin in reducing myocardial infarction, revascularization, and cardiac death. A probabilistic Markov model was developed using United Kingdom resource and cost data to estimate the additional costs and quality-adjusted life-years (QALYs) gained over 10 years from the perspective of the British National Health Service. The model contained 6 health states, and the transition probabilities were derived from the LIPS data. Crossover from fluvastatin to other lipid-lowering drugs, withdrawal from fluvastatin, and the use of lipid-lowering drugs in the control group were included. Results: In the subgroup of 202 patients with diabetes in the LIPS trial, 18 (15.0%) of 120 fluvastatin patients and 21 (25.6%) of 82 control participants were insulin dependent (P = NS). Compared with the control group, patients treated with fluvastatin can expect to gain an additional mean (SD) of 0.196 (0.139) QALY per patient over 10 years (P < 0.001) and will cost the health service an additional mean (SD) of £10 (£448) (P = NS) (mean [SD] US $16 [$689]). The additional cost per QALY gained was £51 (US $78). The key determinants of cost-effectiveness included the probabilities of repeat interventions, cardiac death, the cost of fluvastatin, and the time horizon used for the evaluation. Conclusion: Fluvastatin was an economically efficient treatment to prevent MACEs in these patients with diabetes undergoing PCI.
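As a rough illustration of the kind of probabilistic Markov cohort model described above, the sketch below propagates a cohort through hypothetical health states over a 10-year horizon and accumulates discounted costs and QALYs. The states, transition matrix, utilities, costs, and discount rate are all illustrative assumptions, not the LIPS model parameters (which used 6 states and trial-derived transition probabilities).

```python
import numpy as np

# Hypothetical 3-state cohort model; all numbers are placeholders, not the published parameters.
states = ["event-free", "post-MACE", "dead"]
P = np.array([            # annual transition probabilities (rows sum to 1)
    [0.90, 0.07, 0.03],
    [0.00, 0.88, 0.12],
    [0.00, 0.00, 1.00],
])
utility = np.array([0.80, 0.60, 0.00])   # QALY weight per state-year (illustrative)
cost = np.array([300.0, 1500.0, 0.0])    # annual cost per state (GBP, illustrative)
discount = 0.035                         # annual discount rate (assumed)

dist = np.array([1.0, 0.0, 0.0])         # whole cohort starts event-free
total_qalys, total_cost = 0.0, 0.0
for year in range(10):                   # 10-year horizon, as in the evaluation
    df = 1.0 / (1.0 + discount) ** year
    total_qalys += df * dist @ utility
    total_cost += df * dist @ cost
    dist = dist @ P                      # advance the cohort one annual cycle

print(f"Discounted QALYs per patient: {total_qalys:.3f}")
print(f"Discounted cost per patient:  {total_cost:.0f}")
```

Comparing two such cohorts (treatment vs. control) and dividing the incremental cost by the incremental QALYs gives a cost-per-QALY figure of the kind reported in the abstract.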
Abstract:
A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
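As a concrete instance of the simplest mean-field idea mentioned above, the sketch below runs the naive mean-field (fully factorized variational) fixed-point iteration m_i = tanh(h_i + sum_j J_ij m_j) for a small Ising-type model. The couplings and fields are arbitrary illustrations; the TAP approach discussed in the book would add a reaction (Onsager) term to the same equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
J = rng.normal(scale=0.2, size=(n, n))
J = (J + J.T) / 2                 # symmetric pairwise couplings
np.fill_diagonal(J, 0.0)          # no self-coupling
h = rng.normal(scale=0.5, size=n) # external fields

# Naive mean-field fixed point: m_i = tanh(h_i + sum_j J_ij m_j).
m = np.zeros(n)
for _ in range(200):
    m_new = np.tanh(h + J @ m)
    if np.max(np.abs(m_new - m)) < 1e-10:
        m = m_new
        break
    m = 0.5 * m + 0.5 * m_new     # damping for stable convergence

print("mean-field magnetizations:", np.round(m, 3))
```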
Abstract:
This work presents a two-dimensional risk assessment method based on quantifying the probability of occurrence of contaminant source terms as well as assessing the resultant impacts. The risk is calculated using Monte Carlo simulation methods, whereby synthetic contaminant source terms are generated from the same distribution as historically occurring pollution events or from an a priori probability distribution. The spatial and temporal distributions of the generated contaminant concentrations at pre-defined monitoring points within the aquifer are then simulated over repeated realisations using integrated mathematical models. The number of times user-defined concentration thresholds are exceeded is quantified as the risk. The utility of the method is demonstrated using hypothetical scenarios, and the risk of pollution from a number of sources all occurring by chance together is evaluated. The results are presented in the form of charts and spatial maps. The generated risk maps show the risk of pollution at each observation borehole, as well as the trends within the study area. The ability to generate synthetic pollution events from numerous potential sources, based on the historical frequency of their occurrence, is a major strength of the method and a significant advantage over contemporary approaches.
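A minimal sketch of the Monte Carlo exceedance-counting idea described above: synthetic source terms are drawn from an assumed historical distribution, a toy dilution function stands in for the integrated transport models, and risk is reported as the fraction of realisations in which a user-defined threshold is exceeded at each monitoring point. The distributions, distances, and transport function are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_realisations = 10_000
threshold = 5.0                                       # user-defined concentration limit (mg/L)
monitor_distances = np.array([50.0, 150.0, 400.0])    # metres from the source (illustrative)

def concentration_at(distance, source_mass):
    """Toy stand-in for the integrated transport model: simple dilution with distance."""
    return source_mass / (1.0 + 0.05 * distance)

# Synthetic source terms drawn from an assumed historical (lognormal) distribution.
source_masses = rng.lognormal(mean=1.0, sigma=0.8, size=n_realisations)

conc = concentration_at(monitor_distances[None, :], source_masses[:, None])
exceedances = conc > threshold
risk = exceedances.mean(axis=0)        # fraction of realisations above the threshold

for d, r in zip(monitor_distances, risk):
    print(f"monitoring point at {d:.0f} m: exceedance risk = {r:.3f}")
```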
Abstract:
2000 Mathematics Subject Classification: primary: 60J80, 60J85, secondary: 62M09, 92D40
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Reliability and dependability modeling can be employed during many stages of analysis of a computing system to gain insights into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially with the sizes of the models. On the other hand, simulation of the models requires analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible for the approach to explore multiple paths at the same time, thus significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation for the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow it to analyze very large models using a very small amount of storage. Often, path-based techniques must compute many paths to obtain tight bounds. In addition to presenting the basic path-based approach, we also present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on the concept of path composition whereby precomputed subpaths are composed to compute the whole paths efficiently. Another approach is based on selecting important paths (among a set of many paths) for evaluation. Many path-based techniques suffer from having to evaluate many (unimportant) paths. Evaluating the important ones helps to compute tight bounds efficiently and quickly.
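For context, the standard uniformization computation that such path-based bounding approaches build on can be sketched as follows: the transient distribution of a CTMC with generator Q is written as a Poisson-weighted sum over steps of the uniformized DTMC P = I + Q/Lambda. The small generator below is an arbitrary example, not one of the dissertation's models.

```python
import numpy as np
from scipy.stats import poisson

# Small illustrative CTMC generator (rows sum to zero); not a model from the text.
Q = np.array([
    [-3.0,  2.0,  1.0],
    [ 1.0, -4.0,  3.0],
    [ 0.5,  0.5, -1.0],
])
p0 = np.array([1.0, 0.0, 0.0])   # initial distribution
t = 2.0                          # time at which the transient measure is wanted

lam = np.max(-np.diag(Q)) * 1.05            # uniformization rate >= maximum exit rate
P = np.eye(Q.shape[0]) + Q / lam            # uniformized DTMC
K = int(poisson.ppf(1.0 - 1e-10, lam * t))  # truncation point of the Poisson sum

pt = np.zeros_like(p0)
term = p0.copy()
for k in range(K + 1):
    pt += poisson.pmf(k, lam * t) * term    # Poisson-weighted k-step distribution
    term = term @ P

print("transient distribution p(t):", np.round(pt, 6))
```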
Abstract:
This thesis investigates the issues involved in maintaining a Machine Learning model over time, covering both the versioning of the model itself and of the data on which it is trained, and the tools for monitoring the data and its distribution. The themes of Data Drift and Concept Drift are then explored, and the performance of some of the most popular Anomaly Detection techniques, such as VAE, PCA, and Monte Carlo Dropout, is evaluated.
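As a small illustration of one of the techniques mentioned (PCA used for anomaly/drift detection), the sketch below fits a PCA basis on reference data and flags new samples whose reconstruction error is unusually large. The data, number of components, and threshold rule are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference (training-time) data and new (serving-time) data; both synthetic.
X_ref = rng.normal(size=(500, 8))
X_new = rng.normal(loc=1.5, size=(200, 8))       # shifted distribution -> potential drift

# Fit a PCA basis on the reference data via SVD.
mu = X_ref.mean(axis=0)
_, _, Vt = np.linalg.svd(X_ref - mu, full_matrices=False)
components = Vt[:4]                              # keep the top 4 principal components

def reconstruction_error(X):
    Z = (X - mu) @ components.T                  # project onto the PCA subspace
    X_hat = Z @ components + mu                  # reconstruct from the projection
    return np.linalg.norm(X - X_hat, axis=1)

# Threshold = 99th percentile of the reference reconstruction error.
threshold = np.percentile(reconstruction_error(X_ref), 99)
flags = reconstruction_error(X_new) > threshold
print(f"flagged {flags.mean():.1%} of new samples as anomalous (reference rate ~1%)")
```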
Abstract:
The main goal of this paper is to establish some equivalence results on stability, recurrence, and ergodicity between a piecewise deterministic Markov process (PDMP) {X(t)} and an embedded discrete-time Markov chain {Theta(n)} generated by a Markov kernel G that can be explicitly characterized in terms of the three local characteristics of the PDMP, leading to tractable criterion results. First we establish some important results characterizing {Theta(n)} as a sampling of the PDMP {X(t)} and deriving a connection between the probability of the first return time to a set for the discrete-time Markov chains generated by G and the resolvent kernel R of the PDMP. From these results we obtain equivalence results regarding irreducibility, existence of sigma-finite invariant measures, and (positive) recurrence and (positive) Harris recurrence between {X(t)} and {Theta(n)}, generalizing the results of [F. Dufour and O. L. V. Costa, SIAM J. Control Optim., 37 (1999), pp. 1483-1502] in several directions. Sufficient conditions in terms of a modified Foster-Lyapunov criterion are also presented to ensure positive Harris recurrence and ergodicity of the PDMP. We illustrate the use of these conditions by showing the ergodicity of a capacity expansion model.
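For readers unfamiliar with the criterion mentioned above, a Foster-Lyapunov drift condition in its common generic continuous-time form (not the modified, PDMP-specific version derived in the paper) can be stated as follows.

```latex
% Generic continuous-time Foster-Lyapunov drift condition (illustrative form only;
% the paper works with a modified criterion tailored to PDMPs):
% there exist a measurable function V \ge 1, constants c > 0 and b < \infty,
% and a petite set C such that, for the extended generator \mathcal{A},
\mathcal{A} V(x) \;\le\; -c + b\,\mathbf{1}_{C}(x) \quad \text{for all } x,
% which, under suitable irreducibility assumptions, yields positive Harris
% recurrence of the process.
```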
Abstract:
We propose a statistical model to account for the gel-fluid anomalous phase transitions in charged bilayer- or lamellae-forming ionic lipids. The model Hamiltonian comprises effective attractive interactions to describe neutral-lipid membranes as well as the effect of electrostatic repulsions of the discrete ionic charges on the lipid headgroups. The latter can be counterion dissociated (charged) or counterion associated (neutral), while the lipid acyl chains may be in gel (low-temperature or high-lateral-pressure) or fluid (high-temperature or low-lateral-pressure) states. The system is modeled as a lattice gas with two distinct particle types, each associated respectively with the polar-headgroup and acyl-chain states, which can be mapped onto an Ashkin-Teller model with the inclusion of cubic terms. The model displays a rich thermodynamic behavior in terms of the chemical potential of counterions (related to added salt concentration) and lateral pressure. In particular, we show the existence of semidissociated thermodynamic phases related to the onset of charge order in the system. This type of order stems from spatially ordered counterion association to the lipid headgroups, in which charged and neutral lipids alternate in a checkerboard-like order. Within the mean-field approximation, we predict that the acyl-chain order-disorder transition is discontinuous, with the first-order line ending at a critical point, as in the neutral case. Moreover, the charge order gives rise to continuous transitions, with the associated second-order lines joining the aforementioned first-order line at critical end points. We explore the thermodynamic behavior of some physical quantities, like the specific heat at constant lateral pressure and the degree of ionization, associated with the fraction of charged lipid headgroups.
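To give a flavor of the lattice-gas framework invoked above, the sketch below runs a Metropolis Monte Carlo simulation of a generic single-species grand-canonical lattice gas. It is far simpler than the two-variable, Ashkin-Teller-type Hamiltonian treated in the paper (which is analyzed in mean field), and the coupling, chemical potential, and temperature are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

# Generic grand-canonical lattice gas: H = -J * sum_<kl> n_k n_l - mu * sum_k n_k,
# with occupation numbers n_k in {0, 1}; parameters are purely illustrative.
L, J, mu, beta = 20, 1.0, -1.8, 1.0     # lattice size, coupling, chemical potential, 1/kT
n = rng.integers(0, 2, size=(L, L))

def neighbour_sum(n, i, j):
    """Sum of occupations on the four nearest neighbours (periodic boundaries)."""
    return n[(i+1) % L, j] + n[(i-1) % L, j] + n[i, (j+1) % L] + n[i, (j-1) % L]

for sweep in range(500):
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        dn = 1 - 2 * n[i, j]                        # proposed change of occupation
        dE = -(J * neighbour_sum(n, i, j) + mu) * dn
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            n[i, j] += dn                           # Metropolis acceptance

print(f"mean occupation per site: {n.mean():.3f}")
```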
Abstract:
The electronic properties of liquid ammonia are investigated by a sequential molecular dynamics/quantum mechanics approach. Quantum mechanics calculations for the liquid phase are based on a reparametrized hybrid exchange-correlation functional that reproduces the electronic properties of ammonia clusters [(NH(3))(n); n=1-5]. For these small clusters, electron binding energies based on Green's function or electron propagator theory, coupled cluster with single, double, and perturbative triple excitations, and density functional theory (DFT) are compared. Reparametrized DFT results for the dipole moment, electron binding energies, and electronic density of states of liquid ammonia are reported. The calculated average dipole moment of liquid ammonia (2.05 +/- 0.09 D) corresponds to an increase of 27% compared to the gas phase value and it is 0.23 D above a prediction based on a polarizable model of liquid ammonia [Deng, J. Chem. Phys. 100, 7590 (1994)]. Our estimate for the ionization potential of liquid ammonia is 9.74 +/- 0.73 eV, which is approximately 1.0 eV below the gas phase value for the isolated molecule. The theoretical vertical electron affinity of liquid ammonia is predicted as 0.16 +/- 0.22 eV, in good agreement with the experimental result for the location of the bottom of the conduction band (-V(0)=0.2 eV). Vertical ionization potentials and electron affinities correlate with the total dipole moment of ammonia aggregates. (c) 2008 American Institute of Physics.
Abstract:
This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space ℝ^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration and is supposed to have a fast (associated with a small parameter epsilon > 0) and a slow behavior. Using an approach similar to that developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as epsilon goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as epsilon goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
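To make the averaging step concrete, the sketch below computes the stationary distribution of a small fast regime-switching generator (playing the role of the quasi-stationary distribution within one regime class) and uses it to average a regime-dependent rate. The generator and the rates are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical fast generator for the regimes within one class (rows sum to zero).
Q_fast = np.array([
    [-5.0,  3.0,  2.0],
    [ 4.0, -6.0,  2.0],
    [ 1.0,  3.0, -4.0],
])
# Hypothetical regime-dependent coefficient of the slow dynamics (e.g. a drift rate).
rate_per_regime = np.array([0.2, 1.0, 2.5])

# Stationary distribution pi of the fast chain: pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q_fast.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

averaged_rate = pi @ rate_per_regime   # coefficient used in the aggregated (limit) model
print("stationary distribution of the fast regimes:", np.round(pi, 4))
print(f"averaged rate used in the limit model: {averaged_rate:.4f}")
```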
Abstract:
This paper analyzes the complexity-performance trade-off of several heuristic near-optimum multiuser detection (MuD) approaches applied to the uplink of synchronous single/multiple-input multiple-output multicarrier code division multiple access (S/MIMO MC-CDMA) systems. Genetic algorithm (GA), short term tabu search (STTS) and reactive tabu search (RTS), simulated annealing (SA), particle swarm optimization (PSO), and 1-opt local search (1-LS) heuristic multiuser detection algorithms (Heur-MuDs) are analyzed in detail, using a single-objective antenna-diversity-aided optimization approach. Monte Carlo simulations show that, after convergence, the performances reached by all near-optimum Heur-MuDs are similar. However, the computational complexities may differ substantially, depending on the system operation conditions. Their complexities are carefully analyzed in order to obtain a general complexity-performance framework comparison and to show that unitary Hamming distance search MuD (uH-ds) approaches (1-LS, SA, RTS and STTS) reach the best convergence rates, and among them, the 1-LS-MuD provides the best trade-off between implementation complexity and bit error rate (BER) performance.
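To illustrate the unitary-Hamming-distance idea behind the 1-LS detector, the sketch below repeatedly flips the single bit that most improves a standard quadratic ML-type metric until no single flip helps. The correlation matrix and matched-filter output are randomly generated placeholders, not an MC-CDMA system model.

```python
import numpy as np

rng = np.random.default_rng(3)
K = 16                                   # number of users (illustrative)

# Placeholder system: random spreading, correlation matrix R, matched-filter output y.
S = rng.normal(size=(32, K)) / np.sqrt(32)
R = S.T @ S                              # user cross-correlation matrix
b_true = rng.choice([-1, 1], size=K)
y = R @ b_true + 0.1 * rng.normal(size=K)

def cost(b):
    """Quadratic ML-type metric: smaller is better."""
    return b @ R @ b - 2 * b @ y

# 1-opt local search: start from the sign of the matched-filter output and
# keep flipping the single bit that gives the largest improvement.
b = np.sign(y).astype(int)
while True:
    costs = np.array([cost(np.where(np.arange(K) == k, -b, b)) for k in range(K)])
    k_best = int(np.argmin(costs))
    if costs[k_best] >= cost(b):
        break                            # no single flip improves the metric
    b[k_best] = -b[k_best]

print("bit errors after 1-opt local search:", int(np.sum(b != b_true)))
```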
Abstract:
In the protein folding problem, solvent-mediated forces are commonly represented by intra-chain pairwise contact energies. Although this approximation has proven to be useful in several circumstances, it is limited in some other aspects of the problem. Here we show that it is possible to construct two models to represent the chain-solvent system, one with implicit and the other with explicit solvent, such that both reproduce the same thermodynamic results. Firstly, lattice models treated by analytical methods were used to show that the implicit and explicit representations of solvent effects can be energetically equivalent only if local solvent properties are time and spatially invariant. Next, applying the same reasoning used for the lattice models, two inter-consistent Monte Carlo off-lattice models for implicit and explicit solvent are constructed, with the solvent properties in the latter now allowed to fluctuate. It is then shown that the chain configurational evolution as well as the globule equilibrium conformation are significantly distinct for implicit and explicit solvent systems. Indeed, in strong contrast with the implicit solvent version, the explicit solvent model predicts: (i) a malleable globule, in agreement with the estimated large protein-volume fluctuations; (ii) thermal conformational stability, resembling the conformational heat resistance of globular proteins, in which radii of gyration are practically insensitive to thermal effects over a relatively wide range of temperatures; and (iii) smaller radii of gyration at higher temperatures, indicating that the chain conformational entropy in the unfolded state is significantly smaller than that estimated from random coil configurations. Finally, we comment on the meaning of these results with respect to the understanding of the folding process. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Expokit provides a set of routines aimed at computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of matrix-free Krylov subspace projection methods (Arnoldi and Lanczos processes), which is why the toolkit is capable of coping with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is a numerical issue of critical importance in the area of Markov chains, where, furthermore, the computed solution is subject to probabilistic constraints. In addition to addressing general matrix exponentials, particular attention is given to the computation of transient states of Markov chains.
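As a Python analogue of what Expokit's sparse routines do (computing the action of a matrix exponential on a vector rather than forming the full exponential), the sketch below uses SciPy's expm_multiply to obtain the transient distribution of a small CTMC. The generator is an arbitrary example, and SciPy's routine uses a scaling/Taylor-series algorithm rather than Expokit's Krylov projection.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

# Small illustrative CTMC generator Q (rows sum to zero); not tied to Expokit's examples.
Q = csr_matrix(np.array([
    [-2.0,  1.5,  0.5],
    [ 0.3, -1.3,  1.0],
    [ 0.2,  0.8, -1.0],
]))
p0 = np.array([1.0, 0.0, 0.0])   # initial probability distribution
t = 3.0

# Transient distribution p(t) = p0 * exp(t Q): apply exp(t Q^T) to p0 as a column vector.
pt = expm_multiply(Q.T * t, p0)

print("transient distribution p(t):", np.round(pt, 6))
print("probabilistic sanity check, sum =", round(float(pt.sum()), 6))
```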