969 results for Markov chain Monte Carlo techniques
Abstract:
A simulation model adopting a health system perspective showed population-based screening with DXA, followed by alendronate treatment of persons with osteoporosis, or with anamnestic fracture and osteopenia, to be cost-effective in Swiss postmenopausal women from age 70, but not in men. INTRODUCTION: We assessed the cost-effectiveness of a population-based screen-and-treat strategy for osteoporosis (DXA followed by alendronate treatment if osteoporotic, or osteopenic in the presence of fracture), compared to no intervention, from the perspective of the Swiss health care system. METHODS: A published Markov model assessed by first-order Monte Carlo simulation was refined to reflect the diagnostic process and treatment effects. Women and men entered the model at age 50. Main screening ages were 65, 75, and 85 years. Age at bone densitometry was flexible for persons fracturing before the main screening age. Realistic assumptions were made with respect to persistence with intended 5 years of alendronate treatment. The main outcome was cost per quality-adjusted life year (QALY) gained. RESULTS: In women, costs per QALY were Swiss francs (CHF) 71,000, CHF 35,000, and CHF 28,000 for the main screening ages of 65, 75, and 85 years. The threshold of CHF 50,000 per QALY was reached between main screening ages 65 and 75 years. Population-based screening was not cost-effective in men. CONCLUSION: Population-based DXA screening, followed by alendronate treatment in the presence of osteoporosis, or of fracture and osteopenia, is a cost-effective option in Swiss postmenopausal women after age 70.
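The first-order Monte Carlo evaluation of a Markov model described above walks individual patients through health states year by year. A minimal sketch in Python, with entirely illustrative states, transition probabilities, and utilities (none of these are the study's inputs):

```python
import random

# Toy osteoporosis-style Markov model. All probabilities and utilities
# below are made-up illustrations, not the study's parameters.
P = {  # annual transition probabilities per state
    "well":          {"well": 0.93, "fracture": 0.05, "dead": 0.02},
    "fracture":      {"post_fracture": 0.95, "dead": 0.05},
    "post_fracture": {"fracture": 0.04, "post_fracture": 0.93, "dead": 0.03},
    "dead":          {"dead": 1.0},
}
UTILITY = {"well": 0.85, "fracture": 0.60, "post_fracture": 0.75, "dead": 0.0}

def simulate_person(years, rng):
    """Walk one individual through the model, accumulating QALYs."""
    state, qalys = "well", 0.0
    for _ in range(years):
        qalys += UTILITY[state]
        r, cum = rng.random(), 0.0
        for nxt, p in P[state].items():
            cum += p
            if r < cum:
                state = nxt
                break
    return qalys

def mean_qalys(n, years=20, seed=1):
    """First-order (individual-level) Monte Carlo mean over n persons."""
    rng = random.Random(seed)
    return sum(simulate_person(years, rng) for _ in range(n)) / n
```

Costs could be accumulated alongside QALYs in the same loop, with the incremental cost-effectiveness ratio then taken between the screened and unscreened arms.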
Abstract:
Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study the statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. It is shown that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks occurring at the censoring times, whereas there is no such problem with the least squares method. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former approach uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
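A moment-style estimator of this kind can be sketched as a kernel-smoothed event count divided by the number of subjects still under observation. This is a simplified stand-in for the paper's estimator, and the bandwidth h would in practice be chosen by the proposed bootstrap rather than fixed by hand:

```python
import math

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def rate_estimate(t, event_times, censor_times, h):
    """Kernel-smoothed occurrence rate at time t: smoothed event count
    over subjects, divided by the number still under observation.
    A simplified sketch, not the paper's exact estimator."""
    at_risk = sum(1 for c in censor_times if c >= t)
    if at_risk == 0:
        return 0.0
    smoothed = sum(gaussian_kernel((t - s) / h) / h
                   for times in event_times for s in times)
    return smoothed / at_risk
```

Here `event_times` holds one list of event times per subject and `censor_times` the corresponding end-of-observation times; evaluating `rate_estimate` on a grid of t values traces out the estimated rate curve.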
Abstract:
In this dissertation, the problem of creating effective large-scale Adaptive Optics (AO) systems control algorithms for the new generation of giant optical telescopes is addressed. The effectiveness of AO control algorithms is evaluated in several respects, such as computational complexity, compensation error rejection, and robustness, i.e., reasonable insensitivity to system imperfections. The results of this research are summarized as follows: 1. Robustness study of the Sparse Minimum Variance Pseudo Open Loop Controller (POLC) for multi-conjugate adaptive optics (MCAO). An AO system model that accounts for various system errors has been developed and applied to check the stability and performance of the POLC algorithm, which is one of the most promising approaches for future AO systems control. It has been shown through numerous simulations that, despite the initial assumption that exact system knowledge is necessary for the POLC algorithm to work, it is highly robust against various system errors. 2. Predictive Kalman Filter (KF) and Minimum Variance (MV) control algorithms for MCAO. The limiting performance of the non-dynamic Minimum Variance and dynamic KF-based phase estimation algorithms for MCAO has been evaluated by Monte Carlo simulations. The validity of a simple near-Markov autoregressive phase-dynamics model has been tested, and its adequate ability to predict the turbulence phase has been demonstrated for both single- and multi-conjugate AO. It has also been shown that the more complicated KF approach yields no performance improvement over the much simpler MV algorithm in the case of MCAO. 3. Sparse predictive Minimum Variance control algorithm for MCAO. A temporal prediction stage has been added to the non-dynamic MV control algorithm in such a way that no additional computational burden is introduced.
It has been confirmed through simulations that the use of phase prediction makes it possible to significantly reduce the system sampling rate and thus overall computational complexity while both maintaining the system stable and effectively compensating for the measurement and control latencies.
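The near-Markov autoregressive phase model mentioned above can be illustrated with a scalar AR(1) toy: the one-step predictor a*phi[k] should beat simply holding the last value, which is what makes prediction useful for hiding measurement and control latencies. The coefficient and noise level below are arbitrary assumptions, not fitted turbulence statistics:

```python
import random

def simulate_ar1(n, a=0.95, sigma=0.1, seed=0):
    """Generate a near-Markov AR(1) sequence phi[k+1] = a*phi[k] + noise,
    a toy stand-in for the temporal evolution of one turbulent phase value."""
    rng = random.Random(seed)
    phi = [0.0]
    for _ in range(n - 1):
        phi.append(a * phi[-1] + rng.gauss(0.0, sigma))
    return phi

def prediction_gain(phi, a):
    """Ratio of the mean-square error of the one-step AR(1) predictor
    a*phi[k] to that of the zero-order hold phi[k] (no prediction).
    Values below 1 mean prediction helps."""
    pred_err = sum((phi[k + 1] - a * phi[k]) ** 2 for k in range(len(phi) - 1))
    hold_err = sum((phi[k + 1] - phi[k]) ** 2 for k in range(len(phi) - 1))
    return pred_err / hold_err
```

In a full MCAO simulation the same prediction step would be applied layer by layer to the estimated phase, which is what allows the sampling rate to be reduced without destabilizing the loop.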
Abstract:
BEAMnrc, a code for simulating medical linear accelerators based on EGSnrc, has been benchmarked and used extensively in the scientific literature and is therefore often considered the gold standard for Monte Carlo simulations in radiotherapy applications. However, its long computation times make it too slow for clinical routine and often even for research purposes without a large investment in computing resources. VMC++ is a much faster code thanks to the intensive use of variance reduction techniques and a much faster implementation of the condensed history technique for charged particle transport. A research version of this code is also capable of simulating the full head of linear accelerators operated in photon mode (excluding multileaf collimators, hard and dynamic wedges). In this work, a validation of the full head simulation at 6 and 18 MV is performed, simulating with VMC++ and BEAMnrc the addition of one head component at a time and comparing the resulting phase space files. For the comparison, photon and electron fluence, photon energy fluence, mean energy, and photon spectra are considered. The largest absolute differences are found in the energy fluences. For all the simulations of the different head components, very good agreement (differences in energy fluences between VMC++ and BEAMnrc <1%) is obtained. Only one particular case at 6 MV shows a somewhat larger energy fluence difference of 1.4%. Dosimetrically, these phase space differences imply agreement between the two codes at the <1% level, making the VMC++ head module suitable for full head simulations with a considerable gain in efficiency and no loss of accuracy.
Abstract:
Civil infrastructure provides essential services for the development of both society and economy. It is very important to manage systems efficiently to ensure sound performance. However, there are challenges in extracting information from available data, which also necessitates the establishment of methodologies and frameworks to assist stakeholders in the decision-making process. This research proposes methodologies to evaluate systems performance by maximizing the use of available information, in an effort to build and maintain sustainable systems. Under the guidance of the holistic problem formulation proposed by Mukherjee and Muga, this research specifically investigates problem-solving methods that measure and analyze metrics to support decision making. Failures are inevitable in system management. A methodology is developed to describe the arrival pattern of failures in order to assist engineers in failure rescues and budget prioritization, especially when funding is limited. It reveals that blockage arrivals are not totally random; smaller meaningful subsets show good random behavior. The failure rate over time is additionally analyzed by applying existing reliability models and non-parametric approaches. A scheme is further proposed to depict rates over the lifetime of a given facility system. Further analysis of sub-data sets is also performed, with a discussion of context reduction. Infrastructure condition is another important indicator of systems performance. The challenges in predicting facility condition are the transition probability estimates and model sensitivity analysis. Methods are proposed to estimate transition probabilities by investigating the long-term behavior of the model and the relationship between transition rates and probabilities. To integrate heterogeneities, model sensitivity analysis is performed for the application of non-homogeneous Markov chain models.
Scenarios are investigated by assuming that transition probabilities follow a Weibull-regressed function and fall within an interval estimate. For each scenario, multiple cases are simulated using Monte Carlo simulation. Results show that variations in the outputs are sensitive to the probability regression, while for the interval estimate the outputs have variations similar to the inputs. Life cycle cost analysis and life cycle assessment of a sewer system are performed comparing three different pipe types: reinforced concrete pipe (RCP), non-reinforced concrete pipe (NRCP), and vitrified clay pipe (VCP). Life cycle cost analysis is performed for the material extraction, construction, and rehabilitation phases. In the rehabilitation phase, a Markov chain model is applied to support the rehabilitation strategy. In the life cycle assessment, the Economic Input-Output Life Cycle Assessment (EIO-LCA) tools are used to estimate environmental emissions for all three phases. Emissions are then compared quantitatively among alternatives to support decision making.
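The scenario study described (Weibull-regressed transition probabilities sampled within an interval estimate, evaluated by Monte Carlo) can be sketched as follows. The five-state condition scale, the Weibull shape, and the scale interval are illustrative assumptions, not the dissertation's fitted values:

```python
import math
import random

def weibull_step_prob(t, shape, scale):
    """Probability of dropping one condition state during year t, derived
    from a Weibull cumulative hazard (an illustrative assumption)."""
    H = lambda u: (u / scale) ** shape  # cumulative hazard
    return 1.0 - math.exp(-(H(t + 1) - H(t)))

def simulate_condition(years, shape, scale, rng, worst=5):
    """One non-homogeneous Markov chain realization; state 1 is best."""
    state = 1
    path = [state]
    for t in range(years):
        if state < worst and rng.random() < weibull_step_prob(t, shape, scale):
            state += 1
        path.append(state)
    return path

def monte_carlo_final_state(n, years=30, shape=2.0,
                            scale_lo=20.0, scale_hi=40.0, seed=7):
    """Sample the Weibull scale from an interval estimate and average
    the final condition state over n simulated cases."""
    rng = random.Random(seed)
    finals = [simulate_condition(years, shape,
                                 rng.uniform(scale_lo, scale_hi), rng)[-1]
              for _ in range(n)]
    return sum(finals) / n
```

Repeating the outer loop with different regression parameters versus different interval widths is what separates the two sensitivity findings reported above.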
Abstract:
For half a century the integrated circuits (ICs) that make up the heart of electronic devices have been steadily improving by shrinking at an exponential rate. However, as the current crop of ICs get smaller and the insulating layers involved become thinner, electrons leak through due to quantum mechanical tunneling. This is one of several issues that will bring an end to this incredible streak of exponential improvement of this type of transistor device, after which future improvements will have to come from employing fundamentally different transistor architectures rather than fine-tuning and miniaturizing the metal-oxide-semiconductor field effect transistors (MOSFETs) in use today. Several new transistor designs, some designed and built here at Michigan Tech, involve electrons tunneling their way through arrays of nanoparticles. We use a multi-scale approach to model these devices and study their behavior. For investigating the tunneling characteristics of the individual junctions, we use a first-principles approach to model conduction between sub-nanometer gold particles. To estimate the change in energy due to the movement of individual electrons, we use the finite element method to calculate electrostatic capacitances. The kinetic Monte Carlo method allows us to use our knowledge of these details to simulate the dynamics of an entire device (sometimes consisting of hundreds of individual particles) and watch as a device 'turns on' and starts conducting an electric current. Scanning tunneling microscopy (STM) and the closely related scanning tunneling spectroscopy (STS) are a family of powerful experimental techniques that allow for the probing and imaging of surfaces and molecules at atomic resolution. However, interpretation of the results often requires comparison with theoretical and computational models. We have developed a new method for calculating STM topographs and STS spectra.
This method combines an established method for approximating the geometric variation of the electronic density of states with a modern method for calculating spin-dependent tunneling currents, offering a unique balance between accuracy and accessibility.
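The kinetic Monte Carlo method referred to above can be sketched generically: at each step an event is chosen with probability proportional to its rate, and time advances by an exponentially distributed increment. This is a textbook residence-time sketch, not the authors' device simulator:

```python
import math
import random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo step: pick event i with probability
    rates[i]/sum(rates), and draw the elapsed time from an exponential
    distribution with the total rate."""
    total = sum(rates)
    r = rng.random() * total
    cum, chosen = 0.0, len(rates) - 1
    for i, k in enumerate(rates):
        cum += k
        if r < cum:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total  # Exp(total) waiting time
    return chosen, dt

def run_kmc(rates, n_steps, seed=42):
    """Count how often each event fires over n_steps; with fixed rates the
    counts approach the rate ratios. In a device simulation the rates would
    be recomputed after each step from the new charge configuration."""
    rng = random.Random(seed)
    counts = [0] * len(rates)
    t = 0.0
    for _ in range(n_steps):
        i, dt = kmc_step(rates, rng)
        counts[i] += 1
        t += dt
    return counts, t
```

In the tunneling-device setting, each "event" is an electron hop across one junction, with its rate supplied by the first-principles and electrostatic calculations described above.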
Abstract:
The physics of the operation of single-electron tunneling devices (SEDs) and single-electron tunneling transistors (SETs), especially of those with multiple nanometer-sized islands, has remained poorly understood in spite of intensive experimental and theoretical research. This computational study examines the current-voltage (IV) characteristics of multi-island single-electron devices using a newly developed multi-island transport simulator (MITS) that is based on semi-classical tunneling theory and kinetic Monte Carlo simulation. The dependence of device characteristics on physical device parameters is explored, and the physical mechanisms that lead to the Coulomb blockade (CB) and Coulomb staircase (CS) characteristics are proposed. Simulations using MITS demonstrate that the overall IV characteristics in a device with a random distribution of islands are the result of a complex interplay between the factors that affect the tunneling rates and are fixed a priori (e.g., island sizes, island separations, temperature, gate bias) and the evolving charge state of the system, which changes as the source-drain bias (VSD) is changed. With increasing VSD, a multi-island device has to overcome multiple discrete energy barriers (up-steps) before it reaches the threshold voltage (Vth). Beyond Vth, current flow is rate-limited by slow junctions, which leads to the CS structures in the IV characteristic. Each step in the CS is characterized by a unique distribution of island charges with an associated distribution of tunneling probabilities. MITS simulation studies of one-dimensional (1D) disordered chains show that longer chains are better suited for switching applications, as Vth increases with increasing chain length. They also retain CS structures at higher temperatures better than shorter chains.
In sufficiently disordered 2D systems, we demonstrate that there may exist a dominant conducting path (DCP) for conduction, which makes the 2D device behave as a quasi-1D device. The existence of a DCP is sensitive to the device structure, but is robust with respect to changes in temperature, gate bias, and VSD. A side gate in 1D and 2D systems can effectively control Vth. We argue that devices with smaller island sizes and narrower junctions may be better suited for practical applications, especially at room temperature.
Abstract:
During the past decade microbeam radiation therapy has evolved from preclinical studies to a stage in which clinical trials can be planned, using spatially fractionated, highly collimated, high-intensity beams like those generated at the x-ray ID17 beamline of the European Synchrotron Radiation Facility. The production of such microbeams, with typical full width at half maximum (FWHM) values between 25 and 100 microm and center-to-center (c-t-c) spacings of 100-400 microm, requires a multislit collimator with either fixed or adjustable microbeam width. The mechanical regularity of such devices is the most important property required to produce an array of identical microbeams; it ensures treatment reproducibility and reliable use of Monte Carlo-based treatment planning systems. New high-precision wire cutting techniques allow the fabrication of these collimators from tungsten carbide. We present a variable slit width collimator as well as a single slit device with a fixed setting of 50 microm FWHM and 400 microm c-t-c, both able to cover irradiation fields of 50 mm width, deemed to meet clinical requirements. Important improvements have reduced the standard deviation from 5.5 microm to less than 1 microm for a nominal FWHM value of 25 microm. The specifications of both devices, the methods used to measure these characteristics, and the results are presented.
Abstract:
The aim of our study was to develop a modeling framework suitable to quantify the incidence, absolute number and economic impact of osteoporosis-attributable hip, vertebral and distal forearm fractures, with a particular focus on change over time, and with application to the situation in Switzerland from 2000 to 2020. A Markov process model was developed and analyzed by Monte Carlo simulation. A demographic scenario provided by the Swiss Federal Statistical Office and various Swiss and international data sources were used as model inputs. Demographic and epidemiologic input parameters were reproduced correctly, confirming the internal validity of the model. The proportion of the Swiss population aged 50 years or over will rise from 33.3% in 2000 to 41.3% in 2020. At the total population level, osteoporosis-attributable incidence will rise from 1.16 to 1.54 per 1,000 person-years in the case of hip fracture, from 3.28 to 4.18 per 1,000 person-years in the case of radiographic vertebral fracture, and from 0.59 to 0.70 per 1,000 person-years in the case of distal forearm fracture. Osteoporosis-attributable hip fracture numbers will rise from 8,375 to 11,353, vertebral fracture numbers will rise from 23,584 to 30,883, and distal forearm fracture numbers will rise from 4,209 to 5,186. Population-level osteoporosis-related direct medical inpatient costs per year will rise from 713.4 million Swiss francs (CHF) to CHF 946.2 million. These figures correspond to 1.6% and 2.2% of Swiss health care expenditures in 2000. The modeling framework described can be applied to a wide variety of settings. It can be used to assess the impact of new prevention, diagnostic and treatment strategies. In Switzerland incidences of osteoporotic hip, vertebral and distal forearm fracture will rise by 33%, 27%, and 19%, respectively, between 2000 and 2020, if current prevention and treatment patterns are maintained. Corresponding absolute fracture numbers will rise by 36%, 31%, and 23%.
Related direct medical inpatient costs are predicted to increase by 33%; however, this estimate is subject to uncertainty due to limited availability of input data.
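The percentage rises quoted in the conclusion follow directly from the incidence and fracture-count figures given earlier in the abstract; a quick arithmetic check:

```python
def pct_rise(v2000, v2020):
    """Percent increase from 2000 to 2020, rounded to the nearest whole percent."""
    return round(100.0 * (v2020 - v2000) / v2000)

# Figures from the abstract: incidence per 1,000 person-years (2000, 2020)
# and absolute fracture numbers (2000, 2020) for the three fracture sites.
incidence = {"hip": (1.16, 1.54), "vertebral": (3.28, 4.18), "forearm": (0.59, 0.70)}
numbers = {"hip": (8375, 11353), "vertebral": (23584, 30883), "forearm": (4209, 5186)}

incidence_rise = {k: pct_rise(*v) for k, v in incidence.items()}
numbers_rise = {k: pct_rise(*v) for k, v in numbers.items()}
```

These reproduce the stated 33%/27%/19% incidence rises and 36%/31%/23% rises in absolute fracture numbers.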
Abstract:
Transparent and translucent objects involve both light reflection and transmission at surfaces. This paper presents a physically based transmission model for rough surfaces. The surface is assumed to be locally smooth, and statistical techniques are applied to calculate light transmission through a local illumination area. We have obtained an analytical expression for single scattering. The analytical model has been compared to our Monte Carlo simulations as well as to previous simulations, and good agreement has been achieved. The presented model has potential applications in the realistic rendering of transparent and translucent objects.
Abstract:
During the second half of the 20th century untreated sewage released from housing and industry into natural waters led to the degradation of many freshwater lakes and reservoirs worldwide. In order to mitigate eutrophication, wastewater treatment plants, including Fe-induced phosphorus precipitation, were implemented throughout the industrialized world, leading to reoligotrophication in many freshwater lakes. To understand and assess the effects of reoligotrophication on primary productivity, we analyzed 28 years of 14C assimilation rates, as well as other biotic and abiotic parameters, such as global radiation, nutrient concentrations, and plankton densities in peri-alpine Lake Lucerne, Switzerland. Using a simple productivity-light relationship, we estimated continuous primary production and discuss the relation between productivity and the observed limnological parameters. Furthermore, we assessed the uncertainty of our modeling approach, based on monthly 14C assimilation measurements, using Monte Carlo simulations. Results confirm that monthly sampling of productivity is sufficient for identifying long-term trends in productivity and that conservation management has successfully improved water quality during the past three decades by reducing nutrients and primary production in the lake. However, even though nutrient concentrations have remained constant in recent years, annual primary production varies significantly from year to year. Despite the fact that nutrient concentrations have decreased by more than an order of magnitude, primary production has decreased only slightly. These results suggest that primary production correlates well with nutrient availability, but meteorological conditions lead to interannual variability regardless of the trophic status of the lake. Accordingly, in oligotrophic freshwaters meteorological forcing may reduce productivity, impacting the entire food chain of the ecosystem.
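The uncertainty assessment described (Monte Carlo propagation through a simple productivity-light relationship) can be sketched as follows. The saturating functional form and all parameter values are illustrative assumptions, not the study's fitted model:

```python
import random

def productivity(par, p_max, i_k):
    """Saturating productivity-light relationship P = P_max * I / (I + I_k).
    An illustrative form; the study's actual relationship may differ."""
    return p_max * par / (par + i_k)

def annual_production_mc(par_series, p_max_mean, p_max_sd, i_k, n=1000, seed=3):
    """Propagate uncertainty in the fitted P_max (e.g. from monthly 14C
    calibrations) through to annual production by Monte Carlo sampling.
    Returns the mean and standard deviation of the simulated totals."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        p_max = rng.gauss(p_max_mean, p_max_sd)
        totals.append(sum(productivity(i, p_max, i_k) for i in par_series))
    mean = sum(totals) / n
    sd = (sum((x - mean) ** 2 for x in totals) / (n - 1)) ** 0.5
    return mean, sd
```

The spread of the simulated annual totals is what indicates whether monthly calibration sampling is dense enough to resolve long-term trends.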
Abstract:
A three-dimensional model has been proposed that uses Monte Carlo and fast Fourier transform convolution techniques to calculate the dose distribution from a fast neutron beam. This method transports scattered neutrons and photons in the forward, lateral, and backward directions, and protons, electrons, and positrons in the forward and lateral directions, by convolving energy spread kernels with initial interaction available energy distributions. The primary neutron and photon spectra have been derived from narrow beam attenuation measurements. The positions and strengths of the effective primary neutron, scattered neutron, and photon sources have been derived from dual ion chamber measurements. The size of the effective primary neutron source has been measured using a copper activation technique. Heterogeneous tissue calculations require a weighted sum of two convolutions for each component, since the kernels must be invariant for FFT convolution. Comparisons between calculations and measurements were performed for several water and heterogeneous phantom geometries.
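The requirement that kernels be spatially invariant for FFT convolution is the convolution theorem at work: multiplication in frequency space replaces the direct spatial sum. A 1-D toy of the core operation, using NumPy, with made-up arrays standing in for the 3-D energy-release distribution and spread kernel:

```python
import numpy as np

def fft_convolve(terma, kernel):
    """Linear convolution of an energy-release distribution with an
    invariant spread kernel via FFT. Zero-padding to the full output
    length avoids circular wrap-around."""
    n = len(terma) + len(kernel) - 1
    F = np.fft.rfft(terma, n) * np.fft.rfft(kernel, n)
    return np.fft.irfft(F, n)
```

For heterogeneous tissue, as the abstract notes, one such convolution per kernel no longer suffices, hence the weighted sum of two convolutions per component.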
Abstract:
Complete NotI, SfiI, XbaI and BlnI cleavage maps of Escherichia coli K-12 strain MG1655 were constructed. Techniques used included: CHEF pulsed-field gel electrophoresis; transposon mutagenesis; fragment hybridization to the ordered λ library of Kohara et al.; fragment and cosmid hybridization to Southern blots; correlation of fragments and cleavage sites with EcoMap, a sequence-modified version of the genomic restriction map of Kohara et al.; and correlation of cleavage sites with DNA sequence databases. In all, 105 restriction sites were mapped and correlated with the EcoMap coordinate system. NotI, SfiI, XbaI and BlnI restriction patterns of five commonly used E. coli K-12 strains were compared to those of MG1655. The variability between strains, some of which are separated by numerous steps of mutagenic treatment, is readily detectable by pulsed-field gel electrophoresis. A model is presented to account for the differences between the strains on the basis of simple insertions, deletions, and in one case an inversion. Insertions and deletions ranged in size from 1 kb to 86 kb. Several of the larger features have previously been characterized, and some of the smaller rearrangements can potentially account for previously reported genetic features of these strains. Some aspects of the frequency and distribution of NotI, SfiI, XbaI and BlnI cleavage sites were analyzed using a method based on Markov chain theory. Overlaps of Dam and Dcm methylase sites with XbaI and SfiI cleavage sites were examined. The one XbaI-Dam overlap in the database is in accord with the expected frequency of this overlap. Certain types of SfiI-Dcm overlaps are overrepresented. Of the four subtypes of SfiI-Dcm overlap, only one has a partial inhibitory effect on the activity of SfiI. Recognition sites for all four enzymes are rarer than expected based on oligonucleotide frequency data, with this effect being much stronger for XbaI and BlnI than for NotI and SfiI.
The latter two enzyme sites are rare mainly due to apparent negative selection against GGCC (both) and CGGCCG (NotI). The former two enzyme sites are rare mainly due to effects of the VSP repair system on certain di-, tri-, and tetranucleotides, most notably CTAG. Models are proposed to explain several of the anomalies of oligonucleotide distribution in E. coli, and the biological significance of the systems that produce these anomalies is discussed.
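Markov-chain analysis of site frequency amounts to comparing the observed count of a recognition sequence with the count expected under a Markov model fitted to the sequence's own oligonucleotide frequencies. A first-order sketch (the study's analysis may use higher orders; this version assumes every base occurs at least once in the sequence):

```python
from collections import Counter

def markov_expected_count(seq, word):
    """Expected occurrences of `word` in `seq` under a first-order Markov
    model fitted to the sequence's own mono- and dinucleotide frequencies."""
    n = len(seq)
    mono = Counter(seq)
    di = Counter(seq[i:i + 2] for i in range(n - 1))
    prob = mono[word[0]] / n
    for a, b in zip(word, word[1:]):
        prob *= di[a + b] / mono[a]  # estimated transition probability P(b | a)
    return prob * (n - len(word) + 1)

def observed_count(seq, word):
    """Observed (overlap-counting) occurrences of `word` in `seq`."""
    return sum(seq[i:i + len(word)] == word
               for i in range(len(seq) - len(word) + 1))
```

A recognition site whose observed count falls well below `markov_expected_count` is "rarer than expected," the pattern reported here for XbaI and BlnI sites and, more weakly, for NotI and SfiI.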
Abstract:
This paper reports a comparison of three modeling strategies for the analysis of hospital mortality in a sample of general medicine inpatients in a Department of Veterans Affairs medical center. Logistic regression, a Markov chain model, and longitudinal logistic regression were evaluated on predictive performance, as measured by the c-index, and on the accuracy of expected numbers of deaths compared to observed. The logistic regression used patient information collected at admission; the Markov model comprised two absorbing states for discharge and death and three transient states reflecting increasing severity of illness as measured by laboratory data collected during the hospital stay; the longitudinal regression employed Generalized Estimating Equations (GEE) to model the covariance structure of the repeated binary outcome. Results showed that the logistic regression predicted hospital mortality as well as the alternative methods but was limited in scope of application. The Markov chain provides insights into how day-to-day changes in illness severity lead to discharge or death. The longitudinal logistic regression showed that an increasing illness trajectory is associated with hospital mortality. The conclusion is reached that for standard applications in modeling hospital mortality, logistic regression is adequate, but for the new challenges facing health services research today, the alternative methods are equally predictive, practical, and can provide new insights.
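The absorbing-chain structure described (two absorbing states for discharge and death, three transient severity states) can be sketched with illustrative, unfitted daily transition probabilities; value iteration then recovers the probability of eventual death from each severity level:

```python
# Transient states 0-2 represent increasing illness severity; "discharge"
# and "death" are absorbing. These probabilities are illustrative only,
# not the paper's fitted values.
T = {
    0: {0: 0.60, 1: 0.15, "discharge": 0.24, "death": 0.01},
    1: {0: 0.20, 1: 0.50, 2: 0.15, "discharge": 0.12, "death": 0.03},
    2: {1: 0.20, 2: 0.55, "discharge": 0.05, "death": 0.20},
}

def death_probability(start, iters=500):
    """Probability of eventual death from a transient state, computed by
    value iteration on the absorbing Markov chain."""
    p = {0: 0.0, 1: 0.0, 2: 0.0}
    for _ in range(iters):
        p = {s: sum(prob * (1.0 if nxt == "death" else
                            0.0 if nxt == "discharge" else p[nxt])
                    for nxt, prob in T[s].items())
             for s in T}
    return p[start]
```

This is the sense in which the Markov model offers insight beyond admission-time logistic regression: mortality risk is decomposed into daily severity transitions rather than estimated once at admission.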
Abstract:
The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC in proton-proton collision data at a centre-of-mass energy of sqrt(s) = 7 TeV, corresponding to an integrated luminosity of 38 inverse pb. Jets are reconstructed with the anti-kt algorithm with distance parameters R=0.4 or R=0.6. Jet energy and angle corrections are determined from Monte Carlo simulations to calibrate jets with transverse momenta pt > 20 GeV and pseudorapidities |eta| < 4.5. The JES systematic uncertainty is estimated using the single isolated hadron response measured in situ and in test-beams. The JES uncertainty is less than 2.5% in the central calorimeter region (|eta| < 0.8) for jets with 60 < pt < 800 GeV, and is maximally 14% for pt < 30 GeV in the most forward region, 3.2 <= |eta| < 4.5.