972 results for Monte Carlo codes
Abstract:
1. Previous glucagon receptor gene (GCGR) studies have shown a Gly40Ser mutation to be more prevalent in essential hypertension and to affect glucagon binding affinity to its receptor. An Alu-repeat poly(A) polymorphism colocalized to GCGR was used in the present study to test for association and linkage in hypertension, as well as association in obesity development. 2. Using a cross-sectional approach, 85 hypertensives and 95 normotensives were genotyped using polymerase chain reaction primers flanking the Alu-repeat. Both hypertensive and normotensive populations were subdivided into lean and obese categories based on body mass index (BMI) to determine involvement of this variant in obesity. For the linkage study, 89 Australian Caucasian hypertension-affected sibships (174 sibpairs) were genotyped and the results were analysed using GENEHUNTER, Mapmaker Sibs, ERPA and SPLINK (all freely available from http://linkage.rockefeller.edu/soft/list.html). 3. Cross-sectional results for both hypertension and obesity were analysed using Chi-squared and Monte Carlo analyses. Results did not show an association of this variant with either hypertension (χ2 = 6.9, P = 0.14; Monte Carlo χ2 = 7.0, P = 0.11; n = 5000) or obesity (χ2 = 3.3, P = 0.35; Monte Carlo χ2 = 3.26, P = 0.34; n = 5000). In addition, results from the linkage study using hypertensive sib-pairs did not indicate linkage of the poly(A) repeat with hypertension. Hence, results did not indicate a role for the Alu-repeat in either hypertension or obesity. However, as the heterozygosity of this poly(A) repeat is low (35%), a larger number of hypertensive sib-pairs may be required to draw definitive conclusions.
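The Monte Carlo chi-squared P values reported above (n = 5000 replicates) can be obtained by permuting phenotype labels and recomputing the test statistic. The sketch below illustrates the idea under assumed genotype counts; the table shape and counts are placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)

# Hypothetical genotype counts (rows: hypertensive/normotensive,
# columns: poly(A) allele-length classes); not the study's data.
observed = np.array([[30, 25, 20, 10],
                     [35, 30, 20, 10]])

chi2_obs, _, _, _ = chi2_contingency(observed)

# Monte Carlo P value: permute phenotype labels, recompute chi-squared.
labels = np.repeat([0, 1], observed.sum(axis=1))
genotypes = np.concatenate([np.repeat(np.arange(observed.shape[1]), row)
                            for row in observed])
n_sim, exceed = 5000, 0
for _ in range(n_sim):
    perm = rng.permutation(labels)
    table = np.array([np.bincount(genotypes[perm == g], minlength=observed.shape[1])
                      for g in (0, 1)])
    chi2_sim, _, _, _ = chi2_contingency(table)
    exceed += chi2_sim >= chi2_obs

print(f"observed chi2 = {chi2_obs:.2f}, Monte Carlo P = {(exceed + 1) / (n_sim + 1):.3f}")
```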
Abstract:
Interest in chromosome 18 in essential hypertension comes from comparative mapping of rat blood pressure quantitative trait loci (QTL), familial orthostatic hypotensive syndrome studies, and essential hypertension pedigree linkage analyses indicating that a locus or loci on human chromosome 18 may play a role in hypertension development. To further investigate involvement of chromosome 18 in human essential hypertension, the present study utilized a linkage scan approach to genotype twelve microsatellite markers spanning human chromosome 18 in 177 Australian Caucasian hypertensive (HT) sibling pairs. Linkage analysis showed significant excess allele sharing of the D18S61 marker when analyzed with SPLINK (P = 0.00012), ANALYZE (Sibpair) (P = 0.0081) and MAPMAKER SIBS (P = 0.0001). Similarly, the D18S59 marker also showed evidence of excess allele sharing when analyzed with SPLINK (P = 0.016), ANALYZE (Sibpair) (P = 0.0095) and MAPMAKER SIBS (P = 0.014). The adenylate cyclase activating polypeptide 1 gene (ADCYAP1) is involved in vasodilation and has been co-localized to the D18S59 marker. Testing a microsatellite marker in the 3′ untranslated region of ADCYAP1 in age- and gender-matched HT and normotensive (NT) individuals showed possible association with hypertension (P = 0.038; Monte Carlo P = 0.02), but not with obesity. The present study indicates a role for chromosome 18 in essential hypertension and suggests that the genomic region near the ADCYAP1 gene, or perhaps the gene itself, may be implicated. Further investigation is required to conclusively determine the extent to which ADCYAP1 polymorphisms are involved in essential hypertension. © 2003 Wiley-Liss, Inc.
Abstract:
Monte Carlo simulations were used to investigate the relationship between the morphological characteristics and the diffusion tensor (DT) of partially aligned networks of cylindrical fibres. The orientation distributions of the fibres in each network were approximately uniform within a cone of a given semi-angle (θ0). This semi-angle was used to control the degree of alignment of the fibres. The networks studied ranged from perfectly aligned (θ0 = 0) to completely disordered (θ0 = 90°). Our results are qualitatively consistent with previous numerical models in the overall behaviour of the DT. However, we report a non-linear relationship between the fractional anisotropy (FA) of the DT and collagen volume fraction, which is different to the findings from previous work. We discuss our results in the context of diffusion tensor imaging of articular cartilage. We also demonstrate how appropriate diffusion models have the potential to enable quantitative interpretation of the experimentally measured diffusion-tensor FA in terms of collagen fibre alignment distributions.
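Two ingredients of this kind of simulation can be sketched compactly: drawing fibre axes uniformly (by solid angle) within a cone of semi-angle θ0, and computing fractional anisotropy from a tensor's eigenvalues. The sketch below is illustrative only; the "orientation tensor" used here is a toy surrogate, whereas in the study the diffusion tensor comes from Monte Carlo random walks through the fibre network.

```python
import numpy as np

def sample_cone_directions(theta0_deg, n, rng):
    """Unit vectors uniformly distributed (by solid angle) within a cone
    of semi-angle theta0 about the z-axis."""
    theta0 = np.radians(theta0_deg)
    # Uniform in cos(theta) between cos(theta0) and 1 gives uniform solid angle.
    cos_t = rng.uniform(np.cos(theta0), 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.column_stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

def fractional_anisotropy(D):
    """FA of a 3x3 symmetric tensor from its eigenvalues."""
    lam = np.linalg.eigvalsh(D)
    md = lam.mean()
    return np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))

rng = np.random.default_rng(0)
dirs = sample_cone_directions(theta0_deg=30.0, n=10000, rng=rng)

# Toy surrogate tensor built from the fibre orientation distribution
# (an outer-product average), just to show how FA responds to alignment.
D = np.einsum('ni,nj->ij', dirs, dirs) / len(dirs)
print(f"FA of orientation tensor: {fractional_anisotropy(D):.3f}")
```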
Abstract:
Background: To explore the impact of geographical remoteness and area-level socioeconomic disadvantage on colorectal cancer (CRC) survival. Methods: Multilevel logistic regression and Markov chain Monte Carlo simulations were used to analyze geographical variations in five-year all-cause and CRC-specific survival across 478 regions in Queensland, Australia, for 22,727 CRC cases aged 20–84 years diagnosed during 1997–2007. Results: Area-level disadvantage and geographic remoteness were independently associated with CRC survival. After full multivariate adjustment (both levels), patients from remote areas (odds ratio [OR]: 1.24, 95% CrI: 1.07–1.42) and from more disadvantaged quintiles (OR = 1.12, 1.15, 1.20 and 1.23 for quintiles 4, 3, 2 and 1, respectively) had lower CRC-specific survival than patients from major cities and the least disadvantaged areas. Similar associations were found for all-cause survival. Area disadvantage accounted for a substantial amount of the all-cause variation between areas. Conclusions: We have demonstrated that the area-level inequalities in survival of colorectal cancer patients cannot be explained by the measured individual-level characteristics of the patients or their cancer and remain after adjusting for cancer stage. Further research is urgently needed to clarify the factors that underlie these survival differences, including the importance of geographical differences in the clinical management of CRC.
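A random-intercept (multilevel) logistic model of the kind described can be sketched as follows, here using PyMC as an assumed stand-in for the software actually used, and with entirely synthetic patient and area data; the variable names (disadvantage, survived) are hypothetical.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)

# Hypothetical data: survival indicator (0/1), area index, and an area-level
# disadvantage covariate; stand-ins for the study's variables.
n_areas, n_patients = 50, 2000
area = rng.integers(0, n_areas, n_patients)
disadvantage = rng.normal(size=n_areas)
true_u = rng.normal(0, 0.3, n_areas)
p = 1 / (1 + np.exp(-(0.5 + 0.2 * disadvantage[area] + true_u[area])))
survived = rng.binomial(1, p)

with pm.Model() as model:
    beta0 = pm.Normal("beta0", 0, 5)
    beta_dis = pm.Normal("beta_dis", 0, 5)
    sigma_u = pm.HalfNormal("sigma_u", 1)
    u = pm.Normal("u", 0, sigma_u, shape=n_areas)   # area-level random intercepts
    eta = beta0 + beta_dis * disadvantage[area] + u[area]
    pm.Bernoulli("y", logit_p=eta, observed=survived)
    trace = pm.sample(1000, tune=1000, chains=2)

print(float(trace.posterior["beta_dis"].mean()))
```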
Abstract:
The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks; most of them, however, do not generalize well to large-scale networks. To tackle this, we propose a statistical formulation of the problem together with a trans-dimensional simulated annealing algorithm to solve it effectively. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that our approach offers similar performance on small-scale problems. However, we also demonstrate the capability of our approach in dealing with large-scale problems and show that it produces better results than two alternative heuristics designed to address the scalability issue of BIP. Finally, we show the versatility of our approach in a number of specific scenarios.
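As a hedged illustration of the optimisation machinery (not the paper's statistical formulation or its trans-dimensional moves), the sketch below runs a plain simulated annealing search over a fixed number of camera positions with a simple coverage objective; the grid, target points, sensing radius and cooling schedule are all assumptions.

```python
import math
import random

random.seed(0)

# Candidate camera locations on a grid and points that should be covered
# (placeholders standing in for a real scene model).
candidates = [(x, y) for x in range(10) for y in range(10)]
targets = [(random.uniform(0, 9), random.uniform(0, 9)) for _ in range(60)]
RADIUS, N_CAMERAS = 2.5, 5

def coverage(cams):
    """Fraction of target points within RADIUS of at least one camera."""
    covered = sum(any(math.dist(t, c) <= RADIUS for c in cams) for t in targets)
    return covered / len(targets)

state = random.sample(candidates, N_CAMERAS)
best, best_val = list(state), coverage(state)
T = 1.0
for step in range(5000):
    # Propose relocating one selected camera to another candidate position.
    proposal = list(state)
    proposal[random.randrange(N_CAMERAS)] = random.choice(candidates)
    delta = coverage(proposal) - coverage(state)
    if delta > 0 or random.random() < math.exp(delta / T):
        state = proposal
    if coverage(state) > best_val:
        best, best_val = list(state), coverage(state)
    T *= 0.999  # geometric cooling

print(f"best coverage: {best_val:.2f} with cameras at {best}")
```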
Abstract:
This study considered the problem of predicting survival based on three alternative models: a single Weibull, a mixture of Weibulls and a cure model. Instead of the common procedure of choosing a single “best” model, where “best” is defined in terms of goodness of fit to the data, a Bayesian model averaging (BMA) approach was adopted to account for model uncertainty. This was illustrated using a case study in which the aim was the description of lymphoma cancer survival with covariates given by phenotypes and gene expression. The results of this study indicate that if the sample size is sufficiently large, one of the three models emerges as having the highest probability given the data, as indicated by the goodness-of-fit measure, the Bayesian information criterion (BIC). However, when the sample size was reduced, no single model was revealed as “best”, suggesting that a BMA approach would be appropriate. Although a BMA approach can compromise on goodness of fit to the data (when compared to the true model), it can provide robust predictions and facilitate more detailed investigation of the relationships between gene expression and patient survival. Keywords: Bayesian modelling; Bayesian model averaging; Cure model; Markov chain Monte Carlo; Mixture model; Survival analysis; Weibull distribution
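When BIC is the goodness-of-fit measure, a common approximation weights each candidate model by exp(-ΔBIC/2), normalised across models, and BMA predictions are the weight-averaged predictions of the individual models. A minimal sketch with hypothetical BIC values (not the study's results):

```python
import numpy as np

# Hypothetical BIC values for the three candidate survival models.
bic = {"single Weibull": 2105.3, "Weibull mixture": 2098.7, "cure model": 2101.2}

delta = np.array(list(bic.values())) - min(bic.values())
weights = np.exp(-0.5 * delta)
weights /= weights.sum()

for name, w in zip(bic, weights):
    print(f"{name}: approximate posterior model probability = {w:.3f}")

# A BMA survival prediction is then the weighted average of the model-specific
# predictions: S_BMA(t) = sum_k w_k * S_k(t).
```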
Abstract:
Plug-in electric vehicles will soon be connected to residential distribution networks in large numbers and will add to already overburdened residential feeders. However, as battery technology improves, plug-in electric vehicles will also be able to support networks as small distributed generation units by transferring the energy stored in their batteries into the grid. Even though the increase in plug-in electric vehicle connections is gradual, their connection points and charging/discharging levels are random. Therefore, such single-phase bidirectional power flows can have an adverse effect on the voltage unbalance of a three-phase distribution network. In this article, a voltage unbalance sensitivity analysis based on the charging/discharging levels and connection points of plug-in electric vehicles in a residential low-voltage distribution network is presented. Due to the many uncertainties in plug-in electric vehicle ratings and connection points and in the network load, a Monte Carlo based stochastic analysis is developed to predict voltage unbalance in the network in the presence of plug-in electric vehicles. A failure index is introduced to demonstrate the probability of non-standard voltage unbalance in the network due to plug-in electric vehicles.
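The failure index described above is essentially the proportion of random scenarios in which the voltage unbalance exceeds the standard limit. The sketch below shows the shape of such a Monte Carlo loop with a crude placeholder voltage model in place of a full three-phase load flow; the connection probabilities, charger ratings and feeder voltage-sensitivity coefficient are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
a = np.exp(2j * np.pi / 3)   # Fortescue operator

def vuf(v_abc):
    """Voltage unbalance factor |V2|/|V1| from three phase-voltage phasors."""
    va, vb, vc = v_abc
    v1 = (va + a * vb + a**2 * vc) / 3    # positive-sequence component
    v2 = (va + a**2 * vb + a * vc) / 3    # negative-sequence component
    return abs(v2) / abs(v1)

n_scenarios, n_houses = 5000, 30
failures = 0
for _ in range(n_scenarios):
    # Random connection phase and charging (+) / discharging (-) power per house (kW).
    phase = rng.integers(0, 3, n_houses)
    pev_kw = rng.choice([0.0, 3.7, 7.4, -3.7], n_houses, p=[0.5, 0.25, 0.15, 0.1])
    house_kw = rng.uniform(0.5, 3.0, n_houses)          # background residential load
    p_phase = np.array([np.sum((house_kw + pev_kw)[phase == k]) for k in range(3)])
    # Crude placeholder voltage model: nominal 230 V minus a drop proportional
    # to the phase loading (0.6 V/kW is an assumed feeder sensitivity).
    v_mag = 230.0 - 0.6 * p_phase
    v_abc = v_mag * np.exp(1j * np.array([0.0, -2 * np.pi / 3, 2 * np.pi / 3]))
    failures += vuf(v_abc) > 0.02                       # 2% unbalance limit

print(f"failure index ≈ {failures / n_scenarios:.3f}")
```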
Abstract:
Voltage unbalance is a major power quality problem in low voltage residential feeders due to the random location and rating of single-phase rooftop photovoltaic (PV) units. In this paper, two different improvement methods, based on the application of series (DVR) and parallel (DSTATCOM) custom power devices, are investigated to mitigate the voltage unbalance problem in these feeders. First, based on load flow analysis carried out in MATLAB, the effectiveness of these two custom power devices in reducing voltage unbalance is studied for urban and semi-urban/rural feeders containing rooftop PVs, from the points of view of installation location and rating. Later, a Monte Carlo based stochastic analysis is carried out to investigate their efficacy under uncertainty in the load and in PV rating and location in the network. After the numerical analyses, a converter topology and a control algorithm are proposed for the DSTATCOM and DVR for balancing the network voltage at their point of common coupling. A state feedback control, based on a pole-shift technique, is developed to regulate the voltage at the output of the DSTATCOM and DVR converters such that voltage balancing is achieved in the network. The dynamic feasibility of voltage unbalance reduction and voltage profile improvement in LV feeders, using the proposed structure and control algorithm for the DSTATCOM and DVR, is verified through detailed PSCAD/EMTDC simulations.
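As a hedged illustration of a pole-shift state feedback design (not the paper's converter model or controller), the sketch below discretises a simple LC output-filter model and uses Ackermann's formula to shift the closed-loop poles radially inside the undamped open-loop resonance poles; the component values, sampling time and damping factor are assumptions.

```python
import numpy as np
from scipy.signal import cont2discrete

# Simple LC output-filter state-space model (states: inductor current,
# capacitor voltage); component values are illustrative assumptions.
L, C, Ts = 5e-3, 50e-6, 1e-4
A = np.array([[0.0, -1.0 / L],
              [1.0 / C, 0.0]])
B = np.array([[1.0 / L],
              [0.0]])
Ad, Bd, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), Ts)

# Open-loop poles sit on the unit circle (undamped LC resonance);
# "shift" them radially inwards to add damping while keeping their frequency.
lam = np.linalg.eigvals(Ad)
p = 0.8 * lam / np.abs(lam)

# Ackermann's formula for the state-feedback gain K placing the poles at p.
phi = Ad @ Ad - p.sum().real * Ad + np.prod(p).real * np.eye(2)
Co = np.hstack([Bd, Ad @ Bd])                  # controllability matrix
K = np.array([[0.0, 1.0]]) @ np.linalg.inv(Co) @ phi

print("closed-loop poles:", np.linalg.eigvals(Ad - Bd @ K))
```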
Abstract:
Monitoring stream networks through time provides important ecological information. The sampling design problem is to choose locations where measurements are taken so as to maximise the information gathered about physicochemical and biological variables on the stream network. This paper uses a pseudo-Bayesian approach, averaging a utility function over a prior distribution, to find a design which maximises the average utility. We use models for correlations of observations on the stream network that are based on stream network distances and described by moving average error models. The utility functions used reflect the needs of the experimenter, such as prediction of location values or estimation of parameters. We propose an algorithmic approach to design, with the mean utility of a design estimated using Monte Carlo techniques and an exchange algorithm used to search for optimal sampling designs. In particular, we focus on the problems of finding an optimal design from a set of fixed designs and finding an optimal subset of a given set of sampling locations. As there are many different variables to measure, such as chemical, physical and biological measurements at each location, designs are derived from models based on different types of response variables: continuous, counts and proportions. We apply the methodology to a synthetic example and to the Lake Eacham stream network on the Atherton Tablelands in Queensland, Australia. We show that the optimal designs depend very much on the choice of utility function, varying from space-filling to clustered designs and mixtures of these, but, given the utility function, designs are relatively robust to the type of response variable.
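The general pattern described, a Monte Carlo estimate of expected utility combined with an exchange algorithm over candidate sampling sites, can be sketched as below. The candidate coordinates, covariance model and log-determinant utility are placeholders rather than the stream-network distance models and utilities used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Candidate sampling locations (coordinates are placeholders, not a real
# stream network) and an exponential covariance model in Euclidean distance.
candidates = rng.uniform(0, 10, size=(30, 2))
prior_draws = rng.gamma(2.0, 1.5, size=100)   # common prior draws of the range parameter

def covariance(sites, range_par):
    d = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
    return np.exp(-d / range_par)

def expected_utility(idx):
    """Monte Carlo estimate of expected utility: a simple log-determinant
    utility averaged over the prior draws of the range parameter."""
    total = 0.0
    for range_par in prior_draws:
        _, logdet = np.linalg.slogdet(covariance(candidates[idx], range_par))
        total += logdet
    return total / len(prior_draws)

# Exchange algorithm: swap a selected site for an unselected candidate
# whenever the estimated expected utility improves.
design = list(rng.choice(len(candidates), 6, replace=False))
improved = True
while improved:
    improved = False
    for i in range(len(design)):
        for c in set(range(len(candidates))) - set(design):
            trial = design.copy()
            trial[i] = c
            if expected_utility(trial) > expected_utility(design):
                design, improved = trial, True
                break

print("selected sites:", sorted(int(s) for s in design))
```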
Abstract:
Due to knowledge gaps in relation to urban stormwater quality processes, an in-depth understanding of model uncertainty can enhance decision making. Uncertainty in stormwater quality models can originate from a range of sources, such as the complexity of urban rainfall-runoff-stormwater pollutant processes and the paucity of observed data. Unfortunately, studies relating to epistemic uncertainty, which arises from the simplification of reality, are limited, and this uncertainty is often deemed mostly unquantifiable. This paper presents a statistical modelling framework for ascertaining epistemic uncertainty associated with pollutant wash-off under a regression modelling paradigm, using Ordinary Least Squares Regression (OLSR) and Weighted Least Squares Regression (WLSR) methods with a Bayesian/Gibbs sampling statistical approach. The study results confirmed that WLSR, assuming probability-distributed data, provides more realistic uncertainty estimates of the observed and predicted wash-off values than OLSR modelling. It was also noted that the Bayesian/Gibbs sampling approach is superior to the classical statistical and deterministic approaches most commonly adopted in water quality modelling. The study outcomes confirmed that the prediction error associated with wash-off replication is relatively high due to limited data availability. The uncertainty analysis also highlighted the variability of the wash-off modelling coefficient k as a function of complex physical processes, which are primarily influenced by surface characteristics and rainfall intensity.
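A conjugate Gibbs sampler for a weighted linear regression of the kind described can be sketched as follows; the simulated wash-off data, priors and observation weights are assumptions, and setting all weights to one recovers the OLSR case.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated wash-off style data (assumption): response vs a rainfall-intensity
# covariate, with lower weights on noisier replicates.
n = 60
x = rng.uniform(20, 120, n)                  # rainfall intensity (mm/h)
y = 0.8 + 0.015 * x + rng.normal(0, 0.2, n)  # synthetic response
X = np.column_stack([np.ones(n), x])
w = rng.choice([0.5, 1.0], n)                # observation weights (WLSR); all 1.0 gives OLSR
W = np.diag(w)

# Conjugate Gibbs sampler: beta | sigma2 is Gaussian, sigma2 | beta is inverse-gamma.
tau2, a0, b0 = 100.0, 0.01, 0.01             # vague priors
beta, sigma2 = np.zeros(2), 1.0
samples = []
for it in range(5000):
    V = np.linalg.inv(X.T @ W @ X / sigma2 + np.eye(2) / tau2)
    m = V @ (X.T @ W @ y) / sigma2
    beta = rng.multivariate_normal(m, V)
    resid = y - X @ beta
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * np.sum(w * resid**2)
    sigma2 = 1.0 / rng.gamma(a_n, 1.0 / b_n)  # inverse-gamma draw
    if it >= 1000:                            # discard burn-in
        samples.append(np.r_[beta, sigma2])

post = np.array(samples)
print("posterior means (intercept, slope, sigma2):", post.mean(axis=0).round(3))
```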
Abstract:
Purpose: The goal of this work was to set out a methodology for measuring and reporting small field relative output and to assess the application of published correction factors across a population of linear accelerators. Methods and materials: Measurements were made at 6 MV on five Varian iX accelerators using two PTW T60017 unshielded diodes. Relative output readings and profile measurements were made for nominal square field sizes of side 0.5 to 1.0 cm. The actual in-plane (A) and cross-plane (B) field widths were taken to be the FWHM at the 50% isodose level. An effective field size, defined as FSeff = √(A·B), was calculated and is presented as a field size metric. FSeff was used to linearly interpolate between published Monte Carlo (MC) calculated k_{Qclin,Qmsr}^{fclin,fmsr} values to correct for the diode over-response in small fields. Results: The relative output data reported as a function of the nominal field size differed across the accelerator population by up to nearly 10%. However, using the effective field size for reporting showed that the actual output ratios were consistent across the accelerator population to within the experimental uncertainty of ±1.0%. Correcting the measured relative output using k_{Qclin,Qmsr}^{fclin,fmsr} at both the nominal and effective field sizes produced output factors that were not identical but differed by much less than the reported experimental and/or MC statistical uncertainties. Conclusions: In general, the proposed methodology removes much of the ambiguity in reporting and interpreting small field dosimetric quantities and facilitates a clear dosimetric comparison across a population of linacs.
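The correction step described amounts to computing FSeff from the measured field widths and interpolating the published correction factors at that effective field size. A minimal sketch, with a placeholder correction-factor table rather than the published values:

```python
import numpy as np

# Placeholder table of correction factors versus field size (cm);
# these numbers are illustrative, not the published Monte Carlo values.
table_field_size = np.array([0.5, 0.6, 0.8, 1.0, 1.5])
table_k = np.array([0.952, 0.962, 0.978, 0.989, 0.998])

def corrected_output(reading_ratio, A, B):
    """Apply a field-size-interpolated correction to a diode output ratio.

    A, B: measured in-plane / cross-plane FWHM (cm) at the 50% isodose level.
    """
    fs_eff = np.sqrt(A * B)                       # effective (equivalent square) field size
    k = np.interp(fs_eff, table_field_size, table_k)
    return reading_ratio * k, fs_eff

of, fs = corrected_output(reading_ratio=0.70, A=0.62, B=0.58)
print(f"FS_eff = {fs:.2f} cm, corrected output factor = {of:.3f}")
```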
Abstract:
Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position has on OPFs, and setting an acceptable uncertainty on the OPF of 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 mm to 100 mm, using a nominal photon energy of 6 MV. Results: According to the practical definition established in this project, field sizes < 15 mm were considered to be very small for 6 MV beams when the maximal field size uncertainty was 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or the field size uncertainty was reduced to 0.5 mm, field sizes < 12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus, the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes < 12 mm. Source occlusion also caused a large change in OPF for field sizes < 8 mm. Based on the results of this study, field sizes < 12 mm were considered to be theoretically very small for 6 MV beams. Conclusions: Extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as the output factor measurement for each field size setting, and also very precise detector alignment, is required at field sizes at least < 12 mm and, more conservatively, < 15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
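The practical criterion can be illustrated by estimating, from an output-factor curve, how much a 1 mm field-size error shifts the OPF and flagging the field sizes where that shift exceeds 1%. The OPF values below are illustrative placeholders, not measured or Monte Carlo data:

```python
import numpy as np

# Illustrative 6 MV output-factor curve versus square field side (mm);
# values are placeholders, not measured or Monte Carlo data.
field_mm = np.array([4, 6, 8, 10, 12, 15, 20, 30, 50, 100])
opf = np.array([0.62, 0.74, 0.82, 0.87, 0.90, 0.925, 0.945, 0.965, 0.985, 1.00])

# Change in OPF caused by a 1 mm field-size error, via the local gradient.
dOPF_dmm = np.gradient(opf, field_mm)
relative_change = np.abs(dOPF_dmm) * 1.0 / opf   # fractional change per 1 mm

very_small = field_mm[relative_change > 0.01]
print("field sizes where a 1 mm error shifts OPF by >1%:", very_small)
```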
Abstract:
The use of graphical processing unit (GPU) parallel processing is becoming a part of mainstream statistical practice. The reliance of Bayesian statistics on Markov chain Monte Carlo (MCMC) methods makes the applicability of parallel processing not immediately obvious. It is illustrated that there are substantial reductions in computational time for MCMC and other methods of evaluation when the likelihood is computed using GPU parallel processing. Examples use data from the Global Terrorism Database to model terrorist activity in Colombia from 2000 through 2010 and a likelihood based on the explicit convolution of two negative-binomial processes. Results show decreases in computational time by a factor of over 200. Factors influencing these improvements and guidelines for programming parallel implementations of the likelihood are discussed.
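The likelihood kernel described, the probability mass of a sum of two independent negative-binomial counts, can be written as an explicit convolution; the same array arithmetic is what would be offloaded to a GPU (for instance via a NumPy-compatible GPU array library). A sketch with hypothetical counts and parameters:

```python
import numpy as np
from scipy.stats import nbinom

def convolved_nb_pmf(y, r1, p1, r2, p2):
    """P(Y = y) where Y = X1 + X2, X1 ~ NB(r1, p1), X2 ~ NB(r2, p2),
    computed by explicit convolution over the support 0..y."""
    k = np.arange(y + 1)
    return np.sum(nbinom.pmf(k, r1, p1) * nbinom.pmf(y - k, r2, p2))

def log_likelihood(counts, r1, p1, r2, p2):
    return np.sum([np.log(convolved_nb_pmf(y, r1, p1, r2, p2)) for y in counts])

# Hypothetical weekly incident counts (not the Global Terrorism Database data).
counts = np.array([3, 0, 5, 2, 7, 1, 4, 6, 2, 3])
print(f"log-likelihood: {log_likelihood(counts, r1=2.0, p1=0.4, r2=1.5, p2=0.6):.3f}")
```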
Abstract:
We present a novel approach for developing summary statistics for use in approximate Bayesian computation (ABC) algorithms using indirect inference. We embed this approach within a sequential Monte Carlo algorithm that is completely adaptive. This methodological development was motivated by an application involving data on macroparasite population evolution modelled with a trivariate Markov process. The main objective of the analysis is to compare inferences on the Markov process when considering two different indirect models. The two indirect models are based on a Beta-Binomial model and a three component mixture of Binomials, with the former providing a better fit to the observed data.
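The indirect-inference idea is to use the fitted parameters of an auxiliary model (here a Beta-Binomial) as the summary statistics. The sketch below embeds this in plain ABC rejection rather than the adaptive sequential Monte Carlo algorithm of the paper, and uses a simple stand-in simulator instead of the trivariate Markov process; the tolerance, prior and data sizes are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, gammaln

rng = np.random.default_rng(11)

def betabinom_negloglik(params, counts, n_trials):
    a, b = np.exp(params)                        # optimise on the log scale
    k = counts
    ll = (gammaln(n_trials + 1) - gammaln(k + 1) - gammaln(n_trials - k + 1)
          + betaln(k + a, n_trials - k + b) - betaln(a, b))
    return -np.sum(ll)

def auxiliary_summary(counts, n_trials=20):
    """Indirect-inference summary: MLE of the auxiliary Beta-Binomial parameters."""
    res = minimize(betabinom_negloglik, x0=[0.0, 0.0], args=(counts, n_trials))
    return res.x

def simulator(theta, n_obs=100, n_trials=20):
    """Stand-in simulator (not the macroparasite Markov process): draws
    over-dispersed binomial counts whose dispersion depends on theta."""
    p = rng.beta(theta[0], theta[1], n_obs)
    return rng.binomial(n_trials, p)

observed = simulator([2.0, 5.0])
s_obs = auxiliary_summary(observed)

# Plain ABC rejection with the indirect-inference summaries (a sequential
# Monte Carlo version would adapt the tolerance over a sequence of rounds).
accepted = []
for _ in range(2000):
    theta = rng.uniform(0.5, 10.0, size=2)       # prior draw
    s_sim = auxiliary_summary(simulator(theta))
    if np.linalg.norm(s_sim - s_obs) < 0.5:      # fixed tolerance for the sketch
        accepted.append(theta)

print(f"accepted {len(accepted)} draws; posterior mean ≈ {np.mean(accepted, axis=0)}")
```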
Abstract:
Introduction: This study examines and compares the dosimetric quality of radiotherapy treatment plans for prostate carcinoma across a cohort of 163 patients treated at 5 centres: 83 treated with three-dimensional conformal radiotherapy (3DCRT), 33 treated with intensity-modulated radiotherapy (IMRT) and 47 treated with volumetric-modulated arc therapy (VMAT). Methods: Treatment plan quality was evaluated in terms of target dose homogeneity and organ-at-risk sparing, through the use of a set of dose metrics. These included the mean, maximum and minimum doses; the homogeneity and conformity indices for the target volumes; and a selection of dose coverage values relevant to each organ-at-risk. Statistical significance was evaluated using two-tailed Welch's t-tests. The Monte Carlo DICOM ToolKit software was adapted to permit the evaluation of dose metrics from DICOM data exported from a commercial radiotherapy treatment planning system. Results: The 3DCRT treatment plans offered greater planning target volume dose homogeneity than the other two treatment modalities. The IMRT and VMAT plans offered greater dose reduction in the organs-at-risk, with increased compliance with recommended organ-at-risk dose constraints compared to conventional 3DCRT treatments. When compared to each other, IMRT and VMAT did not provide significantly different treatment plan quality for like-sized tumour volumes. Conclusions: This study indicates that IMRT and VMAT provide similar dosimetric quality, which is superior to that achieved with 3DCRT.
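As a hedged illustration of this kind of comparison, the sketch below computes a homogeneity index from hypothetical target-volume voxel doses for two plan cohorts and compares them with a two-tailed Welch's t-test; the HI convention used ((D2% − D98%)/D50%) is one common choice and is not necessarily the study's definition.

```python
import numpy as np
from scipy.stats import ttest_ind

def homogeneity_index(doses):
    """HI = (D2% - D98%) / D50% from the voxel doses of the target volume
    (one common convention; the study may use a different definition)."""
    d2, d50, d98 = np.percentile(doses, [98, 50, 2])
    return (d2 - d98) / d50

rng = np.random.default_rng(9)
# Hypothetical PTV voxel doses (Gy) for two plan cohorts.
hi_3dcrt = [homogeneity_index(rng.normal(78, 1.0, 5000)) for _ in range(83)]
hi_vmat = [homogeneity_index(rng.normal(78, 1.6, 5000)) for _ in range(47)]

# Two-tailed Welch's t-test (unequal variances) between the cohorts.
t, p = ttest_ind(hi_3dcrt, hi_vmat, equal_var=False)
print(f"Welch's t = {t:.2f}, two-tailed P = {p:.4f}")
```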