62 results for Sequential Monte Carlo methods
Abstract:
A new Stata command called -mgof- is introduced. The command is used to compute distributional tests for discrete (categorical, multinomial) variables. Apart from classic large sample $\chi^2$-approximation tests based on Pearson's $X^2$, the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read 1984), large sample tests for complex survey designs and exact tests for small samples are supported. The complex survey correction is based on the approach by Rao and Scott (1981) and parallels the survey design correction used for independence tests in -svy:tabulate-. The exact tests are computed using Monte Carlo methods or exhaustive enumeration. An exact Kolmogorov-Smirnov test for discrete data is also provided.
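For readers outside Stata, the resampling idea behind such exact tests is straightforward. Below is a minimal Python sketch of a Monte Carlo exact goodness-of-fit test using Pearson's $X^2$ as the statistic; it illustrates the principle only and is not the -mgof- implementation, whose statistic family, survey corrections, and exhaustive enumeration options are richer.

```python
import numpy as np

def mc_exact_gof(counts, probs, reps=10_000, seed=None):
    """Monte Carlo exact goodness-of-fit test with Pearson's X^2.

    Draws `reps` multinomial samples under the hypothesized category
    probabilities and returns the proportion whose X^2 statistic is at
    least as large as the observed one (the Monte Carlo p-value).
    """
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    n = counts.sum()
    expected = n * np.asarray(probs)
    x2_obs = ((counts - expected) ** 2 / expected).sum()
    sims = rng.multinomial(n, probs, size=reps)
    x2_sim = ((sims - expected) ** 2 / expected).sum(axis=1)
    return (x2_sim >= x2_obs).mean()

# Example: is a six-sided die fair, given 60 observed rolls?
p_value = mc_exact_gof([5, 8, 9, 8, 10, 20], [1 / 6] * 6)
```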
Abstract:
The FANOVA (or “Sobol’-Hoeffding”) decomposition of multivariate functions has been used for high-dimensional model representation and global sensitivity analysis. When the objective function f has no simple analytic form and is costly to evaluate, computing FANOVA terms may be unaffordable due to numerical integration costs. Several approximate approaches relying on Gaussian random field (GRF) models have been proposed to alleviate these costs, where f is substituted by a (kriging) predictor or by conditional simulations. Here we focus on FANOVA decompositions of GRF sample paths, and we notably introduce an associated kernel decomposition into $4^d$ terms called KANOVA. An interpretation in terms of tensor product projections is obtained, and it is shown that projected kernels control both the sparsity of GRF sample paths and the dependence structure between FANOVA effects. Applications on simulated data show the relevance of the approach for designing new classes of covariance kernels dedicated to high-dimensional kriging.
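For orientation, the decomposition in question can be written as follows; this is the standard formulation under a product measure, with generic notation that is not necessarily the paper's:

```latex
% Sobol'-Hoeffding (FANOVA) decomposition of f under a product measure
% \mu = \mu_1 \otimes \dots \otimes \mu_d, with centred, mutually
% orthogonal effects indexed by subsets u of {1,...,d}:
f(\mathbf{x}) \;=\; \sum_{u \subseteq \{1,\dots,d\}} f_u(\mathbf{x}_u),
\qquad \int f_u \,\mathrm{d}\mu_i = 0 \quad \text{for all } i \in u.
% Applying the corresponding projections to each argument of a kernel k
% yields the KANOVA decomposition, with 2^d \times 2^d = 4^d terms:
k(\mathbf{x}, \mathbf{y}) \;=\; \sum_{u, v \subseteq \{1,\dots,d\}} k_{u,v}(\mathbf{x}, \mathbf{y}).
```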
Abstract:
Objectives To examine the extent of multiplicity of data in trial reports and to assess the impact of multiplicity on meta-analysis results. Design Empirical study on a cohort of Cochrane systematic reviews. Data sources All Cochrane systematic reviews published from issue 3 in 2006 to issue 2 in 2007 that presented a result as a standardised mean difference (SMD). We retrieved trial reports contributing to the first SMD result in each review, and downloaded review protocols. We used these SMDs to identify a specific outcome (the index outcome) for each meta-analysis from its protocol. Review methods Reviews were eligible if SMD results were based on two to ten randomised trials and if protocols described the outcome. We excluded reviews if they only presented results of subgroup analyses. Based on review protocols and index outcomes, two observers independently extracted the data necessary to calculate SMDs from the original trial reports for any intervention group, time point, or outcome measure compatible with the protocol. From the extracted data, we used Monte Carlo simulations to calculate all possible SMDs for every meta-analysis. Results We identified 19 eligible meta-analyses (including 83 trials). Published review protocols often lacked information about which data to choose. Twenty-four (29%) trials reported data for multiple intervention groups, 30 (36%) reported data for multiple time points, and 29 (35%) reported the index outcome measured on multiple scales. In 18 meta-analyses, we found multiplicity of data in at least one trial report; the median difference between the smallest and largest SMD results within a meta-analysis was 0.40 standard deviation units (range 0.04 to 0.91). Conclusions Multiplicity of data can affect the findings of systematic reviews and meta-analyses. To reduce the risk of bias, reviews and meta-analyses should comply with prespecified protocols that clearly identify time points, intervention groups, and scales of interest.
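To make the multiplicity concrete: whenever a trial report offers several data sets compatible with the protocol, each choice yields a different SMD. A minimal Python sketch with invented numbers (the helper `smd` and all values are hypothetical, for illustration only):

```python
import numpy as np

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Cohen's d with pooled SD)."""
    sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                        / (n_t + n_c - 2))
    return (mean_t - mean_c) / sd_pooled

# Hypothetical trial reporting the index outcome for several compatible
# choices (extra intervention group, extra time point, second scale);
# each row is (mean_t, sd_t, n_t, mean_c, sd_c, n_c). All numbers are
# invented for illustration.
compatible = [
    (12.1, 4.0, 30, 14.3, 4.2, 31),
    (11.5, 3.8, 30, 14.3, 4.2, 31),
    (10.9, 4.1, 28, 13.8, 4.0, 29),
    (24.0, 8.5, 30, 29.1, 9.0, 31),
]
smds = [smd(*row) for row in compatible]
print(f"SMD range across choices: {min(smds):.2f} to {max(smds):.2f}")
```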
Abstract:
A main field in biomedical optics research is diffuse optical tomography, where intensity variations of the transmitted light traversing through tissue are detected. Mathematical models and reconstruction algorithms based on finite element methods and Monte Carlo simulations describe the light transport inside the tissue and determine differences in absorption and scattering coefficients. Precise knowledge of the sample's surface shape and orientation is required to provide boundary conditions for these techniques. We propose an integrated method based on structured light three-dimensional (3-D) scanning that provides detailed surface information of the object, which is usable for volume mesh creation and allows the normalization of the intensity dispersion between surface and camera. The experimental setup is complemented by polarization difference imaging to avoid overlaying byproducts caused by inter-reflections and multiple scattering in semitransparent tissue.
Abstract:
Tissue phantoms play a central role in validating biomedical imaging techniques. Here we employ a series of methods that aim to fully determine the optical properties, i.e., the refractive index n, absorption coefficient μa, transport mean free path ℓ∗, and scattering coefficient μs of a TiO2-in-gelatin phantom intended for use in optoacoustic imaging. For the determination of the key parameters μa and ℓ∗, we employ a variant of time-of-flight measurements, where fiber optodes are immersed into the phantom to minimize the influence of boundaries. The robustness of the method was verified with Monte Carlo simulations, where the experimentally obtained values served as input parameters for the simulations. The excellent agreement between simulations and experiments confirmed the reliability of the results. The parameters determined at 780 nm are n = 1.359 (±0.002), μ′s = 1/ℓ∗ = 0.22 (±0.02) mm⁻¹, μa = 0.0053 (+0.0006/−0.0003) mm⁻¹, and μs = 2.86 (±0.04) mm⁻¹. The asymmetry parameter g obtained from the parameters ℓ∗ and μs is 0.93, which indicates that the scattering entities are not bare TiO2 particles but large sparse clusters. The interaction between the scattering particles and the gelatin matrix should be taken into account when developing such phantoms.
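The anisotropy value follows from the standard similarity relation between the reduced and full scattering coefficients; with the rounded values quoted above:

```latex
% Similarity relation between reduced and full scattering coefficients:
\mu_s' \;=\; \mu_s\,(1 - g)
\quad\Longrightarrow\quad
g \;=\; 1 - \frac{\mu_s'}{\mu_s} \;=\; 1 - \frac{0.22}{2.86} \;\approx\; 0.92,
% which agrees with the quoted g = 0.93 within the stated uncertainties.
```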
Abstract:
A simulation model adopting a health system perspective showed population-based screening with DXA, followed by alendronate treatment of persons with osteoporosis, or with anamnestic fracture and osteopenia, to be cost-effective in Swiss postmenopausal women from age 70, but not in men. INTRODUCTION: We assessed the cost-effectiveness of a population-based screen-and-treat strategy for osteoporosis (DXA followed by alendronate treatment if osteoporotic, or osteopenic in the presence of fracture), compared to no intervention, from the perspective of the Swiss health care system. METHODS: A published Markov model assessed by first-order Monte Carlo simulation was refined to reflect the diagnostic process and treatment effects. Women and men entered the model at age 50. Main screening ages were 65, 75, and 85 years. Age at bone densitometry was flexible for persons fracturing before the main screening age. Realistic assumptions were made with respect to persistence with intended 5 years of alendronate treatment. The main outcome was cost per quality-adjusted life year (QALY) gained. RESULTS: In women, costs per QALY were Swiss francs (CHF) 71,000, CHF 35,000, and CHF 28,000 for the main screening ages of 65, 75, and 85 years. The threshold of CHF 50,000 per QALY was reached between main screening ages 65 and 75 years. Population-based screening was not cost-effective in men. CONCLUSION: Population-based DXA screening, followed by alendronate treatment in the presence of osteoporosis, or of fracture and osteopenia, is a cost-effective option in Swiss postmenopausal women after age 70.
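As a rough illustration of what a first-order (individual-level) Monte Carlo evaluation of a Markov model involves, here is a minimal Python sketch; the states, transition probabilities, costs, and utilities are invented placeholders (and discounting is omitted), not the published Swiss model:

```python
import numpy as np

# Three-state Markov model (well -> fracture -> dead) walked person by
# person, accumulating per-cycle costs and utilities into cost/QALY
# estimates. All numbers below are illustrative placeholders.
STATES = ["well", "fracture", "dead"]
P = {  # annual transition probabilities from each state
    "well":     [0.96, 0.03, 0.01],
    "fracture": [0.00, 0.93, 0.07],
    "dead":     [0.00, 0.00, 1.00],
}
COST = {"well": 0.0, "fracture": 4000.0, "dead": 0.0}    # CHF per cycle
UTILITY = {"well": 0.85, "fracture": 0.65, "dead": 0.0}  # QALYs per cycle

def simulate_person(n_cycles=40, seed=None):
    rng = np.random.default_rng(seed)
    state, cost, qalys = "well", 0.0, 0.0
    for _ in range(n_cycles):
        cost += COST[state]
        qalys += UTILITY[state]
        state = rng.choice(STATES, p=P[state])
    return cost, qalys

results = [simulate_person() for _ in range(10_000)]
mean_cost, mean_qalys = np.mean(results, axis=0)
```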
Abstract:
The conformational properties of the microtubule-stabilizing agent epothilone A (1a) and its 3-deoxy and 3-deoxy-2,3-didehydro derivatives 2 and 3 have been investigated in aqueous solution by a combination of NMR spectroscopic methods, Monte Carlo conformational searches, and NAMFIS calculations. The tubulin-bound conformation of epothilone A (1a), as previously proposed on the basis of solution NMR data, was found to represent a significant fraction of the ensemble of conformations present for the free ligands in aqueous solution.
Abstract:
During the past decade, microbeam radiation therapy has evolved from preclinical studies to a stage in which clinical trials can be planned, using spatially fractionated, highly collimated, high-intensity beams like those generated at the x-ray ID17 beamline of the European Synchrotron Radiation Facility. The production of such microbeams, typically with full width at half maximum (FWHM) values between 25 and 100 μm and center-to-center (c-t-c) spacings of 100-400 μm, requires a multislit collimator with either fixed or adjustable microbeam width. The mechanical regularity of such devices is the most important property required to produce an array of identical microbeams; it ensures treatment reproducibility and reliable use of Monte Carlo-based treatment planning systems. New high-precision wire-cutting techniques allow these collimators to be fabricated from tungsten carbide. We present a variable-slit-width collimator as well as a single-slit device with a fixed setting of 50 μm FWHM and 400 μm c-t-c, both able to cover irradiation fields of 50 mm width, deemed to meet clinical requirements. Important improvements have reduced the standard deviation from 5.5 μm to less than 1 μm for a nominal FWHM value of 25 μm. The specifications of both devices, the methods used to measure these characteristics, and the results are presented.
Abstract:
In this article we propose an exact and efficient simulation algorithm for the generalized von Mises circular distribution of order two. It is an acceptance-rejection algorithm with a piecewise linear envelope based on the local extrema and the inflexion points of the generalized von Mises density of order two. We show that these points can be obtained from the roots of polynomials of degrees four and eight, which can easily be found by the methods of Ferrari and Weierstrass. A comparative study with the von Neumann acceptance-rejection algorithm, the ratio-of-uniforms method, and a Markov chain Monte Carlo algorithm shows that this new method is generally the most efficient.
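A minimal Python sketch of the acceptance-rejection principle for this density is given below; for simplicity it uses a flat envelope bounded by a grid maximum rather than the paper's piecewise linear envelope built from the extrema and inflexion points, so it is less efficient but follows the same accept/reject logic:

```python
import numpy as np

# Generalized von Mises density of order two (unnormalized):
# f(theta) ∝ exp(k1*cos(theta - m1) + k2*cos(2*(theta - m2))).
def gvm2_unnorm(theta, k1, k2, m1, m2):
    return np.exp(k1 * np.cos(theta - m1) + k2 * np.cos(2 * (theta - m2)))

def sample_gvm2(n, k1, k2, m1, m2, seed=None):
    rng = np.random.default_rng(seed)
    grid = np.linspace(0, 2 * np.pi, 4096)
    # Crude constant envelope: slightly above the density's grid maximum.
    bound = gvm2_unnorm(grid, k1, k2, m1, m2).max() * 1.001
    out = []
    while len(out) < n:
        theta = rng.uniform(0, 2 * np.pi, size=n)   # uniform proposals
        u = rng.uniform(0, bound, size=n)
        out.extend(theta[u <= gvm2_unnorm(theta, k1, k2, m1, m2)])
    return np.array(out[:n])

draws = sample_gvm2(1000, k1=2.0, k2=1.0, m1=0.0, m2=np.pi / 4)
```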
Abstract:
Background: Recently, Cipriani and colleagues examined the relative efficacy of 12 new-generation antidepressants for major depression using network meta-analytic methods. They found that some of these medications outperformed others in patient response to treatment. However, several methodological criticisms have been raised about network meta-analysis in general and Cipriani’s analysis in particular, raising the concern that the stated superiority of some antidepressants relative to others may be unwarranted. Materials and Methods: A Monte Carlo simulation was conducted which involved replicating Cipriani’s network meta-analysis under the null hypothesis (i.e., no true differences between antidepressants). The following simulation strategy was implemented: (1) 1000 simulations were generated under the null hypothesis (i.e., under the assumption that there were no differences among the 12 antidepressants), (2) each of the 1000 simulations was network meta-analyzed, and (3) the total number of false positive results from the network meta-analyses was calculated. Findings: More than 7 times out of 10, the network meta-analysis resulted in one or more comparisons that indicated the superiority of at least one antidepressant when no such true differences among them existed. Interpretation: Based on our simulation study, the results indicated that under conditions identical to those of the 117 RCTs with 236 treatment arms contained in Cipriani et al.’s meta-analysis, one or more false claims about the relative efficacy of antidepressants will be made over 70% of the time. As others have shown as well, there is little evidence in these trials that any antidepressant is more effective than another. The tendency of network meta-analyses to generate false positive results should be considered when conducting multiple-comparison analyses.
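A stripped-down Python sketch of the same idea follows: simulate trials in which all treatments share one true response rate, test every pairwise contrast, and count the simulations with at least one spurious "significant" difference. The network structure is ignored and plain pairwise z-tests on proportions stand in for a network meta-analysis; sample sizes and the response rate are invented:

```python
import numpy as np
from itertools import combinations

# All 12 "treatments" share one true response rate (the null). For each
# simulated data set, test every pairwise contrast and record whether
# at least one comes out "significant" at the 5% level.
rng = np.random.default_rng(0)
n_treat, n_per_arm, p_true, n_sims = 12, 100, 0.5, 1000
sims_with_false_positive = 0
for _ in range(n_sims):
    rates = rng.binomial(n_per_arm, p_true, size=n_treat) / n_per_arm
    for i, j in combinations(range(n_treat), 2):
        se = np.sqrt(rates[i] * (1 - rates[i]) / n_per_arm
                     + rates[j] * (1 - rates[j]) / n_per_arm)
        if se > 0 and abs(rates[i] - rates[j]) / se > 1.96:
            sims_with_false_positive += 1
            break
print(f"share of simulations with >=1 false positive: "
      f"{sims_with_false_positive / n_sims:.2f}")
```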
Abstract:
Two new approaches to quantitatively analyze diffuse diffraction intensities from faulted layer stacking are reported. The parameters of a probability-based growth model are determined with two iterative global optimization methods: a genetic algorithm (GA) and particle swarm optimization (PSO). The results are compared with those from a third global optimization method, a differential evolution (DE) algorithm [Storn & Price (1997). J. Global Optim. 11, 341–359]. The algorithm efficiencies in the early and late stages of iteration are compared. The accuracy of the optimized parameters improves with increasing size of the simulated crystal volume. The wall clock time for computing quite large crystal volumes can be kept within reasonable limits by the parallel calculation of many crystals (clones) generated for each model parameter set on a super- or grid computer. The faulted layer stacking in single crystals of trigonal three-pointed-star-shaped tris(bicyclo[2.1.1]hexeno)benzene molecules serves as an example for the numerical computations. Based on numerical values of seven model parameters (reference parameters), nearly noise-free reference intensities of 14 diffuse streaks were simulated from 1280 clones, each consisting of 96 000 layers (reference crystal). The parameters derived from the reference intensities with GA, PSO and DE were compared with the original reference parameters as a function of the simulated total crystal volume. The statistical distribution of structural motifs in the simulated crystals is in good agreement with that in the reference crystal. The results found with the growth model for layer stacking disorder are applicable to other disorder types and modeling techniques, Monte Carlo in particular.
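The fitting loop can be sketched in a few lines of Python with SciPy's differential evolution; here a cheap analytic model stands in for the expensive stacking simulation (in the paper, streak intensities averaged over many clones), so the example runs and recovers seven "reference parameters" in the same spirit:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for the parameter-recovery setup: `model` plays the role
# of the stacking simulation; here it is a cheap analytic function so
# the sketch is runnable. DE then recovers the reference parameters
# from nearly noise-free "reference intensities".
def model(params):
    t = np.linspace(0, 1, 50)
    return sum(p * np.cos((i + 1) * np.pi * t) for i, p in enumerate(params))

rng = np.random.default_rng(1)
reference_params = rng.uniform(0, 1, size=7)   # seven model parameters
reference = model(reference_params)            # reference intensities

def misfit(params):
    return float(np.sum((model(params) - reference) ** 2))

result = differential_evolution(misfit, bounds=[(0.0, 1.0)] * 7,
                                seed=1, tol=1e-8, polish=True)
# result.x approximates reference_params
```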
Abstract:
The measurement of the jet energy resolution is presented using data recorded with the ATLAS detector in proton-proton collisions at $\sqrt{s} = 7$ TeV. The sample corresponds to an integrated luminosity of 35 pb$^{-1}$. Jets are reconstructed from energy deposits measured by the calorimeters and calibrated using different jet calibration schemes. The jet energy resolution is measured with two different in situ methods, which are found to be in agreement within uncertainties. The total uncertainties on these measurements range from 20% to 10% for jets within $|y| < 2.8$ and with transverse momenta increasing from 30 GeV to 500 GeV. Overall, the Monte Carlo simulation of the jet energy resolution agrees with the data within 10%.
Abstract:
PURPOSE A beamlet based direct aperture optimization (DAO) for modulated electron radiotherapy (MERT) using photon multileaf collimator (pMLC) shaped electron fields is developed and investigated. METHODS The Swiss Monte Carlo Plan (SMCP) allows the calculation of dose distributions for pMLC shaped electron beams. SMCP is interfaced with the Eclipse TPS (Varian Medical Systems, Palo Alto, CA), which can thus be included into the inverse treatment planning process for MERT. This process starts with the import of a CT scan into Eclipse, the contouring of the target and the organs at risk (OARs), and the choice of the initial electron beam directions. For each electron beam, the number of apertures, their energy, and their initial shape are defined. Furthermore, the DAO requires dose-volume constraints for the contoured structures. In order to carry out the DAO efficiently, the initial electron beams are divided into a grid of beamlets. For each of these, the dose distribution is precalculated using a modified electron beam model, resulting in a dose list for each beamlet and energy. Then the DAO is carried out, leading to a set of optimal apertures and corresponding weights. These optimal apertures are then converted into pMLC shaped segments and the dose calculation for each segment is performed. For these dose distributions, a weight optimization process is launched in order to minimize the differences between the dose distribution of the optimal apertures and that of the pMLC segments. Finally, a deliverable dose distribution for the MERT plan is obtained and loaded back into Eclipse for evaluation. For an idealized water phantom geometry, a MERT treatment plan is created and compared to the plan obtained using a previously developed forward planning strategy. Furthermore, MERT treatment plans for three clinical situations (breast, chest wall, and parotid metastasis of a squamous cell skin carcinoma) are created using the developed inverse planning strategy. The MERT plans are compared to clinical standard treatment plans using photon beams, and the differences between the optimal and the deliverable dose distributions are determined. RESULTS For the idealized water phantom geometry, the inversely optimized MERT plan achieves the same PTV coverage, but with improved OAR sparing compared to the forwardly optimized plan. For the right-sided breast case, the MERT plan reduces the lung volume receiving more than 30% of the prescribed dose as well as the mean lung dose compared to the standard plan. However, the standard plan leads to better homogeneity within the CTV. The results for the left-sided chest wall case are similar, and in addition the dose to the heart is reduced with MERT compared to the standard treatment plan. For the parotid case, MERT leads to lower doses for almost all OARs but to a less homogeneous dose distribution for the PTV when compared to the standard plan. For all cases, the weight optimization successfully minimized the differences between the optimal and the deliverable dose distributions. CONCLUSIONS A beamlet based DAO using multiple beam angles is implemented and successfully tested for an idealized water phantom geometry and for clinical situations.
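The final weight optimization step is, in essence, a nonnegative least-squares fit of segment weights to the target dose. A hedged Python sketch with placeholder data (the actual SMCP/Eclipse workflow is not reproduced here):

```python
import numpy as np
from scipy.optimize import nnls

# Given precomputed per-segment dose distributions, find nonnegative
# segment weights so that the summed deliverable dose best matches the
# dose of the optimal apertures in a least-squares sense. Shapes and
# data below are placeholders.
n_voxels, n_segments = 5000, 12
rng = np.random.default_rng(0)
# Column j: dose of pMLC segment j to every voxel (placeholder data).
segment_dose = rng.random((n_voxels, n_segments))
# Target: dose distribution of the optimal apertures (placeholder).
target_dose = segment_dose @ rng.random(n_segments)

weights, residual = nnls(segment_dose, target_dose)
deliverable_dose = segment_dose @ weights
```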
Abstract:
In this paper, we report on an optical tolerance analysis of the submillimeter atmospheric multi-beam limb sounder, STEAMR. Physical optics and ray-tracing methods were used to quantify and separate errors in beam pointing and distortion due to reflector misalignment and primary reflector surface deformations. Simulations were performed concurrently with the manufacturing of a multi-beam demonstrator of the relay optical system which shapes and images the beams to their corresponding receiver feed horns. Results from Monte Carlo simulations show that the inserts used for reflector mounting should be positioned with an overall accuracy better than 100 μm (~ 1/10 wavelength). Analyses of primary reflector surface deformations show that a deviation of magnitude 100 μm can be tolerable before deployment, whereas the corresponding variations should be less than 30 μm during operation. The most sensitive optical elements in terms of misalignments are found near the focal plane. This localized sensitivity is attributed to the off-axis nature of the beams at this location. Post-assembly mechanical measurements of the reflectors in the demonstrator show that alignment better than 50 μm could be obtained.
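Generic Monte Carlo tolerancing of this kind can be sketched as follows; the per-element pointing sensitivities, the linearized error model, and the tolerance values tried below are invented placeholders, not the STEAMR budget:

```python
import numpy as np

# Perturb each optical element's position by a random misalignment,
# propagate through a (placeholder) linear pointing-sensitivity model,
# and inspect the resulting pointing-error distribution for several
# candidate tolerances.
rng = np.random.default_rng(42)
n_elements, n_trials = 6, 100_000
# Pointing error per unit misalignment of each element (placeholder);
# elements near the focal plane would carry the largest coefficients.
sensitivity = np.array([0.5, 0.8, 2.0, 3.5, 1.0, 0.6])  # arcsec per um

for tol_um in (30, 50, 100):
    d = rng.uniform(-tol_um, tol_um, size=(n_trials, n_elements))
    pointing = np.abs(d @ sensitivity)  # linearized total pointing error
    print(tol_um, np.percentile(pointing, 95))
```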
Abstract:
Background Catheter ablation (CA) of ventricular tachycardia (VT) is an important treatment option in patients with structural heart disease (SHD) and an implantable cardioverter defibrillator (ICD). A subset of patients requires epicardial CA for VT. Objective The purpose of the study was to assess the significance of epicardial CA in these patients after a systematic sequential endocardial approach. Methods CA for VT performed between January 2009 and October 2012 was analyzed. A sequential CA approach guided by earliest ventricular activation, pace mapping, entrainment, and stimulus-to-QRS interval analysis was used. Acute CA success was assessed by programmed ventricular stimulation. ICD interrogation and 24-h Holter ECG were used to evaluate long-term success. Results One hundred sixty VT ablation procedures were performed in 126 consecutive patients (114 men; age 65 ± 12 years). Endocardial CA succeeded in 250 (94%) of 265 treated VTs. For 15 (6%) VTs an additional epicardial CA was performed, which succeeded in 9 of these 15 VTs. Long-term follow-up (25 ± 18.2 months) showed freedom from VT in 104 patients (82%) after 1.2 ± 0.5 procedures; 11 (9%) suffered from repeated ICD shocks and 11 (9%) died due to worsening heart failure. Conclusions Despite a heterogeneous substrate for VT in SHD, endocardial CA alone results in high acute success rates. In this study, additional epicardial CA following a sequential endocardial mapping and CA approach was performed in 6% of VTs. Thus, due to possible complications, epicardial CA should only be considered if endocardial CA fails.