982 results for Gel Dosimetry, Monte Carlo Modelling


Relevance: 100.00%

Abstract:

We consider the issue of performing accurate small-sample likelihood-based inference in beta regression models, which are useful for modelling continuous proportions that are affected by independent variables. We derive small-sample adjustments to the likelihood ratio statistic in this class of models. The adjusted statistics can be easily implemented using standard statistical software. We present Monte Carlo simulations showing that inference based on the adjusted statistics we propose is much more reliable than that based on the usual likelihood ratio statistic. A real data example is presented.
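
As a rough sketch of the kind of Monte Carlo size study described above (not the authors' adjustment), the following simulates a small-sample beta regression under the null hypothesis, fits the full and restricted models by maximum likelihood, and tallies rejections of the usual likelihood ratio test; the design matrix, sample size, and the placeholder correction factor bartlett_c are assumptions for illustration.

```python
# Sketch: Monte Carlo size check of the likelihood ratio test in a small-sample
# beta regression (logit mean link). The small-sample adjustment itself is only
# represented by the placeholder factor `bartlett_c`.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

def negloglik(params, y, X):
    # params = (beta_1..beta_k, log phi); mu = logistic(X @ beta)
    k = X.shape[1]
    beta, logphi = params[:k], params[k]
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    phi = np.exp(logphi)
    return -np.sum(stats.beta.logpdf(y, mu * phi, (1.0 - mu) * phi))

def fit(y, X):
    x0 = np.zeros(X.shape[1] + 1)
    return optimize.minimize(negloglik, x0, args=(y, X), method="BFGS")

n, n_rep = 20, 500                     # deliberately small sample
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true, phi_true = np.array([0.5, 1.0, 0.0]), 30.0   # last coefficient zero (H0 true)
rejections = 0
for _ in range(n_rep):
    mu = 1.0 / (1.0 + np.exp(-X @ beta_true))
    y = rng.beta(mu * phi_true, (1.0 - mu) * phi_true)
    full, restricted = fit(y, X), fit(y, X[:, :2])
    lr = 2.0 * (restricted.fun - full.fun)        # usual likelihood ratio statistic
    bartlett_c = 1.0                              # placeholder small-sample correction
    rejections += (lr / bartlett_c) > stats.chi2.ppf(0.95, df=1)
print("empirical size of the nominal 5% test:", rejections / n_rep)
```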

Relevance: 100.00%

Abstract:

In this article, we introduce a semi-parametric Bayesian approach based on Dirichlet process priors for the discrete calibration problem in binomial regression models. An interesting topic is the dosimetry problem related to the dose-response model. A hierarchical formulation is provided so that a Markov chain Monte Carlo approach is developed. The methodology is applied to simulated and real data.
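
A minimal sketch of the dose-response calibration idea, with a fully parametric logistic model and a plain random-walk Metropolis sampler; the Dirichlet process prior of the paper is not implemented, and all doses and counts below are invented.

```python
# Sketch: random-walk Metropolis for a logistic dose-response binomial model,
# followed by a naive inverse-calibration of the dose for a new observation.
# Fully parametric (no Dirichlet process prior); all data are invented.
import numpy as np

rng = np.random.default_rng(1)
dose = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])   # hypothetical calibration doses (Gy)
n = np.full(6, 50)                                 # trials scored at each dose
y = np.array([1, 4, 9, 15, 22, 35])                # responses observed (invented)

def loglik(a, b):
    p = 1.0 / (1.0 + np.exp(-(a + b * dose)))
    return np.sum(y * np.log(p) + (n - y) * np.log1p(-p))

a, b, ll = -2.0, 1.0, loglik(-2.0, 1.0)
samples = []
for it in range(20000):                            # Metropolis with flat priors
    a_p, b_p = a + 0.1 * rng.normal(), b + 0.1 * rng.normal()
    ll_p = loglik(a_p, b_p)
    if np.log(rng.uniform()) < ll_p - ll:
        a, b, ll = a_p, b_p, ll_p
    if it >= 5000:                                 # discard burn-in
        samples.append((a, b))

# Calibration: posterior over the unknown dose of a new unit with 18/50 responses
y_new, n_new = 18, 50
grid = np.linspace(0.0, 3.0, 301)
post = np.zeros_like(grid)
for a_s, b_s in samples[::50]:                     # thinned posterior draws
    p = 1.0 / (1.0 + np.exp(-(a_s + b_s * grid)))
    post += p ** y_new * (1.0 - p) ** (n_new - y_new)
post /= np.trapz(post, grid)
print("posterior mean of the calibrated dose:", np.trapz(grid * post, grid))
```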

Relevance: 100.00%

Abstract:

This study presents an approach to combining the uncertainties of hydrological model outputs predicted by a number of machine learning models. The machine learning based uncertainty prediction approach is very useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach, hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of the deterministic output of the hydrological model. The uncertainty models are trained using antecedent precipitation and streamflows as inputs. The trained models are then employed to predict the model output uncertainty specific to the new input data. We used three machine learning models, namely artificial neural networks, a model tree, and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically. We propose an approach that forms a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press, 2013.
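
The committee idea can be sketched as follows with off-the-shelf regressors; the model tree and locally weighted regression of the paper are approximated here by sklearn's DecisionTreeRegressor and KNeighborsRegressor, and the training data are synthetic rather than quantiles derived from Monte Carlo runs of a hydrological model.

```python
# Sketch: a committee of three regressors predicting an uncertainty band width
# from antecedent precipitation and streamflow, merged with weights inversely
# proportional to validation RMSE. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 10.0, size=(500, 2))                 # [antecedent precip, streamflow]
band = 0.5 + 0.3 * X[:, 0] + rng.normal(0.0, 0.2, 500)    # target: band half-width

models = [
    MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    DecisionTreeRegressor(max_depth=6, random_state=0),       # stands in for a model tree
    KNeighborsRegressor(n_neighbors=15, weights="distance"),  # stands in for LWR
]
X_tr, X_val, y_tr, y_val = X[:400], X[400:], band[:400], band[400:]
for m in models:
    m.fit(X_tr, y_tr)

errs = np.array([np.sqrt(np.mean((m.predict(X_val) - y_val) ** 2)) for m in models])
w = (1.0 / errs) / np.sum(1.0 / errs)              # committee weights

def committee_predict(X_new):
    preds = np.column_stack([m.predict(X_new) for m in models])
    return preds @ w

merged_rmse = np.sqrt(np.mean((committee_predict(X_val) - y_val) ** 2))
print("weights:", w, "merged RMSE:", merged_rmse)
```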

Relevance: 100.00%

Abstract:

This paper deals with the estimation and testing of conditional duration models by looking at the density and baseline hazard rate functions. More precisely, we focus on the distance between the parametric density (or hazard rate) function implied by the duration process and its non-parametric estimate. Asymptotic justification is derived using the functional delta method for fixed and gamma kernels, whereas finite-sample properties are investigated through Monte Carlo simulations. Finally, we show the practical usefulness of such testing procedures by carrying out an empirical assessment of whether autoregressive conditional duration models are appropriate tools for modelling price durations of stocks traded at the New York Stock Exchange.
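
A hedged sketch of the ingredients involved: simulate an ACD(1,1) duration process with exponential innovations and compare the implied parametric density of the standardized durations with a Chen-type gamma-kernel estimate through an integrated squared distance; the parameter values and bandwidth are arbitrary, and this is not the authors' test statistic.

```python
# Sketch: simulate an ACD(1,1) process with exponential innovations, then
# compare the implied Exp(1) density of the standardized durations with a
# Chen-type gamma-kernel estimate via an integrated squared distance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
omega, alpha, beta = 0.1, 0.1, 0.8                 # arbitrary ACD(1,1) parameters
n = 5000
psi, x = np.empty(n), np.empty(n)
psi[0] = omega / (1.0 - alpha - beta)              # unconditional mean duration
x[0] = psi[0] * rng.exponential()
for t in range(1, n):
    psi[t] = omega + alpha * x[t - 1] + beta * psi[t - 1]
    x[t] = psi[t] * rng.exponential()

eps = x / psi                                      # standardized durations
b = 0.1                                            # gamma-kernel bandwidth (arbitrary)
grid = np.linspace(0.05, 5.0, 200)

def gamma_kernel_density(u, data, b):
    # gamma kernel with shape u/b + 1 and scale b, averaged over the sample
    return np.mean(stats.gamma.pdf(data, a=u / b + 1.0, scale=b))

f_hat = np.array([gamma_kernel_density(u, eps, b) for u in grid])
f_par = stats.expon.pdf(grid)                      # density implied by the model
print("integrated squared distance:", np.trapz((f_hat - f_par) ** 2, grid))
```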

Relevance: 100.00%

Abstract:

Stellar differential rotation is an important key to understanding hydromagnetic stellar dynamos, instabilities, and transport processes in stellar interiors, as well as to a better treatment of tides in close binary and star-planet systems. Space-borne high-precision photometry with MOST, CoRoT, and Kepler has provided large and homogeneous datasets. This allows, for the first time, the study of differential rotation in statistically robust samples covering almost all stages of stellar evolution. In this context, we introduce a method to measure a lower limit to the amplitude of surface differential rotation from high-precision, evenly sampled photometric time series such as those obtained by space-borne telescopes. It is designed for application to main-sequence late-type stars whose optical flux modulation is dominated by starspots. An autocorrelation of the time series is used to select stars that allow an accurate determination of spot rotation periods. A simple two-spot model is applied together with a Bayesian Information Criterion to preliminarily select intervals of the time series showing evidence of differential rotation with starspots of almost constant area. Finally, the significance of the differential rotation detection and a measurement of its amplitude and uncertainty are obtained by an a posteriori Bayesian analysis based on a Markov chain Monte Carlo (hereafter MCMC) approach. We apply our method to the Sun and eight other stars for which previous spot modelling has been performed, to compare our results with previous ones. The selected stars are of spectral type F, G, and K. Among the main results of this work, we find that autocorrelation is a simple method for selecting stars with a coherent rotational signal, which is a prerequisite to a successful measurement of differential rotation through spot modelling. For a proper MCMC analysis, it is necessary to take into account the strong correlations among different parameters that exist in spot modelling. For the planet-hosting star Kepler-30, we derive a lower limit to the relative amplitude of the differential rotation. We confirm that the Sun as a star in the optical passband is not suitable for a measurement of the differential rotation owing to the rapid evolution of its photospheric active regions. In general, our method performs well in comparison with more sophisticated procedures used until now in the study of stellar differential rotation.
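
A minimal sketch of the autocorrelation step on a synthetic, evenly sampled light curve: the spot rotation period is estimated from the highest autocorrelation peak beyond short lags (cadence, period, and noise level below are invented).

```python
# Sketch: estimate a spot rotation period from an evenly sampled synthetic
# light curve by locating the highest autocorrelation peak beyond short lags.
import numpy as np

rng = np.random.default_rng(4)
dt = 0.0204                                # cadence in days (invented)
t = np.arange(0.0, 90.0, dt)
p_rot = 7.3                                # true rotation period in days (invented)
flux = 1.0 + 0.01 * np.sin(2.0 * np.pi * t / p_rot) + 0.002 * rng.normal(size=t.size)

x = flux - flux.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]   # lags 0 .. N-1
acf /= acf[0]                                        # normalize to acf[0] = 1

lags = np.arange(acf.size) * dt
best = np.argmax(np.where(lags > 1.0, acf, -np.inf)) # ignore the zero-lag peak
print("estimated rotation period [days]:", lags[best])
```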

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

The rural-urban migration phenomenon is analyzed using an agent-based computational model. Agents are placed on lattices with dimensions varying from d = 2 up to d = 7. The agents' placement on the lattice defines their social neighborhood (rural or urban), which is not related to their spatial distribution. The effect of the lattice dimension is studied by analyzing the variation of the main parameters that characterize the migratory process. The dynamics displays strong effects even for around one million sites in the higher dimensions (d = 6, 7).
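
A toy sketch of an agent-based rule on a d-dimensional hypercubic lattice; the migration rule below (switch from rural to urban when the urban fraction among lattice neighbours exceeds a threshold) is invented for illustration and is not the model of the paper.

```python
# Toy sketch: rural agents on a d-dimensional hypercubic lattice become urban
# when the urban fraction among their 2d nearest neighbours exceeds a threshold
# (invented rule, periodic boundaries).
import numpy as np

rng = np.random.default_rng(5)
d, L = 3, 10                                       # lattice dimension and linear size
state = (rng.random((L,) * d) < 0.3).astype(int)   # 1 = urban, 0 = rural
threshold, move_prob = 0.4, 0.1

def neighbour_urban_fraction(s):
    total = np.zeros(s.shape, dtype=float)
    for axis in range(s.ndim):                     # sum over the 2d nearest neighbours
        total += np.roll(s, 1, axis=axis) + np.roll(s, -1, axis=axis)
    return total / (2 * s.ndim)

for step in range(50):
    frac = neighbour_urban_fraction(state)
    migrate = (state == 0) & (frac > threshold) & (rng.random(state.shape) < move_prob)
    state[migrate] = 1

print("urban fraction after 50 steps:", state.mean())
```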

Relevance: 100.00%

Abstract:

We present the results of a search for the flavor-changing neutral current decay B_s^0 → μ+μ-, using a data set with an integrated luminosity of 240 pb^-1 of pp̄ collisions at √s = 1.96 TeV collected with the D0 detector in Run II of the Fermilab Tevatron collider. We find the upper limit on the branching fraction to be B(B_s^0 → μ+μ-) ≤ 5.0 × 10^-7 at the 95% C.L., assuming no contributions from the decay B_d^0 → μ+μ- in the signal region. This limit is the most stringent upper bound on the branching fraction of B_s^0 → μ+μ- to date. © 2005 The American Physical Society.
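
For context, a hedged sketch of how a counting-experiment upper limit of this kind can be computed: with an observed count, an expected background, and a single-event sensitivity (all numbers below invented), a Bayesian 95% C.L. upper limit on the branching fraction with a flat prior follows from the Poisson likelihood. This is not the D0 limit-setting procedure.

```python
# Sketch: Bayesian 95% CL upper limit for a counting experiment with a flat
# prior on the signal mean; observed count, expected background, and
# single-event sensitivity are invented numbers.
import numpy as np
from scipy import stats, optimize

n_obs, b_exp, ses = 4, 4.3, 1.0e-7                 # invented inputs

grid = np.linspace(0.0, 50.0, 5001)                # signal-mean grid
like = stats.poisson.pmf(n_obs, grid + b_exp)      # Poisson likelihood
cdf = np.cumsum(like)
cdf /= cdf[-1]                                     # posterior CDF (flat prior)

s95 = optimize.brentq(lambda s: np.interp(s, grid, cdf) - 0.95, 0.0, 50.0)
print("95% CL upper limit on the branching fraction:", s95 * ses)
```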

Relevance: 100.00%

Abstract:

The performance of muon reconstruction in CMS is evaluated using a large data sample of cosmic-ray muons recorded in 2008. Efficiencies of various high-level trigger, identification, and reconstruction algorithms have been measured for a broad range of muon momenta, and were found to be in good agreement with expectations from Monte Carlo simulation. The relative momentum resolution for muons crossing the barrel part of the detector is better than 1% at 10 GeV/c and is about 8% at 500 GeV/c, the latter being only a factor of two worse than expected with ideal alignment conditions. Muon charge misassignment ranges from less than 0.01% at 10 GeV/c to about 1% at 500 GeV/c. © 2010 IOP Publishing Ltd and SISSA.

Relevance: 100.00%

Abstract:

The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 pb^-1 of data collected in pp collisions at √s = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV/c is above 95% over the whole region of pseudorapidity covered by the CMS muon system, |η| < 2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV/c is higher than 90% over the full η range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV/c and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV/c. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation. © 2012 IOP Publishing Ltd and Sissa Medialab srl.

Relevance: 100.00%

Abstract:

Invariant mass spectra for jets reconstructed using the anti-kT and Cambridge-Aachen algorithms are studied for different jet grooming techniques in data corresponding to an integrated luminosity of 5 fb^-1, recorded with the CMS detector in proton-proton collisions at the LHC at a center-of-mass energy of 7 TeV. Leading-order QCD predictions for inclusive dijet and W/Z+jet production combined with parton-shower Monte Carlo models are found to agree overall with the data, and the agreement improves with the implementation of jet grooming methods used to distinguish merged jets of large transverse momentum from softer QCD gluon radiation. © 2013 CERN for the benefit of the CMS collaboration.

Relevance: 100.00%

Abstract:

Measurements are presented of the production of primary KS0 and Λ particles in proton-proton collisions at √s=7 TeV in the region transverse to the leading charged-particle jet in each event. The average multiplicity and average scalar transverse momentum sum of KS0 and Λ particles measured at pseudorapidities |η|<2 rise with increasing charged-particle jet pT in the range 1-10 GeV/c and saturate in the region 10-50 GeV/c. The rise and saturation of the strange-particle yields and transverse momentum sums in the underlying event are similar to those observed for inclusive charged particles, which confirms the impact-parameter picture of multiple parton interactions. The results are compared to recent tunes of the pythia Monte Carlo event generator. The pythia simulations underestimate the data by 15%-30% for KS0 mesons and by about 50% for Λ baryons, a deficit similar to that observed for the inclusive strange-particle production in non-single-diffractive proton-proton collisions. The constant strange- to charged-particle activity ratios with respect to the leading jet pT and similar trends for mesons and baryons indicate that the multiparton-interaction dynamics is decoupled from parton hadronization, which occurs at a later stage. © 2013 CERN, for the CMS Collaboration Published by the American Physical Society under the terms of the Creative Commons Attribution 3.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Relevance: 100.00%

Abstract:

The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave (GW) astrophysics communities. The purpose of NINJA is to study the ability to detect GWs emitted from merging binary black holes (BBH) and recover their parameters with next-generation GW observatories. We report here on the results of the second NINJA project, NINJA-2, which employs 60 complete BBH hybrid waveforms consisting of a numerical portion modelling the late inspiral, merger, and ringdown stitched to a post-Newtonian portion modelling the early inspiral. In a 'blind injection challenge' similar to that conducted in recent Laser Interferometer Gravitational Wave Observatory (LIGO) and Virgo science runs, we added seven hybrid waveforms to two months of data recoloured to predictions of Advanced LIGO (aLIGO) and Advanced Virgo (AdV) sensitivity curves during their first observing runs. The resulting data were analysed by GW detection algorithms and 6 of the waveforms were recovered with false alarm rates smaller than 1 in a thousand years. Parameter-estimation algorithms were run on each of these waveforms to explore the ability to constrain the masses, component angular momenta and sky position of these waveforms. We find that the strong degeneracy between the mass ratio and the BHs' angular momenta will make it difficult to precisely estimate these parameters with aLIGO and AdV. We also perform a large-scale Monte Carlo study to assess the ability to recover each of the 60 hybrid waveforms with early aLIGO and AdV sensitivity curves. Our results predict that early aLIGO and AdV will have a volume-weighted average sensitive distance of 300 Mpc (1 Gpc) for 10 M☉ + 10 M☉ (50 M☉ + 50 M☉) BBH coalescences. We demonstrate that neglecting the component angular momenta in the waveform models used in matched filtering will result in a reduction in sensitivity for systems with large component angular momenta. This reduction is estimated to be up to ~15% for 50 M☉ + 50 M☉ BBH coalescences with almost maximal angular momenta aligned with the orbit when using early aLIGO and AdV sensitivity curves.
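
As a side illustration of the matched filtering mentioned above, the sketch below computes a matched-filter signal-to-noise ratio for a crude chirp-like template injected into white Gaussian noise; the template, sampling rate, and injection amplitude are invented, and this is not the LIGO/Virgo analysis pipeline.

```python
# Sketch: matched-filter signal-to-noise ratio of a crude chirp-like template
# injected into white Gaussian noise; for unit-variance white noise the SNR at
# zero lag is (d . h) / |h|. Not a real BBH waveform or detection pipeline.
import numpy as np

rng = np.random.default_rng(6)
fs, T = 4096, 4.0
t = np.arange(0.0, T, 1.0 / fs)
template = np.sin(2.0 * np.pi * (30.0 * t + 20.0 * t**2)) * np.exp(-(t - 3.5) ** 2 / 0.5)
template /= np.linalg.norm(template)               # unit-norm template

snr_injected = 8.0                                 # invented injection strength
data = rng.normal(size=t.size) + snr_injected * template

snr = data @ template                              # matched-filter SNR at zero lag
print("recovered matched-filter SNR:", snr)
```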

Relevance: 100.00%

Abstract:

In radiotherapy, computational systems are used to determine the radiation dose in the treatment volume and to analyze the quality of the radiometric parameters of the equipment and the irradiated field. Owing to continuing technological advances, considerable research has been carried out in brachytherapy to develop computational algorithms that can be incorporated into treatment planning systems, providing greater accuracy and confidence in the dose calculation. The constant updating and refinement of informatics and information technology allows the Monte Carlo method to be used to simulate the dose distribution of brachytherapy sources. The formalism employed for the dosimetric analysis is based mainly on the studies of the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) and on protocols aimed at the dosimetry of these types of radiation source. This work aims to analyze the feasibility of using the MCNP-5C (Monte Carlo N-Particle) code to obtain radiometric parameters of brachytherapy sources and thus to study the radiation dose variation in treatment planning. Simulations were performed of the radiation dose variation in the source plane, and the dosimetric parameters required by the TG-43 formalism were determined for the characterization of two high-dose-rate iridium-192 sources. The calculated values were compared with those found in the literature, which were obtained with different Monte Carlo simulation codes. The results showed excellent agreement with the compared codes, reinforcing the capability and viability of the MCNP-5C code for the dosimetry of sources employed in HDR brachytherapy. The method employed may suggest a possible incorporation of this code into the treatment planning systems provided by manufacturers together with the equipment, since, besides reducing acquisition cost, it can also make the computational routines used more comprehensive, facilitating the brachytherapy ...
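
For reference, a minimal sketch of the TG-43 dose-rate calculation that such Monte Carlo derived parameters feed into, D(r,θ) = Sk · Λ · [G_L(r,θ)/G_L(r0,θ0)] · g_L(r) · F(r,θ), using the line-source geometry function; the radial dose function and anisotropy function below are invented placeholders, not data for any real Ir-192 source.

```python
# Sketch of the TG-43 dose-rate formalism:
#   D(r, theta) = Sk * Lambda * G_L(r,theta)/G_L(r0,theta0) * g_L(r) * F(r,theta)
# with the line-source geometry function. g_L and F below are invented
# placeholders, not data for any real Ir-192 source.
import numpy as np

L = 0.35        # active source length in cm (hypothetical)
Sk = 40000.0    # air-kerma strength in U (hypothetical)
Lam = 1.11      # dose-rate constant in cGy h^-1 U^-1 (hypothetical)

def G_line(r, theta):
    """Line-source geometry function G_L(r, theta); theta in radians."""
    if np.isclose(np.sin(theta), 0.0):
        return 1.0 / (r**2 - L**2 / 4.0)
    z, y = r * np.cos(theta), r * np.sin(theta)
    beta = np.arctan((L / 2.0 - z) / y) + np.arctan((L / 2.0 + z) / y)
    return beta / (L * r * np.sin(theta))

def g_radial(r):
    return 1.0 - 0.005 * (r - 1.0)                 # placeholder radial dose function

def F_anisotropy(r, theta):
    return 1.0 - 0.1 * np.cos(theta) ** 2          # placeholder anisotropy function

def dose_rate(r, theta, r0=1.0, theta0=np.pi / 2.0):
    return (Sk * Lam * G_line(r, theta) / G_line(r0, theta0)
            * g_radial(r) * F_anisotropy(r, theta))

print("dose rate at r = 2 cm, theta = 60 deg [cGy/h]:",
      dose_rate(2.0, np.radians(60.0)))
```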

Relevance: 100.00%

Abstract:

Purpose: To evaluate the dosimetric characteristics of a new formulation of MAGIC gel, called MAGIC-f, which contains 3.3% formaldehyde, resulting in a gel with increased thermal stability. Methods: MAGIC-f gel was prepared and stored in hermetically sealed plastic containers. After irradiation, magnetic resonance images (MRI) were acquired to evaluate dose and dose distribution. Dosimetric characterization was performed by means of depth dose measurements, dose response sensitivity and linearity, temporal stability, energy and dose rate dependence, dose integration using sequential beams, temperature influence during MRI acquisition, and dose distribution integrity. Results: MAGIC-f depth dose measurements are compatible with the dosimetric table data within +/- 4% uncertainty. The dosimeter's R2 response varies linearly with dose at least from 0 to 6 Gy. The time course of the dosimeter's sensitivity following irradiation indicated stabilization after 2 weeks. The dosimeter's response to irradiation was altered by 6% when increasing the energy from cobalt beams to 10 MV beams. The dose rate dependence of this new formulation of gel dosimeter is small: less than 2.5% for a variation from 200 to 500 cGy/min, and the dependence on the fractionation scheme is about 50% smaller than for standard MAGIC gel. The dependence on scanning temperature was also verified, and the integrity of the dose distribution was confirmed for a period of 90 days. Conclusions: The results demonstrate the applicability of this new dosimeter in three-dimensional dose distribution measurements. (C) 2012 Elsevier Ltd. All rights reserved.
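
A minimal sketch of how an R2-dose calibration of the kind described could be applied: R2 is estimated from a two-echo MR acquisition assuming mono-exponential decay and converted to dose with a linear calibration over the 0-6 Gy range; the echo times, calibration constants, and voxel signals are invented.

```python
# Sketch: convert a two-echo MR signal pair to R2 assuming mono-exponential
# decay, then to dose with a linear calibration. Echo times, calibration
# constants, and voxel signals are invented.
import numpy as np

TE1, TE2 = 0.02, 0.10           # echo times in seconds (invented)
R2_0, slope = 5.0, 1.2          # intercept (s^-1) and sensitivity (s^-1 Gy^-1), invented

def r2_from_echoes(S1, S2):
    # S(TE) = S0 * exp(-R2 * TE)  =>  R2 = ln(S1/S2) / (TE2 - TE1)
    return np.log(S1 / S2) / (TE2 - TE1)

def dose_from_r2(R2):
    return (R2 - R2_0) / slope   # inverse of the linear calibration R2 = R2_0 + slope * D

S1 = np.array([800.0, 700.0, 600.0])   # signals at TE1 in three voxels (synthetic)
S2 = np.array([450.0, 330.0, 230.0])   # signals at TE2
print("dose estimates [Gy]:", dose_from_r2(r2_from_echoes(S1, S2)))
```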