980 results for Markov chain Monte Carlo method


Relevance:

100.00%

Publisher:

Abstract:

Calmodulin (CaM) is a ubiquitous Ca²⁺ buffer and second messenger that affects cellular functions as diverse as cardiac excitability, synaptic plasticity, and gene transcription. In CA1 pyramidal neurons, CaM regulates two opposing Ca²⁺-dependent processes that underlie memory formation: long-term potentiation (LTP) and long-term depression (LTD). Induction of LTP and LTD requires activation of Ca²⁺-CaM-dependent enzymes: Ca²⁺/CaM-dependent kinase II (CaMKII) and calcineurin, respectively. Yet it remains unclear how Ca²⁺ and CaM produce these two opposing effects. CaM binds four Ca²⁺ ions: two in its N-terminal lobe and two in its C-terminal lobe. Experimental studies have shown that the N- and C-terminal lobes of CaM have different binding kinetics toward Ca²⁺ and their downstream targets, suggesting that each lobe of CaM responds differently to Ca²⁺ signal patterns. Here, we use a novel event-driven, particle-based Monte Carlo simulation and statistical point-pattern analysis to explore the spatial and temporal dynamics of lobe-specific Ca²⁺-CaM interaction at the single-molecule level. We show that the N-lobe of CaM, but not the C-lobe, exhibits a nano-scale domain of activation that is highly sensitive to the location of Ca²⁺ channels and to the microscopic injection rate of Ca²⁺ ions. We also demonstrate that Ca²⁺ saturation takes place via two different pathways depending on the Ca²⁺ injection rate, one dominated by the N-terminal lobe and the other by the C-terminal lobe. Taken together, these results suggest that the two lobes of CaM function as distinct Ca²⁺ sensors that can differentially transduce Ca²⁺ influx to downstream targets. We discuss a possible role of the N-terminal-lobe-specific Ca²⁺-CaM nano-domain in the CaMKII activation required for the induction of synaptic plasticity.
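
The lobe-specific kinetics described above can be illustrated with a minimal Gillespie-style stochastic simulation. This is a hedged sketch, not the authors' spatial event-driven method: space is ignored, each lobe is collapsed to a single bound/unbound state, and the rate constants are invented solely so that the N-lobe cycles quickly and the C-lobe slowly.

```python
import random

# Invented rate constants (1/s): fast N-lobe vs slow C-lobe kinetics.
RATES = {"N": {"on": 100.0, "off": 50.0},
         "C": {"on": 5.0, "off": 1.0}}

def gillespie_lobe(on, off, t_end, rng):
    """Two-state (unbound/bound) Gillespie run; returns the fraction
    of time the lobe spends Ca2+-bound."""
    t, bound, t_bound = 0.0, False, 0.0
    while t < t_end:
        rate = off if bound else on
        dt = rng.expovariate(rate)      # exponential waiting time to next event
        if bound:
            t_bound += min(dt, t_end - t)
        t += dt
        bound = not bound
    return t_bound / t_end

rng = random.Random(42)
occupancy = {lobe: gillespie_lobe(r["on"], r["off"], 200.0, rng)
             for lobe, r in RATES.items()}
```

At equilibrium each lobe's occupancy approaches on/(on + off); what differs between the lobes is how quickly they track a fluctuating Ca²⁺ signal.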

A three-dimensional model has been proposed that uses Monte Carlo and fast Fourier transform (FFT) convolution techniques to calculate the dose distribution from a fast neutron beam. This method transports scattered neutrons and photons in the forward, lateral, and backward directions, and protons, electrons, and positrons in the forward and lateral directions, by convolving energy-spread kernels with initial interaction available-energy distributions. The primary neutron and photon spectra were derived from narrow-beam attenuation measurements. The positions and strengths of the effective primary neutron, scattered neutron, and photon sources were derived from dual ion chamber measurements. The size of the effective primary neutron source was measured using a copper activation technique. Heterogeneous tissue calculations require a weighted sum of two convolutions for each component, since the kernels must be invariant for FFT convolution. Comparisons between calculations and measurements were performed for several water and heterogeneous phantom geometries.
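
The convolution step can be sketched in one dimension. This toy is not the paper's model: a single exponentially attenuated primary profile is convolved with a fixed spread kernel by direct summation (mathematically equivalent to the FFT route, which is used only for speed), and the attenuation coefficient and kernel weights are invented.

```python
import math

depths = [0.5 * i for i in range(40)]          # depth grid, cm
mu = 0.07                                       # made-up attenuation coefficient (1/cm)
primary = [math.exp(-mu * z) for z in depths]   # attenuated primary interactions

# Invariant energy-spread kernel (toy weights, normalized to 1)
kernel = [0.05, 0.2, 0.5, 0.2, 0.05]

def convolve(signal, kern):
    """Direct 1-D convolution with zero padding at the boundaries."""
    half = len(kern) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kern):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

dose = convolve(primary, kernel)
```

Kernel invariance is what makes the FFT shortcut valid, and is also why heterogeneous tissue calculations in the paper need a weighted sum of two convolutions per component.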

This study compared four alternative approaches (Taylor, Fieller, percentile bootstrap, and bias-corrected bootstrap methods) to estimating confidence intervals (CIs) around the cost-effectiveness (CE) ratio. The study consisted of two components: (1) a Monte Carlo simulation was conducted to identify characteristics of hypothetical cost-effectiveness data sets that might lead one CI estimation technique to outperform another; and (2) these results were matched to the characteristics of an extant data set derived from the National AIDS Demonstration Research (NADR) project. The four methods were used to calculate CIs for this data set, and the results were compared. The main performance criterion in the simulation study was the percentage of times the estimated CIs contained the "true" CE ratio. A secondary criterion was the average width of the confidence intervals. For the bootstrap methods, bias was estimated. Simulation results for the Taylor and Fieller methods indicated that the CIs estimated using the Taylor series method contained the true CE ratio more often than did those obtained using the Fieller method, but the opposite was true when the correlation was positive and the CV of effectiveness was high for each value of the CV of costs. Similarly, the CIs obtained by applying the Taylor series method to the NADR data set were wider than those obtained using the Fieller method for positive correlation values and for values for which the CV of effectiveness was not equal to 30% for each value of the CV of costs. The general trend for the bootstrap methods was that the percentage of times the true CE ratio was contained in the CIs was higher for the percentile method at higher values of the CV of effectiveness, given the correlation between average costs and effects and the CV of effectiveness. The results for the data set indicated that the bias-corrected CIs were wider than the percentile-method CIs. This result was in accordance with the prediction derived from the simulation experiment. Generally, the bootstrap methods are more favorable for the parameter specifications investigated in this study. However, the Taylor method is preferred when the CV of effect is low, and the percentile method is more favorable when the CV of effect is higher.
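
The percentile bootstrap variant is easy to sketch. The data below are synthetic (the NADR values are not reproduced here); the ratio-of-means CE statistic and the 95% percentile interval are standard, everything else is illustrative.

```python
import random
import statistics

rng = random.Random(0)
# Synthetic incremental (cost, effect) pairs for 200 hypothetical subjects.
data = [(rng.gauss(1000, 300), rng.gauss(0.5, 0.2)) for _ in range(200)]

def ce_ratio(sample):
    """Cost-effectiveness ratio: mean incremental cost / mean incremental effect."""
    mean_cost = statistics.fmean(c for c, _ in sample)
    mean_eff = statistics.fmean(e for _, e in sample)
    return mean_cost / mean_eff

def percentile_bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05, seed=1):
    """Resample with replacement, recompute the statistic, take percentiles."""
    r = random.Random(seed)
    reps = sorted(stat(r.choices(sample, k=len(sample))) for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]

point = ce_ratio(data)
lo, hi = percentile_bootstrap_ci(data, ce_ratio)
```

The bias-corrected variant additionally shifts the percentile cut-points using the fraction of bootstrap replicates falling below the point estimate.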

A measurement of angular correlations in Drell-Yan lepton pairs via the φ*η observable is presented. This variable probes the same physics as the Z/γ* boson transverse momentum with a better experimental resolution. The Z/γ* → e⁺e⁻ and Z/γ* → μ⁺μ⁻ decays produced in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV are used. The data were collected with the ATLAS detector at the LHC and correspond to an integrated luminosity of 4.6 fb⁻¹. Normalised differential cross sections as a function of φ*η are measured separately for the electron and muon decay channels, which are then combined for improved accuracy. The cross section is also measured double-differentially as a function of φ*η in three independent bins of Z boson rapidity. The results are compared to QCD calculations and to predictions from different Monte Carlo event generators. The data are reasonably well described, in all measured Z boson rapidity regions, by resummed QCD predictions combined with fixed-order perturbative QCD calculations, or by some Monte Carlo event generators. The measurement precision is typically an order of magnitude better than present theoretical uncertainties.
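
For reference, φ*η is built from lepton directions only (no momentum magnitudes), which is why its experimental resolution beats that of the reconstructed boson transverse momentum. A minimal implementation of the standard definition, φ*η = tan(φ_acop/2)·sin θ*η with φ_acop = π − Δφ and cos θ*η = tanh((η⁻ − η⁺)/2):

```python
import math

def delta_phi(phi1, phi2):
    """Smallest azimuthal separation, folded into [0, pi]."""
    d = abs(phi1 - phi2) % (2 * math.pi)
    return 2 * math.pi - d if d > math.pi else d

def phi_star_eta(eta_minus, phi_minus, eta_plus, phi_plus):
    """phi*_eta for a lepton pair, from pseudorapidities and azimuths alone."""
    phi_acop = math.pi - delta_phi(phi_minus, phi_plus)       # acoplanarity angle
    cos_theta = math.tanh((eta_minus - eta_plus) / 2.0)       # cos(theta*_eta)
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta ** 2))
    return math.tan(phi_acop / 2.0) * sin_theta
```

Perfectly back-to-back leptons give φ*η = 0; small acoplanarities, i.e. small boson transverse momenta, give correspondingly small values.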

We show that exotic phases arise in generalized lattice gauge theories known as quantum link models, in which classical gauge fields are replaced by quantum operators. While these quantum models with discrete variables have a finite-dimensional Hilbert space per link, the continuous gauge symmetry is still exact. An efficient cluster algorithm is used to study these exotic phases. The (2+1)-d system is confining at zero temperature, with a spontaneously broken translation symmetry. A crystalline phase exhibits confinement via multi-stranded strings between charge-anti-charge pairs. A phase transition between two distinct confined phases is weakly first order and has an emergent spontaneously broken approximate SO(2) global symmetry. The low-energy physics is described by a (2+1)-d RP(1) effective field theory, perturbed by a dangerously irrelevant SO(2)-breaking operator, which prevents the interpretation of the emergent pseudo-Goldstone boson as a dual photon. This model is an ideal candidate for implementation in quantum simulators to study phenomena that are not accessible to Monte Carlo simulations, such as the real-time evolution of the confining string and the real-time dynamics of the pseudo-Goldstone boson.

Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow-wave sleep and anesthesia. Nevertheless, surprisingly few quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and improves as the voltage baseline is raised toward threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimates of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be used by any algorithmic procedure based on random sampling, such as Markov chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow-wave sleep, and present possible extensions of the model in the context of spike-frequency adaptation.
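
The sampling idea can be sketched with a bare-bones EIF simulation. The parameters below are generic textbook values, not the paper's, and the plain Euler-Maruyama scheme is only illustrative: each spike terminates an ISI, and the collection of ISIs is the set of samples the theory reasons about.

```python
import math
import random

rng = random.Random(7)

# Generic EIF parameters (illustrative, not from the paper)
E_L, V_T, V_cut, V_reset = -65.0, -50.0, -30.0, -65.0   # mV
tau_m, delta_T = 20.0, 2.0                              # ms, mV
dt, sigma = 0.05, 2.0                                   # ms, mV

def eif_isis(i_input, t_end=5000.0):
    """Euler-Maruyama integration of the EIF; returns interspike intervals (ms)."""
    v, t, last_spike, isis = E_L, 0.0, None, []
    while t < t_end:
        drift = (-(v - E_L) + delta_T * math.exp((v - V_T) / delta_T) + i_input) / tau_m
        v += drift * dt + sigma * math.sqrt(dt / tau_m) * rng.gauss(0.0, 1.0)
        t += dt
        if v >= V_cut:                 # exponential blow-up counts as a spike
            if last_spike is not None:
                isis.append(t - last_spike)
            last_spike = t
            v = V_reset
    return isis

isis = eif_isis(i_input=16.0)          # strong drive, UP-state-like regime
```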

BACKGROUND Sexual transmission of Ebola virus disease (EVD) 6 months after onset of symptoms has recently been documented, and Ebola virus RNA has been detected in the semen of survivors up to 9 months after onset of symptoms. As the countries affected by the 2013-2015 West African epidemic, by far the largest to date, are declared free of EVD, it remains unclear what threat is posed by rare sexual transmission events that could arise from survivors. METHODOLOGY/PRINCIPAL FINDINGS We devised a compartmental mathematical model that includes sexual transmission from convalescent survivors: an SEICR (susceptible-exposed-infectious-convalescent-recovered) transmission model. We fitted the model to the weekly incidence of EVD cases from the 2014-2015 epidemic in Sierra Leone. Sensitivity analyses and Monte Carlo simulations showed that a 0.1% per-sex-act transmission probability and a 3-month convalescent period (the two key unknown parameters of sexual transmission) create very few additional cases, but would extend the epidemic by 83 days [95% CI: 68-98 days] (p < 0.0001) on average. Strikingly, a 6-month convalescent period extended the average epidemic by 540 days (95% CI: 508-572 days), doubling its current length, despite an insignificant rise in the number of new cases generated. CONCLUSIONS/SIGNIFICANCE Our results show that reductions in the per-sex-act transmission probability via abstinence and condom use should reduce the number of sporadic sexual transmission events, but will not significantly reduce the epidemic size and may only minimally shorten the length of time the public health community must maintain response preparedness. While the number of infectious survivors is expected to decline greatly over the coming months, our results show that transmission events may still be expected for quite some time, as each event results in a new potential cluster of non-sexual transmission. Precise measurement of the convalescent period is thus important for planning ongoing surveillance efforts.
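
The compartment structure can be sketched with a deterministic Euler integration. The study fits the model to incidence data and runs stochastic simulations; the version below just shows how the extra convalescent (C) compartment feeds a slow trickle of sexual transmission. All parameter values are invented.

```python
# SEICR sketch with sexual transmission from convalescents (invented parameters).
N = 1_000_000
beta = 0.25            # non-sexual transmission rate (per day)
beta_c = 0.001         # sexual transmission rate from convalescents (per day)
incub, infec, conv = 10.0, 7.0, 90.0   # mean stage durations (days)

S, E, I, C, R = N - 10.0, 0.0, 10.0, 0.0, 0.0
dt, history = 0.1, []
for _ in range(int(365 / dt)):
    new_inf = (beta * I + beta_c * C) * S / N
    dS = -new_inf
    dE = new_inf - E / incub
    dI = E / incub - I / infec
    dC = I / infec - C / conv
    dR = C / conv
    S += dS * dt; E += dE * dt; I += dI * dt; C += dC * dt; R += dR * dt
    history.append(I)

final_size = N - S
```

Because the sexual transmission rate is tiny but the convalescent stage is long, the C compartment adds few cases while stretching the tail of the epidemic, which is the qualitative effect the paper quantifies.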

The discoveries of the BRCA1 and BRCA2 genes have made it possible for women from families with hereditary breast/ovarian cancer to determine whether they carry cancer-predisposing genetic mutations. Women with germline mutations have significantly higher probabilities of developing both cancers than the general population. Since the presence of a BRCA1 or BRCA2 mutation does not guarantee future cancer development, the appropriate course of action remains uncertain for these women. Prophylactic mastectomy and oophorectomy remain controversial, since the underlying premise for surgical intervention is based more upon reduction in the estimated risk of cancer than on actual evidence of clinical benefit. Issues incorporated in a woman's decision-making process include quality of life without breasts or ovaries, attitudes toward possible surgical morbidity, and the remaining risk of future development of breast/ovarian cancer despite prophylactic surgery. The incorporation of patient preferences into decision analysis models can determine the quality-adjusted survival of different prophylactic approaches to breast/ovarian cancer prevention. Monte Carlo simulation was conducted on four separate decision models representing prophylactic oophorectomy, prophylactic mastectomy, prophylactic oophorectomy/mastectomy, and screening. The use of three separate preference assessment methods across different populations of women allows researchers to determine how quality-adjusted survival varies according to clinical strategy, method of preference assessment, and the population from which preferences are assessed.
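
The decision-analytic machinery can be sketched as a two-strategy Monte Carlo cohort. Every probability and utility below is invented purely for illustration (the study used four models, empirically assessed preferences, and three preference assessment methods):

```python
import random

rng = random.Random(123)

# Invented annual probabilities and preference weights (NOT clinical values).
P_CANCER = {"screening": 0.02, "mastectomy": 0.005}   # annual cancer risk
U_STATE = {"screening": 1.00, "mastectomy": 0.95}     # utility of the well state
P_DEATH_CANCER = 0.1                                  # annual mortality with cancer
U_CANCER = 0.7                                        # utility multiplier with cancer
HORIZON = 40                                          # years simulated

def mean_qalys(strategy, n=5000):
    """Average quality-adjusted life years over n simulated women."""
    total = 0.0
    for _ in range(n):
        alive, cancer = True, False
        for _year in range(HORIZON):
            if not cancer and rng.random() < P_CANCER[strategy]:
                cancer = True
            if cancer and rng.random() < P_DEATH_CANCER:
                alive = False
            if not alive:
                break
            total += U_STATE[strategy] * (U_CANCER if cancer else 1.0)
    return total / n

qa = {s: mean_qalys(s) for s in P_CANCER}
```

Swapping in preference weights elicited by different assessment methods, as the study does, changes the U_STATE values and hence the ranking of strategies.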

Bayesian phylogenetic analyses are now very popular in systematics and molecular evolution because they allow the use of much more realistic models than currently possible with maximum likelihood methods. There are, however, a growing number of examples in which large Bayesian posterior clade probabilities are associated with very short edge lengths and low values for non-Bayesian measures of support such as nonparametric bootstrapping. For the four-taxon case when the true tree is the star phylogeny, Bayesian analyses become increasingly unpredictable in their preference for one of the three possible resolved tree topologies as data set size increases. This leads to the prediction that hard (or near-hard) polytomies in nature will cause unpredictable behavior in Bayesian analyses, with arbitrary resolutions of the polytomy receiving very high posterior probabilities in some cases. We present a simple solution to this problem involving a reversible-jump Markov chain Monte Carlo (MCMC) algorithm that allows exploration of all of tree space, including unresolved tree topologies with one or more polytomies. The reversible-jump MCMC approach allows prior distributions to place some weight on less-resolved tree topologies, which eliminates misleadingly high posteriors associated with arbitrary resolutions of hard polytomies. Fortunately, assigning some prior probability to polytomous tree topologies does not appear to come with a significant cost in terms of the ability to assess the level of support for edges that do exist in the true tree. Methods are discussed for applying arbitrary prior distributions to tree topologies of varying resolution, and an empirical example showing evidence of polytomies is analyzed and discussed.

Monte Carlo simulation was conducted to investigate parameter estimation and hypothesis testing in some well-known adaptive randomization procedures. The four urn models studied are the Randomized Play-the-Winner (RPW), Randomized Pólya Urn (RPU), Birth and Death Urn with Immigration (BDUI), and Drop-the-Loser (DL) urns. Two sequential estimation methods, sequential maximum likelihood estimation (SMLE) and the doubly adaptive biased coin design (DABC), are simulated at three optimal allocation targets that minimize the expected number of failures under the assumption of constant variance of the simple difference (RSIHR), relative risk (ORR), and odds ratio (OOR), respectively. The log likelihood ratio test and three Wald-type tests (simple difference, log relative risk, log odds ratio) are compared across the adaptive procedures. Simulation results indicate that although RPW is slightly better at assigning more patients to the superior treatment, the DL method is considerably less variable and its test statistics have better normality. Compared with SMLE, DABC has a slightly higher overall response rate with lower variance, but larger bias and variance in parameter estimation. Additionally, the test statistics in SMLE have better normality and a lower type I error rate, and the power of hypothesis testing is more comparable with that of equal randomization. Usually, RSIHR has the highest power among the three optimal allocation ratios. However, the ORR allocation has better power and a lower type I error rate when the log relative risk is the test statistic, and the expected number of failures in ORR is smaller than in RSIHR. It is also shown that the simple difference of response rates has the worst normality among all four test statistics, and the power of the hypothesis test is always inflated when the simple difference is used. On the other hand, the normality of the log likelihood ratio test statistic is robust against changes in the adaptive randomization procedure.
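
The RPW urn is simple enough to sketch directly. A hedged toy of RPW(1,1) with invented success probabilities, showing the characteristic skew of allocation toward the better arm:

```python
import random

def randomized_play_the_winner(p_a, p_b, n_patients, seed=0):
    """RPW(1,1): a success on an arm adds a ball of that arm's type;
    a failure adds a ball of the opposite type."""
    rng = random.Random(seed)
    urn = {"A": 1, "B": 1}
    p_success = {"A": p_a, "B": p_b}
    assigned = {"A": 0, "B": 0}
    for _ in range(n_patients):
        total = urn["A"] + urn["B"]
        arm = "A" if rng.random() < urn["A"] / total else "B"
        assigned[arm] += 1
        if rng.random() < p_success[arm]:
            urn[arm] += 1                         # reward the winning arm
        else:
            urn["B" if arm == "A" else "A"] += 1  # penalize the losing arm
    return assigned

alloc = randomized_play_the_winner(p_a=0.8, p_b=0.4, n_patients=500)
```

With success probabilities p_A and p_B, the limiting allocation to arm A is q_B/(q_A + q_B) with q = 1 − p, so here about 75% of patients end up on the better arm, at the cost of the high allocation variance the abstract notes.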

The current standard treatment for head and neck cancer at our institution uses intensity-modulated x-ray therapy (IMRT), which improves target coverage and sparing of critical structures by delivering complex fluence patterns from a variety of beam directions to conform dose distributions to the shape of the target volume. The standard treatment for breast patients is field-in-field forward-planned IMRT, with initial tangential fields and additional reduced-weight tangents with blocking to minimize hot spots. For these treatment sites, the addition of electrons has the potential to improve target coverage and sparing of critical structures due to the rapid dose falloff with depth and reduced exit dose. In this work, the use of mixed-beam therapy (MBT), i.e., combined intensity-modulated electron and x-ray beams using the x-ray multi-leaf collimator (MLC), was explored. The hypothesis of this study was that the addition of intensity-modulated electron beams to existing clinical IMRT plans would produce MBT plans superior to the original IMRT plans for at least 50% of selected head and neck cases and 50% of breast cases. Dose calculations for electron beams collimated by the MLC were performed with Monte Carlo methods. An automation system was created to facilitate communication between the dose calculation engine and the treatment planning system. Energy and intensity modulation of the electron beams was accomplished by dividing the electron beams into 2×2-cm² beamlets, which were then beam-weight optimized along with the intensity-modulated x-ray beams. Treatment plans were optimized to obtain equivalent target dose coverage and then compared with the original treatment plans. MBT treatment plans were evaluated by participating physicians with respect to target coverage, normal structure dose, and overall plan quality in comparison with the original clinical plans. The physician evaluations did not support the hypothesis for either site, with MBT selected as superior in 1 of the 15 head and neck cases (p=1) and 6 of the 18 breast cases (p=0.95). While MBT was not shown to be superior to IMRT, reductions were observed in doses to critical structures distal to the target along the electron beam direction and to non-target tissues, at the expense of target coverage and dose homogeneity.

External beam radiation therapy is used to treat nearly half of the more than 200,000 new cases of prostate cancer diagnosed in the United States each year. During a radiation therapy treatment, healthy tissues in the path of the therapeutic beam are exposed to high doses. In addition, the whole body is exposed to a low-dose bath of unwanted scatter radiation from the pelvis and leakage radiation from the treatment unit. As a result, survivors of radiation therapy for prostate cancer face an elevated risk of developing a radiogenic second cancer. Recently, proton therapy has been shown to reduce the dose delivered by the therapeutic beam to normal tissues during treatment compared to intensity-modulated x-ray therapy (IMXT, the current standard of care). However, the magnitude of stray radiation doses from proton therapy, and their impact on the incidence of radiogenic second cancers, were not known. The risk of a radiogenic second cancer following proton therapy for prostate cancer relative to IMXT was determined for 3 patients of large, median, and small anatomical stature. Doses delivered to healthy tissues from the therapeutic beam were obtained from treatment planning system calculations. Stray doses from IMXT were taken from the literature, while stray doses from proton therapy were simulated using a Monte Carlo model of a passive scattering treatment unit and an anthropomorphic phantom. Baseline risk models were taken from the Biological Effects of Ionizing Radiation VII report. A sensitivity analysis was conducted to characterize the sensitivity of the risk calculations to uncertainties in the risk model, the relative biological effectiveness (RBE) of neutrons for carcinogenesis, and inter-patient anatomical variations. The risk projections revealed that proton therapy carries a lower risk of radiogenic second cancer incidence following prostate irradiation compared to IMXT. The sensitivity analysis revealed that the results of the risk analysis depended only weakly on uncertainties in the risk model and on inter-patient variations, while second cancer risks were sensitive to changes in the RBE of neutrons. However, the findings of the study were qualitatively consistent for all patient sizes and risk models considered, and for all neutron RBE values less than 100.

Introduction. Investigations into the shortcomings of current intracavitary brachytherapy (ICBT) technology have led us to design an Anatomically Adaptive Applicator (A3). The goal of this work was to design and characterize the imaging and dosimetric capabilities of this device. The A3 design incorporates a single shield that can both rotate and translate within the colpostat. We hypothesized that this feature, coupled with specific A3 component construction materials and imaging techniques, would facilitate artifact-free CT and MR image acquisition. In addition, by shaping the delivered dose distribution via the A3 movable shield, the dose delivered to the rectum would be lower than for equivalent treatments utilizing current state-of-the-art ICBT applicators. Method and materials. A method was developed to facilitate an artifact-free CT imaging protocol using a "step-and-shoot" technique: pausing the scanner midway through the scan and moving the A3 shield out of the path of the beam. The A3 CT imaging capabilities were demonstrated by acquiring images of a phantom that positioned the A3 and FW applicators in a clinically applicable geometry. Artifact-free MR imaging was achieved by utilizing MRI-compatible ovoid components and pulse sequences that minimize susceptibility artifacts. Artifacts were qualitatively compared in a clinical setup. For the dosimetric study, Monte Carlo (MC) models of the A3 and FW (shielded and unshielded) applicators were validated. These models were incorporated into a MC model of one cervical cancer patient ICBT insertion, using 192Ir (mHDR v2 source). The A3 shield's rotation and translation were adjusted for each dwell position to minimize dose to the rectum. Superposition of the dose to the rectum over all A3 dwell sources (4 per ovoid) was applied to enable comparison with equivalent FW treatments. Rectal dose-volume histograms (absolute and HDR/PDR biologically effective dose (BED)) and the BED to 2 cc (BED2cc) were determined for all applicators and compared. Results. Using the "step-and-shoot" CT scanning method and MR-compatible materials with optimized pulse sequences, images of the A3 were nearly artifact-free for both modalities. The A3 reduced BED2cc by 18.5% and 7.2% for a PDR treatment, and by 22.4% and 8.7% for an HDR treatment, compared to treatments delivered using the uFW and sFW applicators, respectively. Conclusions. The novel design of the A3 facilitated nearly artifact-free image quality for both CT and MR clinical imaging protocols. The design also facilitated a reduction in BED to the rectum compared to equivalent ICBT treatments delivered using current state-of-the-art applicators.

In regression analysis, covariate measurement error occurs in many applications. The error-prone covariates are often referred to as latent variables. In this study, we extended the work of Chan et al. (2008) on recovering the latent slope from a simple regression model to a multiple regression model. We presented an approach that applies the Monte Carlo method in the Bayesian framework to a parametric regression model with measurement error in an explanatory variable. The proposed estimator applies the conditional expectation of the latent slope given the observed outcome and surrogate variables in the multiple regression model. A simulation study was presented showing that the method produces an estimator that is efficient in the multiple regression model, especially when the measurement error variance of the surrogate variable is large.
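
The attenuation problem such estimators address can be demonstrated in a few lines. This is not the proposed Bayesian conditional-expectation estimator; it only simulates the classical setup (surrogate w = x + u) and applies the textbook moment correction, assuming the measurement error variance is known.

```python
import random
import statistics

rng = random.Random(2024)

# Simulated data: latent covariate x, surrogate w = x + u, outcome y.
beta0, beta1, n = 1.0, 2.0, 5000
sigma_x, sigma_u, sigma_e = 1.0, 0.5, 0.5     # error variance assumed known

x = [rng.gauss(0.0, sigma_x) for _ in range(n)]
w = [xi + rng.gauss(0.0, sigma_u) for xi in x]
y = [beta0 + beta1 * xi + rng.gauss(0.0, sigma_e) for xi in x]

def ols_slope(xs, ys):
    """Ordinary least squares slope of ys on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return num / sum((a - mx) ** 2 for a in xs)

naive = ols_slope(w, y)                        # attenuated toward zero
reliability = sigma_x**2 / (sigma_x**2 + sigma_u**2)
corrected = naive / reliability                # classical attenuation correction
```

The naive slope concentrates near the true slope times the reliability ratio; dividing by the reliability recovers the latent slope, which is the single-covariate analogue of the multiple-regression problem treated in the study.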