971 results for Sequential Monte Carlo methods
Abstract:
As part of a project to use the long-lived (T1/2 = 1200 a) (166m)Ho as a reference source in its reference ionisation chamber, IRA standardised a commercially acquired solution of this nuclide using the 4πβ-γ coincidence and 4πγ (NaI) methods. The (166m)Ho solution supplied by Isotope Product Laboratories was measured to have about 5% europium impurities (3% (154)Eu, 0.94% (152)Eu and 0.9% (155)Eu). Holmium therefore had to be separated from europium, and this was carried out by means of ion-exchange chromatography. The holmium fractions were collected without europium contamination: 162 h-long HPGe gamma measurements indicated no europium impurity (detection limits of 0.01% for (152)Eu and (154)Eu, and 0.03% for (155)Eu). The primary measurement of the purified (166m)Ho solution with the 4π(PC)β-γ coincidence technique was carried out at three gamma energy settings: a window around the 184.4 keV peak and gamma thresholds at 121.8 and 637.3 keV. The results show very good self-consistency, and the activity concentration of the solution was evaluated to be 45.640 ± 0.098 kBq/g (0.21% with k=1). The activity concentration of this solution was also measured by integral counting with a well-type 5″ × 5″ NaI(Tl) detector and efficiencies computed by Monte Carlo simulations using the GEANT code. These measurements were mutually consistent, and the resulting weighted average of the 4π NaI(Tl) method was found to agree within 0.15% with the result of the 4πβ-γ coincidence technique. An ampoule of this solution and the measured value of the concentration were submitted to the BIPM as a contribution to the Système International de Référence.
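For readers unfamiliar with the 4πβ-γ coincidence technique, the sketch below illustrates the basic counting relation (activity is roughly the beta rate times the gamma rate divided by the coincidence rate) and the usual efficiency-extrapolation step. The count rates and the linear extrapolation are illustrative assumptions, not the values or software used in the paper.

```python
# Illustrative sketch of 4-pi beta-gamma coincidence counting (not the authors' code).
# In the idealised case the source activity A follows from three measured count rates:
#   N_beta ~ A*eps_beta,  N_gamma ~ A*eps_gamma,  N_coinc ~ A*eps_beta*eps_gamma,
# so A ~ N_beta*N_gamma/N_coinc. In practice the apparent activity is extrapolated
# to zero beta inefficiency (1 - eps_beta -> 0).
import numpy as np

def coincidence_activity(n_beta, n_gamma, n_coinc):
    """Ideal-case activity estimate (s^-1) from the three count rates."""
    return n_beta * n_gamma / n_coinc

def extrapolated_activity(n_beta, n_gamma, n_coinc):
    """Linear extrapolation of the apparent activity to 100% beta efficiency."""
    apparent = n_beta * n_gamma / n_coinc        # apparent activity at each setting
    inefficiency = 1.0 - n_coinc / n_gamma       # (1 - eps_beta) estimated per setting
    slope, intercept = np.polyfit(inefficiency, apparent, 1)
    return intercept                             # activity at zero inefficiency

# Hypothetical count rates (s^-1) for three gamma-energy settings:
n_beta  = np.array([4480.0, 4410.0, 4320.0])
n_gamma = np.array([1250.0, 1230.0, 1210.0])
n_coinc = np.array([1215.0, 1175.0, 1130.0])
print(extrapolated_activity(n_beta, n_gamma, n_coinc))
```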
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
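As a rough illustration of the idea (not the authors' implementation), the sketch below estimates the conditional mean at a trial parameter value by Nadaraya-Watson kernel smoothing of a long simulated path and plugs the resulting moment conditions into a GMM-style objective. The latent AR(1)-plus-noise model, the instruments, the bandwidth and all constants are invented for the example, and the simulation adjustment of standard errors is not shown.

```python
# Hedged sketch: kernel-smoothed simulated moments for a toy latent AR(1) observed with noise.
#   x_t = rho*x_{t-1} + sigma_u*u_t  (latent),   y_t = x_t + sigma_e*e_t  (observed)
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n):
    rho, sigma_u, sigma_e = theta
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + sigma_u * rng.standard_normal()
    return x + sigma_e * rng.standard_normal(n)

def kernel_conditional_mean(y_sim, cond_values, bandwidth):
    """Nadaraya-Watson estimate of E_theta[y_t | y_{t-1} = c] from a long simulated path."""
    y_lag, y_lead = y_sim[:-1], y_sim[1:]
    w = np.exp(-0.5 * ((cond_values[:, None] - y_lag[None, :]) / bandwidth) ** 2)
    return (w * y_lead).sum(axis=1) / w.sum(axis=1)

def gmm_objective(theta, y_obs, n_sim=10_000, bandwidth=0.3):
    """Moment conditions: observed y_t minus the simulated conditional mean at y_{t-1}."""
    y_sim = simulate(theta, n_sim)
    resid = y_obs[1:] - kernel_conditional_mean(y_sim, y_obs[:-1], bandwidth)
    g = np.array([resid.mean(), (resid * y_obs[:-1]).mean()])  # instruments: 1, y_{t-1}
    return float(g @ g)    # identity weighting for simplicity

y_data = simulate((0.8, 0.5, 0.3), 500)            # stand-in for observed data
print(gmm_objective((0.8, 0.5, 0.3), y_data))      # small near the true parameter
print(gmm_objective((0.2, 0.5, 0.3), y_data))      # larger at a wrong rho
```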
Abstract:
Axial deflection of DNA molecules in solution results from thermal motion and intrinsic curvature related to the DNA sequence. In order to measure directly the contribution of thermal motion we constructed intrinsically straight DNA molecules and measured their persistence length by cryo-electron microscopy. The persistence length of such intrinsically straight DNA molecules suspended in thin layers of cryo-vitrified solutions is about 80 nm. In order to test our experimental approach, we measured the apparent persistence length of DNA molecules with natural "random" sequences. The result of about 45 nm is consistent with the generally accepted value of the apparent persistence length of natural DNA sequences. By comparing the apparent persistence length of intrinsically straight DNA with that of natural DNA, it is possible to determine both the dynamic and the static contributions to the apparent persistence length.
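One commonly used way to separate the two contributions, assumed here since the abstract does not state its formula, is to treat the inverse apparent persistence length as the sum of the inverse dynamic and inverse static persistence lengths. The snippet below applies that assumed relation to the values quoted above.

```python
# Assumed decomposition: 1/P_apparent = 1/P_dynamic + 1/P_static (not stated in the abstract).
# With P_dynamic ~ 80 nm (intrinsically straight DNA) and P_apparent ~ 45 nm (natural DNA):
p_dynamic = 80.0    # nm, thermal (dynamic) persistence length
p_apparent = 45.0   # nm, apparent persistence length of natural-sequence DNA
p_static = 1.0 / (1.0 / p_apparent - 1.0 / p_dynamic)
print(f"implied static persistence length ~ {p_static:.0f} nm")   # ~103 nm
```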
Abstract:
MOTIVATION: Regulatory gene networks contain generic modules, such as feedback loops, that are essential for the regulation of many biological functions. The study of the stochastic mechanisms of gene regulation is instrumental for understanding how cells maintain their expression at levels commensurate with their biological role, as well as for engineering gene expression switches of appropriate behavior. The lack of precise knowledge of the steady-state distribution of gene expression requires the use of Gillespie algorithms and Monte Carlo approximations. METHODOLOGY: In this study, we provide new exact formulas and efficient numerical algorithms for computing and modeling the steady state of a class of self-regulated genes, and we use them to compute the stochastic expression of a gene of interest in an engineered network introduced in mammalian cells. The behavior of the genetic network is then analyzed experimentally in living cells. RESULTS: Stochastic models often reveal counter-intuitive experimental behaviors, and we find that this genetic architecture displays a unimodal behavior in mammalian cells, which was unexpected given its known bimodal response in unicellular organisms. We provide a molecular rationale for this behavior and incorporate it into the mathematical model to explain the experimental results obtained from this network.
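As a generic illustration of the Gillespie-type simulation mentioned above (not the authors' engineered network or exact formulas), the sketch below simulates a minimal self-repressing gene and accumulates its time-weighted steady-state copy-number statistics. All rate constants are invented.

```python
# Minimal Gillespie SSA for a self-repressing gene: production rate decreases with protein count.
import numpy as np

rng = np.random.default_rng(1)

def gillespie_selfrepression(k_max=20.0, K=30.0, gamma=1.0, t_end=2000.0):
    """Simulate P: production at rate k_max/(1 + P/K), degradation at rate gamma*P."""
    t, p = 0.0, 0
    time_in_state = {}                       # dwell time per copy number
    while t < t_end:
        a_prod = k_max / (1.0 + p / K)       # negative autoregulation
        a_deg = gamma * p
        a_tot = a_prod + a_deg
        dt = rng.exponential(1.0 / a_tot)    # time to the next reaction
        time_in_state[p] = time_in_state.get(p, 0.0) + dt
        t += dt
        if rng.random() < a_prod / a_tot:
            p += 1                           # production event
        else:
            p -= 1                           # degradation event
    return time_in_state

dwell = gillespie_selfrepression()
total = sum(dwell.values())
mean_p = sum(p * w for p, w in dwell.items()) / total   # time-weighted steady-state mean
print("steady-state mean copy number ~", round(mean_p, 1))
```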
Abstract:
A solution of (18)F was standardised with a 4πβ-4πγ coincidence counting system in which the beta detector is a one-inch diameter cylindrical UPS89 plastic scintillator positioned at the bottom of a well-type 5″ × 5″ NaI(Tl) gamma-ray detector. Almost full detection efficiency, which was varied downwards electronically, was achieved in the beta channel. Aliquots of this (18)F solution were also measured using 4πγ NaI(Tl) integral counting with Monte Carlo calculated efficiencies, as well as the CIEMAT-NIST method. Secondary measurements of the same solution were also performed with an IG11 ionisation chamber whose equivalent activity is traceable to the Système International de Référence through the contribution IRA-METAS made to it in 2001; IRA's degree of equivalence was found to be close to the key comparison reference value (KCRV). The (18)F activity predicted by this coincidence system agrees closely with the ionisation chamber measurement and is compatible within one standard deviation with the other primary measurements. This work demonstrates that our new coincidence system can standardise short-lived radionuclides used in nuclear medicine.
Abstract:
Astrocytes have recently become a major center of interest in neurochemistry with the discoveries of their major role in brain energy metabolism. An interesting way to probe this glial contribution is in vivo (13)C NMR spectroscopy coupled with the infusion of a labeled glial-specific substrate such as acetate. In this study, we infused alpha-chloralose-anesthetized rats with [2-(13)C]acetate and followed the dynamics of the fractional enrichment (FE) in the positions C4 and C3 of glutamate and glutamine with high sensitivity, using (1)H-[(13)C] magnetic resonance spectroscopy (MRS) at 14.1 T. Applying a two-compartment mathematical model to the measured time courses yielded a glial tricarboxylic acid (TCA) cycle rate (Vg) of 0.27 ± 0.02 μmol/g/min and a glutamatergic neurotransmission rate (VNT) of 0.15 ± 0.01 μmol/g/min. Glial oxidative ATP metabolism thus accounts for 38% of the total oxidative metabolism measured by NMR. The pyruvate carboxylase flux (VPC) was 0.09 ± 0.01 μmol/g/min, corresponding to 37% of the glial glutamine synthesis rate. The glial and neuronal transmitochondrial fluxes (Vx(g) and Vx(n)) were of the same order of magnitude as the respective TCA cycle fluxes. In addition, we estimated a glial glutamate pool size of 0.6 ± 0.1 μmol/g. The effect of spectral data quality on the flux estimates was analyzed by Monte Carlo simulations. In this (13)C-acetate labeling study, we propose a refined two-compartment analysis of brain energy metabolism based on (13)C turnover curves of acetate, glutamate and glutamine measured with state-of-the-art in vivo dynamic MRS at high magnetic field in rats, enabling a deeper understanding of the specific role of glial cells in brain oxidative metabolism. In addition, the robustness of the metabolic flux determination relative to MRS data quality was carefully studied.
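The Monte Carlo analysis of how spectral noise propagates into flux estimates can be illustrated generically, with a toy mono-exponential turnover curve standing in for the real two-compartment model: fit the curve, then repeatedly add noise of the assumed level to the fitted curve and refit, taking the spread of the refitted parameters as the uncertainty. Everything below (model, times, noise level) is an assumption for illustration only.

```python
# Generic parametric Monte Carlo for fit-parameter uncertainty (toy model, not the
# authors' two-compartment metabolic model).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def turnover(t, fe_max, k):
    """Toy fractional-enrichment curve: FE(t) = fe_max * (1 - exp(-k t))."""
    return fe_max * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 120.0, 40)                 # min, hypothetical sampling times
noise_sd = 0.02                                 # assumed spectral noise level
data = turnover(t, 0.45, 0.03) + noise_sd * rng.standard_normal(t.size)

popt, _ = curve_fit(turnover, t, data, p0=(0.4, 0.02))

# Monte Carlo: refit synthetic datasets generated from the fitted curve plus noise.
draws = []
for _ in range(500):
    synthetic = turnover(t, *popt) + noise_sd * rng.standard_normal(t.size)
    p, _ = curve_fit(turnover, t, synthetic, p0=popt)
    draws.append(p)
draws = np.array(draws)
print("fitted params:", popt.round(3), "MC standard deviations:", draws.std(axis=0).round(4))
```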
Abstract:
Understanding why dispersal is sex-biased in many taxa is still a major concern in evolutionary ecology. Dispersal tends to be male-biased in mammals and female-biased in birds, but counter-examples exist and little is known about sex bias in other taxa. Obtaining accurate measures of dispersal in the field remains a problem. Here we describe and compare several methods for detecting sex-biased dispersal using bi-parentally inherited, codominant genetic markers. If gene flow is restricted among populations, then the genotype of an individual tells something about its origin. Provided that dispersal occurs at the juvenile stage and that sampling is carried out on adults, genotypes sampled from the dispersing sex should on average be less likely (compared to genotypes from the philopatric sex) in the population in which they were sampled. The dispersing sex should also be less genetically structured and should present a larger heterozygote deficit. In this study we use computer simulations and a permutation test on four statistics to investigate the conditions under which sex-biased dispersal can be detected. Two tests emerge as fairly powerful. We present results concerning the optimal sampling strategy (varying number of samples, individuals, loci per individual and level of polymorphism) under different amounts of dispersal for each sex. These tests for biases in dispersal are also appropriate for any attribute (e.g. size, colour, status) suspected to influence the probability of dispersal. A Windows program carrying out these tests can be freely downloaded from http://www.unil.ch/izea/softwares/fstat.html
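As a hedged sketch of the permutation idea (not the FSTAT implementation linked above), suppose a per-individual statistic such as a corrected assignment index has already been computed; one natural test compares its mean between the sexes, with significance obtained by permuting sex labels. The data below are invented.

```python
# Permutation test on a per-individual statistic (e.g. a corrected assignment index, AIc):
# under sex-biased dispersal the dispersing sex is expected to show lower mean values.
import numpy as np

rng = np.random.default_rng(3)

def permutation_test(values, is_female, n_perm=9999):
    """Two-sided permutation p-value for the difference in means between the sexes."""
    observed = values[is_female].mean() - values[~is_female].mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(is_female)
        diff = values[perm].mean() - values[~perm].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Hypothetical values: females philopatric (higher index), males dispersing (lower index).
index = np.concatenate([rng.normal(0.3, 1.0, 60), rng.normal(-0.3, 1.0, 60)])
sex_is_female = np.concatenate([np.ones(60, bool), np.zeros(60, bool)])
diff, p = permutation_test(index, sex_is_female)
print(f"mean(F) - mean(M) = {diff:.2f}, permutation p = {p:.4f}")
```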
Abstract:
PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice that relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses, since it depends on assessing the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular-level dosimetry for macroscopic tumors and their end point quantities, such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide (131)I was allowed to decay at random locations for each model size and for seven different ratios of the number of decays to the number of cells, N(r): 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and each cell was attributed an absorbed dose equal to the bin average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution whose width equals the statistical uncertainty consistent with the ratio of decays to cells, i.e., N(r)^(-1/2). From dose volume histograms the surviving fraction of cells, the equivalent uniform dose (EUD), and the TCP were calculated for the different scenarios. Comparably sized spherical models containing individual spherical cells (15 μm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all the same scenarios. The dosimetric quantities were calculated and compared to the adjusted simple sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values agreed to within 2% between the adjusted simple sphere and full cellular models. Models were also generated for a nonuniform distribution of activity, and the results of the adjusted spherical and cellular models showed similar agreement. The TCP values for the macroscopic tumor models were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations, while maintaining the simplicity of the simple sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
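A simplified sketch of the dose-adjustment and TCP step described above follows. The radiobiological parameters, bin geometry and Poisson TCP form are assumptions for illustration; the paper's GEANT4 transport and its exact end-point definitions are not reproduced. Each cell in a radial bin receives the bin's mean absorbed dose perturbed by a Gaussian of relative width N(r)^(-1/2), survival follows a linear-quadratic model, and TCP is taken as the probability that no clonogenic cell survives.

```python
# Hedged sketch of the adjusted simple-sphere model: per-cell doses from bin averages
# plus relative noise of width N_r^(-1/2), then LQ survival and a Poisson-type TCP.
import numpy as np

rng = np.random.default_rng(4)

def tcp_from_bins(bin_mean_dose, cells_per_bin, n_r, alpha=0.3, beta=0.03):
    """TCP for one sphere given mean absorbed dose (Gy) and cell count per radial bin."""
    rel_sd = 1.0 / np.sqrt(n_r)                       # statistical width for N_r decays/cell
    surviving_expectation = 0.0
    for d_mean, n_cells in zip(bin_mean_dose, cells_per_bin):
        d_cell = d_mean * (1.0 + rel_sd * rng.standard_normal(n_cells))
        d_cell = np.clip(d_cell, 0.0, None)           # no negative doses
        sf = np.exp(-alpha * d_cell - beta * d_cell**2)   # LQ survival per cell (assumed)
        surviving_expectation += sf.sum()
    return np.exp(-surviving_expectation)             # Poisson TCP: no surviving clonogens

bin_dose = np.array([40.0, 38.0, 35.0, 30.0, 22.0])        # Gy, hypothetical radial profile
cells = np.array([2_000, 6_000, 10_000, 14_000, 18_000])   # cells per radial bin
for n_r in (10, 100, 1000):
    print(f"N_r = {n_r:5d}  TCP ~ {tcp_from_bins(bin_dose, cells, n_r):.3f}")
```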
Abstract:
We consider the application of normal-theory methods to the estimation and testing of a general type of multivariate regression model with errors-in-variables, in the case where various data sets are merged into a single analysis and the observable variables may deviate from normality. The various samples to be merged can differ in the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equation models, such as LISREL, EQS and LISCOMP, among others. An illustration with Monte Carlo data is presented.
Abstract:
We extend to score, Wald and difference test statistics the scaled and adjusted corrections to goodness-of-fit test statistics developed in Satorra and Bentler (1988a,b). The theory is framed in the general context of multisample analysis of moment structures, under general conditions on the distribution of the observable variables. Computational issues, as well as the relation of the scaled and corrected statistics to the asymptotically robust ones, are discussed. A Monte Carlo study illustrates the comparative performance in finite samples of the corrected score test statistics.
Abstract:
The aim of the present article was to perform three-dimensional (3D) single photon emission tomography-based dosimetry in radioimmunotherapy (RIT) with (90)Y-ibritumomab-tiuxetan. A custom MATLAB-based code was used to process the 3D images and to compare average 3D doses to lesions and organs at risk (OARs) with those obtained with planar (2D) dosimetry. Our 3D dosimetry procedure was validated through preliminary phantom studies using a body phantom consisting of a lung insert and six spheres of various sizes. In the phantom study, the accuracy of dose determination of our imaging protocol decreased when the object volume fell below approximately 5 mL. The poorest results were obtained for the 2.58 mL and 1.30 mL spheres, for which the dose error evaluated on corrected images with respect to the theoretical dose value was -12.97% and -18.69%, respectively. Our 3D dosimetry protocol was subsequently applied to four patients before RIT with (90)Y-ibritumomab-tiuxetan, for a total of 5 lesions and 4 OARs (2 livers, 2 spleens). In the patient study, without the volume recovery technique, tumor absorbed doses calculated with the voxel-based approach were systematically lower than those calculated with the planar protocol, with an average underestimation of -39% (range from -13.1% to -62.7%). After volume recovery, the dose differences were significantly reduced, with an average deviation of -14.2% (range from -38.7.4% to +3.4%; 1 overestimation, 4 underestimations). Organ dosimetry overestimated the dose delivered to the liver and spleen in one case and underestimated it in the other. However, for both the 2D and the 3D approach, the absorbed doses to organs per unit administered activity are comparable with the most recent literature findings.
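One common way to carry out a voxel-based (3D) dose calculation of this kind is the voxel S-value approach: convolve the time-integrated (cumulated) activity map with a per-voxel dose kernel. The sketch below is a generic illustration with an invented kernel and activity map; it is not the MATLAB code, kernel or imaging chain used in the study.

```python
# Generic voxel S-value dose calculation: dose map = cumulated-activity map convolved
# with a per-voxel dose kernel (placeholder values throughout).
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical cumulated-activity map (MBq*s per voxel) with a hot spherical "lesion".
shape = (64, 64, 64)
zz, yy, xx = np.indices(shape)
lesion = (xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2 < 8 ** 2
cumulated_activity = np.where(lesion, 50.0, 0.5)

# Invented 5x5x5 "voxel S-value" kernel (Gy per MBq*s), peaked at the source voxel;
# a real kernel would come from Monte Carlo transport for the radionuclide and voxel size.
kz, ky, kx = np.indices((5, 5, 5))
kernel = 1e-4 * np.exp(-((kx - 2) ** 2 + (ky - 2) ** 2 + (kz - 2) ** 2) / 2.0)

dose = fftconvolve(cumulated_activity, kernel, mode="same")   # absorbed dose per voxel (Gy)
print("mean lesion dose ~", round(float(dose[lesion].mean()), 2), "Gy")
```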
Abstract:
Although the histogram is the most widely used density estimator, it is well known that the appearance of a constructed histogram for a given bin width can change markedly for different choices of anchor position. In this paper we construct a stability index G that assesses the potential changes in the appearance of histograms for a given data set and bin width as the anchor position changes. If a particular bin width choice leads to an unstable appearance, the arbitrary choice of any one anchor position is dangerous, and a different bin width should be considered. The index is based on the statistical roughness of the histogram estimate. We show via Monte Carlo simulation that densities with more structure are more likely to lead to histograms with unstable appearance. In addition, ignoring the precision to which the data values are provided when choosing the bin width leads to instability. We provide several real data examples to illustrate the properties of G. Applications to other binned density estimators are also discussed.
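The abstract does not give the formula for G, so the sketch below only illustrates the underlying phenomenon: for a fixed bin width, shifting the anchor (bin origin) changes the histogram, and the spread of the resulting density estimates across anchors is one crude way to quantify that instability. It is not the paper's index.

```python
# Illustration of anchor sensitivity of histograms (a crude instability measure, not G):
# build histograms with the same bin width but shifted anchors and look at the variability
# of the density estimates at fixed evaluation points.
import numpy as np

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)])  # bimodal sample

def histogram_density(data, anchor, bin_width, x_eval):
    """Histogram density estimate with bins [anchor + j*h, anchor + (j+1)*h)."""
    j = np.floor((data - anchor) / bin_width)
    counts = {int(b): int(np.sum(j == b)) for b in np.unique(j)}
    j_eval = np.floor((x_eval - anchor) / bin_width).astype(int)
    return np.array([counts.get(int(b), 0) for b in j_eval]) / (len(data) * bin_width)

bin_width = 1.0
x_eval = np.linspace(-4, 4, 81)
estimates = np.array([histogram_density(data, a, bin_width, x_eval)
                      for a in np.linspace(0.0, bin_width, 16, endpoint=False)])
instability = estimates.std(axis=0).mean()     # average spread across anchor choices
print("crude anchor-instability measure:", round(float(instability), 4))
```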
Abstract:
In this article we propose using small area estimators to improve the estimates of both the small and large area parameters. When the objective is to estimate parameters at both levels accurately, optimality is achieved by a mixed sample design of fixed and proportional allocations. In the mixed sample design, once a sample size has been determined, one fraction of it is distributed proportionally among the different small areas while the rest is evenly distributed among them. We use Monte Carlo simulations to assess the performance of the direct estimator and two composite covariate-free small area estimators, for different sample sizes and different sample distributions. Performance is measured in terms of the Mean Squared Error (MSE) of both small and large area parameters. It is found that the adoption of small area composite estimators opens the possibility of 1) reducing the sample size when a precision target is given, or 2) improving precision for a given sample size.
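The mixed allocation described above can be written down directly: part of the total sample is shared equally among the areas and the remainder is allocated proportionally to area size. In the small sketch below, the area populations and the mixing fraction are invented.

```python
# Mixed sample design: part of the sample is allocated evenly across small areas,
# the rest proportionally to area population size.
import numpy as np

def mixed_allocation(n_total, area_sizes, proportional_fraction):
    """Per-area sample sizes for a fixed-plus-proportional allocation."""
    area_sizes = np.asarray(area_sizes, dtype=float)
    even_part = (1.0 - proportional_fraction) * n_total / len(area_sizes)
    prop_part = proportional_fraction * n_total * area_sizes / area_sizes.sum()
    return even_part + prop_part

sizes = [120_000, 45_000, 20_000, 8_000, 2_000]   # hypothetical area populations
print(mixed_allocation(n_total=2_000, area_sizes=sizes, proportional_fraction=0.5))
```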
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those that use weights that involve area-specific estimates of bias and variance; and b) those that use weights that involve a common variance and a common squared-bias estimate for all the areas. We assess their precision and discuss alternatives for optimizing composite estimation in applications.
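As a minimal sketch of a composite estimator of the kind described above, the snippet below combines a direct and an indirect estimate with a weight driven by their estimated variance and mean squared error. The particular weighting (and the numbers) are a generic choice for illustration, not necessarily the exact weights studied in the paper.

```python
# Composite small-area estimator: weighted combination of a direct and an indirect
# estimator, with the weight driven by their estimated error measures.
def composite_estimate(direct, indirect, var_direct, mse_indirect):
    """y_comp = w*direct + (1 - w)*indirect, weighting the lower-error term more heavily."""
    w = mse_indirect / (mse_indirect + var_direct)   # shrink towards the indirect estimator
    return w * direct + (1.0 - w) * indirect, w

# Hypothetical area: noisy direct estimate, stabler but possibly biased indirect estimate.
estimate, weight = composite_estimate(direct=0.142, indirect=0.120,
                                      var_direct=0.0004, mse_indirect=0.0001)
print(f"composite = {estimate:.3f} (weight on direct = {weight:.2f})")
```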
Abstract:
Background: Alcohol is a major risk factor for the burden of disease and injuries globally. This paper presents a systematic method to compute the 95% confidence intervals of alcohol-attributable fractions (AAFs) with exposure and risk relations stemming from different sources. Methods: The computation was based on previous work on modelling drinking prevalence using the gamma distribution and on the inherent properties of this distribution. The Monte Carlo approach was applied to derive the variance for each AAF by generating random sets of all the parameters; a large number of random samples were thus created for each AAF to estimate variances. The derivation of the distributions of the different parameters is presented, together with sensitivity analyses that estimate the number of samples required to determine the variance with predetermined precision and identify which parameter had the most impact on the variance of the AAFs. Results: The analysis of the five Asian regions showed that 150 000 samples gave a sufficiently accurate estimation of the 95% confidence intervals for each disease. The relative risk functions accounted for most of the variance in the majority of cases. Conclusions: Within reasonable computation time, the method yielded very accurate values for the variances of AAFs.
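A hedged sketch of the Monte Carlo step described above follows. The gamma exposure model is taken from the abstract, but the relative-risk curve, the central parameter values and their standard errors are illustrative placeholders, not the paper's fitted inputs: draw parameter sets from their assumed sampling distributions, recompute the AAF for each draw, and read the 95% interval off the percentiles.

```python
# Monte Carlo confidence interval for an alcohol-attributable fraction (AAF).
# Exposure among drinkers ~ Gamma; relative risk is a placeholder function of intake.
import numpy as np

rng = np.random.default_rng(6)

def aaf(p_drinkers, gamma_shape, gamma_scale, rr_beta, grid=np.linspace(0.1, 150, 500)):
    """AAF = (E[RR] - 1) / E[RR], with the expectation taken over the whole population."""
    dens = grid ** (gamma_shape - 1) * np.exp(-grid / gamma_scale)
    dens /= dens.sum()                                  # normalised exposure density (g/day)
    rr = np.exp(rr_beta * grid / 100.0)                 # placeholder relative-risk curve
    mean_rr = (1.0 - p_drinkers) * 1.0 + p_drinkers * (dens * rr).sum()
    return (mean_rr - 1.0) / mean_rr

# Central values and assumed standard errors for the Monte Carlo draws
# (the paper found ~150 000 draws sufficient; fewer are used here to keep the sketch fast).
draws = []
for _ in range(20_000):
    p_drk = np.clip(rng.normal(0.40, 0.02), 0.0, 1.0)   # drinking prevalence
    shape = max(rng.normal(1.5, 0.1), 0.1)              # gamma shape
    scale = max(rng.normal(15.0, 1.0), 0.1)             # gamma scale (g/day)
    beta = rng.normal(0.8, 0.1)                         # log-RR slope per 100 g/day
    draws.append(aaf(p_drk, shape, scale, beta))

low, high = np.percentile(draws, [2.5, 97.5])
print(f"AAF 95% CI: [{low:.3f}, {high:.3f}]")
```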