960 results for Averaging Theorem
Abstract:
This study considered the problem of predicting survival, based on three alternative models: a single Weibull, a mixture of Weibulls and a cure model. Instead of the common procedure of choosing a single “best” model, where “best” is defined in terms of goodness of fit to the data, a Bayesian model averaging (BMA) approach was adopted to account for model uncertainty. This was illustrated using a case study in which the aim was the description of lymphoma cancer survival with covariates given by phenotypes and gene expression. The results of this study indicate that if the sample size is sufficiently large, one of the three models emerges as having the highest probability given the data, as indicated by the goodness-of-fit measure, the Bayesian information criterion (BIC). However, when the sample size was reduced, no single model was revealed as “best”, suggesting that a BMA approach would be appropriate. Although a BMA approach can compromise on goodness of fit to the data (when compared to the true model), it can provide robust predictions and facilitate more detailed investigation of the relationships between gene expression and patient survival.
Keywords: Bayesian modelling; Bayesian model averaging; Cure model; Markov chain Monte Carlo; Mixture model; Survival analysis; Weibull distribution
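The BIC-based weighting behind this kind of model averaging can be sketched in a few lines. The approximation P(M_k | data) ∝ exp(-BIC_k / 2) is the standard one associated with BIC, not a detail taken from the study itself, and the BIC values and per-model survival estimates below are hypothetical:

```python
import math

def bma_weights(bic_values):
    """Approximate posterior model probabilities from BIC values,
    using P(M_k | data) proportional to exp(-BIC_k / 2)."""
    # Subtract the minimum BIC before exponentiating, for numerical stability.
    min_bic = min(bic_values)
    raw = [math.exp(-(b - min_bic) / 2.0) for b in bic_values]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical BIC values for a single Weibull, a Weibull mixture and a cure model.
weights = bma_weights([1012.4, 1008.9, 1011.1])

# Hypothetical per-model survival estimates; the BMA prediction is their
# posterior-probability-weighted average.
prediction = sum(w * p for w, p in zip(weights, [0.62, 0.55, 0.58]))
```

When one BIC is much smaller than the rest, its weight approaches 1 and BMA reduces to selecting that single model; with comparable BICs (as in the reduced-sample-size case), the prediction is a genuine blend.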
Abstract:
Nitrous oxide (N2O) is one of the greenhouse gases that contribute to global warming. Spatial variability of N2O can lead to large uncertainties in prediction; however, previous studies have often ignored spatial dependency when quantifying the relationships between N2O and environmental factors. Few studies have examined the impact of different spatial correlation structures (e.g. independence, distance-based and neighbourhood-based) on spatial prediction of N2O emissions. This study aimed to assess the impact of three spatial correlation structures on spatial predictions and to calibrate the spatial prediction using Bayesian model averaging (BMA) based on replicated, irregular point-referenced data. The data were measured in 17 chambers randomly placed across a 271 m² field between October 2007 and September 2008 in southeastern Australia. We used a Bayesian geostatistical model and a Bayesian spatial conditional autoregressive (CAR) model to investigate and accommodate spatial dependency, and to estimate the effects of environmental variables on N2O emissions across the study site. We compared these with a Bayesian regression model with independent errors. The three approaches resulted in different derived maps of spatial prediction of N2O emissions. We found that incorporating spatial dependency in the model not only substantially improved predictions of N2O emission from soil, but also better quantified uncertainties of soil parameters in the study. The hybrid model structure obtained by BMA improved the accuracy of spatial prediction of N2O emissions across this study region.
Abstract:
This paper discusses how fundamentals of number theory, such as unique prime factorization and the greatest common divisor, can be made accessible to secondary school students through spreadsheets. In addition, the three basic multiplicative functions of number theory are defined and illustrated through a spreadsheet environment. Primes are defined simply as those natural numbers with just two divisors. One focus of the paper is to show the ease with which spreadsheets can be used to introduce students to some basics of elementary number theory. Complete instructions are given to build a spreadsheet to enable the user to input a positive integer, either with a slider or manually, and see the prime decomposition. The spreadsheet environment allows students to observe patterns, gain structural insight, form and test conjectures, and solve problems in elementary number theory.
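The two ideas the abstract mentions — defining a prime as a natural number with exactly two divisors, and displaying the prime decomposition of an input integer — translate directly into code. A minimal sketch, with Python standing in for the paper's spreadsheet formulas:

```python
def divisor_count(n):
    """Count the divisors of a positive integer by trial division."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def is_prime(n):
    """A prime is simply a natural number with exactly two divisors."""
    return divisor_count(n) == 2

def prime_decomposition(n):
    """Return the prime factorization of n as (prime, exponent) pairs."""
    factors = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            exp = 0
            while n % p == 0:
                n //= p
                exp += 1
            factors.append((p, exp))
        p += 1
    if n > 1:          # whatever remains is itself prime
        factors.append((n, 1))
    return factors

# prime_decomposition(360) gives [(2, 3), (3, 2), (5, 1)], i.e. 360 = 2^3 * 3^2 * 5
```

In the spreadsheet version described by the paper, the divisor count is a single array formula and the slider drives the input cell; the logic is the same.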
Abstract:
This article lays down the foundations of the renormalization group (RG) approach for differential equations characterized by multiple scales. The renormalization of constants through an elimination process and the subsequent derivation of the amplitude equation [Chen, Phys. Rev. E 54, 376 (1996)] are given a rigorous but not abstract mathematical form whose justification is based on the implicit function theorem. Developing the theoretical framework that underlies the RG approach leads to a systematization of the renormalization process and to the derivation of explicit closed-form expressions for the amplitude equations that can be carried out with symbolic computation for both linear and nonlinear scalar differential equations and first order systems but independently of their particular forms. Certain nonlinear singular perturbation problems are considered that illustrate the formalism and recover well-known results from the literature as special cases. © 2008 American Institute of Physics.
Contrast transfer function correction applied to cryo-electron tomography and sub-tomogram averaging
Abstract:
Cryo-electron tomography together with averaging of sub-tomograms containing identical particles can reveal the structure of proteins or protein complexes in their native environment. The resolution of this technique is limited by the contrast transfer function (CTF) of the microscope. The CTF is not routinely corrected in cryo-electron tomography because of difficulties including CTF detection, due to the low signal-to-noise ratio, and CTF correction, since images are characterised by a spatially variant CTF. Here we simulate the effects of the CTF on the resolution of the final reconstruction, before and after CTF correction, and consider the effect of errors and approximations in defocus determination. We show that errors in defocus determination are well tolerated when correcting a series of tomograms collected at a range of defocus values. We apply methods for determining the CTF parameters in low signal-to-noise images of tilted specimens, for monitoring defocus changes using observed magnification changes, and for correcting the CTF prior to reconstruction. Using bacteriophage PRD1 as a test sample, we demonstrate that this approach gives an improvement in the structure obtained by sub-tomogram averaging from cryo-electron tomograms.
Abstract:
Experiments in spintronics necessarily involve the detection of spin polarization. The sensitivity of this detection becomes an important factor to consider when extending the low-temperature studies on semiconductor spintronic devices to room temperature, where the spin signal is weaker. In pump-probe experiments, which optically inject and detect spins, the sensitivity is often improved by using a photoelastic modulator (PEM) for lock-in detection. However, spurious signals can arise if diode lasers are used as optical sources in such experiments, along with a PEM. In this work, we eliminated the spurious electromagnetic coupling of the PEM onto the probe diode laser by the double modulation technique. We also developed a test for spurious modulated interference in the pump-probe signal due to the PEM. In addition, an order-of-magnitude enhancement in the sensitivity of detection of spin polarization by Kerr rotation, to 3×10⁻⁸ rad, was obtained by using the concept of Allan variance to optimally average the time-series data over a period of 416 s. With these improvements, we are able to demonstrate experimentally, at room temperature, photoinduced steady-state spin polarization in bulk GaAs. Thus, the advances reported here facilitate the use of diode lasers with a PEM for sensitive pump-probe experiments. They also constitute a step toward detection of spin injection in Si at room temperature.
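The Allan-variance-guided averaging mentioned here can be sketched as follows. The abstract does not specify the exact estimator used, so this is an illustrative sketch of the common non-overlapping form: block-average the time series at a given averaging factor, take half the mean squared difference of successive block averages, and choose the averaging length that minimises it:

```python
def allan_variance(x, m):
    """Non-overlapping Allan variance of samples x at averaging factor m.

    Averages x in consecutive blocks of m samples, then returns half the
    mean squared difference of successive block averages. Assumes len(x)
    is at least 2*m so that at least one difference exists.
    """
    n_blocks = len(x) // m
    means = [sum(x[i * m:(i + 1) * m]) / m for i in range(n_blocks)]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(n_blocks - 1)]
    return 0.5 * sum(diffs) / len(diffs)

def optimal_averaging_factor(x, factors):
    """Pick the candidate averaging factor that minimises the Allan variance,
    i.e. the longest useful averaging time before drift dominates."""
    return min(factors, key=lambda m: allan_variance(x, m))
```

For white noise the Allan variance falls as 1/m, so longer averaging keeps helping; once slow drifts dominate it rises again, and the minimum marks the optimal averaging time (416 s in the experiment described above).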
Abstract:
Using the dimensional reduction regularization scheme, we show that radiative corrections to the anomaly of the axial current, which is coupled to the gauge field, are absent in a supersymmetric U(1) gauge model for both 't Hooft-Veltman and Bardeen prescriptions for γ5. We also discuss the results with reference to conventional dimensional regularization. This result has significant implications with respect to the renormalizability of supersymmetric models.
Abstract:
Based on a Hamiltonian description we present a rigorous derivation of the transient-state work fluctuation theorem and the Jarzynski equality for a classical harmonic oscillator linearly coupled to a harmonic heat bath, which is dragged by an external agent. Coupling with the bath makes the dynamics dissipative. Since we do not assume anything about the spectral nature of the harmonic bath, the derivation is not restricted to the Ohmic bath but holds more generally, for a non-Ohmic bath. We also derive expressions for the average work done and the variance of the work done in terms of the two-time correlation function of the fluctuations of the position of the harmonic oscillator. In the case of an Ohmic bath, we use these relations to evaluate the average work done and the variance of the work done analytically and verify the transient-state work fluctuation theorem quantitatively. These relations have far-reaching consequences: they can be used to numerically evaluate the average work done and the variance of the work done in the case of a non-Ohmic bath when analytical evaluation is not possible.
Abstract:
Analytical expressions for the corrections to duality are obtained for nonsingular potentials, and are found to be small numerically. An alternative consistent way of energy smoothing, developed by Strutinsky, is elucidated. This may be of use even when potential models are not valid.
Abstract:
Background: Next-generation sequencing technology is an important tool for the rapid, genome-wide identification of genetic variations. However, it is difficult to resolve the ‘signal’ of variations of interest from the ‘noise’ of stochastic sequencing and bioinformatic errors in the large datasets that are generated. We report a simple approach to identify regional linkage to a trait that requires only two pools of DNA to be sequenced from progeny of a defined genetic cross (i.e. bulk segregant analysis) at low coverage (<10×) and without parentage assignment of individual SNPs. The analysis relies on regional averaging of pooled SNP frequencies to rapidly scan polymorphisms across the genome for differential regional homozygosity, which is then displayed graphically.
Results: Progeny from defined genetic crosses of Tribolium castaneum (F4 and F19) segregating for the phosphine resistance trait were exposed to phosphine to select for the resistance trait, while the remainder was left unexposed. Next-generation sequencing was then carried out on the genomic DNA from each pool of selected and unselected insects from each generation. The reads were mapped against the annotated T. castaneum genome from NCBI (v3.0) and analysed for SNP variations. Since it is difficult to accurately call individual SNP frequencies when the depth of sequence coverage is low, variant frequencies were averaged across larger regions. Regional SNP frequency averaging identified two loci, tc_rph1 on chromosome 8 and tc_rph2 on chromosome 9, which together are responsible for high-level resistance. Identification of the two loci was possible with only 5-7× average coverage of the genome per dataset. These loci were subsequently confirmed by direct SNP marker analysis and fine-scale mapping. Individually, homozygosity of tc_rph1 or tc_rph2 results in only weak resistance to phosphine (estimated at up to 1.5-2.5× and 3-5× respectively), whereas in combination they interact synergistically to provide a high-level resistance >200×. The tc_rph2 resistance allele resulted in a significant fitness cost relative to the wild-type allele in unselected beetles over eighteen generations.
Conclusion: We have validated the technique of linkage mapping by low-coverage sequencing of progeny from a simple genetic cross. The approach relied on regional averaging of SNP frequencies and was used to successfully identify candidate gene loci for phosphine resistance in T. castaneum. This is a relatively simple and rapid approach to identifying genomic regions associated with traits in defined genetic crosses that does not require any specialised statistical analysis.
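The core computation — averaging noisy per-SNP frequencies over genomic windows and comparing the selected and unselected pools — is simple enough to sketch. The function names, window size and toy data below are hypothetical, not taken from the study:

```python
def regional_frequencies(positions, freqs, window):
    """Average per-SNP allele frequencies in fixed-width genomic windows.

    positions: sorted SNP coordinates on one chromosome.
    freqs: noisy per-SNP allele frequency estimates from pooled low-coverage reads.
    Returns (window_start, mean_frequency) pairs, skipping empty windows.
    """
    out = []
    if not positions:
        return out
    start, end = 0, window
    limit = positions[-1]
    while start <= limit:
        vals = [f for p, f in zip(positions, freqs) if start <= p < end]
        if vals:
            out.append((start, sum(vals) / len(vals)))
        start, end = end, end + window
    return out

def differential_homozygosity(selected, unselected):
    """Per-window difference between the selected and unselected pools.

    Windows where the selected pool approaches fixation (frequency near 1)
    while the unselected pool does not are candidates for linkage to the
    selected trait.
    """
    return [(s[0], s[1] - u[1]) for s, u in zip(selected, unselected)]
```

Averaging over windows is what makes the 5-7× coverage sufficient: individual SNP calls are unreliable at that depth, but the mean over many SNPs in a region is stable enough to reveal differential regional homozygosity.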
Abstract:
An existence theorem is obtained for a generalized Hammerstein-type equation.
Abstract:
The temperature dependence of the Cl-35 nuclear quadrupole resonance in sodium chlorate and copper chlorate is investigated over the temperature range from 77 to 300 °K. It is shown that the assumptions made in Bayer's theory hold for chlorates. The frequency of the torsional oscillations of the ClO3 group is accordingly calculated with this theory. The calculated value of the torsional frequency agrees well with available values from Raman spectroscopy.
Abstract:
Rae and Davidson have found a striking connection between the averaging method generalised by Kruskal and the diagram technique used by the Brussels school in statistical mechanics. They have considered conservative systems whose evolution is governed by the Liouville equation. In this paper we have considered a class of dissipative systems whose evolution is governed not by the Liouville equation but by the last-multiplier equation of Jacobi whose Fourier transform has been shown to be the Hopf equation. The application of the diagram technique to the interaction representation of the Jacobi equation reveals the presence of two kinds of interactions, namely the transition from one mode to another and the persistence of a mode. The first kind occurs in the treatment of conservative systems while the latter type is unique to dissipative fields and is precisely the one that determines the asymptotic Jacobi equation. The dynamical equations of motion equivalent to this limiting Jacobi equation have been shown to be the same as averaged equations.
Abstract:
In many instances we find it advantageous to display a quantum optical density matrix as a generalized statistical ensemble of coherent wave fields. The weight functions involved in these constructions turn out to belong to a family of distributions, not always smooth functions. In this paper we investigate this question anew and show how it is related to the problem of expanding an arbitrary state in terms of an overcomplete subfamily of the overcomplete set of coherent states. This provides a relatively transparent derivation of the optical equivalence theorem. An interesting by-product is the discovery of a new class of discrete diagonal representations.