76 results for "Random error"
Abstract:
Directed evolution techniques have been used to improve the thermal stability of xylanase A from Bacillus subtilis (XylA). Two generations of random mutant libraries generated by error-prone PCR, coupled with a single generation of DNA shuffling, produced a series of mutant proteins with increasing thermostability. The most thermostable XylA variant from the third generation contained four mutations, Q7H, G13R, S22P, and S179C, and showed an increase in melting temperature of 20 degrees C. The thermodynamic properties of a representative subset of nine XylA variants spanning a range of thermostabilities were measured by thermal denaturation, monitored by the change in the far-ultraviolet circular dichroism signal. Analysis of the data from these thermostable variants demonstrated a correlation between a decrease in the heat capacity change (Delta C(p)) and an increase in the midpoint of the transition temperature (T(m)) on transition from the native to the unfolded state. This result could not be interpreted solely in terms of the changes in accessible surface area of the protein on transition from the native to the unfolded state. Since all the mutations are located at the surface of the protein, these results suggest that an explanation of the decrease in Delta C(p) should include effects arising from the protein/solvent interface.
Abstract:
We recently predicted the existence of random primordial magnetic fields (RPMFs) in the form of randomly oriented cells with dipole-like structure, with a cell size L(0) and an average magnetic field B(0). Here, we investigate models for primordial magnetic fields with a similar web-like structure, and other geometries, differing perhaps in L(0) and B(0). The effect of RPMFs on the formation of the first galaxies is investigated. The filtering mass, M(F), is the halo mass below which baryon accretion is severely depressed. We show that RPMFs could influence the formation of galaxies by altering the filtering mass and the baryon gas fraction of a halo, f(g). The effect is particularly strong in small galaxies. We find, for example, that for a comoving B(0) = 0.1 mu G, a reionization epoch that starts at z(s) = 11 and ends at z(e) = 8, and L(0) = 100 pc at z = 12, f(g) becomes severely depressed for M < 10(7) M(circle dot), whereas for B(0) = 0 it becomes severely depressed only for much smaller masses, M < 10(5) M(circle dot). We suggest that observations of M(F) and f(g) at high redshifts can give information on the intensity and structure of primordial magnetic fields.
Abstract:
We estimate the conditions for detectability of two planets in a 2/1 mean-motion resonance from radial velocity data, as a function of their masses, the number of observations, and the signal-to-noise ratio. Even for a data set of the order of 100 observations and standard deviations of the order of a few meters per second, we find that Jovian-size resonant planets are difficult to detect if the masses of the planets differ by a factor larger than ~4. This is consistent with the present population of real exosystems in the 2/1 commensurability, most of which have resonant pairs with similar minimum masses, and could indicate that many other resonant systems exist but are currently beyond the detectability limit. Furthermore, we analyze the error distribution in masses and orbital elements of orbital fits from synthetic data sets for resonant planets in the 2/1 commensurability. For various mass ratios and numbers of data points we find that the eccentricity of the outer planet is systematically overestimated, although the inner planet's eccentricity suffers a much smaller effect. If the initial conditions correspond to small-amplitude oscillations around stable apsidal corotation resonances, the amplitudes estimated from the orbital fits are biased toward larger amplitudes, in accordance with results found in real resonant extrasolar systems.
Abstract:
Non-linear methods for estimating variability in time series are currently in widespread use. Among such methods are approximate entropy (ApEn) and sample entropy (SampEn). The applicability of ApEn and SampEn in analyzing data is evident and their use is increasing. However, consistency is a point of concern in these tools, i.e., the classification of the temporal organization of a data set might indicate a relatively less ordered series in relation to another when the opposite is true. As highlighted by their proponents themselves, ApEn and SampEn might present incorrect results due to this lack of consistency. In this study, we present a method which gains consistency by applying ApEn repeatedly over a wide range of combinations of window lengths and matching error tolerances. The tool is called volumetric approximate entropy, vApEn. We analyze nine artificially generated prototypical time series with different degrees of temporal order (combinations of sine waves, logistic maps with different control parameter values, and random noises). While ApEn/SampEn clearly fail to consistently identify the temporal order of the sequences, vApEn does so correctly. In order to validate the tool we performed shuffled and surrogate data analyses. Statistical analysis confirmed the consistency of the method. (C) 2008 Elsevier Ltd. All rights reserved.
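The construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `apen` is the standard approximate entropy (self-matches included), and `volumetric_apen` (a hypothetical name) simply aggregates it over an assumed grid of embedding dimensions and tolerance fractions.

```python
import numpy as np

def apen(x, m, r):
    """Approximate entropy of series x: embedding dimension m, tolerance r."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def phi(k):
        # All overlapping templates of length k.
        templ = np.array([x[i:i + k] for i in range(n - k + 1)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        # Fraction of templates within tolerance r (self-matches included).
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

def volumetric_apen(x, ms=(1, 2, 3), r_fracs=(0.1, 0.15, 0.2, 0.25)):
    """Aggregate ApEn over a grid of window lengths and tolerances
    (assumed grid values; the paper's actual grid is not specified here)."""
    sd = np.std(x)
    return sum(apen(x, m, f * sd) for m in ms for f in r_fracs)
```

As expected for the prototypical series mentioned above, the aggregated value is much larger for white noise than for a pure sine wave, since every (m, r) cell of the grid points the same way.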
Abstract:
Habitat use and the processes which determine fish distribution were evaluated at the reef flat and reef crest zones of a tropical, algal-dominated reef. Our comparisons indicated significant differences in the majority of the evaluated environmental characteristics between zones. Significant differences in the abundances of twelve of the thirteen analyzed species were also observed within and between sites. According to null models, non-random patterns of species co-occurrence were significant, suggesting that fish guilds in both zones were non-randomly structured. Unexpectedly, structural complexity negatively affected overall species richness, but had a major positive influence on highly site-attached species such as a damselfish. Depth and substrate composition, particularly macroalgae cover, were positive determinants of fish assemblage structure in the studied reef, prevailing over factors such as structural complexity and live coral cover. Our results conflict with other studies carried out in coral-dominated reefs of the Caribbean and Pacific, supporting the idea that the factors which may potentially influence reef fish composition are highly site-dependent and variable.
Abstract:
The Prospective and Retrospective Memory Questionnaire (PRMQ) has been shown to have acceptable reliability and factorial, predictive, and concurrent validity. However, the PRMQ has never been administered to a probability sample survey representative of all ages in adulthood, nor have previous studies controlled for factors that are known to influence metamemory, such as affective status. Here, the PRMQ was applied in a survey adopting a probabilistic three-stage cluster sample representative of the population of Sao Paulo, Brazil, according to gender, age (20-80 years), and economic status (n=1042). After excluding participants who had conditions that impair memory (depression, anxiety, psychotropic medication use, and/or neurological/psychiatric disorders), in the remaining 664 individuals we (a) used confirmatory factor analyses to test competing models of the latent structure of the PRMQ, and (b) studied effects of gender, age, schooling, and economic status on prospective and retrospective memory complaints. The model with the best fit confirmed the same tripartite structure (a general memory factor and two orthogonal prospective and retrospective memory factors) previously reported. Women complained more of general memory slips, especially those in the first 5 years after menopause, and there were more complaints of prospective than retrospective memory, except in participants with lower family income.
Abstract:
In this paper we deal with robust inference in heteroscedastic measurement error models. Rather than the normal distribution, we postulate a Student t distribution for the observed variables. Maximum likelihood estimates are computed numerically. Consistent estimation of the asymptotic covariance matrices of the maximum likelihood and generalized least squares estimators is also discussed. Three test statistics are proposed for testing hypotheses of interest, with the asymptotic chi-square distribution guaranteeing correct asymptotic significance levels. Results of simulations and an application to a real data set are also reported. (C) 2009 The Korean Statistical Society. Published by Elsevier B.V. All rights reserved.
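The robustness gained by replacing the normal with a Student t can be illustrated, outside the paper's measurement error setting, with a toy location fit: the t likelihood downweights gross outliers that drag the normal MLE (the sample mean). The EM iteration below, with degrees of freedom fixed at 3, is a standard textbook scheme, not the paper's estimator.

```python
import numpy as np

def t_location_em(x, df=3.0, iters=100):
    """EM iteration for the location/scale MLE of a Student t with fixed df.

    Each E-step gives observation i the weight (df + 1) / (df + z_i**2),
    where z_i is the current standardized residual, so outliers are
    downweighted; the M-step is a weighted mean and variance.
    """
    x = np.asarray(x, dtype=float)
    mu, s2 = x.mean(), x.var()
    for _ in range(iters):
        w = (df + 1.0) / (df + (x - mu) ** 2 / s2)
        mu = np.sum(w * x) / np.sum(w)
        s2 = np.sum(w * (x - mu) ** 2) / len(x)
    return mu, np.sqrt(s2)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 99), [100.0]])  # one gross outlier
normal_loc = data.mean()              # normal MLE: pulled toward the outlier
t_loc, t_scale = t_location_em(data)  # t MLE: stays near the bulk of the data
```

With the outlier at 100, the sample mean sits near 1 while the t location estimate remains close to 0, the center of the clean data.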
Abstract:
The skew-normal distribution is a class of distributions that includes the normal distribution as a special case. In this paper, we explore the use of Markov chain Monte Carlo (MCMC) methods to develop a Bayesian analysis in a multivariate, null intercept, measurement error model [R. Aoki, H. Bolfarine, J.A. Achcar, and D. Leao Pinto Jr, Bayesian analysis of a multivariate null intercept errors-in-variables regression model, J. Biopharm. Stat. 13(4) (2003b), pp. 763-771] in which the unobserved value of the covariate (latent variable) follows a skew-normal distribution. The results and methods are applied to a real dental clinical trial presented in [A. Hadgu and G. Koch, Application of generalized estimating equations to a dental randomized clinical trial, J. Biopharm. Stat. 9 (1999), pp. 161-178].
Abstract:
We investigate the eigenvalue statistics of ensembles of normal random matrices as their order N tends to infinity. In the model, the eigenvalues have uniform density within a region determined by a simple analytic polynomial curve. We study the conformal deformations of equilibrium measures of normal random ensembles to the real line and give sufficient conditions for them to converge weakly to a Wigner measure.
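The simplest concrete instance of uniform eigenvalue density inside an analytic curve is the complex Ginibre ensemble, whose eigenvalues fill the unit disk uniformly as N grows (the circular law). The sketch below illustrates that limit numerically; it is an illustration of the phenomenon only, not of the paper's conformal deformation construction.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 500
# Complex Ginibre matrix: iid complex Gaussian entries with variance 1/N.
G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
radii = np.abs(np.linalg.eigvals(G))

# Almost all eigenvalues lie inside the unit circle (up to edge fluctuations) ...
frac_inside = np.mean(radii < 1.05)
# ... and uniform density on the disk means P(|z| < t) is approximately t**2,
# so about half the eigenvalues fall inside radius sqrt(1/2).
frac_half = np.mean(radii < np.sqrt(0.5))
```

For N = 500 the empirical fractions already track the limiting uniform law closely.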
Abstract:
In this Letter we deal with a nonlinear Schrodinger equation with chaotic, random, and nonperiodic cubic nonlinearity. Our goal is to study the soliton evolution, with the strength of the nonlinearity perturbed in the space and time coordinates, and to check its robustness under these conditions. We show that the chaotic perturbation is more effective in destroying the soliton behavior than random or nonperiodic perturbations. For a real system, the perturbation can be related to, e.g., impurities in crystalline structures, or coupling to a thermal reservoir which, on average, enhances the nonlinearity. We also discuss the relevance of such random perturbations to the dynamics of Bose-Einstein condensates and their collective excitations and transport. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
We consider random generalizations of a quantum model of infinite range introduced by Emch and Radin. The generalizations allow a neat extension from the class l(1) of absolutely summable lattice potentials to the optimal class l(2) of square-summable potentials first considered by Khanin and Sinai and generalized by van Enter and van Hemmen. The approach to equilibrium in the case of a Gaussian distribution is proved to be faster than for a Bernoulli distribution for both short-range and long-range lattice potentials. While exponential decay to equilibrium is excluded in the nonrandom l(1) case, it is proved to occur for both short- and long-range potentials for Gaussian distributions, and for potentials of class l(2) in the Bernoulli case. Open problems are discussed.
Abstract:
We discuss the applicability, within random matrix theory, of a perturbative treatment of symmetry breaking to experimental data on flip symmetry breaking in a quartz crystal. We find that the values of the parameter that measures this breaking differ for the spacing distribution as compared with the spectral rigidity. We consider both two-fold and three-fold symmetries. The latter was found to account better for the spectral rigidity than the former. Both cases, however, underestimate the experimental spectral rigidity at large L. This discrepancy can be resolved if an appropriate number of eigenfrequencies is considered to be missing in the sample. Our findings are relevant for symmetry violation studies in general. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Bose systems subject to the action of external random potentials are considered. To describe the system properties under the action of spatially random potentials of arbitrary strength, the stochastic mean-field approximation is employed. When the strength of disorder increases, the extended Bose-Einstein condensate fragments into spatially disconnected regions, forming a granular condensate. Increasing the strength of disorder even further transforms the granular condensate into a normal glass. The influence of time-dependent external potentials is also discussed. Rapidly varying temporal potentials, to some extent, imitate the action of spatially random potentials. In particular, a strong time-alternating potential can induce the appearance of a nonequilibrium granular condensate.
Abstract:
Mixed models may be defined with or without reference to sampling, and can be used to predict realized random effects, as when estimating the latent values of study subjects measured with response error. When the model is specified without reference to sampling, a simple mixed model includes two random variables, one stemming from an exchangeable distribution of latent values of study subjects and the other, from the study subjects` response error distributions. Positive probabilities are assigned to both potentially realizable responses and artificial responses that are not potentially realizable, resulting in artificial latent values. In contrast, finite population mixed models represent the two-stage process of sampling subjects and measuring their responses, where positive probabilities are only assigned to potentially realizable responses. A comparison of the estimators over the same potentially realizable responses indicates that the optimal linear mixed model estimator (the usual best linear unbiased predictor, BLUP) is often (but not always) more accurate than the comparable finite population mixed model estimator (the FPMM BLUP). We examine a simple example and provide the basis for a broader discussion of the role of conditioning, sampling, and model assumptions in developing inference.
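For the simple mixed model described above (exchangeable latent subject values plus response error), the usual BLUP of a subject's latent value is a shrinkage of that subject's mean toward the grand mean. A minimal balanced-design sketch, with variance components assumed known (hypothetical values; in practice they are estimated):

```python
import numpy as np

def blup(subject_means, n_obs, sigma2_b, sigma2_e):
    """BLUP of each subject's latent value in a one-way random-effects model.

    subject_means : per-subject mean response
    n_obs         : observations per subject (equal across subjects, for simplicity)
    sigma2_b      : variance of the latent (between-subject) distribution
    sigma2_e      : response-error variance
    """
    subject_means = np.asarray(subject_means, dtype=float)
    grand_mean = subject_means.mean()
    # Shrinkage factor: reliability of a subject mean for its latent value.
    k = sigma2_b / (sigma2_b + sigma2_e / n_obs)
    return grand_mean + k * (subject_means - grand_mean)
```

For example, with sigma2_b = 1, sigma2_e = 4, and n_obs = 4, the shrinkage factor is 0.5, so subject means of 0 and 10 are pulled to 2.5 and 7.5.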
Abstract:
In this paper we study the accumulated claim over some fixed time period, dropping the classical assumption of mutual independence between the variables involved. Two basic models are considered: Model 1 assumes that any pair of claims is equally correlated, which means that the corresponding square-integrable sequence is an exchangeable one. Model 2 states that the correlations between adjacent claims are the same. Recurrence relations and explicit expressions for the joint probability generating function are derived, and the impact of the dependence parameter (correlation coefficient) in both models is examined. The Markov binomial distribution is obtained as a particular case under the assumptions of Model 2. (C) 2007 Elsevier B.V. All rights reserved.
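The particular case of Model 2 can be made concrete with a small simulation: the Markov binomial distribution counts claims whose indicators form a stationary two-state Markov chain with marginal claim probability p and lag-one correlation rho. The transition probabilities below follow from stationarity; the parameter values are assumed for illustration, and the sketch is not the paper's derivation.

```python
import numpy as np

def markov_binomial_sample(n, p, rho, size, rng):
    """Sample the number of claims among n periods when the claim indicators
    form a stationary two-state Markov chain with marginal claim probability p
    and lag-one correlation rho (the Markov binomial distribution)."""
    p11 = p + (1 - p) * rho  # P(claim | previous period had a claim)
    p01 = p * (1 - rho)      # P(claim | previous period had no claim)
    counts = np.zeros(size, dtype=int)
    for s in range(size):
        x = rng.random() < p  # draw the first indicator from the stationary law
        total = int(x)
        for _ in range(n - 1):
            x = rng.random() < (p11 if x else p01)
            total += x
        counts[s] = total
    return counts

rng = np.random.default_rng(1)
draws = markov_binomial_sample(n=20, p=0.3, rho=0.4, size=20000, rng=rng)
# The mean stays at n*p regardless of rho, while positive correlation between
# adjacent claims inflates the variance above the independent binomial value
# n*p*(1-p) -- the impact of the dependence parameter mentioned above.
```

Setting rho = 0 recovers the ordinary binomial distribution, since then p11 = p01 = p.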