980 results for "Random test"
Abstract:
Aiming to establish a rigorous link between macroscopic random motion (described, e.g., by Langevin-type theories) and microscopic dynamics, we have undertaken a kinetic-theoretical study of the dynamics of a classical test particle weakly coupled to a large heat bath in thermal equilibrium. Both subsystems are subject to an external force field. From the (time-non-local) generalized master equation, a Fokker-Planck-type equation follows as a "quasi-Markovian" approximation. The kinetic operator thus defined is shown to be ill-defined; specifically, it does not preserve the positivity of the test-particle distribution function f(x, v; t). An alternative approach, previously introduced for quantum open systems, is proposed and shown to lead to a correct kinetic operator, which yields all the expected properties. Explicit expressions for the diffusion and drift coefficients are obtained, allowing macroscopic diffusion and dynamical-friction phenomena to be modelled in terms of the external field and intrinsic physical parameters.
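For orientation, a Fokker-Planck-type kinetic equation for f(x, v; t) generically takes the form below, with a drift (dynamical friction) coefficient A(v) and a diffusion coefficient D(v); this is a textbook sketch of the equation class under an external force F(x), not necessarily the exact operator derived in the paper.

```latex
\frac{\partial f}{\partial t}
  + v\,\frac{\partial f}{\partial x}
  + \frac{F(x)}{m}\,\frac{\partial f}{\partial v}
  = \frac{\partial}{\partial v}\bigl[ A(v)\, f \bigr]
  + \frac{\partial^{2}}{\partial v^{2}}\bigl[ D(v)\, f \bigr]
```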
Abstract:
A multivariate Fokker-Planck-type kinetic equation modelling a test particle weakly interacting with an electrostatic plasma, in the presence of a magnetic field B, is solved analytically in an Ornstein-Uhlenbeck-type approximation. A new set of analytic expressions is obtained for the variable moments and the particle density as functions of time. The process is diffusive.
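As a numerical companion to the Ornstein-Uhlenbeck-type approximation, the sketch below integrates an OU velocity process by Euler-Maruyama and checks its first two moments against the standard analytic formulas; the friction gamma and diffusion D are illustrative placeholders, not the paper's plasma-derived coefficients.

```python
# Hedged sketch: Ornstein-Uhlenbeck velocity process, dv = -gamma*v dt + sqrt(2D) dW,
# with moments compared to the known analytic results.
import numpy as np

rng = np.random.default_rng(0)
gamma, D, v0 = 1.0, 0.5, 2.0          # illustrative parameters
dt, n_steps, n_paths = 1e-3, 2000, 5000

v = np.full(n_paths, v0)
for _ in range(n_steps):
    # Euler-Maruyama step
    v += -gamma * v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_paths)

t = n_steps * dt
print("mean:", v.mean(), " analytic:", v0 * np.exp(-gamma * t))
print("var :", v.var(),  " analytic:", (D / gamma) * (1 - np.exp(-2 * gamma * t)))
```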
Abstract:
A new nonlinear theory for the perpendicular transport of charged particles is presented. The approach is based on an improved nonlinear treatment of field-line random walk combined with a generalized compound diffusion model, which is more systematic and reliable than previous theories. The new theory shows remarkably good agreement with test-particle simulations and heliospheric observations.
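For context, in the simplest compound-diffusion picture (a textbook limit, not necessarily the generalized model of this paper) the perpendicular mean square displacement is slaved to the field-line random walk: with field-line diffusion coefficient D_FL and parallel diffusion coefficient kappa_parallel,

```latex
\langle (\Delta x)^{2} \rangle = 2\, D_{\mathrm{FL}}\, \langle |\Delta z(t)| \rangle,
\qquad
\langle |\Delta z(t)| \rangle = \sqrt{\frac{4\,\kappa_{\parallel}\, t}{\pi}}
\;\Rightarrow\;
\langle (\Delta x)^{2} \rangle \propto t^{1/2},
```

i.e. subdiffusive perpendicular transport, which generalized compound models then correct.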
Abstract:
Background: Selection bias in HIV prevalence estimates occurs if non-participation in testing is correlated with HIV status. Longitudinal data suggest that individuals who know or suspect they are HIV positive are less likely to participate in testing in HIV surveys, in which case methods to correct for missing data that are based on imputation and observed characteristics will produce biased results. Methods: The identity of the HIV survey interviewer is typically associated with HIV testing participation but is unlikely to be correlated with HIV status. Interviewer identity can thus be used as a selection variable, allowing estimation of Heckman-type selection models. These models produce asymptotically unbiased HIV prevalence estimates even when non-participation is correlated with unobserved characteristics, such as knowledge of HIV status. We introduce a new random-effects method for these selection models which overcomes the non-convergence caused by collinearity, the small-sample bias, and the incorrect inference of existing approaches. Our method is easy to implement in standard statistical software and allows the construction of bootstrapped standard errors which adjust for the fact that the relationship between testing and HIV status is uncertain and needs to be estimated. Results: Using nationally representative data from the Demographic and Health Surveys, we illustrate our approach with new point estimates and confidence intervals (CI) for HIV prevalence among men in Ghana (2003) and Zambia (2007). In Ghana we find little evidence of selection bias, as our selection model gives an HIV prevalence estimate of 1.4% (95% CI 1.2%–1.6%), compared to 1.6% among those with a valid HIV test. In Zambia our selection model gives an HIV prevalence estimate of 16.3% (95% CI 11.0%–18.4%), compared to 12.1% among those with a valid HIV test; those who decline to test in Zambia are therefore more likely to be HIV positive. Conclusions: Our approach corrects for selection bias in HIV prevalence estimates, can be implemented even when HIV prevalence or non-participation is very high or very low, and provides a practical solution for accounting for both sampling and parameter uncertainty in the estimation of confidence intervals. The wide confidence intervals estimated in an example with high HIV prevalence indicate that it is difficult to correct statistically for the bias that may occur when a large proportion of people refuse to test.
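To illustrate the selection-on-unobservables logic, the sketch below runs the classic linear-outcome Heckman two-step on simulated data, with a continuous instrument z standing in for interviewer identity. It is only a mechanical illustration: the paper's actual model is a binary-outcome selection model with interviewer random effects and bootstrapped standard errors, which is more involved.

```python
# Hedged sketch: Heckman two-step on simulated data. Selection depends on u,
# which is correlated (rho = 0.6) with the outcome error e, so the naive mean
# of observed y is biased; the inverse-Mills-ratio correction recovers it.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 20000
z = rng.normal(size=n)                                  # exclusion restriction
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T
participate = (0.5 * z + u > 0)                         # selection equation
y = 1.0 + e                                             # observed only if participate

# Step 1: probit of participation on z (crude grid-search ML keeps this
# sketch dependency-free; a real analysis would use a probit routine).
def probit_ll(b):
    p = norm.cdf(b * z)
    return np.sum(np.log(np.where(participate, p, 1 - p)))
b_hat = max(np.linspace(0.1, 1.0, 91), key=probit_ll)

# Step 2: regress observed y on the inverse Mills ratio.
imr = norm.pdf(b_hat * z) / norm.cdf(b_hat * z)
X = np.column_stack([np.ones(participate.sum()), imr[participate]])
beta, *_ = np.linalg.lstsq(X, y[participate], rcond=None)

print("naive mean of observed y:", y[participate].mean())   # biased upward
print("corrected population mean (intercept):", beta[0])    # close to 1.0
```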
Abstract:
The random-dot kinematogram task is used with the two-alternative forced-choice paradigm to study perceptual decision making. Decision models assume that the motion signals for the two alternatives are encoded in the brain, and that the difference between these two signals is accumulated up to a decision threshold. However, no study to date has tested this hypothesis with stimuli containing opposing motions. This thesis presents the results of two experiments using two new stimuli with competing motion signals. Across a variety of combinations of competing signals, subjects' performance depends on the net difference between the two opposing signals. Moreover, subjects achieve similar performance with the two types of stimuli. These results support a decision model based on the accumulation of net motion evidence and suggest that the decision process can integrate motion signals from a wide range of directions to obtain a global motion percept.
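A minimal way to see the net-evidence account is a drift-diffusion accumulator whose drift is the net motion signal (rightward minus leftward coherence). The sketch below is illustrative only; the gain k, bound, and coherence pairs are hypothetical, not the thesis's stimuli.

```python
# Hedged sketch: drift-diffusion model driven by NET coherence. Two stimulus
# pairs with different opposing-signal mixes but equal net signal should give
# similar accuracy under this account.
import numpy as np

rng = np.random.default_rng(2)

def ddm_trial(coh_right, coh_left, k=2.0, bound=1.0, noise=1.0, dt=1e-3):
    """Accumulate k*(net coherence) plus noise until a decision bound is hit."""
    drift = k * (coh_right - coh_left)
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x > 0), t  # (chose "right"?, reaction time)

for pair in [(0.30, 0.10), (0.50, 0.30)]:   # equal net signal of 0.20
    choices = [ddm_trial(*pair)[0] for _ in range(1000)]
    print(pair, "P(right) =", np.mean(choices))
```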
Abstract:
A novel test of spatial independence of the distribution of crystals or phases in rocks, based on compositional statistics, is introduced. It improves and generalizes the common joins-count statistics known from map analysis in geographic information systems. Assigning phases independently to objects in R^D is modelled by a single-trial multinomial random function Z(x), where the probabilities of the phases add to one and are explicitly modelled as compositions in the K-part simplex S^K. Thus, apparent inconsistencies of the tests based on the conventional joins-count statistics, and their possibly contradictory interpretations, are avoided. In practical applications we assume that the probabilities of the phases do not depend on location but are identical everywhere in the domain of definition. The model then involves the sum of r independent, identically distributed single-trial multinomial random variables, which is an r-trial multinomial random variable. The probabilities of the distribution of the r counts can be considered as a composition in the Q-part simplex S^Q. They span the so-called Hardy-Weinberg manifold H, which is proved to be a (K-1)-affine subspace of S^Q. This is a generalization of the well-known Hardy-Weinberg law of genetics. If the assignment of phases accounts for some kind of spatial dependence, the r-trial probabilities do not remain on H. This suggests using the Aitchison distance between the observed probabilities and H to test for dependence. Moreover, when there is a spatial fluctuation of the multinomial probabilities, the observed r-trial probabilities move on H; this shift can be used to check for such fluctuations. A practical procedure and an algorithm to perform the test have been developed, and several cases applied to simulated and real data are presented. Key words: spatial distribution of crystals in rocks, spatial distribution of phases, joins-count statistics, multinomial distribution, Hardy-Weinberg law, Hardy-Weinberg manifold, Aitchison geometry
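The geometric primitive of the proposed test is the Aitchison distance between compositions. The sketch below computes it via the centred log-ratio (clr) transform; the full test measures the distance from the observed r-trial probabilities to the manifold H, and the Hardy-Weinberg composition and observed values used here are illustrative, not from the paper.

```python
# Hedged sketch: Aitchison distance between two compositions via the clr map.
import numpy as np

def clr(p):
    """Centred log-ratio transform of a strictly positive composition."""
    logp = np.log(np.asarray(p, float))
    return logp - logp.mean()

def aitchison_distance(p, q):
    """Euclidean distance between clr coordinates."""
    return float(np.linalg.norm(clr(p) - clr(q)))

# Example: an observed 2-trial composition vs a Hardy-Weinberg-type point
# (p^2, 2p(1-p), (1-p)^2) with p = 0.6 (made-up numbers).
p = 0.6
hw = [p**2, 2 * p * (1 - p), (1 - p)**2]
obs = [0.40, 0.44, 0.16]
print(aitchison_distance(obs, hw))
```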
Abstract:
The Scholastic Aptitude Test (SAT) is a standardized test frequently used to assess the knowledge acquired during secondary education by students seeking admission to higher education in the USA. This publication provides the information and strategies needed to maximize the score on the SAT Biology test. It teaches readers to think like the test writers and to practise with the material that will appear on the exam in order to study more effectively. It reviews the main biology concepts that will appear on the test and provides, with detailed explanations, strategies for applying the knowledge learned to solve difficult specific questions. It includes two one-hour practice tests with multiple-choice questions.
Abstract:
The Scholastic Aptitude Test (SAT) is a standardized test frequently used to assess the knowledge acquired during secondary education by students seeking admission to higher education in the USA. This publication provides the information and strategies needed to develop the ability to understand and analyse selected literary texts in prose, poetry and drama written in English, and to maximize the score on the SAT Literature test. It teaches readers to think like the test writers and to practise with the material that will appear on the exam in order to study more effectively. It reviews the main literature concepts that will appear on the test and provides detailed explanations of techniques for applying the knowledge learned to solve difficult specific questions. It includes four one-hour practice tests with multiple-choice questions that focus on basic knowledge of literary terms.
Abstract:
The Scholastic Aptitude Test (SAT) is a standardized test frequently used to assess the knowledge acquired during secondary education by students seeking admission to higher education in the USA. This publication provides the information and strategies needed to maximize the score on the SAT Physics test. It teaches readers to think like the test writers and to practise with the material that will appear on the exam in order to study more effectively. It reviews the main physics concepts that will appear on the test and provides, with detailed explanations, strategies for applying the knowledge learned to solve difficult specific questions. It includes two practice tests of seventy-five multiple-choice questions each, with one hour allowed for each test.
Abstract:
Background: Given an observed test statistic and its degrees of freedom, one may compute the observed P value with most statistical packages. It is unknown to what extent test statistics and P values are congruent in published medical papers. Methods: We checked the congruence of statistical results reported in all the papers of volumes 409–412 of Nature (2001) and in a random sample of 63 results from volumes 322–323 of BMJ (2001). We also tested whether the frequencies of the last digit of a sample of 610 test statistics deviated from a uniform distribution (i.e., equally probable digits). Results: 11.6% (21 of 181) and 11.1% (7 of 63) of the statistical results published in Nature and BMJ, respectively, during 2001 were incongruent, probably mostly owing to rounding, transcription, or typesetting errors. At least one such error appeared in 38% and 25% of the papers of Nature and BMJ, respectively. In 12% of the cases, the significance level might change by one or more orders of magnitude. The frequencies of the last digit of the statistics deviated from the uniform distribution and suggested digit preference in rounding and reporting. Conclusions: This incongruence of test statistics and P values is another example that statistical practice is generally poor, even in the most renowned scientific journals, and that the quality of papers should be more closely controlled and valued.
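One standard way to test last-digit uniformity of this kind is a chi-square goodness-of-fit test against equal digit frequencies. The sketch below applies it to 610 simulated digits with a mild preference for 0 and 5 (a plausible rounding artefact); the digit probabilities are invented for illustration, not the study's data.

```python
# Hedged sketch: chi-square test of last-digit uniformity.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(3)
digits = rng.choice(10, size=610, p=[0.14, 0.09, 0.09, 0.09, 0.09,
                                     0.14, 0.09, 0.09, 0.09, 0.09])
counts = np.bincount(digits, minlength=10)
stat, pval = chisquare(counts)        # default expectation: uniform digits
print(counts, "chi2 =", round(stat, 2), "p =", round(pval, 4))
```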
Abstract:
Cloud cover is conventionally estimated from satellite images as the observed fraction of cloudy pixels. Active instruments such as radar and lidar observe in narrow transects that sample only a small percentage of the area over which the cloud fraction is estimated. As a consequence, the fraction estimate has an associated sampling uncertainty, which usually remains unspecified. This paper extends a Bayesian method of cloud fraction estimation that also provides an analytical estimate of the sampling error. The method is applied to test the sensitivity of this error to sampling characteristics, such as the number of observed transects and the variability of the underlying cloud field. The dependence of the uncertainty on these characteristics is investigated using synthetic data simulated to have properties closely resembling observations from the spaceborne NASA-LITE lidar mission. Results suggest that the variance of the cloud fraction is greatest for medium cloud cover and least when conditions are mostly cloudy or clear. However, there is a bias in the estimation, which is greatest around 25% and 75% cloud cover. The sampling uncertainty is also affected by the mean lengths of clouds and of clear intervals; shorter lengths decrease uncertainty, primarily because there are more cloud observations in a transect of a given length. Uncertainty also falls with an increasing number of transects. A sampling strategy aimed at minimizing the uncertainty in transect-derived cloud fraction therefore has to take into account the cloud and clear-sky length distributions as well as the cloud fraction of the observed field. These conclusions have implications for the design of future satellite missions. This paper describes the first integrated methodology for the analytical assessment of sampling uncertainty in cloud fraction observations from forthcoming spaceborne radar and lidar missions such as NASA's Calipso and CloudSat.
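The qualitative result that sampling variance peaks at medium cloud cover can be seen already in the simplest conjugate Bayesian setup: a Beta posterior for the cloud fraction given n cloudy out of N sampled pixels. This toy version assumes independent pixels, whereas the paper's method additionally models along-transect correlation (which inflates the variance).

```python
# Hedged sketch: Beta posterior for cloud fraction under pixel independence.
from scipy.stats import beta

N = 500
for n_cloudy in (50, 250, 450):                  # ~10%, 50%, 90% cover
    post = beta(1 + n_cloudy, 1 + N - n_cloudy)  # uniform Beta(1,1) prior
    print("cover =", n_cloudy / N, " posterior sd =", post.std())
# The posterior sd is largest at 50% cover, smallest near 10% and 90%.
```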
Abstract:
We propose a novel method for scoring the accuracy of protein binding site predictions: the Binding-site Distance Test (BDT) score. Recently, the Matthews Correlation Coefficient (MCC) has been used to evaluate binding site predictions, both by developers of new methods and by the assessors of the community-wide prediction experiment CASP8. Whilst a rigorous scoring method, the MCC does not take into account the actual 3D distance of the predicted residues from the observed binding site. Thus, an incorrectly predicted site that is nevertheless close to the observed binding site obtains an identical score to the same number of non-binding residues predicted at random. The MCC is also somewhat affected by the subjectivity of determining observed binding residues and the ambiguity of choosing distance cutoffs. By contrast, the BDT method produces continuous scores between 0 and 1 that relate to the distance between the predicted and observed residues. Residues predicted close to the binding site score higher than those more distant, providing a better reflection of the true accuracy of predictions. The CASP8 function predictions were evaluated using both the MCC and BDT methods, and the scores were compared. The BDT scores were found to correlate strongly with the MCC scores whilst being less susceptible to the subjectivity of defining binding residues. We therefore suggest that this new, simple score is a potentially more robust method for future evaluations of protein-ligand binding site predictions.
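The abstract does not reproduce the published BDT formula, so the sketch below only illustrates a distance-weighted score of the same flavour: each predicted residue contributes max(0, 1 - d/d0), where d is its distance to the nearest observed binding residue and d0 is a hypothetical cutoff. This is not the authors' definition.

```python
# Hedged sketch: a BDT-like distance-weighted score (illustrative, not the
# published formula). Near-site predictions score close to 1; distant ones 0.
import numpy as np

def distance_score(pred_coords, obs_coords, d0=10.0):
    pred = np.asarray(pred_coords, float)
    obs = np.asarray(obs_coords, float)
    # distance from each predicted residue to its nearest observed residue
    d = np.linalg.norm(pred[:, None, :] - obs[None, :, :], axis=-1).min(axis=1)
    return float(np.mean(np.clip(1.0 - d / d0, 0.0, 1.0)))

obs = [[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]]
print(distance_score([[1.0, 0.0, 0.0]], obs))    # near the site -> ~1
print(distance_score([[50.0, 0.0, 0.0]], obs))   # far away      -> 0
```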
Abstract:
We describe a simple comparative method for determining whether rates of diversification are correlated with continuous traits in species-level phylogenies. The method compares species' trait values with their net speciation rate (the number of nodes linking an extant species to the root, divided by the root-to-tip evolutionary distance) using a phylogenetically corrected correlation. We use simulations to examine the power of this test. The approach has acceptable power to uncover relationships between speciation and a continuous trait and is robust to background random extinction; however, its power is reduced when the rate of trait evolution is decreased, and it has low power to relate diversification to traits when the extinction rate is correlated with the trait. Clearly, there are inherent limitations in using only data on extant species to infer correlates of extinction; nevertheless, this approach is potentially a powerful tool for analyzing correlates of speciation.
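In symbols, the tip-specific net speciation rate described above is

```latex
\hat{\lambda}_i \;=\; \frac{N_i}{T_i},
```

where N_i is the number of nodes on the path from the root to extant tip i and T_i is the root-to-tip evolutionary distance; this quantity is then correlated with the trait using a phylogenetically corrected correlation.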
Abstract:
Epidemiological evidence shows that a diet high in monounsaturated fatty acids (MUFA) but low in saturated fatty acids (SFA) is associated with a reduced risk of CHD. The hypocholesterolaemic effect of MUFA is known, but there has been little research on the effect of test-meal MUFA and SFA composition on postprandial lipid metabolism. The present study investigated the effect of meals containing different proportions of MUFA and SFA on postprandial triacylglycerol and non-esterified fatty acid (NEFA) metabolism. Thirty healthy male volunteers consumed, in random order, three meals containing equal amounts of fat (40 g) but different proportions of MUFA (12, 17 and 24% of energy). Postprandial plasma triacylglycerol, apolipoprotein B-48, cholesterol, HDL-cholesterol, glucose and insulin concentrations and lipoprotein lipase (EC 3.1.1.34) activity were not significantly different following the three meals, which varied in their levels of SFA and MUFA. There was, however, a significant difference in the postprandial NEFA response between meals: the incremental area under the curve of postprandial plasma NEFA concentrations was significantly (P = 0.03) lower following the high-MUFA meal. Regression analysis showed that the non-significant difference in fasting NEFA concentrations was the most important factor determining the difference between meals, and that the test-meal MUFA content had only a minor effect. In conclusion, varying the levels of MUFA and SFA in test meals has little or no effect on postprandial lipid metabolism.
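For readers unfamiliar with the outcome measure, one common way to compute an incremental area under the curve (iAUC) is trapezoidal integration of the response above the fasting baseline, ignoring area below baseline. The time points and NEFA values below are invented for illustration, not the study's data, and definitions of iAUC vary (some do not truncate at baseline).

```python
# Hedged sketch: iAUC of a postprandial response above fasting baseline.
import numpy as np

t = np.array([0, 30, 60, 120, 240, 360], float)          # minutes after meal
nefa = np.array([0.45, 0.30, 0.25, 0.35, 0.55, 0.50])    # mmol/L (made up)

inc = np.clip(nefa - nefa[0], 0.0, None)                  # excursion above baseline
iauc = float(np.sum((inc[1:] + inc[:-1]) / 2 * np.diff(t)))  # trapezoid rule
print("iAUC above baseline:", iauc, "mmol/L·min")
```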
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance within the forecast window. We therefore present a suitable integration scheme that handles the stiffening of the differential equations involved without adding computational expense. Moreover, a transform-based alternative to the EnKBF is developed in which the operations are performed in the ensemble space instead of the state space, and the advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation with deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models: an M-member ensemble detaches into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reversed by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement on the widely used Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding distortion of the mean value of the filtered quantity. Using statistical significance tests at both the local and the field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time-stepping scheme; hence, no retuning of the parameterizations is required. The accuracy of medium-term forecasts is found to be increased by the RAW filter.
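For readers unfamiliar with the RAW filter, the sketch below applies it within leapfrog time stepping on a linear oscillator: the Robert-Asselin displacement d is split between the current and next time levels with weight alpha (alpha = 1 recovers the classic Robert-Asselin filter; alpha near 0.53 is the value Williams recommends). The oscillator and parameter values are illustrative, not SPEEDY's.

```python
# Hedged sketch: leapfrog + RAW filter on dx/dt = i*omega*x.
omega, dt, nu, alpha = 1.0, 0.2, 0.1, 0.53
f = lambda x: 1j * omega * x

x_prev = 1.0 + 0.0j                   # x_{n-1} (treated as already filtered)
x_curr = x_prev + dt * f(x_prev)      # one Euler step to start the leapfrog

for _ in range(500):
    x_next = x_prev + 2 * dt * f(x_curr)            # leapfrog step
    d = 0.5 * nu * (x_prev - 2 * x_curr + x_next)   # filter displacement
    x_prev = x_curr + alpha * d                     # RAW: adjust current level...
    x_curr = x_next + (alpha - 1) * d               # ...and next level

print(abs(x_curr))   # amplitude stays close to 1: weak spurious damping
```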