1000 results for "Null value"
Abstract:
We present a computer-simulation study of the effect of the distribution of energy barriers in an anisotropic magnetic system on the relaxation behavior of the magnetization. While the relaxation law for the magnetization can be approximated in all cases by a time-logarithmic decay, the law for the dependence of the magnetic viscosity on temperature is found to be quite sensitive to the shape of the distribution of barriers. The low-temperature region for the magnetic viscosity never extrapolates to a positive non-null value. Moreover, our computer-simulation results agree reasonably well with some recent relaxation experiments on highly anisotropic single-domain particles.
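The relaxation picture described here can be sketched numerically. Below is a minimal Python illustration (not the authors' code) of magnetization decay for an ensemble of particles with an assumed log-normal distribution of energy barriers and Arrhenius switching rates; the attempt frequency f0, the barrier parameters, and the temperatures are all illustrative assumptions.

```python
# Sketch: time-logarithmic relaxation over an assumed barrier distribution.
import numpy as np

rng = np.random.default_rng(0)
kB = 1.0                     # reduced units (assumption)
f0 = 1e9                     # attempt frequency (assumption)
barriers = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)  # assumed shape

def magnetization(t, T):
    """M(t) as the ensemble average of exp(-t * f0 * exp(-E / kB T))."""
    rates = f0 * np.exp(-barriers / (kB * T))
    return np.exp(-rates * t).mean()

ts = np.logspace(-6, 4, 60)
for T in (0.05, 0.1, 0.2):
    M = np.array([magnetization(t, T) for t in ts])
    # magnetic viscosity S(T) = -dM/d(ln t), sampled mid-window
    S = -np.gradient(M, np.log(ts))
    print(f"T={T}: S ~ {S[len(S)//2]:.4f}")
```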
Abstract:
Subclinical mastitis is a frequent and costly health problem. Intramammary infections (IMI) are often detected using somatic cell count (SCC) measurements; bacteriological culture of milk, however, is required to identify the causative pathogen. Because of this difficulty, virtually all research on subclinical mastitis has focused on the prevalence of IMI, and the risk factors for the incidence or elimination of IMI are poorly known. The main objective of this thesis was to identify the modifiable risk factors associated with the incidence, elimination, and prevalence of important IMI in Canadian dairy herds. First, a systematic review of the literature on the associations between on-farm practices and SCC was carried out. Management practices consistently associated with SCC were identified and distinguished from those supported only by anecdotal reports. Next, a bilingual questionnaire was developed, validated, and used to measure the management practices of a sample of 90 Canadian dairy herds. To validate the instrument, the repeatability and validity of the questionnaire items were analyzed, and the equivalence of the English and French versions was assessed. These analyses identified problematic items that had to be recategorized, where possible, or excluded from subsequent analyses to ensure data quality. Most of the herds studied already used post-milking teat disinfection and blanket dry-cow treatment, but many of the recommended practices saw little use. The modifiable risk factors associated with the incidence, elimination, and prevalence of Staphylococcus aureus IMI were then investigated longitudinally in the 90 selected herds. IMI incidence appeared to be a more important determinant of herd IMI prevalence than IMI elimination. Wearing gloves during milking, pre-milking teat disinfection, and good teat-end condition showed desirable associations with the various IMI measures. These results underline the importance of milking procedures for achieving a long-term reduction in IMI prevalence. Finally, the modifiable risk factors associated with the incidence, elimination, and prevalence of coagulase-negative staphylococci (CNS) IMI were studied in a similar way. However, to account for the limitations of milk bacteriological culture in identifying IMI caused by this group of pathogens, a semi-Bayesian approach using latent class models was adopted. The unadjusted estimates of incidence, elimination, prevalence, and of the associations with exposures all appeared considerably biased by the imperfections of the diagnostic procedure, generally toward the null value. Once again, IMI incidence was the main determinant of herd IMI prevalence. Sand and wood-product bedding, as well as access to pasture, were associated with a lower incidence and prevalence of CNS.
Abstract:
In this paper, by using the Poincaré compactification in R^3 we make a global analysis of the Lorenz system, including a complete description of its dynamical behavior on the sphere at infinity. Combining analytical and numerical techniques, we show that for the parameter value b = 0 the system presents an infinite set of singularly degenerate heteroclinic cycles, which consist of invariant sets formed by a line of equilibria together with heteroclinic orbits connecting two of the equilibria. The dynamical consequences of the existence of such cycles are discussed. In particular, a possibly new mechanism behind the creation of Lorenz-like chaotic attractors is proposed, consisting of the change in the stability index of the saddle at the origin as the parameter b crosses the null value. Based on this mechanism, we have numerically found chaotic attractors for the Lorenz system for small b > 0, that is, near the singularly degenerate heteroclinic cycles.
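The small-b regime is easy to explore numerically. The sketch below integrates the Lorenz system with the classical sigma = 10 and r = 28 and an illustrative small b = 0.05; the specific parameter values are assumptions, not values taken from the paper.

```python
# Sketch: Lorenz system with small b > 0, near the b = 0 degenerate case.
import numpy as np
from scipy.integrate import solve_ivp

sigma, r, b = 10.0, 28.0, 0.05   # small b > 0 (illustrative)

def lorenz(t, u):
    x, y, z = u
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

sol = solve_ivp(lorenz, (0.0, 200.0), [1.0, 1.0, 1.0], max_step=0.01)
print(sol.y[:, -1])  # final state on the numerically observed attractor
```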
Abstract:
Aiming to ensure greater reliability and consistency of the data stored in a database, the data cleaning stage is placed early in the process of Knowledge Discovery in Databases (KDD) and is responsible for eliminating problems and adjusting the data for the later stages, especially for data mining. Such problems occur at both the instance and schema levels: missing values, null values, duplicate tuples, out-of-domain values, among others. Several algorithms have been developed to perform the cleaning step in databases; some of them work specifically with the phonetics of words, since a word can be written in different ways. Within this perspective, the original contribution of this work is a multithreaded optimization of a phonetics-based algorithm for detecting duplicate tuples in databases, which requires no training data and is independent of the language being supported. © 2011 IEEE.
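As a rough illustration of the approach (not the paper's implementation), the sketch below groups tuples by a simplified Soundex phonetic key and scans the groups in parallel with threads; the key function and sample data are assumptions.

```python
# Sketch: phonetic bucketing plus multithreaded duplicate scanning.
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

def soundex(word: str) -> str:
    """Simplified Soundex: first letter plus up to three digits."""
    codes = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
             "l": "4", "mn": "5", "r": "6"}
    word = word.upper()
    key, last = word[0], ""
    for ch in word[1:]:
        digit = next((d for letters, d in codes.items()
                      if ch.lower() in letters), "")
        if digit and digit != last:
            key += digit
        last = digit
    return (key + "000")[:4]

def find_duplicates(names):
    buckets = defaultdict(list)
    for name in names:
        buckets[soundex(name)].append(name)
    def scan(bucket):                      # one task per phonetic bucket
        return bucket if len(bucket) > 1 else None
    with ThreadPoolExecutor() as pool:
        return [b for b in pool.map(scan, buckets.values()) if b]

print(find_duplicates(["Smith", "Smyth", "Jones", "Johnson", "Jonson"]))
```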
Abstract:
The purpose of this work is to define and calculate a factor of collapse for the traditional method used to design sheet pile walls, and to find the parameters that most influence a finite element model of this problem. The text is structured as follows: chapters 1 to 5 analyze a series of topics useful for understanding the problem, while the considerations most directly related to the purpose of the work are reported in chapters 6 to 10. The first part of the document covers: what a sheet pile wall is, which codes govern the design of these structures and what they prescribe, how a mathematical model of the soil can be formulated, some fundamentals of finite element analysis, and, finally, the traditional methods that support the design of sheet pile walls. In chapter 6 we performed a parametric analysis, answering the second part of the purpose of the work. Comparing the results of a laboratory test on a cantilever sheet pile wall in sandy soil with those provided by a finite element model of the same problem, we concluded that, in modelling a sandy soil, attention should be paid to the value of cohesion inserted in the model (some programs, like Abaqus, do not accept a null value for this parameter); the friction angle and the elastic modulus of the soil significantly influence the behavior of the structure-soil system, whereas other parameters, such as the dilatancy angle or Poisson's ratio, do not seem to influence it. The logical path followed in the second part of the text is as follows. We analyzed two different structures, the first able to support an excavation of 4 m and the second an excavation of 7 m. Both structures are first designed using the traditional method, then implemented in a finite element program (Abaqus) and pushed to collapse by decreasing the friction angle of the soil. The factor of collapse is the ratio between the tangent of the initial friction angle and the tangent of the friction angle at collapse. Finally, we performed a more detailed analysis of the first structure, observing that the value of the factor of collapse is influenced by a wide range of parameters, including the coefficients assumed in the traditional method and the relative stiffness of the structure-soil system. In the majority of cases, we found that the factor of collapse lies between 1.25 and 2. With some considerations, reported in the text, these values can be compared with the safety factor proposed by the code (linked to the friction angle of the soil).
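The definition of the factor of collapse given above translates directly into a one-line computation. The sketch below uses illustrative friction angles, not values from the thesis.

```python
# Sketch: factor of collapse = tan(phi_design) / tan(phi_collapse).
import math

def factor_of_collapse(phi_design_deg: float, phi_collapse_deg: float) -> float:
    return (math.tan(math.radians(phi_design_deg))
            / math.tan(math.radians(phi_collapse_deg)))

# Hypothetical numbers: design phi = 35 deg, collapse in the FE model at 22 deg.
print(round(factor_of_collapse(35.0, 22.0), 2))  # ~1.73, inside the 1.25-2 range
```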
Abstract:
Despite increasing interest in the relationship between socioeconomic position (SEP) and health, there remains little understanding of the mechanisms through which SEP is related to chronic disease. This dissertation used data from 2,592 U.S. households in the 1995 telephone survey of the Aging, Status, and the Sense of Control study to (1) investigate potential mediating factors in the association between educational level and prevalence of diabetes and (2) investigate the association between the three major measures of SEP (income, education, and occupation) and the prevalence of diabetes. Regression analyses were conducted to examine the degree to which sense of personal control and social support mediate the association between level of educational attainment and diabetes, and to examine the contribution of each of the SEP measures to diabetes. After adjusting for age, obesity, sex, and race, respondents with less than a high school education had greater odds of having diabetes than those with a college degree or higher, although the corresponding confidence interval contained the null value (OR = 1.2, 95% CI: 0.7, 2.0). Neither sense of control nor social support significantly mediated the association between education and diabetes. However, sense of control was associated with diabetes status (OR = 0.7, 95% CI: 0.5, 1.0). Compared with income and education, employment status was the measure of SEP most strongly associated with diabetes prevalence. After adjusting for age, obesity, sex, and race, respondents who were unable to work due to disability had fourfold greater odds of having diabetes than those who were employed full time (OR = 4.0; 95% CI: 1.9, 8.3). Adding income and/or education to the model did not improve the fit. Understanding the impact of socioeconomic factors on diabetes requires consideration of multiple measures of SEP as well as the psychosocial pathways through which SEP may influence diabetes.
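For readers unfamiliar with the odds-ratio estimates quoted above, the sketch below computes an odds ratio with a 95% Wald confidence interval from a 2x2 table; the counts are hypothetical, not the study's data.

```python
# Sketch: odds ratio and 95% Wald CI from a 2x2 exposure-by-disease table.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = diseased/healthy among exposed; c, d = same among unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(20, 80, 30, 470))  # hypothetical counts
```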
Abstract:
The standard difference model of two-alternative forced-choice (2AFC) tasks implies that performance should be the same when the target is presented in the first or the second interval. Empirical data often show “interval bias” in that percentage correct differs significantly when the signal is presented in the first or the second interval. We present an extension of the standard difference model that accounts for interval bias by incorporating an indifference zone around the null value of the decision variable. Analytical predictions are derived which reveal how interval bias may occur when data generated by the guessing model are analyzed as prescribed by the standard difference model. Parameter estimation methods and goodness-of-fit testing approaches for the guessing model are also developed and presented. A simulation study is included whose results show that the parameters of the guessing model can be estimated accurately. Finally, the guessing model is tested empirically in a 2AFC detection procedure in which guesses were explicitly recorded. The results support the guessing model and indicate that interval bias is not observed when guesses are separated out.
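The guessing model lends itself to a short simulation. In the sketch below, the sensitivity d', the indifference-zone half-width delta, and the guessing bias g are assumed values: the observer compares the two interval observations but guesses with bias g whenever the difference falls inside the indifference zone, which reproduces an interval bias.

```python
# Sketch: 2AFC guessing model with an indifference zone around the null value.
import numpy as np

rng = np.random.default_rng(1)

def pc_by_interval(d_prime=1.0, delta=0.3, g=0.7, n=200_000):
    """Percentage correct when the signal is in interval 1 vs. interval 2."""
    pcs = []
    for signal_first in (True, False):
        x1 = rng.normal(d_prime if signal_first else 0.0, 1.0, n)
        x2 = rng.normal(0.0 if signal_first else d_prime, 1.0, n)
        diff = x1 - x2
        resp1 = np.where(np.abs(diff) < delta,   # inside indifference zone:
                         rng.random(n) < g,      #   guess interval 1 w.p. g
                         diff > 0)               # else pick the larger interval
        correct = resp1 if signal_first else ~resp1
        pcs.append(correct.mean())
    return pcs

print(pc_by_interval())  # biased guessing -> unequal percent correct
```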
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose the Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
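The contrast between the two theories can be made concrete in a few lines of code. The sketch below simulates two groups (illustrative data only) and reports both Fisher's p value as a continuous measure of evidence and a Neyman-Pearson accept/reject decision at a pre-chosen alpha.

```python
# Sketch: Fisher's p value vs. a Neyman-Pearson decision on the same data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 50)   # control group (simulated)
b = rng.normal(0.4, 1.0, 50)   # treated group with a true effect of 0.4

t, p = stats.ttest_ind(a, b)
alpha = 0.05                   # Type I error level chosen in advance
print(f"p = {p:.4f}")          # Fisher: strength of evidence against H0
print("reject H0" if p < alpha else "fail to reject H0")  # Neyman-Pearson
```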
Abstract:
Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for deciding on the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel large debates in the literature. More recently, controversial discussion was initiated by the editorial decision of a scientific journal [1] to refuse any paper submitted for publication that contains null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices.
Abstract:
The multivariate skew-t distribution (J Multivar Anal 79:93-113, 2001; J R Stat Soc, Ser B 65:367-389, 2003; Statistics 37:359-363, 2003) includes the Student t, skew-Cauchy, and Cauchy distributions as special cases and the normal and skew-normal ones as limiting cases. In this paper, we explore the use of Markov Chain Monte Carlo (MCMC) methods to develop a Bayesian analysis of repeated-measures, pretest/post-test data under a multivariate null intercept measurement error model (J Biopharm Stat 13(4):763-771, 2003) where the random errors and the unobserved value of the covariate (latent variable) follow a Student t and a skew-t distribution, respectively. The results and methods are numerically illustrated with an example in the field of dentistry.
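The latent-variable distributions used in this model family can be simulated with Azzalini's stochastic representation. The sketch below (parameter values assumed) draws skew-normal and skew-t variates; it illustrates the distributions themselves, not the paper's MCMC scheme.

```python
# Sketch: skew-normal and skew-t draws via the stochastic representation
# Z = delta*|U0| + sqrt(1 - delta^2)*U1, with T = Z / sqrt(W / nu), W ~ chi2_nu.
import numpy as np

rng = np.random.default_rng(3)

def skew_normal(alpha, size):
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0, u1 = rng.normal(size=size), rng.normal(size=size)
    return delta * np.abs(u0) + np.sqrt(1.0 - delta**2) * u1

def skew_t(alpha, nu, size):
    w = rng.chisquare(nu, size=size)
    return skew_normal(alpha, size) / np.sqrt(w / nu)

x = skew_t(alpha=3.0, nu=4.0, size=100_000)
print(x.mean(), x.std())  # right-skewed, heavy-tailed draws
```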
Abstract:
The skew-normal distribution is a class of distributions that includes the normal distribution as a special case. In this paper, we explore the use of Markov Chain Monte Carlo (MCMC) methods to develop a Bayesian analysis in a multivariate, null intercept, measurement error model [R. Aoki, H. Bolfarine, J.A. Achcar, and D. Leao Pinto Jr, Bayesian analysis of a multivariate null intercept error-in-variables regression model, J. Biopharm. Stat. 13(4) (2003b), pp. 763-771] where the unobserved value of the covariate (latent variable) follows a skew-normal distribution. The results and methods are applied to a real dental clinical trial presented in [A. Hadgu and G. Koch, Application of generalized estimating equations to a dental randomized clinical trial, J. Biopharm. Stat. 9 (1999), pp. 161-178].
A robust Bayesian approach to null intercept measurement error model with application to dental data
Abstract:
Measurement error models often arise in epidemiological and clinical research. Usually, in this setup it is assumed that the latent variable has a normal distribution. However, the normality assumption may not always be correct. The skew-normal/independent distributions are a class of asymmetric thick-tailed distributions which includes the skew-normal distribution as a special case. In this paper, we explore the use of skew-normal/independent distributions as a robust alternative in the null intercept measurement error model under a Bayesian paradigm. We assume that the random errors and the unobserved value of the covariate (latent variable) jointly follow a skew-normal/independent distribution, providing an appealing robust alternative to the routine use of the symmetric normal distribution in this type of model. Specific distributions examined include univariate and multivariate versions of the skew-normal distribution, the skew-t distributions, the skew-slash distributions, and the skew contaminated normal distributions. The methods developed are illustrated using a real data set from a dental clinical trial. (C) 2008 Elsevier B.V. All rights reserved.
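As a simplified reading of this model class (all values assumed; this is not the authors' formulation), the sketch below generates data from a null intercept measurement error structure: replicate measurements proportional to a skew-normal latent covariate, with no constant term.

```python
# Sketch: data-generating process for a null intercept measurement error model
# with a skew-normal latent covariate (all parameter values are assumptions).
import numpy as np

rng = np.random.default_rng(4)
n, betas = 200, np.array([1.0, 0.8, 1.2])   # slopes for 3 repeated measures

def skew_normal(alpha, size):
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0, u1 = rng.normal(size=size), rng.normal(size=size)
    return delta * np.abs(u0) + np.sqrt(1.0 - delta**2) * u1

x = 2.0 + skew_normal(alpha=4.0, size=n)         # latent covariate
e = rng.normal(0.0, 0.3, size=(n, len(betas)))   # measurement errors
y = x[:, None] * betas + e                       # null intercept: no constant
print(y.shape, y.mean(axis=0))
```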
Abstract:
Researchers often rely on the t-statistic to make inference on parameters in statistical models. It is common practice to obtain critical values by simulation techniques. This paper proposes a novel numerical method to obtain an approximately similar test. This test rejects the null hypothesis when the test statistic is larger than a critical value function (CVF) of the data. We illustrate this procedure when regressors are highly persistent, a case in which commonly used simulation methods encounter difficulties controlling size uniformly. Our approach works satisfactorily, controls size, and yields a test which outperforms the two other known similar tests.
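The general idea of simulated critical values can be illustrated briefly. The sketch below is a generic Monte Carlo approximation of a t-statistic's critical value under the null, not the paper's CVF construction.

```python
# Sketch: Monte Carlo critical value for a t-statistic under H0: mean = 0.
import numpy as np

rng = np.random.default_rng(5)

def mc_critical_value(n=40, reps=20_000, level=0.95):
    """Two-sided 5% critical value of the t-statistic, simulated under H0."""
    draws = rng.normal(size=(reps, n))
    tstats = draws.mean(axis=1) / (draws.std(axis=1, ddof=1) / np.sqrt(n))
    return np.quantile(np.abs(tstats), level)

print(mc_critical_value())  # close to the tabulated t critical value, ~2.02
```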
Abstract:
We discuss the relation between correlation functions of twist-two large-spin operators and expectation values of Wilson loops along light-like trajectories. After presenting some heuristic field-theoretical arguments suggesting this relation, we compute the divergent part of the correlator in the limit of large 't Hooft coupling and large spins, using a semi-classical world-sheet which asymptotically looks like a GKP rotating string. We show that this diverges as expected from the expectation value of a null Wilson loop, namely as (ln μ⁻²)², μ being a cut-off of the theory. (C) 2012 Elsevier B.V. All rights reserved.