989 results for Statistical Theory
Abstract:
In this paper, we analyze recent developments in econometrics in light of the theory of statistical tests. We first review some basic principles of the philosophy of science and of statistical theory, emphasizing parsimony and falsifiability as criteria for evaluating models, the role of test theory as a formalization of the falsification principle for probabilistic models, and the logical justification of the basic notions of test theory (such as the level of a test). We then show that some of the most widely used statistical and econometric methods are fundamentally inappropriate for the problems and models considered, while many hypotheses for which testing procedures are commonly proposed are in fact not testable at all. Such situations lead to ill-posed statistical problems. We analyze several particular cases of such problems: (1) the construction of confidence intervals in structural models that raise identification problems; (2) the construction of tests for nonparametric hypotheses, including procedures robust to heteroskedasticity, non-normality, or dynamic specification. We point out that these difficulties often stem from the ambition to weaken the regularity conditions required for any statistical analysis, as well as from an inappropriate use of asymptotic distributional results. Finally, we underscore the importance of formulating testable hypotheses and models, and of proposing econometric techniques whose properties can be established in finite samples.
Abstract:
The sociological tradition generally opposes two theses: the individualist and the holist. These characterizations suggest that the first thesis focuses on the actions of individuals in order to explain society; this style developed mainly in Germany through Max Weber. The holist thesis takes a more global position, explaining society through social facts, and is considered French through the father of sociology, Émile Durkheim. Yet several French authors have snubbed the German tradition and brought back to the fore a compatriot who opposed Durkheim: Gabriel Tarde. These reintroductions were produced in opposition to the Durkheimian theses, which would leave the individual a victim of the social context in which he finds himself. German sociology already offers an opposition of this kind to theories postulating a real and distinct object for sociology. Why reintroduce an author who had disappeared from sociology to take the place of others who are still present? The hypothesis is that Tarde proposes a different individualism, expressed through a particular notion of the individual. A comparative study of the Durkheimian and Tardean corpora nevertheless reveals that these two authors share most of the characteristics associated with the common-sense definition of the individual. The opposition between Durkheim and Tarde does not concern the place of the individual in social science, but rather a different interpretation of certain aspects of statistical theory. These social theories were built on this notion, which suggests that some of their explanatory problems may be linked to this foundation.
Abstract:
The study deals with the distribution theory and applications of concomitants from the Morgenstern family of bivariate distributions. The Morgenstern system includes all bivariate cumulative distributions of the form F_{X,Y}(x, y) = F_X(x) F_Y(y)[1 + α(1 − F_X(x))(1 − F_Y(y))], with −1 ≤ α ≤ 1. The system provides a very general expression for a bivariate distribution from which members can be derived by substituting expressions for any desired set of marginal distributions. The study gives a brief description of the basic distribution theory and a quick review of the existing literature. Order statistics play a very important role in statistical theory and practice, and accordingly a remarkably large body of literature has been devoted to their study; they help to develop special methods of statistical inference that are valid with respect to a broad class of distributions. The present study develops the general distribution theory of the concomitant M_{k,[r:m]} from the Morgenstern family of distributions and discusses some applications in inference, in particular the estimation of the parameter of the marginal variable Y in Morgenstern-type uniform distributions.
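As a concrete illustration of the family described above, the following Python sketch draws pairs from the Farlie-Gumbel-Morgenstern copula (the uniform-marginal case of the Morgenstern system) by inverting the conditional distribution, and then extracts the concomitants of the X order statistics. The function names and the uniform-marginal choice are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def sample_fgm(n, alpha, rng=None):
    """Draw n pairs (U, V) with uniform marginals from the FGM copula
    C(u, v) = u v [1 + alpha (1 - u)(1 - v)],  -1 <= alpha <= 1."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)              # conditional quantile levels
    a = alpha * (1.0 - 2.0 * u)          # coefficient in the conditional CDF
    # Conditional CDF of V given U = u is v[1 + a(1 - v)]; invert the quadratic.
    safe_a = np.where(np.abs(a) < 1e-12, 1.0, a)
    v_quad = ((1.0 + a) - np.sqrt((1.0 + a) ** 2 - 4.0 * a * w)) / (2.0 * safe_a)
    v = np.where(np.abs(a) < 1e-12, w, v_quad)
    return u, v

def concomitants(x, y):
    """Return the Y values rearranged so that the r-th entry is the
    concomitant Y_[r:n] of the r-th order statistic X_(r:n)."""
    order = np.argsort(x)
    return y[order]

if __name__ == "__main__":
    u, v = sample_fgm(1000, alpha=0.7, rng=42)
    y_conc = concomitants(u, v)
    print(y_conc[:5])   # concomitants of the five smallest X values
```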
Abstract:
This document presents a summary of the main theoretical currents that have tried to explain labor discrimination, as well as a pedagogical exposition of the main forms through which such discrimination manifests itself. A particular analysis is made of the consequences of this discrimination, especially for young people in Latin America, and finally a review is made of the studies on the subject carried out in Colombia, their methodological aspects, and the conclusions these studies have reached.
Abstract:
The coarsening of the nanoporous structure developed in undoped and 3% Sb-doped SnO2 sol-gel dip-coated films deposited on a mica substrate was studied by time-resolved small-angle x-ray scattering (SAXS) during in situ isothermal treatments at 450 and 650 °C. The time dependence of the structure function derived from the experimental SAXS data is in reasonable agreement with the predictions of the statistical theory of dynamical scaling, suggesting that the coarsening process in the studied nanoporous structures exhibits dynamical self-similar properties. The kinetic exponents of the power-law time dependence of the characteristic scaling length of undoped SnO2 and 3% Sb-doped SnO2 films are similar (α ≈ 0.09), this value being invariant with respect to the firing temperature. In the case of undoped SnO2 films, another kinetic exponent, α′, corresponding to the maximum of the structure function, was determined to be approximately equal to three times the exponent α, as expected for a random three-dimensional coarsening process in the dynamical scaling regime. Instead, for 3% Sb-doped SnO2 films fired at 650 °C, we determined that α′ ≈ 2α, suggesting a two-dimensional coarsening of the porous structure. The analysis of the dynamical scaling functions and their asymptotic behavior at high q (q being the modulus of the scattering vector) provided additional evidence for the two-dimensional character of the pore structure of the 3% Sb-doped SnO2 films. The presented experimental results support the hypothesis that the dynamic scaling concept is valid for describing the coarsening process in anisotropic nanoporous systems.
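For readers less familiar with the dynamical scaling framework invoked here, the standard relations behind the α′ ≈ 3α versus α′ ≈ 2α argument can be summarized as follows. This is a hedged statement of the general theory, not equations quoted from this particular paper; d denotes the effective dimensionality of the coarsening process.

```latex
% Dynamical scaling ansatz: all structure functions S(q,t) collapse onto a
% single master curve F once lengths are measured in units of the
% characteristic scaling length L(t), which grows as a power law in time.
\[
  S(q,t) \;\propto\; L(t)^{\,d}\, F\!\bigl(q\,L(t)\bigr),
  \qquad
  L(t) \;\propto\; t^{\alpha}.
\]
% The maximum of the structure function then grows as L(t)^d, so its
% kinetic exponent is alpha' = d * alpha: approximately 3*alpha for a
% three-dimensional process and 2*alpha for a two-dimensional one.
\[
  S(q_m,t) \;\propto\; t^{\alpha'},
  \qquad
  \alpha' = d\,\alpha .
\]
```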
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The text aims to evaluate the causes leading to the development of the informal sector and, above all, its consequences for the registered (formal) economy. It therefore defines the concept of the shadow economy and sets out the varied activities it comprises. The objective is to analyze the growth of this phenomenon in Brazil, from its supposed rise in the early 1970s to recent days. Finally, some measurement methods that allow the size of the shadow economy to be estimated are introduced. Among these methods, in order to evaluate the behavior of the informal sector in Brazil, the MIMIC (Multiple Indicators and Multiple Causes) model was used; it rests on a statistical theory of unobserved, or latent, variables that considers multiple causes and multiple effects or indicators. The analysis of the results obtained is then developed.
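As background on the model class mentioned above, a generic MIMIC specification links a single latent variable to observed causes and indicators. The formulation below is a textbook sketch; the symbols and the number of causes and indicators are chosen here for illustration rather than taken from the study.

```latex
% Structural equation: the latent variable eta_t (e.g. the size of the
% shadow economy at time t) is driven by q observed causes x_1,...,x_q.
\[
  \eta_t \;=\; \gamma_1 x_{1t} + \cdots + \gamma_q x_{qt} + \zeta_t
\]
% Measurement equations: p observed indicators y_1,...,y_p reflect the
% latent variable up to measurement error.
\[
  y_{jt} \;=\; \lambda_j \,\eta_t + \varepsilon_{jt},
  \qquad j = 1,\dots,p .
\]
```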
Abstract:
The presented study carried out an analysis of rural landscape changes. In particular, the study focuses on understanding the driving forces acting on the rural built environment using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A literature review reveals a general lack of studies dealing with the modeling of the rural built environment, and hence a theoretical modeling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in the number of buildings. The specific aim of the presented study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces acting on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps covering different aspects of the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the most suitable algorithm to adopt in relation to the statistical theory and methods used, and the calibration and evaluation of the model. Different combinations of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such suitability; conversely, the absence of buildings expresses a combination of agents that is not suitable for building allocation. Presence or absence of buildings can therefore be adopted as indicators of these driving conditions, since they represent the expression of the action of driving forces in the land-suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modeling techniques, provides evidence of which driving forces are involved in the allocation dynamic and insight into their level of influence on the process. GIS software, through its spatial analysis tools, allows presence and absence to be associated with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and they are generated by a stochastic mechanism. Possible driving forces are selected and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for the analysis of explanatory variables and for the identification of the key driving variables behind the site-selection process for new building allocation.
The model developed by following this methodology is applied to a case study to test the validity of the methodology. In particular, the study area chosen for testing is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model is carried out on spatial data covering the peri-urban and rural parts of the study area over the 1975-2005 time period, by means of a generalized linear model. The output of the model fit is a continuous grid surface whose cells take values ranging from 0 to 1, representing the probability of building occurrence across the rural and peri-urban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is related to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretation capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas and over different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, together with the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short- to medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the time interval used for the calibration.
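To make the calibration step more concrete, the following Python sketch fits a binomial generalized linear model with a logit link (i.e. logistic regression) to presence/absence points with cell-level covariates. The file name, column names, and the choice of statsmodels are illustrative assumptions, not details taken from the study.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical table: one row per sampled point, with a 0/1 presence flag
# and candidate driving-force covariates extracted in a GIS.
points = pd.read_csv("sampled_points.csv")
covariates = ["slope", "dist_road", "dist_town", "land_value"]

X = sm.add_constant(points[covariates])      # add intercept term
y = points["building_present"]               # 1 = presence, 0 = absence

# Binomial GLM with the default logit link: logistic regression.
model = sm.GLM(y, X, family=sm.families.Binomial())
result = model.fit()
print(result.summary())                      # coefficient signs/sizes indicate driving-force influence

# Predicted probabilities in [0, 1] can be written back onto the grid
# to produce a building-occurrence probability surface.
points["p_building"] = result.predict(X)
```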
Abstract:
Professor Sir David R. Cox (DRC) is widely acknowledged as among the most important scientists of the second half of the twentieth century. He inherited the mantle of statistical science from Pearson and Fisher, advanced their ideas, and translated statistical theory into practice so as to forever change the application of statistics in many fields, especially biology and medicine. The logistic and proportional hazards models he substantially developed are arguably among the most influential biostatistical methods in current practice. This paper looks forward over the period from DRC's 80th to 90th birthdays to speculate about the future of biostatistics, drawing lessons from DRC's contributions along the way. We consider "Cox's model" (CM) of biostatistics, an approach to statistical science that: formulates scientific questions or quantities in terms of parameters γ in probability models f(y; γ) that represent, in a parsimonious fashion, the underlying scientific mechanisms (Cox, 1997); partitions the parameters γ = (θ, η) into a subset of interest θ and other "nuisance parameters" η necessary to complete the probability distribution (Cox and Hinkley, 1974); develops methods of inference about the scientific quantities that depend as little as possible upon the nuisance parameters (Barndorff-Nielsen and Cox, 1989); and thinks critically about the appropriate conditional distribution on which to base inferences. We briefly review exciting biomedical and public health challenges that are capable of driving statistical developments in the next decade. We discuss the statistical models and model-based inferences central to the CM approach, contrasting them with the computationally intensive strategies for prediction and inference advocated by Breiman and others (e.g. Breiman, 2001) and with more traditional design-based methods of inference (Fisher, 1935). We discuss the hierarchical (multi-level) model as an example of the future challenges and opportunities for model-based inference. We then consider the role of conditional inference, a second key element of the CM. Recent examples from genetics are used to illustrate these ideas. Finally, the paper examines causal inference and statistical computing, two other topics we believe will be central to biostatistics research and practice in the coming decade. Throughout the paper, we attempt to indicate how DRC's work and the "Cox Model" have set a standard of excellence to which all can aspire in the future.
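A canonical illustration of the parameter-partitioning idea described above is the proportional hazards model itself: the regression coefficients are the parameters of interest, while the baseline hazard is an infinite-dimensional nuisance parameter eliminated by the partial likelihood. The equations below are the standard textbook form, given here as background rather than quoted from the paper.

```latex
% Proportional hazards model: beta is the parameter of interest,
% the baseline hazard lambda_0(t) is the nuisance parameter.
\[
  \lambda(t \mid x) \;=\; \lambda_0(t)\, \exp(x^{\top}\beta)
\]
% Cox's partial likelihood depends on beta only: each factor is the
% conditional probability that subject i fails at its observed time t_i
% given the risk set R(t_i), so lambda_0(t) cancels out.
\[
  L_p(\beta) \;=\; \prod_{i:\,\delta_i = 1}
    \frac{\exp(x_i^{\top}\beta)}
         {\sum_{j \in R(t_i)} \exp(x_j^{\top}\beta)}
\]
```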
Abstract:
The last few years have seen the advent of high-throughput technologies to analyze various properties of the transcriptome and proteome of several organisms. The congruency of these different data sources, or lack thereof, can shed light on the mechanisms that govern cellular function. A central challenge for bioinformatics research is to develop a unified framework for combining the multiple sources of functional genomics information and testing associations between them, thus obtaining a robust and integrated view of the underlying biology. We present a graph theoretic approach to test the significance of the association between multiple disparate sources of functional genomics data by proposing two statistical tests, namely edge permutation and node label permutation tests. We demonstrate the use of the proposed tests by finding significant association between a Gene Ontology-derived "predictome" and data obtained from mRNA expression and phenotypic experiments for Saccharomyces cerevisiae. Moreover, we employ the graph theoretic framework to recast a surprising discrepancy presented in Giaever et al. (2002) between gene expression and knockout phenotype, using expression data from a different set of experiments.
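To illustrate one of the two tests described above, here is a minimal sketch of a node-label permutation test: the graph is held fixed while node labels are shuffled to build a null distribution for a chosen statistic. The toy graph, labels, and test statistic are invented for illustration; the paper's actual statistics and data differ.

```python
import numpy as np
import networkx as nx

def same_label_edges(graph, labels):
    """Test statistic: number of edges joining nodes with the same label."""
    return sum(labels[u] == labels[v] for u, v in graph.edges())

def node_label_permutation_test(graph, labels, n_perm=10_000, seed=0):
    """Permute node labels while keeping the graph fixed; return the observed
    statistic and the empirical p-value for a value at least as large."""
    rng = np.random.default_rng(seed)
    nodes = list(graph.nodes())
    values = np.array([labels[n] for n in nodes])
    observed = same_label_edges(graph, labels)
    count = 0
    for _ in range(n_perm):
        permuted = dict(zip(nodes, rng.permutation(values)))
        if same_label_edges(graph, permuted) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)

if __name__ == "__main__":
    # Toy example: a random graph whose node labels mimic two functional classes.
    g = nx.erdos_renyi_graph(50, 0.1, seed=1)
    lab = {n: ("A" if n < 25 else "B") for n in g.nodes()}
    stat, p = node_label_permutation_test(g, lab, n_perm=2000)
    print(f"same-label edges = {stat}, permutation p-value = {p:.3f}")
```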