971 results for Autocorrelation (Statistics)
Abstract:
Integrating single nucleotide polymorphism (SNP) p-values from genome-wide association studies (GWAS) across genes and pathways is a strategy to improve statistical power and gain biological insight. Here, we present Pascal (Pathway scoring algorithm), a powerful tool for computing gene and pathway scores from SNP-phenotype association summary statistics. For gene score computation, we implemented analytic and efficient numerical solutions to calculate test statistics. We examined in particular the sum and the maximum of chi-squared statistics, which measure the average and the strongest association signals per gene, respectively. For pathway scoring, we use a modified Fisher method, which offers not only a significant power improvement over more traditional enrichment strategies, but also eliminates the problem of arbitrary threshold selection inherent in any binary-membership-based pathway enrichment approach. We demonstrate the marked increase in power by analyzing summary statistics from dozens of large meta-studies for various traits. Our extensive testing indicates that our method not only excels in rigorous type I error control, but also results in more biologically meaningful discoveries.
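Under the strong simplifying assumption of independent SNPs (Pascal itself corrects for linkage disequilibrium between SNPs, which this sketch deliberately omits), the two gene scores described above can be illustrated as:

```python
from scipy.stats import chi2

def gene_score_sum(snp_pvalues):
    """Sum-of-chi-squared gene score (average signal), assuming
    independent SNPs: each p-value is converted to a 1-df chi-squared
    statistic, and their sum is chi-squared with len(snp_pvalues) df."""
    total = sum(chi2.isf(p, df=1) for p in snp_pvalues)
    return chi2.sf(total, df=len(snp_pvalues))

def gene_score_max(snp_pvalues):
    """Max-of-chi-squared gene score (strongest signal): under
    independence, a Sidak-style correction of the smallest p-value."""
    return 1.0 - (1.0 - min(snp_pvalues)) ** len(snp_pvalues)
```

The sum score rewards many moderately associated SNPs, while the max score rewards a single strong hit; in real data, the correlation structure of nearby SNPs must be folded into both null distributions.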
Abstract:
This article proposes a checklist to improve statistical reporting in manuscripts submitted to Public Understanding of Science. Generally, these guidelines will allow reviewers (and readers) to judge whether the evidence provided in a manuscript is relevant. The article ends with further suggestions for improving the statistical quality of the journal.
Abstract:
Learning object repositories (LORs), which address content management and preservation, have the positive side effects of institutional positioning and dissemination, but their main benefit is the empowerment of interest-centred learning communities. Once we recognise that learning is much more than content, content becomes infrastructure: the LOR provides the learner with interaction not only with the LOs but also with other learners and teachers.
Abstract:
Our first objective is to compare the degree of concentration in manufacturing and services, with special emphasis on its evolution in these two sectors, using a sensitivity analysis for different concentration indices and different geographic units of analysis: municipalities and local labour systems of Catalonia in 1991 and 2001. Most concentration measures fail to consider the space in which a particular municipality is located. Our second objective is to overcome this problem by applying two different techniques: using a clustering measure, and analysing whether the location quotients computed for each municipality and sector exhibit spatial autocorrelation. We pay special attention to differences in concentration patterns according to the technological level of the sectors.
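The two ingredients of the second technique can be sketched as follows. The employment figures and contiguity matrix below are hypothetical, and this minimal Moran's I omits the row-standardisation and significance-testing choices a real spatial analysis would involve:

```python
import numpy as np

def location_quotients(sector_emp, total_emp):
    """Location quotient of one sector for each spatial unit:
    (local sector share) / (aggregate sector share).
    LQ > 1 means the sector is over-represented in that unit."""
    local_share = sector_emp / total_emp
    aggregate_share = sector_emp.sum() / total_emp.sum()
    return local_share / aggregate_share

def morans_i(x, w):
    """Moran's I statistic for spatial autocorrelation of x, given a
    symmetric binary contiguity matrix w (no row-standardisation)."""
    z = np.asarray(x, dtype=float) - np.mean(x)
    n = z.size
    return n * (w * np.outer(z, z)).sum() / (w.sum() * (z ** 2).sum())
```

A positive Moran's I on the location quotients indicates that municipalities with similar specialisation levels neighbour each other, which is exactly the spatial clustering that unit-by-unit concentration indices miss.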
Abstract:
The reduction of quantum scattering leads to the suppression of shot noise. In this Letter, we analyze the crossover from the quantum transport regime with universal shot noise to the classical regime where noise vanishes. By making use of the stochastic path integral approach, we find the statistics of transport and the transmission properties of a chaotic cavity as a function of a system parameter controlling the crossover. We identify three different scenarios of the crossover.
Abstract:
The modified Bartlett-Lewis rectangular pulse (BLPRM) model simulates precipitation series at hourly and sub-hourly time scales and has six parameters for each of the twelve months of the year. This study aimed to evaluate the behavior of 15-min precipitation series obtained by simulation with the BLPRM model in two situations: (a) when the parameters are estimated from different combinations of statistics, creating five different sets; and (b) when assessing the suitability of the model for generating rainfall. To fit the parameters, rain-gauge records from Pelotas/RS/Brazil were used, from which the following statistics were estimated: mean, variance, covariance, lag-1 autocorrelation coefficient, and the proportion of dry days in the period considered. The results showed that the parameters related to the onset of precipitation (λ) and to its intensity (μx) were the most stable, while the least stable was the ν parameter, related to rainfall duration. The BLPRM model adequately represented the mean, variance, and proportion of dry periods of the 15-min precipitation series; however, the temporal dependence of rainfall depths, represented by the lag-1 autocorrelation coefficient, was less well reproduced, reducing the suitability of the simulated series at the 15-min duration.
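For reference, the lag-1 autocorrelation coefficient used among the fitting statistics can be computed for any series as follows (a generic sketch, not the study's own code):

```python
def lag1_autocorr(series):
    """Sample lag-1 autocorrelation coefficient of a time series:
    covariance between consecutive observations divided by the
    overall variance of the series."""
    n = len(series)
    mean = sum(series) / n
    num = sum((series[t] - mean) * (series[t + 1] - mean) for t in range(n - 1))
    den = sum((v - mean) ** 2 for v in series)
    return num / den
```

Values near 1 indicate strong persistence between consecutive 15-min rainfall depths; a simulated series that underestimates this coefficient will look choppier than the observed record.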
Abstract:
This dissertation examines knowledge and industrial knowledge creation processes. It looks at the way knowledge is created in industrial processes based on data, which is transformed into information and finally into knowledge. In the context of this dissertation, the main tools for industrial knowledge creation are statistical methods. This dissertation strives to define industrial statistics. This is done using an expert opinion survey, which was sent to a number of industrial statisticians. The survey was conducted to create a definition for this field of applied statistics and to demonstrate the wide applicability of statistical methods to industrial problems. In this part of the dissertation, traditional methods of industrial statistics are introduced. As industrial statistics are the main tool for knowledge creation, the basics of statistical decision making and statistical modeling are also included. The widely known Data-Information-Knowledge-Wisdom (DIKW) hierarchy serves as a theoretical background for this dissertation. The way that data is transformed into information, information into knowledge, and knowledge finally into wisdom is used as a theoretical frame of reference. Some scholars have, however, criticized the DIKW model. Based on these different perceptions of the knowledge creation process, a new knowledge creation process based on statistical methods is proposed. In the context of this dissertation, data is a source of knowledge in industrial processes. Because of this, the mathematical categorization of data into continuous and discrete types is explained. Different methods for gathering data from processes are clarified as well. Two methods are used for data gathering in this dissertation: surveys and measurements. The enclosed publications provide an example of the wide applicability of statistical methods in industry. In these publications, data is gathered using surveys and measurements.
The enclosed publications have been chosen so that each employs different statistical methods in analyzing the data. There are some similarities between the analysis methods used in the publications, but mainly different methods are used. Based on this dissertation, the use of statistical methods for industrial knowledge creation is strongly recommended. Statistical methods make it possible to handle large datasets, and the various types of statistical analysis results can easily be transformed into knowledge.
Abstract:
A trade-off between return and risk plays a central role in financial economics. The intertemporal capital asset pricing model (ICAPM) proposed by Merton (1973) provides a neoclassical theory for expected returns on risky assets. The model assumes that risk-averse investors (seeking to maximize their expected utility of lifetime consumption) demand compensation for bearing systematic market risk and the risk of unfavorable shifts in the investment opportunity set. Although the ICAPM postulates a positive relation between the conditional expected market return and its conditional variance, the empirical evidence on the sign of the risk-return trade-off is conflicting. In contrast, autocorrelation in stock returns is one of the most consistent and robust findings in empirical finance. While autocorrelation is often interpreted as a violation of market efficiency, it can also reflect factors such as market microstructure or time-varying risk premia. This doctoral thesis investigates the relation between the mixed risk-return trade-off results and autocorrelation in stock returns. The results suggest that, in the case of the US stock market, the relative contribution of the risk-return trade-off and autocorrelation in explaining the aggregate return fluctuates with volatility. This effect is then shown to be even more pronounced in the case of emerging stock markets. During high-volatility periods, expected returns can be described using rational (intertemporal) investors acting to maximize their expected utility. During low-volatility periods, market-wide persistence in returns increases, leading to a failure of traditional equilibrium-model descriptions for expected returns. Consistent with this finding, traditional models yield conflicting evidence concerning the sign of the risk-return trade-off.
The changing relevance of the risk-return trade-off and autocorrelation can be explained by heterogeneous agents or, more generally, by the inadequacy of the neoclassical view on asset pricing with unboundedly rational investors and perfect market efficiency. In the latter case, the empirical results imply that the neoclassical view is valid only under certain market conditions. This offers an economic explanation as to why it has been so difficult to detect a positive trade-off between the conditional mean and variance of the aggregate stock return. The results highlight the importance, especially in the case of emerging stock markets, of accounting for both the risk-return trade-off and autocorrelation in applications that require estimates for expected returns.
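The kind of regime-dependent autocorrelation analysis described above can be sketched minimally as follows. The rolling-volatility regime definition and window length are illustrative assumptions, not the thesis's actual methodology, and splicing returns across regime boundaries is a simplification a careful study would avoid:

```python
import numpy as np

def regime_autocorr(returns, window=20):
    """Lag-1 autocorrelation of returns within low- and high-volatility
    regimes, where a regime is defined by whether the rolling standard
    deviation of returns is below or above its own median."""
    r = np.asarray(returns, dtype=float)
    vol = np.array([r[max(0, t - window):t + 1].std() for t in range(len(r))])
    low = vol <= np.median(vol)

    def ac1(x):
        # Sample lag-1 autocorrelation of the spliced regime subsample.
        z = x - x.mean()
        return (z[:-1] * z[1:]).sum() / (z ** 2).sum()

    return ac1(r[low]), ac1(r[~low])
```

On market data, a higher lag-1 coefficient in the low-volatility subsample than in the high-volatility one would be consistent with the pattern reported in the thesis: persistence dominates when volatility is calm, and the risk-return trade-off dominates when it is not.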