989 results for 010405 Statistical Theory


Relevance: 30.00%

Abstract:

We review and structure some of the mathematical and statistical models that have been developed over the past half century to grapple with theoretical and experimental questions about the stochastic development of aging over the life course. We suggest that the mathematical models are in large part addressing the problem of partitioning the randomness in aging: How does aging vary between individuals, and within an individual over the life course? How much of the variation is inherently related to some qualities of the individual, and how much is entirely random? How much of the randomness is cumulative, and how much is merely short-term flutter? We propose that recent lines of statistical inquiry in survival analysis could usefully grapple with these questions, all the more so if they were more explicitly linked to the relevant mathematical and biological models of aging. To this end, we describe points of contact among the various lines of mathematical and statistical research. We suggest some directions for future work, including the exploration of information-theoretic measures for evaluating components of stochastic models as the basis for analyzing experiments and anchoring theoretical discussions of aging.
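
A minimal simulation can make the abstract's "partitioning of randomness" concrete. The sketch below is not from the paper; all parameters are illustrative. It draws a fixed Gamma frailty per individual (between-individual randomness), a random walk for cumulative within-individual damage, and fresh noise each step for short-term flutter, on top of a Gompertz baseline hazard:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000            # individuals
dt = 0.1              # years per time step
t_max = 120.0

# Between-individual randomness: a fixed Gamma-distributed frailty (mean 1).
frailty = rng.gamma(shape=4.0, scale=0.25, size=n)

# Gompertz baseline hazard h0(t) = a * exp(b * t); illustrative parameters.
a, b = 1e-4, 0.085

alive = np.ones(n, dtype=bool)
damage = np.zeros(n)             # cumulative within-individual randomness
death_age = np.full(n, np.inf)

t = 0.0
while t < t_max and alive.any():
    k = alive.sum()
    damage[alive] += rng.normal(0.0, 0.05, k)   # random walk: only the sum matters
    flutter = rng.normal(0.0, 0.2, k)           # short-term flutter: no memory
    hazard = frailty[alive] * a * np.exp(b * t + damage[alive] + flutter)
    dies = rng.random(k) < 1.0 - np.exp(-hazard * dt)
    idx = np.flatnonzero(alive)[dies]
    death_age[idx] = t
    alive[idx] = False
    t += dt

print(f"median lifespan: {np.median(death_age):.1f} years")
```

Re-running with each component switched off shows how much of the lifespan variance each source of randomness contributes.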

Relevance: 30.00%

Abstract:

Statistical diagnostics of mixing and transport are computed for a numerical model of forced shallow-water flow on the sphere and a middle-atmosphere general circulation model. In particular, particle dispersion statistics, transport fluxes, Liapunov exponents (probability density functions and ensemble averages), and tracer concentration statistics are considered. It is shown that the behavior of the diagnostics is in accord with that of kinematic chaotic advection models so long as stochasticity is sufficiently weak. Comparisons with random-strain theory are made.
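
To reproduce the flavour of these diagnostics, the sketch below computes two of them, relative particle dispersion and a finite-time Liapunov exponent, for the time-periodic double-gyre, a standard kinematic chaotic-advection test flow used here only as a stand-in for the paper's shallow-water and general circulation models:

```python
import numpy as np

# Time-periodic double-gyre: a standard kinematic chaotic-advection test flow.
A, eps, om = 0.1, 0.25, 2 * np.pi / 10

def velocity(x, y, t):
    st = eps * np.sin(om * t)
    f = st * x**2 + (1 - 2 * st) * x
    dfdx = 2 * st * x + (1 - 2 * st)
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def advect(x, y, t0, t1, dt=0.01):
    t = t0
    while t < t1:
        u, v = velocity(x, y, t)   # forward Euler is enough for a sketch
        x, y = x + u * dt, y + v * dt
        t += dt
    return x, y

# Particle dispersion: spread of an initially tight cluster of tracers.
rng = np.random.default_rng(1)
x0 = 1.0 + 0.01 * rng.standard_normal(500)
y0 = 0.5 + 0.01 * rng.standard_normal(500)
xT, yT = advect(x0, y0, 0.0, 20.0)
print("relative dispersion:", np.var(xT) + np.var(yT))

# Finite-time Liapunov exponent at one point, from finite differences
# of the flow map.
d = 1e-6
pts = np.array([[1.0, 0.5], [1.0 + d, 0.5], [1.0, 0.5 + d]])
fx, fy = advect(pts[:, 0], pts[:, 1], 0.0, 20.0)
J = np.array([[(fx[1] - fx[0]) / d, (fx[2] - fx[0]) / d],
              [(fy[1] - fy[0]) / d, (fy[2] - fy[0]) / d]])
ftle = np.log(np.linalg.norm(J, 2)) / 20.0   # (1/T) * ln(max singular value)
print("FTLE:", ftle)
```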

Relevance: 30.00%

Abstract:

In this paper we provide a connection between the geometrical properties of the attractor of a chaotic dynamical system and the distribution of extreme values. We show that the extremes of so-called physical observables are distributed according to the classical generalised Pareto distribution and derive explicit expressions for the scaling and the shape parameter. In particular, we derive that the shape parameter does not depend on the chosen observables, but only on the partial dimensions of the invariant measure on the stable, unstable, and neutral manifolds. The shape parameter is negative and is close to zero when high-dimensional systems are considered. This result agrees with what was derived recently using the generalized extreme value approach. Combining the results obtained using such physical observables and the properties of the extremes of distance observables, it is possible to derive estimates of the partial dimensions of the attractor along the stable and the unstable directions of the flow. Moreover, by writing the shape parameter in terms of moments of the extremes of the considered observable and by using linear response theory, we relate the sensitivity to perturbations of the shape parameter to the sensitivity of the moments, of the partial dimensions, and of the Kaplan–Yorke dimension of the attractor. Preliminary numerical investigations provide encouraging results on the applicability of the theory presented here. The results presented here do not apply for all combinations of Axiom A systems and observables, but the breakdown seems to be related to very special geometrical configurations.
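
For reference, the generalised Pareto distribution for exceedances y = X - u over a high threshold u is

```latex
H_{\xi,\sigma}(y) \;=\; 1 - \Bigl(1 + \frac{\xi y}{\sigma}\Bigr)^{-1/\xi},
\qquad y \ge 0, \quad \sigma > 0, \quad \xi \neq 0,
```

so a negative shape parameter xi gives a distribution with finite upper endpoint u + sigma/|xi|, as expected for observables bounded on a compact attractor, while xi approaching zero from below, the high-dimensional regime described above, recovers an approximately exponential (Gumbel-like) tail.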

Relevance: 30.00%

Abstract:

Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced-order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need for stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced-order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced-order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
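
The sketch below illustrates the kind of stochastic SSP the review discusses, on the single-scale Lorenz '96 model: the coupling to unresolved fast variables is replaced by a deterministic linear closure plus AR(1) red noise, in the spirit of Wilks (2005). The closure and noise coefficients are illustrative, not fitted to anything:

```python
import numpy as np

# Lorenz '96 with a stochastic subgrid-scale parameterization: the coupling to
# unresolved fast variables is replaced by a deterministic linear closure plus
# AR(1) red noise (a memory term in the sense discussed in the abstract).
K, F, dt = 8, 20.0, 0.001
b0, b1 = -0.3, -1.0           # deterministic closure g(x) = b0 + b1 * x
phi, sigma_e = 0.95, 0.6      # AR(1) memory and noise level

rng = np.random.default_rng(2)
x = F + rng.standard_normal(K)
e = np.zeros(K)               # one AR(1) forcing term per resolved variable

def tendency(x, forcing):
    # dx_k/dt = (x_{k+1} - x_{k-2}) x_{k-1} - x_k + F + forcing
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F + forcing

for _ in range(50_000):
    e = phi * e + sigma_e * np.sqrt(1 - phi**2) * rng.standard_normal(K)
    x = x + dt * tendency(x, b0 + b1 * x + e)   # Euler for brevity; use RK4 in practice

print("long-run mean/variance of resolved modes:", x.mean(), x.var())
```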

Relevance: 30.00%

Abstract:

We discuss the applicability, within random matrix theory, of a perturbative treatment of symmetry breaking to experimental data on flip-symmetry breaking in a quartz crystal. We find that the values of the parameter that measures this breaking differ for the spacing distribution as compared with the spectral rigidity. We consider both two-fold and three-fold symmetries. The latter is found to account better for the spectral rigidity than the former. Both cases, however, underestimate the experimental spectral rigidity at large L. This discrepancy can be resolved if an appropriate number of eigenfrequencies is considered to be missing in the sample. Our findings are relevant for symmetry-violation studies in general.
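
The underlying mechanism can be illustrated with a generic block-GOE toy model (not the paper's quartz analysis): two GOE blocks realise an exact two-fold symmetry, an off-diagonal coupling of relative strength lam breaks it, and the nearest-neighbour spacing statistics interpolate between superposed and fully mixed spectra:

```python
import numpy as np

rng = np.random.default_rng(3)

def goe(n):
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2 * n)

def spacings(lam, n=150, trials=100):
    out = []
    for _ in range(trials):
        h = np.zeros((2 * n, 2 * n))
        h[:n, :n], h[n:, n:] = goe(n), goe(n)      # two symmetry sectors
        v = lam * rng.standard_normal((n, n)) / np.sqrt(2 * n)
        h[:n, n:], h[n:, :n] = v, v.T              # symmetry-breaking coupling
        e = np.linalg.eigvalsh(h)
        e = e[len(e) // 4 : -(len(e) // 4)]        # keep the spectral bulk
        d = np.diff(e)
        out.extend(d / d.mean())                   # crude local unfolding
    return np.asarray(out)

# The fraction of very small spacings drops as the symmetry is broken:
# level repulsion sets in between the two formerly independent blocks.
for lam in (0.0, 0.3, 1.0):
    s = spacings(lam)
    print(f"lam={lam}: fraction of spacings < 0.25 -> {np.mean(s < 0.25):.3f}")
```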

Relevance: 30.00%

Abstract:

The design of translation-invariant, locally defined binary image operators over large windows is made difficult by decreased statistical precision and increased training time. We present a complete framework for the application of stacked design, a recently proposed technique that creates two-stage operators to circumvent this difficulty. We propose a novel algorithm, based on information theory, to find groups of pixels that should be used together to predict the output value. We employ this algorithm to automate the process of creating a set of first-level operators that are later combined in a global operator. We also propose a principled way to guide this combination, using feature selection and model comparison. Experimental results show that the proposed framework leads to better results than single-stage design.
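
An information-theoretic grouping criterion of this kind can be illustrated in a few lines. In the sketch below the data, the 3x3 window, and the target function are synthetic inventions for illustration; a candidate group of window pixels is scored by the empirical mutual information between the group's joint pattern and the ideal output:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
n, w = 50_000, 9                       # training pairs, 3x3 window
X = rng.integers(0, 2, size=(n, w))    # observed binary window patterns
y = X[:, 4] ^ (X[:, 1] & X[:, 7])      # synthetic "ideal" output

def mutual_information(X, y, group):
    """I(X_group ; y) in bits, from empirical joint frequencies."""
    m = len(y)
    patterns = list(map(tuple, X[:, group]))
    pxy = Counter(zip(patterns, y.tolist()))
    px, py = Counter(patterns), Counter(y.tolist())
    mi = 0.0
    for (xp, yv), c in pxy.items():
        mi += (c / m) * np.log2(c * m / (px[xp] * py[yv]))
    return mi

print(mutual_information(X, y, [4]))          # centre pixel alone
print(mutual_information(X, y, [1, 4, 7]))    # jointly informative group scores higher
```

The second group determines the output exactly, so its score reaches the full output entropy; greedy selection over such scores is one plausible way to form first-level operator windows.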

Relevance: 30.00%

Abstract:

Quadratic assignment problems (QAPs) are commonly solved by heuristic methods, where the optimum is sought iteratively. Heuristics are known to provide good solutions, but the quality of those solutions, i.e., a confidence interval for the optimum, is unknown. This paper uses statistical optimum estimation techniques (SOETs) to assess the quality of genetic-algorithm solutions for QAPs. We examine how different SOETs perform with regard to bias, coverage rate, and interval length, and we compare the SOET lower bound with deterministic ones. The commonly used deterministic bounds are confined to only a few algorithms. We show that Jackknife estimators perform better than Weibull estimators, and that when the number of heuristic solutions is as large as 100, higher-order Jackknife estimators perform better than lower-order ones. Compared with deterministic bounds, the SOET lower bound performs significantly better than most deterministic lower bounds and is comparable with the best deterministic ones.
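
Jackknife-type point estimators in this literature combine the smallest order statistics of the sampled heuristic values with alternating binomial weights; the first-order form is 2x(1) - x(2). The sketch below uses synthetic objective values, not a real QAP run, and the weight scheme is our reading of the SOET literature rather than the paper's exact formula:

```python
import numpy as np
from math import comb

def jackknife_min(values, order=1):
    """Order-M jackknife-type estimate of the minimum: the M+1 smallest
    values combined with alternating binomial weights (order 1: 2*x1 - x2)."""
    x = np.sort(np.asarray(values))
    m = order + 1
    return sum((-1) ** (k + 1) * comb(m, k) * x[k - 1] for k in range(1, m + 1))

# Synthetic stand-in for 100 genetic-algorithm objective values on a QAP
# instance whose (unknown) optimum is 1000.
rng = np.random.default_rng(5)
solutions = 1000 + 40 * rng.weibull(2.0, size=100)

for order in (1, 2, 3):
    print(f"JK order {order}: {jackknife_min(solutions, order):.1f}")
print(f"best observed: {solutions.min():.1f}")   # always an upper bound on the optimum
```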

Relevance: 30.00%

Abstract:

Solutions to combinatorial optimization problems, such as facility-location problems, frequently rely on heuristics to minimize the objective function. The optimum is sought iteratively, and a criterion is needed to decide when the procedure has (almost) attained it. Pre-setting the number of iterations dominates in OR applications, which implies that the quality of the solution cannot be ascertained. A small, almost dormant, branch of the literature suggests using statistical principles to estimate the minimum and its bounds, as a tool for deciding when to stop and for evaluating the quality of the solution. In this paper we examine the performance of statistical bounds obtained from four different estimators, using simulated annealing on p-median test problems taken from Beasley's OR-Library. We find the Weibull estimator and the second-order Jackknife estimator preferable, and the required sample size to be about 10, much less than the current recommendation. However, reliable statistical bounds are found to depend critically on a sample of high-quality heuristic solutions, and we give a simple statistic useful for checking this quality. We end the paper with an illustration of using statistical bounds in a problem of locating some 70 distribution centers of the Swedish Post in one Swedish region.
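
A minimal sketch of the Weibull approach, with synthetic data standing in for repeated simulated-annealing objective values: a three-parameter Weibull is fitted to the sample of best values, and its location parameter serves as the statistical estimate of the unknown minimum:

```python
import numpy as np
from scipy import stats

# Statistical (Weibull) estimate of the unknown minimum of a p-median objective.
rng = np.random.default_rng(6)
R = 10                                         # sample size of heuristic runs
runs = 5000 + 80 * rng.weibull(1.5, size=R)    # stand-in for SA objective values

# scipy's three-parameter Weibull fit returns (shape, location, scale);
# the fitted location estimates the minimum of the objective's support.
shape, loc, scale = stats.weibull_min.fit(runs)
best = runs.min()
print(f"best observed value  : {best:.1f}")     # deterministic upper bound
print(f"Weibull location     : {loc:.1f}")      # statistical estimate of the minimum
print(f"statistical bound gap: {best - loc:.1f}")
```

The gap between the best observed value and the fitted location is the quantity a stopping rule can monitor: when it is small relative to the objective, further iterations buy little.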

Relevance: 30.00%

Abstract:

Generalized linear mixed models are flexible tools for modeling non-normal data and are useful for accommodating overdispersion in Poisson regression models with random effects. Their main difficulty resides in parameter estimation, because there is no analytic solution for the maximization of the marginal likelihood. Many methods have been proposed for this purpose, and many of them are implemented in software packages. The purpose of this study is to compare the performance of three different statistical principles (marginal likelihood, extended likelihood, and Bayesian analysis) via simulation studies. Real data on contact wrestling are used for illustration.
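
The estimation difficulty is easy to exhibit: with a normal random intercept, the marginal likelihood of each cluster is an intractable integral that must be approximated numerically, e.g., by Gauss-Hermite quadrature. A self-contained sketch with synthetic overdispersed Poisson counts follows (grid search instead of a proper optimizer, for brevity):

```python
import numpy as np
from scipy.special import gammaln

# Marginal likelihood of a Poisson model with a normal random intercept:
#   L_i = integral of prod_j Poisson(y_ij | exp(b0 + u)) * N(u | 0, s^2) du
# has no closed form; approximate it with 30-point Gauss-Hermite quadrature.
nodes, weights = np.polynomial.hermite_e.hermegauss(30)   # weight exp(-x^2/2)

def cluster_loglik(y, b0, s):
    u = s * nodes                          # quadrature points for u ~ N(0, s^2)
    mu = np.exp(b0 + u)                    # Poisson mean at each node
    ll = y.sum() * np.log(mu) - y.size * mu - gammaln(y + 1).sum()
    return np.log(weights @ np.exp(ll)) - 0.5 * np.log(2 * np.pi)

# Synthetic data: 200 clusters of 5 counts each, overdispersed via the intercept.
rng = np.random.default_rng(7)
u = rng.normal(0.0, 0.8, 200)
Y = rng.poisson(np.exp(1.0 + u)[:, None], size=(200, 5))

# Maximize the approximate marginal likelihood by brute-force grid search.
grid = [(b, s) for b in np.linspace(0.5, 1.5, 21) for s in np.linspace(0.3, 1.3, 21)]
b_hat, s_hat = max(grid, key=lambda p: sum(cluster_loglik(y, *p) for y in Y))
print(f"marginal ML estimates: b0 ~ {b_hat:.2f}, sigma ~ {s_hat:.2f}")  # true: 1.0, 0.8
```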

Relevance: 30.00%

Abstract:

We develop a framework to explain the private capital flows between the rest of the world and an emerging economy. The model, based on the monetary premium theory, relates an endogenous supply of foreign capital to an endogenous interest-rate differential; its estimation uses the econometric techniques initiated by Heckman. Four questions regarding capital flows are explored, including the statistical process that governs default events and the impact of the probability of default on the interest-rate differential. Using this methodology, we analyse the dynamics of foreign capital movements in Brazil during the 1991-1998 period.
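
The "techniques initiated by Heckman" are selection corrections: a rate differential is observed only when flows occur, so a naive regression on the observed sample is biased. A minimal sketch of the classic two-step correction, with synthetic variables standing in for the paper's data:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Heckman's two-step selection correction; all variables are synthetic.
rng = np.random.default_rng(8)
n = 5000
z = rng.standard_normal((n, 2))                  # selection covariates
x = z[:, :1]                                     # outcome covariate
err = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n)
select = (0.5 + z @ np.array([1.0, -0.7]) + err[:, 0]) > 0
y = 2.0 + 1.5 * x[:, 0] + err[:, 1]              # observed only when select is True

# Step 1: probit for the selection equation, then the inverse Mills ratio.
Zc = sm.add_constant(z)
probit = sm.Probit(select.astype(float), Zc).fit(disp=0)
xb = Zc @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected sample with the IMR as an extra regressor.
Xc = sm.add_constant(np.column_stack([x[select], imr[select]]))
ols = sm.OLS(y[select], Xc).fit()
print(ols.params)   # ~ [2.0, 1.5, rho*sigma]; OLS without the IMR term is biased
```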

Relevance: 30.00%

Abstract:

Using the functional integral formalism for the statistical generating functional in statistical (finite-temperature) quantum field theory, we prove the equivalence of many-photon Green's functions in the Duffin-Kemmer-Petiau and Klein-Gordon-Fock statistical quantum field theories. As an illustration, we calculate the one-loop polarization operators in both theories and demonstrate their coincidence.
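
For orientation, the object manipulated in such proofs is the finite-temperature (imaginary-time) generating functional, schematically

```latex
Z[J] \;=\; \int \mathcal{D}\phi \;
\exp\!\left\{ -\int_0^{\beta} \! d\tau \int \! d^3x \,
\bigl[ \mathcal{L}_E(\phi,\partial\phi) - J\phi \bigr] \right\},
\qquad \beta = 1/T,
```

with bosonic fields periodic in imaginary time, phi(0, x) = phi(beta, x); the many-photon Green's functions follow by functional differentiation with respect to the source J.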

Relevance: 30.00%

Abstract:

A body of research has developed within the context of nonlinear signal and image processing that deals with the automatic, statistical design of digital window-based filters. Based on pairs of ideal and observed signals, a filter is designed in an effort to minimize the error between the ideal and filtered signals. The goodness of an optimal filter depends on the relation between the ideal and observed signals, but the goodness of a designed filter also depends on the amount of sample data from which it is designed. In order to lessen the design cost, a filter is often chosen from a given class of filters, thereby constraining the optimization and increasing the error of the optimal filter. To a great extent, the problem of filter design concerns striking the correct balance between the degree of constraint and the design cost. From a different perspective and in a different context, the problem of constraint versus sample size has been a major focus of study within the theory of pattern recognition. This paper discusses the design problem for nonlinear signal processing, shows how the issue naturally transitions into pattern recognition, and then provides a review of salient related pattern-recognition theory. In particular, it discusses classification rules, constrained classification, the Vapnik-Chervonenkis theory, and implications of that theory for morphological classifiers and neural networks. The paper closes by discussing some design approaches developed for nonlinear signal processing, and how their nature naturally leads to a decomposition of the error of a designed filter into a sum of the following components: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of reducing complexity by compressing the original signal distribution, the design cost, and the contribution of prior knowledge to a decrease in the error. The main purpose of the paper is to present fundamental principles of pattern recognition theory within the framework of active research in nonlinear signal processing.
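
In schematic notation (the symbols are ours, not the paper's), the closing decomposition reads

```latex
\varepsilon\bigl[\hat{\psi}\bigr] \;=\;
\underbrace{\varepsilon\bigl[\psi_{\mathrm{Bayes}}\bigr]}_{\text{unconstrained optimum}}
\;+\; \underbrace{\Delta_{\mathrm{constraint}}}_{\text{cost of constraint}}
\;+\; \underbrace{\Delta_{\mathrm{compression}}}_{\text{complexity reduction}}
\;+\; \underbrace{\Delta_{\mathrm{design}}}_{\text{finite sample}}
\;-\; \underbrace{\Delta_{\mathrm{prior}}}_{\text{prior knowledge}} ,
```

where each term corresponds to one of the components listed above: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of compressing the signal distribution, the finite-sample design cost, and the error reduction contributed by prior knowledge.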

Relevance: 30.00%

Abstract:

We prove the equivalence of many-gluon Green's functions in the Duffin-Kemmer-Petiau and Klein-Gordon-Fock statistical quantum field theories. The proof is based on the functional integral formulation for the statistical generating functional in a finite-temperature quantum field theory. As an illustration, we calculate one-loop polarization operators in both theories and show that their expressions indeed coincide.

Relevance: 30.00%

Abstract:

Given that the total losses in a distribution system are known, and given a reliable methodology for calculating the technical losses, the non-technical losses can be obtained by subtraction. A usual method for calculating technical losses in electric utilities uses two important quantities: the load factor and the loss factor. The load factor is usually obtained from energy and demand measurements, whereas computing the loss factor requires knowledge of the demand and energy losses, which are in general not directly measurable. In this work, we present a statistical analysis of the relationship between the two factors, using the load curves of a sample of consumers of a specific company. The curves are summarized into bands of the coefficient k, making it possible to determine where each group of consumers concentrates its points.
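
The coefficient k referred to here presumably parameterizes the classical empirical relation between the loss factor F_s and the load factor F_c, often attributed to Buller and Woodrow:

```latex
F_s \;=\; k\,F_c \;+\; (1-k)\,F_c^{\,2}, \qquad 0 \le k \le 1,
```

which interpolates between the theoretical bounds F_c^2 <= F_s <= F_c; each band of k then corresponds to a group of consumers whose load curves share a similar shape.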