995 results for complex statistics
Abstract:
Humans distinguish materials such as metal, plastic, and paper effortlessly at a glance. Traditional computer vision systems cannot solve this problem at all. Recognizing surface reflectance properties from a single photograph is difficult because the observed image depends heavily on the amount of light incident from every direction. A mirrored sphere, for example, produces a different image in every environment. To make matters worse, two surfaces with different reflectance properties could produce identical images. The mirrored sphere simply reflects its surroundings, so in the right artificial setting, it could mimic the appearance of a matte ping-pong ball. Yet, humans possess an intuitive sense of what materials typically "look like" in the real world. This thesis develops computational algorithms with a similar ability to recognize reflectance properties from photographs under unknown, real-world illumination conditions. Real-world illumination is complex, with light typically incident on a surface from every direction. We find, however, that real-world illumination patterns are not arbitrary. They exhibit highly predictable spatial structure, which we describe largely in the wavelet domain. Although they differ in several respects from typical photographs, illumination patterns share much of the regularity described in the natural image statistics literature. These properties of real-world illumination lead to predictable image statistics for a surface with given reflectance properties. We construct a system that classifies a surface according to its reflectance from a single photograph under unknown illumination. Our algorithm learns relationships between surface reflectance and certain statistics computed from the observed image. Like the human visual system, we solve the otherwise underconstrained inverse problem of reflectance estimation by taking advantage of the statistical regularity of illumination. For surfaces with homogeneous reflectance properties and known geometry, our system rivals human performance.
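To make the approach concrete, here is a minimal sketch, in the spirit of the abstract, of classifying reflectance from wavelet-domain image statistics. The specific features (per-subband variance and kurtosis), the libraries (PyWavelets, scikit-learn), and the SVM classifier are illustrative assumptions, not the thesis's actual pipeline.

```python
# Hypothetical sketch: classify surface reflectance from wavelet statistics
# of a single image. Feature choices are illustrative, not the thesis's own.
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.svm import SVC

def wavelet_features(image, wavelet="db4", levels=3):
    """Summary statistics of wavelet subband coefficients."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:            # skip the approximation band
        for band in detail:              # horizontal, vertical, diagonal
            c = band.ravel()
            feats += [np.var(c), kurtosis(c)]
    return np.array(feats)

def train_reflectance_classifier(images, labels):
    """images: list of 2-D arrays; labels: reflectance classes, e.g. 'metal'."""
    X = np.stack([wavelet_features(im) for im in images])
    return SVC(kernel="rbf").fit(X, labels)
```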
Abstract:
This paper presents our experience with combining statistical principles and participatory methods to generate national statistics. The methodology was developed in Malawi during 1999–2002. We demonstrate that if participatory rural appraisal (PRA) is combined with statistical principles (including probability-based sampling and standardization), it can produce total population statistics and estimates of the proportion of households with certain characteristics (e.g., poverty). It can also provide quantitative data on complex issues of national importance such as poverty targeting. This approach is distinct from previous PRA-based approaches, which generate numbers at the community level but provide only qualitative information at the national level.
Abstract:
Purpose: Acquiring detailed kinetic parameters of enzymes is crucial to biochemical understanding, drug development, and clinical diagnosis of ocular diseases. The correct design of an experiment is critical to collecting data suitable for analysis, modelling, and deriving the correct information. As classical design methods are not targeted to the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. Methods: We have developed Bayesian utility functions to minimise kinetic parameter variance, involving differentiation of model expressions and matrix inversion. These have been applied to the simple kinetics of the enzymes in the glyoxalase pathway (of importance in posttranslational modification of proteins in cataract) and the complex kinetics of lens aldehyde dehydrogenase (also of relevance to cataract). Results: Our successful application of Bayesian statistics has allowed us to identify a set of rules for designing optimum kinetic experiments iteratively. Most importantly, the distribution of points across the substrate range is critical; it is not simply a matter of even spacing or multiplicative increases. At least 60% of the points must lie below the KM (or KM values, if there is more than one dissociation constant) and 40% above. This choice halves the variance obtained with a simple even spread across the range. With both the glyoxalase system and lens aldehyde dehydrogenase we have significantly improved the variance of kinetic parameter estimation while reducing the number and cost of experiments. Conclusions: We have developed an optimal and iterative method for selecting design features such as substrate range, number of measurements, and choice of intermediate points. Our novel approach minimises parameter error and costs, and maximises experimental efficiency. It is applicable to many areas of ocular drug design, including receptor-ligand binding and immunoglobulin binding, and should be an important tool in ocular drug discovery.
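As a rough illustration of why the spread of substrate points matters, the sketch below compares the asymptotic parameter variances of two designs for simple Michaelis-Menten kinetics via the Fisher information matrix. This is a local-design approximation with made-up values for Vmax, KM, and the noise level, not the paper's Bayesian utility functions.

```python
# Illustrative sketch (not the paper's method): compare parameter variances
# of two substrate-point designs for v = Vmax*S/(KM+S) via Fisher information.
import numpy as np

def mm_jacobian(S, Vmax, KM):
    """Jacobian of v = Vmax*S/(KM+S) w.r.t. (Vmax, KM) at substrate points S."""
    dv_dVmax = S / (KM + S)
    dv_dKM = -Vmax * S / (KM + S) ** 2
    return np.column_stack([dv_dVmax, dv_dKM])

def design_variance(S, Vmax=1.0, KM=2.0, sigma=0.05):
    """Approximate Var(Vmax), Var(KM) for a design S (assumed noise sigma)."""
    J = mm_jacobian(np.asarray(S, float), Vmax, KM)
    fisher = J.T @ J / sigma ** 2
    return np.diag(np.linalg.inv(fisher))

KM = 2.0
even = np.linspace(0.2, 10.0, 10)                        # even spread
skewed = np.concatenate([np.linspace(0.2, KM, 6),        # ~60% at or below KM
                         np.linspace(KM * 1.5, 10.0, 4)])  # ~40% above
print("even   :", design_variance(even))
print("skewed :", design_variance(skewed))
```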
Abstract:
An evaluation is undertaken of the statistics of daily precipitation as simulated by five regional climate models, using comprehensive observations in the region of the European Alps. Four limited-area models and one variable-resolution global model are considered, all with a grid spacing of 50 km. The 15-year integrations were forced by reanalyses and observed sea surface temperature and sea ice (the global model by sea surface conditions only). The observational reference is based on 6400 rain-gauge records (10–50 stations per grid box). The evaluation statistics encompass mean precipitation, wet-day frequency, precipitation intensity, and quantiles of the frequency distribution. For mean precipitation, the models reproduce the characteristics of the annual cycle and the spatial distribution. The domain-mean bias varies between −23% and +3% in winter and between −27% and −5% in summer. Larger errors are found for the other statistics. In summer, all models underestimate precipitation intensity (by 16–42%) and simulate too few heavy events. This bias reflects excessively dry summer mean conditions in three of the models, while in the other two it is partly compensated by too many low-intensity events. Similar intermodel differences are found for other European subregions. Interestingly, the model errors are very similar between the two models that share a dynamical core (but have different parameterizations), and they differ considerably between the two models with similar parameterizations (but different dynamics). Despite considerable biases, the models reproduce prominent mesoscale features of heavy precipitation, which is a promising result for their use in climate-change downscaling over complex topography.
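For reference, the four evaluation statistics named above can be computed from a daily series along these lines; the 1 mm/day wet-day threshold and the 90% quantile are common conventions assumed here, not taken from the paper.

```python
# Sketch of the four evaluation statistics for a daily precipitation series
# (mm/day); the wet-day threshold and quantile level are assumptions.
import numpy as np

def precip_statistics(daily_mm, wet_threshold=1.0):
    daily_mm = np.asarray(daily_mm, float)
    wet = daily_mm >= wet_threshold
    return {
        "mean_precipitation": daily_mm.mean(),            # mm/day
        "wet_day_frequency": wet.mean(),                  # fraction of days
        "precipitation_intensity":                        # mean over wet days
            daily_mm[wet].mean() if wet.any() else 0.0,
        "q90_wet_days":                                   # heavy-event quantile
            np.quantile(daily_mm[wet], 0.90) if wet.any() else 0.0,
    }
```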
Abstract:
Many modern statistical applications involve inference for complex stochastic models, where it is easy to simulate from the models, but impossible to calculate likelihoods. Approximate Bayesian computation (ABC) is a method of inference for such models. It replaces calculation of the likelihood by a step which involves simulating artificial data for different parameter values, and comparing summary statistics of the simulated data with summary statistics of the observed data. Here we show how to construct appropriate summary statistics for ABC in a semi-automatic manner. We aim for summary statistics which will enable inference about certain parameters of interest to be as accurate as possible. Theoretical results show that optimal summary statistics are the posterior means of the parameters. Although these cannot be calculated analytically, we use an extra stage of simulation to estimate how the posterior means vary as a function of the data; and we then use these estimates of our summary statistics within ABC. Empirical results show that our approach is a robust method for choosing summary statistics that can result in substantially more accurate ABC analyses than the ad hoc choices of summary statistics that have been proposed in the literature. We also demonstrate advantages over two alternative methods of simulation-based inference.
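A minimal sketch of the semi-automatic construction for a toy one-parameter model: a pilot regression of the parameter on data features approximates the posterior mean, and that fitted value is then used as the summary statistic inside plain rejection ABC. The toy simulator, the features, the prior range, and the acceptance quantile are all illustrative assumptions.

```python
# Toy sketch of semi-automatic summary statistics for rejection ABC.
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    """Toy simulator: data whose location and spread depend on theta."""
    return rng.normal(theta, 1.0 + 0.1 * abs(theta), size=n)

def features(x):
    return np.array([x.mean(), x.std(), np.median(x)])

# Stage 1: pilot simulations; regress theta on data features. The fitted
# values approximate posterior means and become the summary statistic.
pilot_theta = rng.uniform(-5, 5, size=2000)
F = np.stack([features(simulate(t)) for t in pilot_theta])
F1 = np.column_stack([np.ones(len(F)), F])
beta, *_ = np.linalg.lstsq(F1, pilot_theta, rcond=None)
summary = lambda x: np.array([1.0, *features(x)]) @ beta

# Stage 2: plain rejection ABC using the learned summary statistic.
observed = simulate(2.0)
s_obs = summary(observed)
proposals = rng.uniform(-5, 5, size=20000)
dist = np.array([abs(summary(simulate(t)) - s_obs) for t in proposals])
accepted = proposals[dist <= np.quantile(dist, 0.01)]
print("ABC posterior mean ~", accepted.mean())
```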
Abstract:
Tsallis postulated a generalized form of entropy, giving rise to a new statistics now known as Tsallis statistics. In the present work, we compare Tsallis statistics with the gradually truncated Levy flight and discuss the distribution of an economic index, the Standard and Poor's 500, using the values of standard deviation calculated by our model. We find that both statistics give almost the same distribution. Thus we feel that gradual truncation of the Levy distribution beyond a certain critical step size, when describing complex systems, is a requirement of generalized thermodynamics or something similar. The gradually truncated Levy flight is based on physical considerations and gives a better physical picture of the dynamics of the whole system; Tsallis statistics provides theoretical support. Together, the two statistics can be utilized to develop a more exact portfolio theory or to better understand the complexities of human and financial behavior. (C) 2002 Published by Elsevier B.V.
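The sketch below shows one way such a gradually truncated Levy flight can be sampled: stable-law steps are always kept below a critical size x_c, and kept with an exponentially decaying probability beyond it. The cutoff form, exponent, and parameter values are illustrative assumptions; the paper's model may use a different truncation function.

```python
# Hedged sketch of a "gradually truncated" Levy flight; the truncation
# function below is an illustrative choice, not the paper's exact form.
import numpy as np
from scipy.stats import levy_stable

def gradually_truncated_levy(n, alpha=1.5, x_c=5.0, k=2.0, seed=0):
    rng = np.random.default_rng(seed)
    steps = np.empty(0)
    while steps.size < n:
        x = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
        excess = np.abs(x) - x_c
        # Keep small steps always; large steps with decaying probability.
        keep = (excess <= 0) | (rng.random(n) < np.exp(-np.maximum(excess, 0) / k))
        steps = np.concatenate([steps, x[keep]])
    return steps[:n]

returns = gradually_truncated_levy(10000)
print("std of truncated steps:", returns.std())
```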
Abstract:
Power-law scaling is observed in many physical, biological, and socio-economic complex systems and is now considered an important property of these systems. In general, the power law holds in the central part of the distribution, with deviations for very small and very large variable sizes. Tsallis, through non-extensive thermodynamics, explained the power-law distribution in many cases, including deviations from the power law; for very large steps, he used a heuristic crossover approach. In the present work we present an alternative model, in which the entropy factor q decreases with variable size due to the softening of long-range interactions or memory. We apply this model to the distribution of citation indices of scientists and of examination scores, and are able to explain the distribution over the entire variable range. In the present model, a very sharp cut-off can be obtained without interfering with the power law in its central part, as observed in many cases. (C) 2008 Elsevier B.V. All rights reserved.
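For reference, the power law referred to here is the large-x limit of the Tsallis (q-exponential) distribution; in the abstract's model q is not constant but decreases with the variable size, which cuts the tail off sharply:

```latex
% q-exponential form of the Tsallis distribution and its power-law tail;
% letting q decrease with x (the model above) sharpens the cutoff.
P(x) \propto \bigl[\,1-(1-q)\,\beta x\,\bigr]^{\frac{1}{1-q}}
\;\sim\; x^{-\frac{1}{q-1}} \quad (x \to \infty,\; q>1).
```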
Abstract:
A study of concentrated attention in epileptic patients was conducted with the following objectives: characterization of the patients' epileptic condition; assessment of concentrated attention levels in epileptic and non-epileptic individuals; and comparison of the attention levels of the two groups. Fifty adult outpatients with complex partial seizures and 20 non-epileptic individuals (comparative group) were evaluated at the Neuroepilepsy Ambulatory Unit, State University of Campinas, SP, Brazil. Method: characterization of seizure types, frequency, and duration; assessment of concentrated attention (Toulouse-Piéron Concentrated Attention Test); and comparison of the epileptic with the non-epileptic individuals. Results: A statistically significant difference was observed between the groups with regard to Correct Responses, Wrong Responses, and No Responses. A difference was observed in relation to Time, but it was not statistically significant. The epileptic patients presented inferior cognitive performance in relation to concentrated attention when compared with the non-epileptic individuals, findings compatible with their clinical complaints.
Abstract:
Objective: To compare the performance of patients with complex partial epilepsy with that of normal controls on the subtests of an instrument used to assess intellectual function. Method: Fifty epileptic patients, aged 19 to 49 years, and 20 normal controls without any neuropsychiatric disorders were evaluated. The Wechsler-Bellevue adult intelligence test was applied to both groups, epileptic patients and control subjects. This test is composed of several subtests that assess specific cognitive functions. Statistical analysis was performed using non-parametric tests. Results: All the Wechsler-Bellevue subtests revealed that the intellectual functioning of the patients was significantly inferior to that of the controls (p<0.05). This performance was consistent with the patients' complaints about their cognitive performance. Conclusion: Patients with complex partial epilepsy presented poorer results on the intelligence test when compared with individuals without neuropsychiatric disorders.
Abstract:
The statistical properties of the trajectories of eigenvalues of Gaussian complex matrices whose Hermitian condition is progressively broken are investigated. It is shown how the ordering of the real eigenvalues on the real axis is reflected in the structure of the trajectories and also in the final distribution of the eigenvalues in the complex plane.
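A small sketch of the kind of interpolation studied here (normalization and ensemble details are assumptions): the anti-Hermitian part is switched on gradually and the eigenvalues are tracked as they leave the real axis.

```python
# Illustrative sketch: progressively break the Hermitian condition of a
# Gaussian complex matrix and follow the eigenvalue trajectories.
import numpy as np

rng = np.random.default_rng(1)
n = 6

def gaussian_hermitian(n):
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (g + g.conj().T) / 2

H, K = gaussian_hermitian(n), gaussian_hermitian(n)
for tau in np.linspace(0.0, 1.0, 11):
    A = H + 1j * tau * K          # tau = 0: Hermitian, real eigenvalues
    eig = np.sort_complex(np.linalg.eigvals(A))
    print(f"tau={tau:.1f}", np.round(eig, 2))
```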
Abstract:
The realization that statistical physics methods can be applied to analyze written texts represented as complex networks has led to several developments in natural language processing, including automatic summarization and evaluation of machine translation. So far, however, only a few metrics of complex networks have been used, so there is ample opportunity to enhance statistics-based methods as new measures of network topology and dynamics are created. In this paper, we employ for the first time the metrics betweenness, vulnerability, and diversity to analyze written texts in Brazilian Portuguese. Using strategies based on the diversity metric, better performance in automatic summarization is achieved in comparison with previous work employing complex networks. With an optimized method, the ROUGE score (an automatic evaluation method used in summarization) was 0.5089, the best value yet achieved for an extractive summarizer using statistical methods based on complex networks for Brazilian Portuguese. Furthermore, the diversity metric can detect keywords with high precision, which is why we believe it is suited to producing good summaries. It is also shown that incorporating linguistic knowledge through a syntactic parser does enhance the performance of the automatic summarizers, as expected, but the increase in the ROUGE score is only minor. These results reinforce the suitability of complex network methods for improving automatic summarizers in particular, and for processing text in general. (C) 2011 Elsevier B.V. All rights reserved.
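As a rough illustration of network-based extractive summarization (not the paper's exact pipeline), the sketch below builds a word-overlap graph over sentences and ranks them by betweenness; the graph construction and the ranking metric are simplified stand-ins for the paper's betweenness, vulnerability, and diversity measures.

```python
# Simplified stand-in for a complex-network extractive summarizer.
import itertools
import networkx as nx

def summarize(sentences, n_keep=2):
    # Connect sentences that share at least one (lowercased) word.
    words = [set(s.lower().split()) for s in sentences]
    G = nx.Graph()
    G.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        if words[i] & words[j]:
            G.add_edge(i, j)
    # Rank sentences by betweenness centrality; keep the top ones in order.
    rank = nx.betweenness_centrality(G)
    top = sorted(rank, key=rank.get, reverse=True)[:n_keep]
    return [sentences[i] for i in sorted(top)]
```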
Abstract:
It has recently been shown numerically that the transition from integrability to chaos in quantum systems, and the corresponding spectral fluctuations, are characterized by 1/f^α noise with 1 ≤ α ≤ 2. The system of interacting trapped bosons is inhomogeneous and complex. The presence of an external harmonic trap makes it more interesting, as in the atomic trap the bosons occupy partly degenerate single-particle states. Earlier theoretical and experimental results show that at zero temperature the low-lying levels are of a collective nature and the high-lying excitations are of a single-particle nature. We observe that for few bosons the P(s) distribution shows the Shnirelman peak, which reflects a large number of quasidegenerate states. For a large number of bosons the low-lying levels are strongly affected by the interatomic interaction, and the corresponding level fluctuations show a transition to a Wigner distribution with increasing particle number; this does not follow the Gaussian orthogonal ensemble random matrix predictions. For high-lying levels we observe the uncorrelated Poisson distribution. Thus it may be a very realistic system with which to demonstrate that 1/f^α noise is ubiquitous in nature.
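For reference, the two limiting forms of the nearest-neighbor spacing distribution P(s) that this abstract and the next one refer to are the standard Poisson (integrable) and Wigner-surmise (GOE, chaotic) curves:

```latex
% Standard limiting forms of the nearest-neighbour spacing distribution
% for unfolded spectra (unit mean spacing).
P_{\mathrm{Poisson}}(s) = e^{-s}, \qquad
P_{\mathrm{Wigner}}(s) = \frac{\pi s}{2}\, e^{-\pi s^{2}/4}.
```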
Abstract:
It is a well-established fact that the statistical properties of energy-level spectra are the most efficient tool for characterizing nonintegrable quantum systems. The statistical behavior of different systems, such as complex atoms, atomic nuclei, two-dimensional Hamiltonians, quantum billiards, and noninteracting many-boson systems, has been studied. The study of statistical properties and spectral fluctuations in interacting many-boson systems has attracted interest in this direction. We are especially interested in weakly interacting trapped bosons in the context of Bose-Einstein condensation (BEC), as the energy spectrum shows a transition from a collective nature to a single-particle nature with an increase in the number of levels. However, this has received less attention, as it is believed that the system may exhibit Poisson-like fluctuations owing to the existence of the external harmonic trap. Here we compute numerically the energy levels of zero-temperature many-boson systems which interact weakly through the van der Waals potential and are confined in a three-dimensional harmonic potential. We study the nearest-neighbor spacing distribution and the spectral rigidity by unfolding the spectrum. It is found that an increase in the number of energy levels for repulsive BEC induces a transition from a Wigner-like form displaying level repulsion to the Poisson distribution for P(s); it does not follow the Gaussian orthogonal ensemble prediction. For repulsive interaction, the lower levels are correlated and manifest level repulsion. For intermediate levels P(s) shows mixed statistics, which clearly signifies the existence of two energy scales, the external trap and the interatomic interaction, whereas for very high levels the trapping potential dominates, generating a Poisson distribution. Comparisons with mean-field results for the lower levels are also presented. For attractive BEC near the critical point we observe a Shnirelman-like peak near s = 0, which signifies the presence of a large number of quasidegenerate states.
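A minimal sketch of the unfolding and spacing analysis described above, applied to a generic spectrum; the degree-7 polynomial unfolding and the bulk window are conventional choices assumed here, not the paper's exact procedure.

```python
# Unfold a spectrum and compute the nearest-neighbor spacing distribution.
import numpy as np

def spacing_distribution(levels, unfold_degree=7):
    E = np.sort(np.asarray(levels, float))
    # Unfold: fit the cumulative level count N(E) with a smooth polynomial
    # and map each level to its smoothed count, giving unit mean spacing.
    N = np.arange(1, len(E) + 1)
    coeff = np.polyfit(E, N, unfold_degree)
    e = np.polyval(coeff, E)
    s = np.diff(e)
    return s / s.mean()

# Example: GOE eigenvalues should follow the Wigner surmise after unfolding.
rng = np.random.default_rng(0)
M = rng.normal(size=(500, 500))
M = (M + M.T) / 2
s = spacing_distribution(np.linalg.eigvalsh(M)[100:400])  # bulk levels only
print("mean spacing:", s.mean(), " P(s<0.1):", np.mean(s < 0.1))
```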
Abstract:
The Spin-Statistics theorem states that the statistics of a system of identical particles is determined by their spin: particles of integer spin are Bosons (i.e. obey Bose-Einstein statistics), whereas particles of half-integer spin are Fermions (i.e. obey Fermi-Dirac statistics). Since the original proof by Fierz and Pauli, it has been known that the connection between Spin and Statistics follows from the general principles of relativistic Quantum Field Theory. In spite of this, there are different approaches to Spin-Statistics, and it is not clear whether the theorem holds under assumptions that are different, and even less restrictive, than the usual ones (e.g. Lorentz covariance). Additionally, in Quantum Mechanics there is a deep relation between indistinguishability and the geometry of the configuration space. This is clearly illustrated by Gibbs' paradox. Therefore, for many years efforts have been made to find a geometric proof of the connection between Spin and Statistics. Recently, various proposals have been put forward, in which an attempt is made to derive the Spin-Statistics connection from assumptions different from the ones used in the relativistic, quantum field theoretic proofs. Among these, the one due to Berry and Robbins (BR), based on the postulation of a certain single-valuedness condition, has caused renewed interest in the problem. In the present thesis, we consider the problem of indistinguishability in Quantum Mechanics from a geometric-algebraic point of view. An approach is developed to study configuration spaces Q having a finite fundamental group, which allows us to describe different geometric structures of Q in terms of spaces of functions on the universal cover of Q. In particular, it is shown that the space of complex continuous functions over the universal cover of Q admits a decomposition into C(Q)-submodules, labelled by the irreducible representations of the fundamental group of Q, that can be interpreted as the spaces of sections of certain flat vector bundles over Q. With this technique, various results pertaining to the problem of quantum indistinguishability are reproduced in a clear and systematic way. Our method is also used to give a global formulation of the BR construction. As a result of this analysis, it is found that the single-valuedness condition of BR is inconsistent. Additionally, a proposal aiming at establishing the Fermi-Bose alternative within our approach is made.
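Schematically, the decomposition stated in the abstract can be written as follows, where Q̃ denotes the universal cover of Q, the sum runs over the irreducible representations ρ of the finite fundamental group, and Γ(E_ρ) is the space of continuous sections of a flat vector bundle E_ρ over Q; the names M_ρ and E_ρ are introduced here only for illustration.

```latex
% Decomposition of C(\tilde{Q}) into C(Q)-submodules, as described above;
% the notation M_\rho, E_\rho is illustrative.
C(\tilde{Q}) \;\cong\; \bigoplus_{\rho \,\in\, \widehat{\pi_1(Q)}} M_\rho,
\qquad M_\rho \;\cong\; \Gamma(E_\rho).
```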