945 results for Probability Distribution Function
Abstract:
Knowledge of the spatial distribution model of pests in a crop is fundamental for establishing an adequate sequential sampling plan and thus allowing the correct use of control strategies and the optimization of sampling techniques. This research aimed to study the spatial distribution of Alabama argillacea (Hübner) larvae in cotton, cultivar CNPA ITA-90. Data were collected during the 1998/99 growing season at Fazenda Itamarati Sul S.A., located in the municipality of Ponta Porã, MS, in three different areas of 10,000 m² each. Each sampling area was composed of 100 plots of 100 m² each. Small, medium and large larvae found on five plants per plot were counted weekly. The aggregation indices (variance/mean ratio and Morisita index) and the chi-square test fitting the observed and expected values to theoretical frequency distributions (Poisson, positive binomial and negative binomial) showed that all larval stages are distributed according to the contagious distribution model, fitting the Negative Binomial Distribution pattern throughout the entire infestation period.
Abstract:
The aim of this thesis is to evaluate the quality of public spending on education in the municipalities of the Metropolitan Region of Natal (RMN) in 2009 using two theories: the Theory of Welfare (Welfare State) and the Public Choice Theory (TEP), both important for understanding the relationship between education and economics. The study also uses principles of microeconomics and public sector economics to get a better idea of the role of education in the economy and society. It describes the development of educational policy in Brazil from the Federal Constitution of 1988 to 2010, following the major changes in basic education during each government. The characteristics of the RMN municipalities were illustrated with socioeconomic indicators, while educational indicators were used to characterize each municipality regarding education. The model used in this study was developed by Bertê, Brunet and Borges; the data were collected from the School Census 2009 and the Brazil Exam 2009 and processed quantitatively in the Information System on Public Budgets in Education (SIOPE) by means of the statistical method called the standardized score of the normal cumulative distribution function. The quality of public spending on education is the result of the relation between a performance indicator ratio and an expense ratio. For the qualitative analysis of the results, the criteria of efficiency, efficacy and effectiveness were used. The study found that municipalities with higher expenses showed a worse quality of spending and failed to convert the expenditure incurred into performance, thus confirming their ineffectiveness.
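The standardized score of the normal cumulative distribution function mentioned above can be sketched as follows. This is an illustrative reading of the method; the function name and sample values are ours, not the thesis's:

```python
from math import erf, sqrt

def standardized_score(value, mean, std):
    """Standardize a raw indicator and map it through the normal CDF,
    yielding a comparable score between 0 and 1 (hypothetical reading
    of the method described in the abstract)."""
    z = (value - mean) / std
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# An indicator equal to the group mean maps to 0.5.
print(standardized_score(10.0, 10.0, 2.0))  # -> 0.5
```

Scores above 0.5 then indicate above-average indicators, which makes expense and performance ratios directly comparable across municipalities.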
Abstract:
Currently, one of the biggest challenges in the field of data mining is to perform cluster analysis on complex data. Several techniques have been proposed but, in general, they only achieve good results within specific domains, providing no consensus on the best way to group this kind of data. In general, these techniques fail due to unrealistic assumptions about the true probability distribution of the data. In view of this, this thesis proposes a new measure based on the Cross Information Potential that uses representative points of the dataset and statistics extracted directly from the data to measure the interaction between groups. The proposed approach allows us to use all the advantages of this information-theoretic descriptor and overcomes the limitations imposed by its own nature. From this, two cost functions and three algorithms have been proposed to perform cluster analysis. As the use of Information Theory captures the relationship between different patterns regardless of assumptions about the nature of this relationship, the proposed approach was able to achieve better performance than the main algorithms in the literature. These results apply both to synthetic data designed to test the algorithms in specific situations and to real data extracted from problems in different fields.
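The Cross Information Potential underlying the proposed measure is commonly estimated with Parzen (Gaussian) windows over all cross pairs of samples. A minimal one-dimensional sketch under that standard formulation (the thesis's representative-point variant is not reproduced here):

```python
from math import exp, pi, sqrt

def cross_information_potential(xs, ys, sigma=1.0):
    """Parzen-window estimate of the Cross Information Potential between
    two 1-D samples: the mean of Gaussian kernels of variance 2*sigma^2
    over all cross pairs (x_i, y_j)."""
    s2 = 2.0 * sigma * sigma                  # combined kernel variance
    norm = 1.0 / sqrt(2.0 * pi * s2)
    total = sum(exp(-(x - y) ** 2 / (2.0 * s2)) for x in xs for y in ys)
    return norm * total / (len(xs) * len(ys))
```

Overlapping samples give a large value, while well-separated groups give a value near zero, which is what makes the quantity usable as an interaction measure between clusters.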
Abstract:
The standard kinetic theory for a nonrelativistic dilute gas is generalized in the spirit of the nonextensive statistical distribution introduced by Tsallis. The new formalism depends on an arbitrary parameter q measuring the degree of nonextensivity. In the limit q = 1, the extensive Maxwell-Boltzmann theory is recovered. Starting from a purely kinetic deduction of the velocity q-distribution function, the Boltzmann H-theorem is generalized to include the possibility of nonextensive out-of-equilibrium effects. Based on this investigation, it is proved that Tsallis' distribution is the necessary and sufficient condition defining a thermodynamic equilibrium state in the nonextensive context. This result follows naturally from the generalized transport equation and also from the extended H-theorem. Two physical applications of the nonextensive effects have been considered. Closed analytic expressions were obtained for the Doppler broadening of spectral lines from an excited gas, as well as for the dispersion relations describing the electrostatic oscillations in a dilute electronic plasma. In the latter case, a comparison with the experimental results strongly suggests a Tsallis distribution with the q parameter smaller than unity. A complementary study is related to the thermodynamic behavior of a relativistic imperfect simple fluid. Using nonequilibrium thermodynamics, we show how the basic primary variables, namely the energy-momentum tensor and the particle and entropy fluxes, depend on the several dissipative processes present in the fluid. The temperature variation law for this moving imperfect fluid is also obtained, and the Eckart and Landau-Lifshitz formulations are recovered as particular cases.
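The velocity q-distribution rests on the Tsallis q-exponential, exp_q(x) = [1 + (1-q)x]^{1/(1-q)}, which recovers the ordinary exponential in the limit q = 1. A minimal numerical sketch of that limit:

```python
from math import exp

def q_exponential(x, q):
    """Tsallis q-exponential exp_q(x) = [1 + (1-q)x]^(1/(1-q)).
    For q < 1 the distribution has a sharp cutoff where the base
    turns negative; for q -> 1 it reduces to exp(x)."""
    if abs(q - 1.0) < 1e-12:
        return exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0
```

With x = -m v^2 / (2 k T) this gives the shape of the velocity q-distribution up to normalization; for q slightly below 1 the tails fall off faster than the Maxwell-Boltzmann ones, consistent with the plasma comparison in the abstract.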
Abstract:
Monte Carlo simulations of water-amide (amide = formamide-FOR, methylformamide-NMF and dimethylformamide-DMF) solutions have been carried out in the NpT ensemble at 308 K and 1 atm. The structure and excess enthalpy of the mixtures as a function of the composition have been investigated. The TIP4P model was used for simulating water, and six-site models previously optimized in this laboratory were used for simulating the liquid amides. The intermolecular interaction energy was calculated using the classical 6-12 Lennard-Jones potential plus a Coulomb term. The interaction energy between solute and solvent has been partitioned, which leads to a better understanding of the behavior of the enthalpy of mixing obtained experimentally for the three solutions. Radial distribution functions for the water-amide correlations allow us to explore the intermolecular interactions between the molecules. The results show that three, two and one hydrogen bonds between the water and the amide molecules are formed in the FOR-, NMF- and DMF-water solutions, respectively. These H-bonds are stronger, in decreasing order, for DMF-water, NMF-water and FOR-water. In the NMF-water solution, the interaction between the methyl group of the NMF and the oxygen of the water plays a role in the stabilization of the aqueous solution quite similar to that of an H-bond in the FOR-water solution. (c) 2005 Elsevier B.V. All rights reserved.
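The 6-12 Lennard-Jones plus Coulomb interaction used above has a standard pairwise site-site form; a sketch for a single distance (the parameter values and the kcal/mol-angstrom unit convention are illustrative, not those of the amide models):

```python
def pair_energy(r, epsilon, sigma, qi, qj, coulomb_const=332.06):
    """Classical 6-12 Lennard-Jones term plus a Coulomb term for one
    site-site distance r.  coulomb_const converts e^2/angstrom to
    kcal/mol under the illustrative unit convention assumed here."""
    sr6 = (sigma / r) ** 6
    lj = 4.0 * epsilon * (sr6 * sr6 - sr6)   # 4*eps*[(s/r)^12 - (s/r)^6]
    coulomb = coulomb_const * qi * qj / r
    return lj + coulomb
```

The Lennard-Jones part vanishes at r = sigma and reaches its minimum of -epsilon at r = 2^(1/6) * sigma; the total energy is summed over all intermolecular site pairs in the simulation.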
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Classical Monte Carlo simulations were carried out in the NPT ensemble at 25°C and 1 atm, aiming to investigate the ability of the TIP4P water model [Jorgensen, Chandrasekhar, Madura, Impey and Klein; J. Chem. Phys., 79 (1983) 926] to reproduce the newest structural picture of liquid water. The results were compared with recent neutron diffraction data [Soper, Bruni and Ricci; J. Chem. Phys., 106 (1997) 247]. The influence of the computational conditions on the thermodynamic and structural results obtained with this model was also analyzed. The findings were compared with the original ones from Jorgensen et al. [above-cited reference plus Mol. Phys., 56 (1985) 1381]. It is noticed that the thermodynamic results depend on the boundary conditions used, whereas the usual radial distribution functions gOO(r) and gOH(r) do not.
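The radial distribution functions gOO(r) and gOH(r) compared in such studies are accumulated as normalized pair-distance histograms. A naive O(N²) sketch with minimum-image periodic boundaries (illustrative only, not the production code behind the reported simulations):

```python
import math

def radial_distribution(positions, box, dr, r_max):
    """Radial distribution function g(r) for particles in a cubic box of
    side `box`, using the minimum-image convention.  Each histogram bin
    is normalized by the ideal-gas count n * rho * V_shell."""
    n = len(positions)
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a in range(3):
                dx = positions[i][a] - positions[j][a]
                dx -= box * round(dx / box)  # minimum-image convention
                d2 += dx * dx
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2  # pair counted once per particle
    rho = n / box ** 3  # number density
    g = []
    for b in range(nbins):
        r_lo, r_hi = b * dr, (b + 1) * dr
        shell = (4.0 / 3.0) * math.pi * (r_hi ** 3 - r_lo ** 3)
        g.append(hist[b] / (n * rho * shell))
    return g
```

For a homogeneous liquid, g(r) tends to 1 at large r; the positions of the first peaks in gOO(r) and gOH(r) are what the neutron diffraction comparison probes.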
Abstract:
The segmentation of an image aims to subdivide it into constituent regions or objects that have some relevant semantic content. This subdivision can also be applied to videos; in that case, however, the objects appear in the various frames that compose the videos. The task of segmenting an image becomes more complex when it is composed of objects defined by textural features, where color information alone is not a good descriptor of the image. Fuzzy Segmentation is a region-growing segmentation algorithm that uses affinity functions to assign to each element in an image a grade of membership (between 0 and 1) for each object. This work presents a modification of the Fuzzy Segmentation algorithm aimed at improving its time and space complexity. The algorithm was adapted to segment color videos, treating them as 3D volumes. In order to perform segmentation in videos, either a conventional color model or a hybrid model obtained by a method for choosing the best channels was used. The Fuzzy Segmentation algorithm was also applied to texture segmentation by using adaptive affinity functions defined for each object texture. Two types of affinity functions were used, one defined using the normal (or Gaussian) probability distribution and the other using the Skew Divergence. The latter, a variation of the Kullback-Leibler Divergence, is a measure of the difference between two probability distributions. Finally, the algorithm was tested on some videos and also on texture mosaic images composed of images from the Brodatz album.
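The Skew Divergence used for the texture affinity functions is usually defined as the KL divergence of one distribution against a mixture of the two, which keeps it finite when the second distribution has empty histogram bins. A sketch under that standard definition (the mixing weight alpha is our choice, not the thesis's):

```python
from math import log

def kl_divergence(p, q):
    """Kullback-Leibler divergence between two discrete distributions;
    diverges if q has a zero bin where p does not."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def skew_divergence(p, q, alpha=0.99):
    """Skew Divergence: KL of p against the mixture alpha*q + (1-alpha)*p,
    which stays finite even when q has zero-probability bins."""
    mixed = [alpha * qi + (1.0 - alpha) * pi for pi, qi in zip(p, q)]
    return kl_divergence(p, mixed)
```

In a texture-affinity setting, p would be the local histogram around a pixel and q the reference histogram of an object texture; lower divergence means higher affinity.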
Abstract:
In this work, we propose a two-stage algorithm for real-time fault detection and identification in industrial plants. Our proposal is based on the analysis of selected features using recursive density estimation and a new evolving classifier algorithm. More specifically, the proposed approach for the detection stage is based on the concept of density in the data space, which is not the same as a probability density function but is a very useful measure for abnormality/outlier detection. This density can be expressed by a Cauchy function and can be calculated recursively, which makes it memory- and computation-efficient and, therefore, suitable for on-line applications. The identification/diagnosis stage is based on a self-developing (evolving) fuzzy rule-based classifier system proposed in this work, called AutoClass. An important property of AutoClass is that it can start learning "from scratch". Not only do the fuzzy rules not need to be prespecified, but neither does the number of classes for AutoClass (the number may grow, with new class labels being added by the on-line learning process), in a fully unsupervised manner. In the event that an initial rule base exists, AutoClass can evolve/develop it further based on newly arrived faulty-state data. In order to validate our proposal, we present experimental results from a didactic level-control process, where control and error signals are used as features for the fault detection and identification systems; the approach is generic, however, and the number of features can be large thanks to the computationally lean methodology, since covariance or more complex calculations, as well as storage of old data, are not required. The obtained results are significantly better than those of the traditional approaches used for comparison.
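The recursive density estimate with a Cauchy-type function can be sketched as below, following the commonly published recursive-mean formulation of data-space density; the class and variable names are ours, and details may differ from the authors' exact update:

```python
class RecursiveDensity:
    """Recursive density estimate with a Cauchy-like kernel:
    D(x) = 1 / (1 + ||x - mean||^2 + E[||x||^2] - ||mean||^2),
    where the mean and the mean squared norm are updated recursively
    from streaming samples, without storing past data."""
    def __init__(self, dim):
        self.k = 0
        self.mean = [0.0] * dim
        self.scalar = 0.0  # running mean of ||x||^2

    def update(self, x):
        self.k += 1
        w = 1.0 / self.k
        self.mean = [(1 - w) * m + w * xi for m, xi in zip(self.mean, x)]
        self.scalar = (1 - w) * self.scalar + w * sum(xi * xi for xi in x)
        dist2 = sum((xi - m) ** 2 for xi, m in zip(x, self.mean))
        norm_mean2 = sum(m * m for m in self.mean)
        return 1.0 / (1.0 + dist2 + self.scalar - norm_mean2)
```

Typical samples yield densities near 1, while an outlier (a candidate fault) yields a sharply lower density, and only the mean and one scalar per stream need to be kept in memory.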
Abstract:
In this work we elaborate and discuss a complex network model which presents a scale-free connectivity probability distribution (power-law degree distribution). In order to do that, we modify the preferential attachment rule of the Bianconi-Barabási model, including a factor which represents the similarity between sites. The term that corresponds to this similarity is called the affinity, and it is obtained from the modulus of the difference between the fitness (or quality) of the sites. This variation in the preferential attachment generates very interesting results, for instance the time evolution of the connectivity, which follows the power law k_i(t) ∝ (t/t_0)^β_i, where β_i indicates the rate at which site i gains connections; this rate naturally depends on the affinity with the other sites. In addition, we show numerical-simulation results for the average path length and the clustering coefficient.
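A toy simulation of affinity-weighted preferential attachment, under our reading of the rule that the attachment weight combines the degree with one minus the modulus of the fitness difference (the exact normalization used in the model may differ):

```python
import random

def grow_network(n, fitness=None, seed=0):
    """Toy network growth: each new node j links to an existing node i
    with probability proportional to k_i * (1 - |eta_i - eta_j|), i.e.
    degree weighted by affinity.  Fitness values eta are in [0, 1]."""
    rng = random.Random(seed)
    eta = fitness or [rng.random() for _ in range(n)]
    degree = [1, 1]          # start from a single edge 0-1
    edges = [(0, 1)]
    for j in range(2, n):
        weights = [degree[i] * (1.0 - abs(eta[i] - eta[j])) for i in range(j)]
        target = rng.choices(range(j), weights=weights)[0]
        edges.append((target, j))
        degree[target] += 1
        degree.append(1)
    return edges, degree
```

Hubs emerge among high-degree sites whose fitness is similar to that of many newcomers; tracking degree versus arrival time for individual sites would expose the power-law growth exponents.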
Abstract:
Complex systems have stimulated much interest in the scientific community in the last twenty years. Examples in this area are the Domany-Kinzel cellular automaton (DKCA) and the Contact Process (CP), which are studied in the first chapter of this thesis. We determine the critical behavior of these systems using the spontaneous-search method and short-time dynamics (STD). Our results confirm that the DKCA and the CP belong to the universality class of Directed Percolation. In the second chapter, we study particle diffusion in two models of stochastic sandpiles. We characterize the diffusion through the diffusion constant D, defined through the relation ⟨(Δx)²⟩ = 2Dt. The results of our simulations, using finite-size scaling and STD, show that the diffusion constant can be used to study critical properties. Both models belong to the universality class of Conserved Directed Percolation. We also study the mean-square particle displacement in time and characterize its dependence on the initial configuration and particle density. In the third chapter, we introduce a computational model, called Geographic Percolation, to study watersheds, fractals with applications in various areas of science. In this model, the sites of a network are assigned values between 0 and 1 following a given probability distribution; we order these values, always keeping their locations, and search for the site that makes the network percolate. Once we find this site, we remove it from the network and search for the next site that makes the network percolate again. We repeat these steps until the complete occupation of the network. We study the model in 2 and 3 dimensions, and compare the two-dimensional case with networks formed from real data (Alps and Himalayas).
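The diffusion constant defined through ⟨(Δx)²⟩ = 2Dt can be extracted from simulation data by a least-squares slope through the origin; a minimal sketch with synthetic data (the estimator and sample values are illustrative):

```python
def diffusion_constant(times, msd):
    """Estimate D from <(dx)^2> = 2*D*t via a least-squares fit through
    the origin: D = sum(t * msd) / (2 * sum(t^2))."""
    num = sum(t * m for t, m in zip(times, msd))
    den = 2.0 * sum(t * t for t in times)
    return num / den

# Synthetic data with D = 0.5: msd(t) = 2 * 0.5 * t = t.
times = [1.0, 2.0, 3.0, 4.0]
print(diffusion_constant(times, [1.0, 2.0, 3.0, 4.0]))  # -> 0.5
```

In the sandpile study, the mean-square displacement would come from averaging over particles and histories at each time, with D then examined as a function of particle density near the critical point.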
Abstract:
The covariant quark model of the pion based on the effective nonlocal quark-hadron Lagrangian involving nonlocality induced by instanton fluctuations of the QCD vacuum is reviewed. Explicit gauge-invariant formalism allows us to construct the conserved vector and axial currents and to demonstrate their consistency with the Ward-Takahashi identities and low-energy theorems. The spontaneous breaking of chiral symmetry results in the dynamic quark mass and the vertex of the quark-pion interaction, both momentum-dependent. The parameters of the instanton vacuum, the average size of the instantons, and the effective quark mass are expressed in terms of the vacuum expectation values of the lowest-dimension quark-gluon operators and low-energy pion observables. The transition pion form factor for the processes γ*γ → π⁰ and γ*γ* → π⁰ is analyzed in detail. The kinematic dependence of the transition form factor at high momentum transfers allows one to determine the relationship between the light-cone amplitude of the quark distribution in the pion and the quark-pion vertex function. Its dynamic dependence implies that the transition form factor γ*γ → π⁰ at high momentum transfers is acutely sensitive to the size of the nonlocality of nonperturbative fluctuations in the QCD vacuum. In the leading twist, the distribution amplitude and the distribution function of the valence quarks in the pion are calculated at a low normalization point of the order of the inverse average instanton size ρc⁻¹. The QCD results are evolved to higher momentum transfers and are in reasonable agreement with available experimental data on the pion structure.
Abstract:
There is a well-developed framework, the Black-Scholes theory, for the pricing of contracts based on the future prices of certain assets, called options. This theory assumes that the probability distribution of the returns of the underlying asset is a Gaussian distribution. However, it is observed in the market that this hypothesis is flawed, leading to the introduction of a fudge factor, the so-called volatility smile. Therefore, it would be interesting to explore extensions of the Black-Scholes theory to non-Gaussian distributions. In this paper, we provide an explicit formula for the price of an option when the distribution of the returns of the underlying asset is parametrized by an Edgeworth expansion, which allows for the introduction of higher independent moments of the probability distribution, namely skewness and kurtosis. We test our formula with options in the Brazilian and American markets, showing that the volatility smile can be reduced. We also check whether our approach leads to more efficient hedging strategies for these instruments. (C) 2004 Elsevier B.V. All rights reserved.
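The Gaussian Black-Scholes call price that the Edgeworth expansion corrects can be sketched as follows; this is the standard textbook formula, not the paper's corrected one (the skewness and kurtosis terms are not reproduced here):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Standard (Gaussian) Black-Scholes price of a European call with
    spot S, strike K, risk-free rate r, volatility sigma, maturity T."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

Inverting this formula for sigma across strikes is what produces the volatility smile; an Edgeworth-corrected price flattens that smile by absorbing skewness and kurtosis into the distribution itself.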
Abstract:
In this paper we study the possible microscopic origin of heavy-tailed probability distributions for the price variation of financial instruments. We extend the standard log-normal process to include another random component in the so-called stochastic volatility models. We study these models under an assumption, akin to the Born-Oppenheimer approximation, in which the volatility has already relaxed to its equilibrium distribution and acts as a background to the evolution of the price process. In this approximation, we show that all models of stochastic volatility should exhibit a scaling relation in the time lag of zero-drift modified log-returns. We verify that the Dow Jones Industrial Average index indeed follows this scaling. We then focus on two popular stochastic volatility models, the Heston and Hull-White models. In particular, we show that in the Hull-White model the resulting probability distribution of log-returns in this approximation corresponds to the Tsallis (t-Student) distribution. The Tsallis parameters are given in terms of the microscopic stochastic volatility model. Finally, we show that the log-returns for 30 years of Dow Jones index data are well fitted by a Tsallis distribution, obtaining the relevant parameters. (c) 2007 Elsevier B.V. All rights reserved.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)