911 results for Permutation entropy


Relevance: 10.00%

Abstract:

The increase in financial-market complexity has been pointed out by Rajan (2005), Gorton (2008) and Haldane and May (2011) as one of the main factors behind the rise in systemic risk that culminated in the 2007/08 financial crisis. The Bank for International Settlements (2013) addresses complexity in the context of banking regulation and discusses the comparability of capital adequacy across banks and across jurisdictions. However, the definitions of the concepts of complexity and of complex adaptive systems are left out of the main discussions. This article clarifies some concepts related to Complexity theory, how this phenomenon emerges, and how these concepts can be applied to the financial market. Two tools that can be used in the context of complex adaptive systems are discussed, Agent-Based Models (ABMs) and entropy, and compared with traditional tools. We conclude that even though the complexity research programme leaves gaps, it certainly contributes to the economic research agenda for understanding the mechanisms that trigger systemic risks, and it adds tools for modelling heterogeneous interacting agents in a way that allows emergent phenomena to arise in the system. Research hypotheses are suggested for further study.

Relevance: 10.00%

Abstract:

This dissertation presents two papers on how simple systemic risk measures can be used to assess portfolio risk characteristics. The first paper deals with the Granger-causation of systemic risk indicators based on correlation matrices of stock returns. Special focus is devoted to the Eigenvalue Entropy, since some previous literature indicated strong results without considering different macroeconomic scenarios; the Index Cohesion Force and the Absorption Ratio are also considered. For the S&P500, there is no evidence of Granger-causation from the Eigenvalue Entropies and the Index Cohesion Force. The Absorption Ratio Granger-caused both the S&P500 and the VIX index, being the only simple measure that passed this test. The second paper develops this measure to capture the regimes underlying the American stock market. New indicators are built using filtering and random matrix theory. The returns of the S&P500 are modelled as a mixture of normal distributions. The activation of each normal distribution is governed by a Markov chain whose transition probabilities are a function of the indicators. The model shows that a Herfindahl-Hirschman Index of the normalized eigenvalues exhibits the best fit to the returns from 1998-2013.
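
The spectral quantities this abstract relies on are simple to compute from a returns panel. Below is a minimal Python sketch, not the dissertation's code, of two of them: the entropy of the normalized eigenvalues of the correlation matrix and a Herfindahl-Hirschman Index of the same normalized spectrum; the simulated one-factor data and the function name are illustrative assumptions.

```python
# Illustrative sketch, assuming a T x N panel of returns; not the dissertation's code.
import numpy as np

def spectral_indicators(returns: np.ndarray) -> tuple[float, float]:
    """Eigenvalue entropy and HHI of the normalized correlation-matrix spectrum."""
    corr = np.corrcoef(returns, rowvar=False)      # N x N correlation matrix
    eigvals = np.linalg.eigvalsh(corr)             # real spectrum; sums to N
    p = eigvals / eigvals.sum()                    # normalized eigenvalues
    p = np.clip(p, 1e-12, None)                    # guard against tiny negatives/zeros
    entropy = float(-np.sum(p * np.log(p)))        # eigenvalue entropy
    hhi = float(np.sum(p ** 2))                    # spectral concentration (HHI)
    return entropy, hhi

rng = np.random.default_rng(0)
factor = rng.normal(size=(500, 1))                 # toy common market factor
returns = 0.7 * factor + 0.3 * rng.normal(size=(500, 50))
print(spectral_indicators(returns))                # stronger factor -> lower entropy, higher HHI
```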

Relevance: 10.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
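
The heteroskedasticity mechanism described above, namely that larger groups produce less noisy group × time cell means, can be illustrated with a short simulation. The sketch below is only a toy illustration under assumed group sizes, not the paper's Monte Carlo design.

```python
# Toy illustration, with assumed group sizes: the variance of a group x time cell mean
# scales as sigma^2 / n_g, so unequal group sizes create heteroskedastic aggregate errors.
import numpy as np

rng = np.random.default_rng(1)
reps = 2000                                        # simulated group x time cells per size
for name, n_g in {"small group": 50, "large group": 5000}.items():
    cell_means = np.array([rng.normal(0.0, 1.0, size=n_g).mean() for _ in range(reps)])
    print(f"{name:12s}  sd of cell mean = {cell_means.std():.4f}"
          f"   (theory sigma/sqrt(n_g) = {1 / np.sqrt(n_g):.4f})")
```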

Relevance: 10.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
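
For intuition on the permutation (placebo) logic mentioned at the end of the abstract, the sketch below ranks a treated group's estimated effect against placebo effects computed for each control group. It is a simplified illustration with made-up numbers, not the modified test statistic proposed here.

```python
# Simplified placebo permutation logic, with made-up effects; not the proposed statistic.
import numpy as np

def permutation_pvalue(effects: np.ndarray, treated_index: int) -> float:
    """Share of groups whose |estimated effect| is at least as large as the treated one."""
    stat = np.abs(effects)
    return float(np.mean(stat >= stat[treated_index]))

rng = np.random.default_rng(2)
placebo_effects = rng.normal(0.0, 1.0, size=49)    # effects estimated for 49 control groups
effects = np.append(placebo_effects, 2.5)          # treated group appended at index 49
print(f"permutation p-value = {permutation_pvalue(effects, treated_index=49):.3f}")
```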

Relevance: 10.00%

Abstract:

We aim to provide a review of the stochastic discount factor bounds usually applied to diagnose asset pricing models. In particular, we mainly discuss the bounds used to analyze the disaster model of Barro (2006). Our attention is focused on this disaster model since the stochastic discount factor bounds applied to study the performance of disaster models usually follow the approach of Barro (2006). We first present the entropy bounds that provide a diagnosis of the analyzed disaster model, namely the methods of Almeida and Garcia (2012, 2016) and Ghosh et al. (2016). We then discuss how their results for the disaster model relate to each other, and also present the findings of other methodologies that are similar to these bounds but provide different evidence about the performance of the framework developed by Barro (2006).
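
As background for the entropy bounds discussed above, a common dispersion measure for a stochastic discount factor M is its entropy, L(M) = log E[M] − E[log M]. The sketch below computes this quantity for a CRRA discount factor under a consumption process with rare disasters; all parameter values are illustrative assumptions, not Barro's (2006) calibration or the bounds of the cited papers.

```python
# Illustrative sketch: entropy L(M) = log E[M] - E[log M] of a CRRA stochastic discount
# factor when log consumption growth mixes a normal component with rare disasters.
# All parameter values are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
beta, gamma = 0.97, 4.0                      # subjective discount factor, risk aversion
mu, sigma = 0.02, 0.02                       # normal-times log consumption growth
p_disaster, contraction = 0.017, 0.35        # disaster probability and size per period

disaster = rng.random(n) < p_disaster
g = rng.normal(mu, sigma, size=n) + disaster * np.log(1.0 - contraction)
m = beta * np.exp(-gamma * g)                # SDF: M = beta * (C'/C)^(-gamma)
entropy = np.log(m.mean()) - np.log(m).mean()
print(f"SDF entropy: {entropy:.4f}")         # larger/more likely disasters raise it
```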

Relevance: 10.00%

Abstract:

The rational construction needed to systematize scientific knowledge in physics introduces difficulties of understanding in some of its concepts. One concept that properly exemplifies this difficulty in learning or teaching is entropy. This thesis proposes the construction of a didactic route, a historical and epistemological path to entropy, intended to contribute to the teaching of this concept as well as of other physics concepts. The basic assumption behind the route is that, through a historical review of the development of this concept in the way suggested by Bachelard's (1884-1962) epistemology, it is possible to make the subjects to be taught and learned more meaningful. Initially I composed a brief biographical note to give the reader an idea of the issues, interests and reflections related to science, of how I dealt with them in my private and professional life, and of the role they played in leading me to write this thesis. The strategy for constructing the route to entropy was to split the usual contents of basic thermodynamics into three moments so that they constitute epistemological units, each identified by the way of thinking of the corresponding moment of scientific knowledge production: a technical and empiricist moment, a rationalist and positivist moment, and a post-positivist rationalist one. The transition between moments is characterized by a rupture with the former way of thinking; nevertheless, the progress in the construction of knowledge in the area is evident. In the final part of this work I present an analysis based on the elements of Bachelard's epistemology present in each moment. This analysis is the basic component of the didactic route that I set out to build. The way this route to entropy was made could contribute to the construction of other didactic routes in physics and other sciences, as a way to unveil hidden meanings and as a tool to humanize scientific knowledge.

Relevance: 10.00%

Abstract:

Currently, one of the biggest challenges for the field of data mining is to perform cluster analysis on complex data. Several techniques have been proposed but, in general, they can only achieve good results within specific domains, with no consensus on the best way to group this kind of data. In general, these techniques fail due to non-realistic assumptions about the true probability distribution of the data. Motivated by this, this thesis proposes a new measure based on the Cross Information Potential that uses representative points of the dataset and statistics extracted directly from the data to measure the interaction between groups. The proposed approach allows us to use all the advantages of this information-theoretic descriptor while overcoming the limitations imposed by its own nature. From this, two cost functions and three algorithms are proposed to perform cluster analysis. Because the use of Information Theory captures the relationship between different patterns regardless of assumptions about the nature of this relationship, the proposed approach was able to achieve better performance than the main algorithms in the literature. These results hold both for synthetic data designed to test the algorithms in specific situations and for real data extracted from problems in different fields.
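
For reference, the plain Cross Information Potential between two groups of points, using a Gaussian kernel, can be written in a few lines. The thesis builds its measure on top of this descriptor using representative points and statistics extracted from the data, so the sketch below shows only the underlying quantity, not the proposed measure.

```python
# Plain Cross Information Potential with a Gaussian kernel; an underlying ITL quantity,
# not the representative-point measure proposed in the thesis.
import numpy as np

def cross_information_potential(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """x: (Nx, d), y: (Ny, d); mean Gaussian-kernel interaction over all cross pairs."""
    sq_dists = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)   # (Nx, Ny)
    return float(np.exp(-sq_dists / (2.0 * sigma ** 2)).mean())

rng = np.random.default_rng(4)
a = rng.normal(0.0, 1.0, size=(100, 2))
b_near = rng.normal(0.5, 1.0, size=(120, 2))       # overlaps with a -> higher interaction
b_far = rng.normal(5.0, 1.0, size=(120, 2))        # well separated -> lower interaction
print(cross_information_potential(a, b_near), cross_information_potential(a, b_far))
```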

Relevance: 10.00%

Abstract:

In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm that links auxiliary clusters obtained with traditional vector quantization techniques. Several approaches developed during the work are described, based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. The new method uses only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link auxiliary clusters. The number of classes can be found automatically by the method, based on the chosen threshold distance dt, or it can be given as additional information to help choose the correct threshold. Several analyses are carried out and the results are compared with traditional clustering methods. Different dissimilarity metrics are analyzed and a new one is proposed based on the concept of negentropy. Besides grouping the points of a set into classes, a method is proposed for statistically modelling the classes, aiming to obtain an expression for the probability that a point belongs to one of the classes. Experiments with several values of Na and dt are performed on test sets and the results are analyzed in order to study the robustness of the method and to devise heuristics for choosing the correct threshold. Throughout the work, aspects of information theory applied to the calculation of the divergences are explored, in particular the different measures of information and divergence based on the Rényi entropy. The results obtained with the different metrics are compared and discussed. The work also has an appendix presenting real applications of the proposed method.
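
A minimal sketch of the linking idea, under assumed details: obtain the Na auxiliary clusters by vector quantization (k-means centroids here) and connect pairs of auxiliary clusters whose distance is below dt, with connected components becoming the final classes. Plain Euclidean distance is used in place of the negentropy-based dissimilarity proposed in the work.

```python
# Minimal sketch under assumed details: k-means centroids as the Na auxiliary clusters,
# linked whenever their Euclidean distance is below dt (not the negentropy dissimilarity).
import numpy as np
from scipy.cluster.vq import kmeans2

def link_auxiliary_clusters(data: np.ndarray, na: int, dt: float) -> np.ndarray:
    centroids, labels = kmeans2(data, na, minit="++", seed=5)
    parent = list(range(na))                       # union-find over auxiliary clusters
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(na):
        for j in range(i + 1, na):
            if np.linalg.norm(centroids[i] - centroids[j]) < dt:
                parent[find(i)] = find(j)
    class_of = {root: c for c, root in enumerate(sorted({find(i) for i in range(na)}))}
    return np.array([class_of[find(lab)] for lab in labels])   # final class per point

rng = np.random.default_rng(5)
data = np.vstack([rng.normal(0, 0.3, (150, 2)), rng.normal(3, 0.3, (150, 2))])
print(np.unique(link_auxiliary_clusters(data, na=8, dt=1.0)))  # two classes expected
```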

Relevance: 10.00%

Abstract:

Discussions about pollution caused by vehicle emissions are old and have developed over the years. The search for cleaner technologies and frequent climate alterations have been inducing industries and government organizations to impose much more rigorous limits on the contaminant content of fuels, which has a direct impact on atmospheric emissions. Nowadays, fuel quality control with respect to sulfur content is carried out through hydrodesulfurization. Adsorption processes represent an interesting alternative route for sulfur removal, being simpler and operating at atmospheric temperatures and pressures. This work studies the synthesis and characterization of aluminophosphates impregnated with zinc, molybdenum or both, and their application in sulfur removal from gasoline through adsorption, using a model gasoline containing isooctane and thiophene. The adsorbents were characterized by X-ray diffraction, differential thermal analysis (DTG), X-ray fluorescence and scanning electron microscopy (SEM). The specific area, pore volume and pore diameter were determined by the BET (Brunauer-Emmett-Teller) and t-plot methods. Sulfur was quantified by elemental analysis using an ANTEK 9000 NS analyzer. The adsorption process was evaluated as a function of temperature and initial sulfur content through adsorption isotherms and their thermodynamic parameters. The entropy (ΔS), enthalpy (ΔH) and Gibbs free energy (ΔG) variations were calculated from the plot of ln(Kd) versus 1/T. The Langmuir, Freundlich and Langmuir-Freundlich models were fitted to the experimental data, the last one presenting the best results. The thermodynamic tests were carried out at different temperatures (30, 40 and 50 °C), from which it was concluded that the adsorption process is spontaneous and exothermic. The adsorption kinetics were studied over 24 h and showed that the adsorption capacity of the studied adsorbents follows the order MoZnPO > MoPO > ZnPO > AlPO. The maximum adsorption capacity was 4.91 mg/g for MoZnPO, with an adsorption efficiency of 49%.
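
The thermodynamic analysis described above follows a standard van't Hoff treatment: ln(Kd) is linear in 1/T with slope −ΔH/R and intercept ΔS/R, and ΔG = ΔH − TΔS. The sketch below illustrates the calculation with made-up Kd values, not the measured data.

```python
# van't Hoff sketch with made-up Kd values: ln(Kd) = -dH/(R*T) + dS/R, then dG = dH - T*dS.
import numpy as np

R = 8.314                                     # J / (mol K)
T = np.array([303.15, 313.15, 323.15])        # 30, 40 and 50 degrees C, in kelvin
Kd = np.array([2.4, 1.9, 1.5])                # assumed distribution coefficients

slope, intercept = np.polyfit(1.0 / T, np.log(Kd), 1)
dH = -slope * R                               # enthalpy variation, J/mol (negative = exothermic)
dS = intercept * R                            # entropy variation, J/(mol K)
dG = dH - T * dS                              # Gibbs free energy, J/mol (negative = spontaneous)
print(f"dH = {dH / 1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K), dG(30 C) = {dG[0] / 1000:.1f} kJ/mol")
```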

Relevance: 10.00%

Abstract:

Nonionic surfactants in aqueous solution have the property of separating into two phases: a dilute phase, with a low surfactant concentration, and a surfactant-rich phase called the coacervate. The application of this kind of surfactant in extraction processes from aqueous solutions has been growing over time, which makes knowledge of the thermodynamic properties of these surfactants necessary. In this study the cloud points of polyethoxylated surfactants from the nonylphenol polyethoxylated family (9.5, 10, 11, 12 and 13), the octylphenol polyethoxylated family (10 and 11) and polyethoxylated lauryl alcohols (6, 7, 8 and 9) were determined, varying the degree of ethoxylation. The cloud point was determined by observing the turbidity of the solution under a heating ramp of 0.1 °C/minute; for the pressure studies a high-pressure cell (up to 300 bar) was used. The experimental data of the studied surfactants were described with the Flory-Huggins, UNIQUAC and NRTL models for the cloud point curves, and the influence of NaCl concentration and system pressure on the cloud point was studied. The latter parameter is important for oil recovery processes, in which surfactant solutions are used at high pressures, while the NaCl effect shifts the cloud point towards room temperature, making it possible to run processes without temperature control. The numerical method used to adjust the parameters was Levenberg-Marquardt. For the Flory-Huggins model the fitted parameters were the mixing enthalpy, the mixing entropy and the aggregation number. For the UNIQUAC and NRTL models the interaction parameters aij were adjusted using a quadratic dependence on temperature. The fitted parameters reproduced the experimental data well (RMSD < 0.3%). The results showed that both the ethoxylation degree and the pressure increase the cloud point, whereas NaCl decreases it.
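
A hedged sketch of the parameter-fitting step: scipy's curve_fit uses the Levenberg-Marquardt algorithm for unbounded problems and can adjust interaction parameters with the quadratic temperature dependence mentioned above, aij(T) = a0 + a1·T + a2·T². The "model" and data below are placeholders, not the Flory-Huggins, UNIQUAC or NRTL equations or the measured cloud points.

```python
# Placeholder fit only: Levenberg-Marquardt (scipy curve_fit, default for unbounded
# problems) adjusting a quadratic temperature dependence aij(T) = a0 + a1*T + a2*T**2.
# The data are invented; the real work fits Flory-Huggins, UNIQUAC and NRTL models.
import numpy as np
from scipy.optimize import curve_fit

def a_ij(T, a0, a1, a2):
    return a0 + a1 * T + a2 * T ** 2           # quadratic dependence on temperature

T_obs = np.array([293.0, 303.0, 313.0, 323.0, 333.0])   # K
a_obs = np.array([120.0, 131.0, 145.0, 160.0, 178.0])   # assumed parameter values

popt, _ = curve_fit(a_ij, T_obs, a_obs, p0=[100.0, 0.1, 0.0])
rmsd = np.sqrt(np.mean((a_ij(T_obs, *popt) - a_obs) ** 2))
print("a0, a1, a2 =", popt, " RMSD =", rmsd)
```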

Relevance: 10.00%

Abstract:

In the present work we use a Tsallis maximum-entropy distribution law to fit observations of projected rotational velocity measurements of stars in the Pleiades open cluster. This new distribution function, which generalizes the Maxwell-Boltzmann one, is derived from the non-extensivity of the Boltzmann-Gibbs entropy. We also present a comparison between results from the generalized distribution and the Maxwellian law, and show that the generalized distribution fits the observational data more closely. In addition, we compare the q values of the generalized distribution determined for the V sin i distribution of main-sequence stars (Pleiades) with those found for the observed distribution of evolved stars (subgiants). We then observe a correlation between the q values and the stellar evolutionary stage for a certain range of stellar mass.
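
The functional forms being compared can be sketched as follows: the q-exponential generalizes the ordinary exponential and recovers it as q → 1, so a Maxwellian-like law for projected velocities turns into a heavier-tailed distribution for q > 1. The prefactor used for the projected-velocity law, the width v0 and the q values below are illustrative assumptions, not the fitted values.

```python
# Illustrative functional forms: a Maxwellian-like law for projected velocities and its
# Tsallis (q-exponential) generalization; v0, q and the v*exp(...) prefactor are assumptions.
import numpy as np

def q_exponential(x, q):
    """exp_q(x) = [1 + (1 - q) x]^(1/(1 - q)) on its support; exp(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    safe = np.where(base > 0.0, base, 1.0)     # avoid invalid powers outside the support
    return np.where(base > 0.0, safe ** (1.0 / (1.0 - q)), 0.0)

def vsini_law(v, v0, q):
    """Unnormalized distribution ~ v * exp_q(-(v / v0)^2) for projected velocities."""
    return v * q_exponential(-(v / v0) ** 2, q)

v = np.linspace(0.0, 60.0, 7)                  # km/s grid
print(vsini_law(v, v0=15.0, q=1.0))            # Maxwellian limit
print(vsini_law(v, v0=15.0, q=1.4))            # q > 1: slower, power-law-like tail
```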

Relevance: 10.00%

Abstract:

In this work we study a connection between a non-Gaussian statistics, the Kaniadakis statistics, and complex networks. We show that the degree distribution P(k) of a scale-free network can be calculated by maximizing the information entropy in the context of non-Gaussian statistics. As an example, a numerical analysis based on the preferential attachment growth model is discussed, and the numerical behavior of the Kaniadakis and Tsallis degree distributions is compared. We also analyze the diffusive epidemic process (DEP) on a one-dimensional regular lattice. The model is composed of A (healthy) and B (sick) species that independently diffuse on the lattice with diffusion rates DA and DB, subject to the probabilistic dynamical rules A + B → 2B and B → A. This model belongs to the category of non-equilibrium systems with an absorbing state and a phase transition between active and inactive states. We investigate the critical behavior of the DEP using an auto-adaptive algorithm to find critical points: the method of automatic searching for critical points (MASCP). We compare our results with the literature and find that the MASCP successfully finds the critical exponents 1/ν and 1/zν in all the cases DA = DB, DA < DB and DA > DB. The simulations show that the DEP has the same critical exponents as expected from field-theoretical arguments. Moreover, we find that, contrary to a renormalization-group prediction, the system does not show a discontinuous phase transition in the regime DA > DB.
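
For the degree-distribution part, the two generalized exponentials can be compared numerically. The sketch below evaluates a Kaniadakis κ-exponential and a Tsallis q-exponential decay of the assumed form P(k) ∝ exp(−k/λ); this functional form, together with λ, κ and q, is an illustrative assumption rather than the distribution derived in the work.

```python
# Comparison of the two generalized exponentials; the decay form P(k) ~ exp(-k/lam) and
# the values of lam, kappa and q are illustrative assumptions.
import numpy as np

def kaniadakis_exp(x, kappa):
    """exp_kappa(x) = (sqrt(1 + kappa^2 x^2) + kappa x)^(1/kappa); exp(x) as kappa -> 0."""
    if np.isclose(kappa, 0.0):
        return np.exp(x)
    return (np.sqrt(1.0 + (kappa * x) ** 2) + kappa * x) ** (1.0 / kappa)

def tsallis_exp(x, q):
    """exp_q(x) = [1 + (1 - q) x]^(1/(1 - q)) on its support; exp(x) as q -> 1."""
    base = 1.0 + (1.0 - q) * x
    safe = np.where(base > 0.0, base, 1.0)
    return np.where(base > 0.0, safe ** (1.0 / (1.0 - q)), 0.0)

k = np.arange(1, 201, dtype=float)
p_kappa = kaniadakis_exp(-k / 10.0, kappa=0.6)
p_tsallis = tsallis_exp(-k / 10.0, q=1.3)
# both decay as power laws for large k, unlike the ordinary exponential
print(p_kappa[[0, 49, 199]] / p_kappa.sum(), p_tsallis[[0, 49, 199]] / p_tsallis.sum())
```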

Relevance: 10.00%

Abstract:

The standard kinetic theory for a nonrelativistic dilute gas is generalized in the spirit of the nonextensive statistical distribution introduced by Tsallis. The new formalism depends on an arbitrary q parameter measuring the degree of nonextensivity; in the limit q = 1, the extensive Maxwell-Boltzmann theory is recovered. Starting from a purely kinetic deduction of the velocity q-distribution function, the Boltzmann H-theorem is generalized to include the possibility of nonextensive out-of-equilibrium effects. Based on this investigation, it is proved that Tsallis' distribution is the necessary and sufficient condition defining a thermodynamic equilibrium state in the nonextensive context. This result follows naturally from the generalized transport equation and also from the extended H-theorem. Two physical applications of the nonextensive effects have been considered. Closed analytic expressions were obtained for the Doppler broadening of spectral lines from an excited gas, as well as for the dispersion relations describing the electrostatic oscillations in a dilute electronic plasma. In the latter case, a comparison with the experimental results strongly suggests a Tsallis distribution with the q parameter smaller than unity. A complementary study addresses the thermodynamic behavior of a relativistic imperfect simple fluid. Using nonequilibrium thermodynamics, we show how the basic primary variables, namely the energy-momentum tensor and the particle and entropy fluxes, depend on the several dissipative processes present in the fluid. The temperature variation law for this moving imperfect fluid is also obtained, and the Eckart and Landau-Lifshitz formulations are recovered as particular cases.

Relevance: 10.00%

Abstract:

In this work we have studied the effects of random biquadratic couplings and random fields in spin-glass models using the replica method. The effect of a random biquadratic coupling was studied in two spin-1 spin-glass models: in one case the interactions occur between pairs of spins, whereas in the second one the interactions occur between p spins and the limit p → ∞ is considered. Both couplings (spin glass and biquadratic) have zero-mean Gaussian probability distributions. In the first model, the replica-symmetric assumption reveals that the system presents two phases, namely paramagnetic and spin glass, separated by a continuous transition line. The stability analysis of the replica-symmetric solution yields, besides the usual instability associated with the spin-glass ordering, a new phase due to the random biquadratic couplings between the spins. For the case p → ∞, the replica-symmetric assumption again yields only two phases, namely paramagnetic and quadrupolar. In both of these phases the spin-glass parameter is zero, and it is shown that they are stable under the Almeida-Thouless stability analysis. One of them presents negative entropy at low temperatures. We developed one step of replica symmetry breaking and noticed that a new phase, the biquadratic glass phase, emerges. In this way we obtained the correct phase diagram, with three first-order transition lines. These lines merge at a common triple point. The effects of random fields were studied in the Sherrington-Kirkpatrick model considered in the presence of an external random magnetic field following a trimodal distribution, P(hi) = p δ(hi − h0) + p0 δ(hi) + p δ(hi + h0). It is shown that the border of the ferromagnetic phase may present, for conveniently chosen values of p0 and h0, first-order phase transitions, as well as tricritical points at finite temperatures. It is verified that the first-order phase transitions are directly related to the dilution in the fields: the extensions of these transitions are reduced for increasing values of p0. In fact, the threshold value of p0 above which all phase transitions are continuous is calculated analytically. The stability analysis of the replica-symmetric solution is performed and the regions of validity of such a solution are identified.