22 results for Crossover exponents

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

10.00%

Publisher:

Abstract:

The central interest of this thesis is to understand how public action drives the formation and transformation of tourist destinations. The research is based on the premise that public actions result from a process of mediation among state and non-state actors considered important in a sector, who interact so that their own interests and world views prevail over the others. The case of Porto de Galinhas beach, in Pernambuco, the locus of investigation of this thesis, allowed the analysis of a multiplicity of actors in the formation and implementation of local actions for the development of tourism between 1970 and 2010, and made it possible to understand how the referential of the interventions was constructed. This thesis, of a qualitative nature, has as theoretical support the cognitive approach to the analysis of public policies developed in France, whose main exponents are Bruno Jobert and Pierre Muller. This choice was made because of its emphasis on the cognitive and normative factors of politics, aspects still little explored in studies of public policies in Brazil. Documental, bibliographic and field research were used as data sources for the (re)constitution of the formation and transformation of the site concerned. The analysis techniques applied were content analysis and documental analysis. To trace the referential of public action, we began by characterizing the frontiers of the tourism sector and the creation of images by its main international body, the World Tourism Organization, whose meeting minutes revealed guidelines for member countries, including Brazil, which make up the global-sectorial referential of the sector. From the analysis of the evolution of tourism in the country, it was identified that public policies in Brazil underwent transformations in their organization over the years, indicating changes in the referential that guided the interventions. These guidelines and transformations were identified in the construction of the tourist destination of Porto de Galinhas, whose data were systematized and presented in four historical periods, in which the values, norms, algorithms, images and important mediators were discussed. It was revealed that the State played different roles in local tourism over the decades analyzed. From the 1990s onward, however, new actors were inserted into the formulation and implementation of the policies developed, especially the local hoteliers. These, through their association, established a relation of leadership in the local tourism sector and were thereby able to assert their hegemony and spread their own interests. The leadership acquired by a group of actors, in the case of Porto de Galinhas, does not mean that disputes within the sector were neutralized, but that there is a cognitive framework within which the actors involved confront each other. Despite the advances achieved through the work of the mediators in recent decades, which resulted in an amplification and diversification of the activity in the area as well as the consolidation of the beach as a tourist destination of national prominence, the position of the place is unstable with regard to competitiveness, since there is a situation of social and environmental unsustainability

Relevance:

10.00%

Publisher:

Abstract:

This work aims to study the fluctuation structure of physical properties recorded in oil-well profiles. The technique used was Detrended Fluctuation Analysis (DFA). The study comprised 54 oil wells of the Namorado Field, located in the Campos Basin, Rio de Janeiro. Five profiles were studied: sonic, density, porosity, resistivity and gamma ray. For most of the profiles the DFA analysis was available in the literature, while for the sonic profile it was estimated with the aid of a standard algorithm. The comparison between the DFA exponents of the five profiles was performed using linear correlation of variables, giving 10 pairwise comparisons. Our null hypothesis is that the DFA values of the various physical properties are independent. The main result indicates no refutation of the null hypothesis; that is, the fluctuations observed by DFA in the profiles do not have a universal character, and in general each quantity displays a fluctuation structure of its own. Of the ten correlations studied, only the density and sonic profiles showed a significant correlation (p > 0.05). These results indicate that DFA data should be used with caution because, in general, geological analyses based on the DFA of different profiles can lead to disparate conclusions
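
A minimal sketch of how a DFA fluctuation function F(n) and its exponent can be computed for a single well-log series; this is illustrative code under stated assumptions (window sizes, linear detrending, placeholder data), not the implementation used in the thesis:

```python
import numpy as np

def dfa(signal, scales):
    """Detrended Fluctuation Analysis: returns F(n) for each window size n."""
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())            # integrated (profile) series
    fluctuations = []
    for n in scales:
        n_windows = len(profile) // n
        mean_sq = []
        for w in range(n_windows):
            segment = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, segment, 1)   # local linear trend
            trend = np.polyval(coeffs, t)
            mean_sq.append(np.mean((segment - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(mean_sq)))
    return np.array(fluctuations)

# The DFA exponent is the slope of log F(n) versus log n.
scales = np.unique(np.logspace(1, 3, 20).astype(int))
series = np.random.randn(5000)                   # placeholder for a well-log profile
F = dfa(series, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA exponent: {alpha:.3f}")              # ~0.5 for uncorrelated noise
```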

Relevance:

10.00%

Publisher:

Abstract:

The study of complex systems has become a prestigious, although relatively young, area of science. Its importance is demonstrated by the diversity of applications that such studies have already provided to fields such as biology, economics and climatology. In physics, the complex-systems approach is creating paradigms that markedly influence new methods, bringing to Statistical Physics macroscopic problems no longer restricted to classical studies such as those of thermodynamics. The present work aims to compare and statistically verify data from the sonic (DT), gamma ray (GR), induction (ILD), neutron (NPHI) and density (RHOB) profiles, physical quantities measured during exploratory drilling that are of fundamental importance to locate, identify and characterize oil reservoirs. The software packages Statistica, Matlab R2006a, Origin 6.1 and Fortran were used for the comparison and verification of the well-log data of the Namorado School Field provided by the ANP (National Petroleum Agency). It was possible to demonstrate the importance of the DFA method, which proved quite satisfactory in this work, leading to the conclusion that the H (Hurst exponent) data produce spatially clustered values. We therefore find that it is possible to detect spatial patterns using the Hurst coefficient. The profiles of 56 wells confirmed the existence of spatial patterns in the Hurst exponents, i.e., the parameter B. The profiles assessed do not directly catalogue the geological lithology, but they reveal a non-random spatial distribution
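
For context, the Hurst exponent H mentioned above can also be estimated by the classical rescaled-range (R/S) method; the sketch below uses R/S as an illustrative alternative to the DFA-based estimate used in the work, with placeholder data and assumed window sizes:

```python
import numpy as np

def hurst_rs(series, window_sizes):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent H."""
    x = np.asarray(series, dtype=float)
    rs_values = []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())        # cumulative mean-adjusted deviations
            r = dev.max() - dev.min()            # range
            s = w.std()                          # standard deviation
            if s > 0:
                rs.append(r / s)
        rs_values.append(np.mean(rs))
    # H is the slope of log(R/S) against log(n)
    return np.polyfit(np.log(window_sizes), np.log(rs_values), 1)[0]

log_sample = np.random.randn(4096)               # placeholder for a well log (DT, GR, ...)
sizes = [16, 32, 64, 128, 256, 512]
print(f"Estimated H: {hurst_rs(log_sample, sizes):.3f}")  # close to 0.5 for uncorrelated noise
```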

Relevance:

10.00%

Publisher:

Abstract:

The pair contact process (PCP) is a nonequilibrium stochastic model which, like the basic contact process (CP), exhibits a phase transition to an absorbing state. While the absorbing state of the CP corresponds to a unique configuration (the empty lattice), the PCP has infinitely many absorbing configurations. Numerical and theoretical studies nevertheless indicate that the PCP belongs to the same universality class as the CP (the directed percolation class), but with anomalies in the critical spreading dynamics. The infinite number of absorbing configurations arises in the PCP because all processes (creation and annihilation) require a nearest-neighbor pair of particles. The pair contact process with diffusion (PCPD) was proposed by Grassberger in 1982, but interest in the problem grew after its rediscovery through its Langevin description. On the basis of numerical results and renormalization-group arguments, Carlon, Henkel and Schollwöck (2001) suggested that certain critical exponents of the PCPD had values similar to those of the parity-conserving (PC) class. On the other hand, Hinrichsen (2001) reported simulation results inconsistent with the PC class and proposed that the PCPD belongs to a new universality class. The controversy regarding the universality class of the PCPD remains unresolved. In the PCPD a nearest-neighbor pair of particles is necessary for the creation and annihilation processes, but particles diffuse individually. In this work we study the PCPD with pair diffusion, in which isolated particles cannot move; a nearest-neighbor pair diffuses as a unit. Using quasistationary simulation, we determined with good precision the critical point and critical exponents for the diffusion probabilities D = 0.5 and D = 0.1. For D = 0.5 we obtained p_c = 0.89007(3), β/ν = 0.252(9), z = 1.573(1), 1.10(2) and m = 1.1758(24); for D = 0.1, p_c = 0.9172(1), β/ν = 0.252(9), z = 1.579(11), 1.11(4) and m = 1.173(4)
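
A rough sketch of the microscopic rules described above (pair annihilation, pair creation, and pair diffusion as a unit) on a 1D periodic lattice; the update scheduling, rates and parameters below are assumptions for illustration, and the thesis's quasistationary scheme is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def pcpd_pair_diffusion_step(lattice, p, D):
    """One attempted update of a 1D PCPD variant in which only nearest-neighbor
    pairs move (as a unit); isolated particles are frozen.  Sketch only."""
    L = len(lattice)
    i = rng.integers(L)
    j = (i + 1) % L
    if not (lattice[i] and lattice[j]):
        return                                    # no particle pair on this bond
    r = rng.random()
    if r < D:                                     # pair diffusion: shift the pair one site
        step = rng.choice([-1, 1])
        src, dst = (i, (j + 1) % L) if step == 1 else (j, (i - 1) % L)
        if not lattice[dst]:
            lattice[dst] = True
            lattice[src] = False
    elif r < D + (1 - D) * p:                     # pair annihilation with probability p
        lattice[i] = lattice[j] = False
    else:                                         # creation at an empty outer neighbor
        target = (j + 1) % L if rng.random() < 0.5 else (i - 1) % L
        if not lattice[target]:
            lattice[target] = True

lattice = np.ones(200, dtype=bool)                # start fully occupied
for _ in range(200 * 2_000):
    pcpd_pair_diffusion_step(lattice, p=0.89, D=0.5)
print("particle density:", lattice.mean())
```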

Relevance:

10.00%

Publisher:

Abstract:

Optimization techniques known as metaheuristics have achieved success in solving many problems classified as NP-hard. These methods use non-deterministic approaches that reach very good solutions but do not guarantee finding the global optimum. Beyond the inherent difficulties related to the complexity of optimization problems, metaheuristics also face the exploration/exploitation dilemma, which consists of choosing between a greedy search and a wider exploration of the solution space. A way to guide such algorithms during the search for better solutions is to supply them with more knowledge of the problem through an intelligent agent able to recognize promising regions and to identify when the direction of the search should be diversified. This work therefore proposes the use of a Reinforcement Learning technique, the Q-learning algorithm, as an exploration/exploitation strategy for the GRASP (Greedy Randomized Adaptive Search Procedure) and Genetic Algorithm metaheuristics. The GRASP metaheuristic uses Q-learning instead of the traditional greedy-randomized algorithm in its construction phase. This replacement aims to improve the quality of the initial solutions used in the local-search phase of GRASP, and also provides the metaheuristic with an adaptive memory mechanism that allows the reuse of good previous decisions and avoids the repetition of bad ones. In the Genetic Algorithm, the Q-learning algorithm was used to generate an initial population of high fitness and, after a given number of generations in which the diversity rate of the population falls below a certain limit L, it is also applied to supply one of the parents used in the genetic crossover operator. Another significant change in the hybrid genetic algorithm is the proposal of a mutually interactive cooperation process between the genetic operators and the Q-learning algorithm. In this interactive/cooperative process, the Q-learning algorithm receives an additional update of its matrix of Q-values based on the current best solution of the Genetic Algorithm. The computational experiments presented in this thesis compare the results obtained with traditional implementations of the GRASP metaheuristic and the Genetic Algorithm with those obtained using the proposed hybrid methods. Both algorithms were applied successfully to the symmetric Traveling Salesman Problem, which was modeled as a Markov decision process
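
A minimal sketch of how Q-learning can drive a constructive phase for the symmetric TSP, in the spirit of the approach above; the reward definition, epsilon-greedy policy and parameter values are assumptions for illustration, not the thesis's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def q_learning_tour(dist, q, alpha=0.1, gamma=0.9, eps=0.2):
    """Build one TSP tour with epsilon-greedy choices on a Q-table and
    update Q with the standard one-step Q-learning rule (sketch only)."""
    n = len(dist)
    unvisited = set(range(1, n))
    tour = [0]
    while unvisited:
        s = tour[-1]
        candidates = list(unvisited)
        if rng.random() < eps:                       # exploration
            a = candidates[rng.integers(len(candidates))]
        else:                                        # exploitation
            a = max(candidates, key=lambda c: q[s, c])
        reward = -dist[s, a]                         # shorter edges -> larger reward
        next_best = max(q[a, c] for c in unvisited - {a}) if len(unvisited) > 1 else 0.0
        q[s, a] += alpha * (reward + gamma * next_best - q[s, a])
        tour.append(a)
        unvisited.remove(a)
    return tour

dist = rng.random((20, 20)); dist = (dist + dist.T) / 2; np.fill_diagonal(dist, 0)
q = np.zeros_like(dist)
for _ in range(500):                                 # repeated constructions refine Q
    tour = q_learning_tour(dist, q)
length = sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))
print(f"tour length: {length:.3f}")
```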

Relevance:

10.00%

Publisher:

Abstract:

In this work, Markov chains are the tool used in the modeling and convergence analysis of the genetic algorithm, both in its standard version and in the other versions the genetic algorithm admits. In addition, we intend to compare the performance of the standard version with the fuzzy version, believing that the latter gives the genetic algorithm a greater ability to find a global optimum, a characteristic of global optimization algorithms. The choice of this algorithm is due to the fact that, over the past thirty years, it has become one of the most important tools for finding solutions to optimization problems. This choice is also due to its effectiveness in finding good-quality solutions, considering that a good-quality solution becomes acceptable given that there may be no other algorithm able to obtain the optimal solution for many of these problems. The algorithm can be configured in different ways, since it depends not only on how the problem is represented but also on how some of its operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, to achieve good performance with this algorithm it is necessary to have an adequate criterion for choosing its parameters, especially the mutation rate, the crossover rate and the population size. It is important to remember that, in implementations in which the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain, whereas when the parameters are allowed to vary during the execution the modeling Markov chain becomes non-homogeneous. In an attempt to improve the algorithm's performance, some studies have tried to set the parameters through strategies that capture intrinsic characteristics of the problem. These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns related to good-quality solutions while discarding low-quality patterns. Strategies for feature extraction can use either precise techniques or fuzzy techniques, in the latter case through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm, both in its standard version and in the others. In order to evaluate the performance of the non-homogeneous algorithm, tests are applied comparing the standard genetic algorithm with the fuzzy genetic algorithm, in which the mutation rate is adjusted by a fuzzy controller. For this purpose, optimization problems whose number of solutions varies exponentially with the number of variables are chosen
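
To make the idea of a variable mutation rate concrete, the sketch below runs a genetic algorithm on the OneMax toy problem and adapts the mutation rate from the population diversity; the crude piecewise rule is only a stand-in for a fuzzy controller, and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def onemax_ga(n_bits=40, pop_size=30, generations=200):
    """Genetic algorithm with a mutation rate adapted to population diversity.
    The piecewise rule below is a crude stand-in for a fuzzy controller:
    low diversity -> raise mutation, high diversity -> lower it (sketch only)."""
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    p_mut = 1.0 / n_bits
    for _ in range(generations):
        fitness = pop.sum(axis=1)                      # OneMax: count of 1s
        diversity = pop.std(axis=0).mean()             # 0 when all individuals are identical
        if diversity < 0.2:
            p_mut = min(0.2, p_mut * 1.5)              # inject variation
        elif diversity > 0.4:
            p_mut = max(1.0 / n_bits, p_mut * 0.75)    # favor exploitation
        # tournament selection, one-point crossover, bit-flip mutation
        new_pop = []
        for _ in range(pop_size):
            a, b = rng.integers(pop_size, size=2)
            parent1 = pop[a] if fitness[a] >= fitness[b] else pop[b]
            c, d = rng.integers(pop_size, size=2)
            parent2 = pop[c] if fitness[c] >= fitness[d] else pop[d]
            cut = rng.integers(1, n_bits)
            child = np.concatenate([parent1[:cut], parent2[cut:]])
            flips = rng.random(n_bits) < p_mut
            child[flips] ^= 1
            new_pop.append(child)
        pop = np.array(new_pop)
    return pop.sum(axis=1).max()

print("best fitness:", onemax_ga())                    # ideally equal to n_bits
```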

Relevance:

10.00%

Publisher:

Abstract:

Antenna arrays are able to provide high and controlled directivity, which is suitable for radio base stations, radar systems, and point-to-point or satellite links. The optimization of an array design is usually a hard task because of the non-linear, multiobjective character of the problem, requiring the application of numerical techniques such as genetic algorithms. Therefore, in order to optimize the electronic control of the antenna array radiation pattern through real-coded genetic algorithms, a numerical tool was developed which is able to steer the array major lobe, reduce the side-lobe levels, cancel interfering signals in specific directions of arrival, and improve the antenna radiation performance. This was accomplished by using antenna theory concepts and optimization methods, mainly genetic algorithms, allowing the development of a numerical tool with creative gene codification and crossover rules, which is one of the most important contributions of this work. The efficiency of the developed genetic algorithm tool is tested and validated in several antenna and propagation applications. It was observed that the numerical results meet the specified requirements, showing the ability and capacity of the developed tool to handle the considered problems, as well as a great perspective for application in future work.
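
A minimal sketch of the kind of objective such a real-coded GA would optimize: the array factor of a uniform linear array and a fitness rewarding gain toward a steering direction while penalizing gain toward an interference direction; the element spacing, angles and fitness definition are assumptions for illustration, not the thesis's formulation:

```python
import numpy as np

def array_factor(weights, theta, d_over_lambda=0.5):
    """Array factor of an N-element uniform linear array with complex
    excitation weights, evaluated at angles theta (radians from broadside)."""
    n = np.arange(len(weights))
    psi = 2 * np.pi * d_over_lambda * np.sin(theta)      # progressive phase per element
    steering = np.exp(1j * np.outer(n, psi))             # shape (N, len(theta))
    return np.abs(weights @ steering)

def fitness(weights, steer_deg=20.0, null_deg=-40.0):
    """Reward gain toward the desired direction and penalize gain toward an
    assumed interference direction (a GA would maximize this value)."""
    theta = np.deg2rad(np.linspace(-90, 90, 721))
    af = array_factor(weights, theta)
    af_db = 20 * np.log10(af / af.max() + 1e-12)
    gain_steer = af_db[np.argmin(np.abs(theta - np.deg2rad(steer_deg)))]
    gain_null = af_db[np.argmin(np.abs(theta - np.deg2rad(null_deg)))]
    return gain_steer - gain_null

# Example: 8 elements, phases steering the main beam toward +20 degrees
n = np.arange(8)
phases = -2 * np.pi * 0.5 * n * np.sin(np.deg2rad(20.0))
weights = np.exp(1j * phases)
print(f"fitness of steered weights: {fitness(weights):.1f} dB")
```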

Relevance:

10.00%

Publisher:

Abstract:

The present investigation comprises a study of Leonhard Euler and the pentagonal numbers in his article Mirabilibus Proprietatibus Numerorum Pentagonalium (E524). After a brief review of Euler's life and work, we analyze the mathematical concepts covered in that article as well as its historical context. For this purpose, we explain the concept of figurate numbers, showing how they are generated as well as their geometric and algebraic representations. We then present a brief history of the search for the Eulerian pentagonal number theorem, based on Euler's correspondence on the subject with Daniel Bernoulli, Nikolaus Bernoulli, Christian Goldbach and Jean le Rond d'Alembert. At first Euler states the theorem, but admits that he does not know how to prove it. Finally, in a letter to Goldbach in 1750, he presents a demonstration, which is published in E541, along with an alternative proof. The expansion of the concept of pentagonal number is then explained and justified by comparing the geometric and algebraic representations of the new (expanded) pentagonal numbers with those of the traditional pentagonal numbers. We then explain the pentagonal number theorem, that is, the fact that the infinite product $(1-x)(1-x^2)(1-x^3)(1-x^4)(1-x^5)(1-x^6)(1-x^7)\cdots$ is equal to the infinite series $1 - x - x^2 + x^5 + x^7 - x^{12} - x^{15} + x^{22} + x^{26} - \cdots$, where the exponents are given by the (expanded) pentagonal numbers and the sign is plus or minus according to the pentagonal number (traditional or expanded) that appears as the exponent. We also mention that Euler relates the pentagonal number theorem to other parts of mathematics, such as the concept of partitions, generating functions, the theory of infinite products and the sum of divisors. We end with an explanation of Euler's proof of the pentagonal number theorem
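
For reference, a standard modern statement of the theorem discussed above (the notation here is the usual one, not necessarily Euler's):

\[
\prod_{n=1}^{\infty}\left(1-x^{n}\right)
\;=\;\sum_{k=-\infty}^{\infty}(-1)^{k}\,x^{k(3k-1)/2}
\;=\;1-x-x^{2}+x^{5}+x^{7}-x^{12}-x^{15}+x^{22}+x^{26}-\cdots,
\]

where the exponents $g_k = k(3k-1)/2$, for $k = 1, -1, 2, -2, 3, -3, \dots$, are the generalized (expanded) pentagonal numbers $1, 2, 5, 7, 12, 15, 22, 26, \dots$, and each pair of consecutive terms carries the sign $(-1)^{k}$.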

Relevance:

10.00%

Publisher:

Abstract:

Unfortunately, Brazilian politics has been characterized by a lack of ethics. With few exceptions, our representatives often behave in the exercise of power as if they were there to look after their own interests rather than public affairs. Despite the dissatisfaction this situation seems to trigger in society, the electorate does not manage to turn its anger into effective action at the polls in order to remove from the public scene those who fail to fulfill their mandates. Instead, the re-election of bad politicians has become commonplace. In this study we propose to discuss the matter in the light of traditional philosophical theories, selecting exponents of ethical thought from Antiquity to the Modern period. We place special emphasis on the case for amorality in the ideas of the Florentine thinker, that is, on Machiavelli's political doctrine.

Relevance:

10.00%

Publisher:

Abstract:

The great majority of analytical models for extragalactic radio sources assume self-similarity and can be classified into three types: I, II and III. We have developed a model that represents a generalization of most models found in the literature and have shown that these three types are particular cases of it. The model assumes that the area of the head of the jet varies with the jet size according to a power law and that the jet luminosity is a function of time. As is usually done, the basic hypothesis is that there is an equilibrium between the pressure exerted by the head of the jet and by the cocoon walls and the ram pressure of the ambient medium. The equilibrium equations and the energy conservation equation allow us to express the size and width of the source and the pressure in the cocoon as power laws and to find the respective exponents. All these assumptions can be used to calculate the evolution of the source size, width and radio luminosity. This can then be compared with the observed width-size relation for radio lobes and the power-size (P-D) diagram of both compact (GPS and CSS) and extended sources from the 3CR catalogue. In this work we introduce two important improvements as compared with a previous work: (1) we have put together a larger sample of both compact and extended radio sources
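
A minimal sketch of the type of balance relations on which such self-similar models rest (the notation below is illustrative, not necessarily that of the thesis): writing $D$ for the source size, $A_h \propto D^{\epsilon}$ for the head area and $L_j \propto t^{h}$ for the jet luminosity, the ram-pressure balance at the head and at the cocoon wall, together with energy conservation in the cocoon,

\[
\frac{L_j}{v_j\,A_h} \sim \rho_a\,\dot{D}^{2},
\qquad
p_c \sim \rho_a\,\dot{R}_c^{2},
\qquad
\frac{d}{dt}\!\left(\frac{p_c\,V_c}{\Gamma_c-1}\right) + p_c\,\frac{dV_c}{dt} = L_j(t),
\]

lead to power-law solutions $D \propto t^{\alpha_D}$, $R_c \propto t^{\alpha_R}$ and $p_c \propto t^{-\alpha_p}$, whose exponents are fixed by $\epsilon$, $h$ and the slope of the ambient density profile $\rho_a \propto r^{-\beta}$.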

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we study some problems related to petroleum reservoirs using methods and concepts of Statistical Physics. The thesis can be divided into two parts. The first one introduces a study of the percolation problem on a random multifractal support, motivated by its potential application in the modelling of oil reservoirs. We developed a heterogeneous and anisotropic grid that follows a random multifractal distribution of its sites. We then determine the percolation threshold for this grid, the fractal dimension of the percolating cluster and the critical exponents β and ν. In the second part, we propose an alternative systematics for modelling and simulating oil reservoirs. We introduce a statistical model based on a stochastic formulation of Darcy's law. In this model, the distribution of permeabilities is locally equivalent to the basic model of bond percolation
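
A sketch of the kind of stochastic Darcy formulation referred to above (the binary permeability distribution is an illustrative assumption, not necessarily the one adopted in the thesis):

\[
\vec{q}(\vec{r}) = -\frac{k(\vec{r})}{\mu}\,\nabla p(\vec{r}),
\qquad
k(\vec{r}) =
\begin{cases}
k_0 & \text{with probability } p,\\
0 & \text{with probability } 1-p,
\end{cases}
\]

so that, locally, a bond conducts flow with probability $p$ and is blocked otherwise, reproducing the connectivity structure of ordinary bond percolation.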

Relevance:

10.00%

Publisher:

Abstract:

High-precision calculations of correlation functions and order parameters were performed in order to investigate the critical properties of several two-dimensional ferromagnetic systems: (i) the q-state Potts model; (ii) the isotropic Ashkin-Teller model; (iii) the spin-1 Ising model. We deduced exact relations connecting specific damages (the difference between two microscopic configurations of a model) and the above-mentioned thermodynamic quantities, which permit their numerical calculation by computer simulation using any ergodic dynamics. The results obtained (critical temperatures and exponents) reproduced all the known values, with agreement up to several significant figures; of particular relevance were the estimates along the Baxter critical line (Ashkin-Teller model), where the exponents vary continuously. We also showed that this approach is less sensitive to finite-size effects than the standard Monte Carlo method. This analysis shows that the present approach produces results as accurate as, or more accurate than, the usual Monte Carlo simulation, and can be useful to investigate these models in circumstances where their behavior is not yet fully understood
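
To illustrate the damage idea used above, the sketch below evolves two replicas of a 2D Ising configuration (the simplest stand-in for the models listed), identical except at one site, with heat-bath dynamics driven by the same random numbers, and measures the Hamming distance between them; the exact damage-correlation relations derived in the thesis are not reproduced here, and lattice size, temperature and sweep count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def heat_bath_sweep(spins, beta, randoms):
    """One heat-bath sweep of a 2D Ising configuration using a fixed array
    of random numbers, so two replicas can share the same noise."""
    L = spins.shape[0]
    for i in range(L):
        for j in range(L):
            h = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                 + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
            spins[i, j] = 1 if randoms[i, j] < p_up else -1

# Damage spreading: evolve two replicas, identical except at one site,
# with the SAME random numbers, and track the Hamming distance (damage).
L, beta = 16, 0.5                      # beta above the square-lattice critical value ~0.4407
a = rng.choice([-1, 1], size=(L, L))
b = a.copy()
b[0, 0] *= -1                          # initial damage at a single site
for sweep in range(200):
    r = rng.random((L, L))             # shared noise for both replicas
    heat_bath_sweep(a, beta, r)
    heat_bath_sweep(b, beta, r)
damage = np.mean(a != b)
print(f"damage after 200 sweeps: {damage:.4f}")
```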

Relevance:

10.00%

Publisher:

Abstract:

The ferromagnetic and antiferromagnetic Ising model on a two-dimensional inhomogeneous lattice characterized by two exchange constants (J1 and J2) is investigated. The lattice allows a continuous interpolation between the uniform square (J2 = 0) and triangular (J2 = J1) lattices. By performing Monte Carlo simulations using the sequential Metropolis algorithm, we calculate the magnetization and the magnetic susceptibility on lattices of different sizes. Applying the finite-size scaling method through a data collapse, we obtained the critical temperatures as well as the critical exponents of the model for several values of the parameter α = J2/J1 in the range [0, 1]. The ferromagnetic case shows a linearly increasing behavior of the critical temperature Tc for increasing values of α. Concerning the antiferromagnetic system, we observe a linear (decreasing) behavior of Tc only for small values of α; in the range [0.6, 1], where frustration effects are more pronounced, the critical temperature Tc decays more quickly, possibly in a non-linear way, to the limiting value Tc = 0, corresponding to the homogeneous fully frustrated antiferromagnetic triangular case.
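
As a reminder of the standard finite-size scaling forms that underlie such a data collapse (with $t = (T - T_c)/T_c$ and $L$ the linear lattice size):

\[
m(T, L) = L^{-\beta/\nu}\,\tilde{m}\!\left(t\,L^{1/\nu}\right),
\qquad
\chi(T, L) = L^{\gamma/\nu}\,\tilde{\chi}\!\left(t\,L^{1/\nu}\right),
\]

so that plotting $m\,L^{\beta/\nu}$ and $\chi\,L^{-\gamma/\nu}$ against $t\,L^{1/\nu}$ collapses the curves for different $L$ onto universal functions when $T_c$, $\beta$, $\gamma$ and $\nu$ are chosen correctly.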

Relevance:

10.00%

Publisher:

Abstract:

The usual Ashkin-Teller (AT) model is obtained as a superposition of two Ising models coupled through a four-spin interaction term. In two dimensions the AT model displays a line of fixed points along which the exponents vary continuously. On this line the model becomes soluble via a mapping onto the Baxter model. Such richness of multicritical behavior led Grest and Widom to introduce the N-color Ashkin-Teller model (N-AT). Those authors made an extensive analysis of the model, both in the isotropic and in the anisotropic cases, by several analytical and computational methods. In the present work we define a more general version of the 3-color Ashkin-Teller model by introducing a 6-spin interaction term. We investigate the corresponding symmetry structure presented by our model together with an analysis of the possible phase diagrams obtained by real-space renormalization-group techniques. The phase diagrams are obtained at finite temperature in the region where the ferromagnetic behavior is predominant. Through the use of the concept of transmissivities we obtain the recursion relations on some periodic as well as aperiodic hierarchical lattices. In a first analysis we consider the two-color Ashkin-Teller model in order to obtain results which could be used as a guide for our main purpose. In the anisotropic case the model had previously been studied on the Wheatstone bridge by Claudionor Bezerra in his Master's dissertation. By using more appropriate computational resources we obtained the isomorphic critical surfaces described in Bezerra's work but not properly identified there. We also analyzed the isotropic version on an aperiodic hierarchical lattice, and we showed how the geometric fluctuations are affected by such aperiodicity and what its consequences are for the corresponding critical behavior. These analyses were carried out through appropriate definitions of transmissivities. Finally, we considered the modified 3-AT model with 6-spin couplings. With the inclusion of such a term the model becomes more attractive from the symmetry point of view. For some hierarchical lattices we derived general recursion relations for the anisotropic version of the model (3-AAT), from which the corresponding equations for the isotropic version (3-IAT) can be obtained. The 3-IAT was studied extensively in the whole region where the ferromagnetic couplings are dominant. The fixed points and the respective critical exponents were determined. By analyzing the attraction basins of such fixed points we were able to find the three-parameter phase diagram (temperature × 4-spin coupling × 6-spin coupling). We could identify fixed points corresponding to the universality classes of the Ising and of the 4- and 8-state Potts models. We also obtained a fixed point which seems to be a sort of reminiscence of a 6-state Potts fixed point, as well as a possible indication of the existence of a Baxter line. Some unstable fixed points which do not belong to any of the aforementioned q-state Potts universality classes were also found
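
For orientation, a Hamiltonian of the general form considered here can be written as follows (the notation and normalization are illustrative; the thesis's precise definition may differ):

\[
-\beta\mathcal{H} \;=\; \sum_{\langle i,j\rangle}\left[\,K_2\sum_{a=1}^{3}\sigma_i^{a}\sigma_j^{a}
\;+\;K_4\sum_{a<b}\sigma_i^{a}\sigma_j^{a}\,\sigma_i^{b}\sigma_j^{b}
\;+\;K_6\,\sigma_i^{1}\sigma_j^{1}\,\sigma_i^{2}\sigma_j^{2}\,\sigma_i^{3}\sigma_j^{3}\right],
\]

where $\sigma_i^{a}=\pm 1$ are the three Ising "colors" at site $i$, $K_2$ is the two-spin (Ising) coupling, $K_4$ the usual four-spin Ashkin-Teller coupling, and $K_6$ the six-spin term that generalizes the 3-color model.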