11 results for Markov chains. Convergence. Evolutionary Strategy. Large Deviations
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
In this work we study a random evolutionary strategy called MOSES, introduced in 1996 by François. Asymptotic results for this strategy, namely the behavior of the stationary distributions of the Markov chain associated with it, were derived by François in 1998 using the theory of Freidlin and Wentzell [8]. These results are presented here in detail. Moreover, we observe that an alternative approach to the convergence of this strategy is possible without resorting to the theory of Freidlin and Wentzell, yielding almost-sure visits of the strategy to uniform populations that contain the minimum. Some Matlab simulations are also presented.
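The abstract does not reproduce the MOSES operators themselves; as a heavily hedged sketch of a mutation-or-selection strategy of this general flavour (the objective, search space and operators below are illustrative assumptions, not François's definitions), one can check by simulation that such a dynamic keeps visiting the uniform population sitting at the minimum:

```python
import random

def moses_like_sketch(steps=20000, pop_size=4, seed=0):
    """Generic mutation-or-selection strategy (illustrative only, not
    Francois's exact MOSES operators): each step either mutates one
    individual or collapses the population onto its current best."""
    rng = random.Random(seed)
    f = lambda x: (x - 3) ** 2            # toy objective, minimum at x = 3
    pop = [rng.randrange(10) for _ in range(pop_size)]
    visited_uniform_min = False
    for _ in range(steps):
        if rng.random() < 0.5:            # mutation: perturb one individual
            i = rng.randrange(pop_size)
            pop[i] = min(9, max(0, pop[i] + rng.choice((-1, 1))))
        else:                             # selection: copy the best everywhere
            best = min(pop, key=f)
            pop = [best] * pop_size
        if len(set(pop)) == 1 and pop[0] == 3:
            visited_uniform_min = True    # uniform population at the minimum
    return visited_uniform_min
```

With enough steps, the simulated strategy does visit the uniform population concentrated on the minimizer, which is the event whose almost-sure occurrence the work establishes.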
Abstract:
In this work we study the consistency of a class of kernel estimates of a density f(.) for Markov chains with general state space E ⊂ R^d. The study is divided into two parts: in the first, f(.) is a stationary density of the chain; in the second, f(x)ν(dx) is the limit distribution of a geometrically ergodic chain.
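As a minimal illustration of the first case (kernel estimation of a stationary density), the sketch below simulates an AR(1) chain, whose stationary law N(0, 1/(1-a²)) is known in closed form, and compares a Gaussian-kernel estimate with the true density at one point; the chain, bandwidth and sample size are illustrative choices:

```python
import math, random

def kernel_density_estimate(samples, x, h):
    """Standard kernel estimate f_n(x) = (1/nh) sum K((x - X_i)/h),
    here with a Gaussian kernel K."""
    n = len(samples)
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(k((x - xi) / h) for xi in samples) / (n * h)

def simulate_ar1(n, a=0.5, seed=1):
    """AR(1) chain X_{t+1} = a X_t + eps_t: a geometrically ergodic Markov
    chain with stationary distribution N(0, 1/(1 - a^2))."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

chain = simulate_ar1(20000)
est = kernel_density_estimate(chain, 0.0, h=0.3)
true_sd = math.sqrt(1 / (1 - 0.5 ** 2))          # stationary std for a = 0.5
true_density = 1 / (true_sd * math.sqrt(2 * math.pi))
```

The estimate at 0 lands close to the true stationary density, the kind of pointwise agreement that the consistency results formalize.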
Abstract:
Genetic Algorithms (GA) and Simulated Annealing (SA) are algorithms designed to find the maximum or minimum of a function that represents some characteristic of the process being modeled. Both algorithms have mechanisms that allow them to escape local optima; however, they evolve over time in completely different ways. In its search process, SA works with a single point, always generating from it a new candidate solution that is tested and may or may not be accepted, whereas the GA works with a set of points, called a population, from which it generates another population that is always accepted. What the two algorithms have in common is that the way the next point or the next population is generated obeys stochastic properties. In this work we show that the mathematical theory describing the evolution of these algorithms is the theory of Markov chains: the GA is described by a homogeneous Markov chain, while SA is described by a non-homogeneous Markov chain. Finally, some computational examples comparing the performance of the two algorithms are presented.
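The computational comparison described above can be miniaturized as follows. This is a toy sketch on a one-dimensional integer objective (the cooling schedule, operators and parameters are illustrative assumptions, not the ones used in the work): SA moves a single point under a time-varying acceptance rule (a non-homogeneous chain), while the GA replaces a whole population each generation under fixed rules (a homogeneous chain).

```python
import math, random

def sa_minimize(f, x0, steps=5000, seed=2):
    """Simulated annealing: a single point whose acceptance probability
    changes with the temperature T_t, i.e. a non-homogeneous Markov chain."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for t in range(1, steps + 1):
        temp = 1.0 / t                       # cooling schedule (assumption)
        y = x + rng.choice((-1, 1))          # propose a neighbour
        fy = f(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / temp):
            x, fx = y, fy                    # accept; otherwise keep x
    return x

def ga_minimize(f, pop, gens=200, seed=3):
    """Toy GA: selection keeps the better half, mutation produces children;
    with fixed parameters this is a homogeneous Markov chain on populations."""
    rng = random.Random(seed)
    for _ in range(gens):
        pop = sorted(pop, key=f)[: len(pop) // 2]               # selection
        pop = pop + [p + rng.choice((-1, 0, 1)) for p in pop]   # mutation
    return min(pop, key=f)

f = lambda x: (x - 7) ** 2                   # toy objective, minimum at 7
x_sa = sa_minimize(f, 0)
x_ga = ga_minimize(f, list(range(0, 20, 3)))
```

On this convex toy problem both chains settle on the minimizer; the interesting differences between the two algorithms only appear on multimodal objectives.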
Abstract:
In this work we study the strong consistency of a class of estimates of the transition density of a Markov chain with general state space E ⊂ R^d. The strong consistency of the estimates of the transition density is obtained from the strong consistency of the kernel estimates of both the marginal density p(.) of the chain and the joint density q(., .). The Markov chain is assumed to be homogeneous, uniformly ergodic and to possess a stationary density p(.).
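A plug-in sketch of this class of estimators, assuming Gaussian kernels and an AR(1) chain as test case (all choices illustrative): the transition density estimate is the ratio of a 2-D kernel estimate of the joint density q(.,.) of (X_t, X_{t+1}) to a kernel estimate of the marginal p(.).

```python
import math, random

def gauss_k(u):
    """Standard Gaussian kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def transition_density_estimate(chain, x, y, h):
    """Plug-in estimate: 2-D kernel estimate of the joint density q(x, y)
    of consecutive pairs, divided by a kernel estimate of the marginal p(x)."""
    n = len(chain) - 1
    q = sum(gauss_k((x - chain[t]) / h) * gauss_k((y - chain[t + 1]) / h)
            for t in range(n)) / (n * h * h)
    p = sum(gauss_k((x - chain[t]) / h) for t in range(n)) / (n * h)
    return q / p

# Test chain: X_{t+1} = 0.5 X_t + eps_t with standard normal noise, so the
# true transition density at (0, 0) is the N(0, 1) density at 0, 1/sqrt(2 pi).
rng = random.Random(7)
chain, state = [], 0.0
for _ in range(20000):
    state = 0.5 * state + rng.gauss(0.0, 1.0)
    chain.append(state)
est = transition_density_estimate(chain, 0.0, 0.0, h=0.3)
```

The ratio construction mirrors the structure of the consistency proof: almost-sure convergence of the numerator and denominator estimates gives almost-sure convergence of the ratio.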
Abstract:
The central objects in the study of non-homogeneous Markov chains are the concepts of weak and strong ergodicity. A chain is weakly ergodic if the dependence on the initial distribution vanishes with time, and strongly ergodic if, in addition, it converges in distribution. Most theoretical results on strong ergodicity assume some knowledge of the limit behavior of the stationary distributions. In this work, we collect some general results on weak and strong ergodicity for chains with countable state space, and also study the asymptotic behavior of the stationary distributions of a particular type of Markov chain with finite state space, called Markov chains with rare transitions.
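Weak ergodicity, the loss of dependence on the initial distribution, can be illustrated numerically. The 2-state time-varying kernels below are an arbitrary example chosen so that mixing does not die out; two copies of the chain started from opposite initial laws merge in total variation:

```python
def step(dist, P):
    """One step of a 2-state chain: the row vector dist multiplied by P."""
    return (dist[0] * P[0][0] + dist[1] * P[1][0],
            dist[0] * P[0][1] + dist[1] * P[1][1])

def tv_distance(p, q):
    """Total variation distance between two distributions on {0, 1}."""
    return 0.5 * (abs(p[0] - q[0]) + abs(p[1] - q[1]))

def P_t(t):
    """Time-varying kernel: the mixing rate oscillates with t but stays
    bounded away from 0, which is enough for weak ergodicity here."""
    eps = 0.25 + 0.1 * ((-1) ** t)
    return [[1 - eps, eps], [eps, 1 - eps]]

mu, nu = (1.0, 0.0), (0.0, 1.0)   # two extreme initial distributions
for t in range(50):
    P = P_t(t)
    mu, nu = step(mu, P), step(nu, P)
```

Each step contracts the total variation distance by |1 - 2ε_t|, so after 50 steps the two trajectories are numerically indistinguishable, even though the chain is non-homogeneous.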
Abstract:
In this work, we present our understanding of the article by Aksoy [1], which uses Markov chains to model the flow of intermittent rivers. We then applied his model to generate data for intermittent streamflows, based on a data set of Brazilian streams. After that, we built a hidden Markov model as a proposed new approach to the problem of simulating such flows. We used the Gamma distribution to simulate the increases and decreases in river flow, along with a two-state Markov chain. Our motivation for using a hidden Markov model comes from the possibility of obtaining the same information that Aksoy's model provides, but with a single tool capable of treating the problem as a whole rather than through multiple independent processes.
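A minimal sketch of the simulation scheme described above, with illustrative (not calibrated) transition probabilities and Gamma parameters: a two-state wet/dry Markov chain decides whether each day's flow is zero or a Gamma draw.

```python
import random

def simulate_streamflow(days, p_wd=0.3, p_dw=0.2, shape=2.0, scale=1.5, seed=4):
    """Two-state (wet/dry) Markov chain driving an intermittent streamflow:
    on wet days the flow is a Gamma(shape, scale) draw, on dry days it is 0.
    p_wd = P(wet -> dry), p_dw = P(dry -> wet); all parameters illustrative."""
    rng = random.Random(seed)
    state = "dry"
    flows = []
    for _ in range(days):
        if state == "wet":
            state = "dry" if rng.random() < p_wd else "wet"
        else:
            state = "wet" if rng.random() < p_dw else "dry"
        flows.append(rng.gammavariate(shape, scale) if state == "wet" else 0.0)
    return flows
```

The long-run fraction of wet days approaches the stationary probability p_dw / (p_dw + p_wd), which is how such a model reproduces the intermittency statistics of an observed record.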
Abstract:
The idea of considering imprecision in probabilities is old, beginning with the work of George Boole, who in 1854 sought to reconcile classical logic, which allows the modeling of complete ignorance, with probabilities. In 1921, John Maynard Keynes made explicit use of intervals in his book to represent imprecision in probabilities. But it was only with the work of Walley in 1991 that principles were established which should be respected by any probability theory that deals with imprecision. With the emergence of fuzzy set theory, introduced by Lotfi Zadeh in 1965, another way of dealing with uncertainty and imprecision of concepts became available. Several ways of bringing Zadeh's ideas into probability were quickly proposed, to handle imprecision either in the events to which probabilities are assigned or in the values of the probabilities themselves. In particular, from 2003 onwards James Buckley developed a probability theory in which the values of the probabilities are fuzzy numbers. This fuzzy probability follows principles analogous to Walley's imprecise probabilities. On the other hand, the use of real numbers between 0 and 1 as truth degrees, as originally proposed by Zadeh, has the drawback of employing very precise values to deal with uncertainty (how can one distinguish an element that satisfies a property to degree 0.423 from one that satisfies it to degree 0.424?). This motivated the development of several extensions of fuzzy set theory that incorporate some kind of imprecision. This work considers the extension proposed by Krassimir Atanassov in 1983, which adds an extra degree of uncertainty to model the hesitation involved in assigning a membership degree: one value indicates the degree to which the object belongs to the set, while the other indicates the degree to which it does not belong. In Zadeh's fuzzy set theory, this non-membership degree is, by default, the complement of the membership degree.
Thus, in Atanassov's approach the non-membership degree is to some extent independent of the membership degree, and the difference between the non-membership degree and the complement of the membership degree reveals the hesitation at the moment of assigning a membership degree. This extension is today called Atanassov's intuitionistic fuzzy set theory. It is worth noting that the term "intuitionistic" here has no relation to the term as used in the context of intuitionistic logic. In this work, two proposals for interval probability are developed: the restricted interval probability and the unrestricted interval probability. Two notions of fuzzy probability are also introduced, the restricted fuzzy probability and the unrestricted fuzzy probability, and finally two notions of intuitionistic fuzzy probability are introduced: the restricted intuitionistic fuzzy probability and the unrestricted intuitionistic fuzzy probability.
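The Atanassov construction described above is easy to state in code. This small sketch (the names are illustrative) encodes a membership/non-membership pair and its hesitation degree, with the classical fuzzy case recovered when the hesitation is zero:

```python
def intuitionistic_element(mu, nu):
    """Atanassov pair: membership mu and non-membership nu with mu + nu <= 1.
    The hesitation degree is pi = 1 - mu - nu; Zadeh's fuzzy sets are the
    special case nu = 1 - mu, i.e. pi = 0."""
    if not (0 <= mu <= 1 and 0 <= nu <= 1 and mu + nu <= 1):
        raise ValueError("invalid Atanassov pair: need 0 <= mu, nu and mu + nu <= 1")
    return {"mu": mu, "nu": nu, "pi": 1 - mu - nu}
```

For example, the pair (0.6, 0.3) carries a hesitation of 0.1, whereas (0.6, 0.4) has no hesitation and behaves as an ordinary fuzzy membership degree.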
Abstract:
The use of behavioural indicators of suffering and welfare in captive animals has produced ambiguous results. In comparisons between groups, those in worse condition tend to exhibit an increased overall rate of Behaviours Potentially Indicative of Stress (BPIS), but in comparisons within groups, individuals differ in their stress-coping strategies. This dissertation presents analyses to unravel the behavioural profile of a sample of 26 captive capuchin monkeys of three species (Sapajus libidinosus, S. flavius and S. xanthosternos), kept in different enclosure types. In total, 147.17 hours of data were collected. We explored four types of analysis: activity budgets, diversity indexes, Markov chains and sequence analyses, and social network analyses, resulting in nine indexes of behavioural occurrence and organization. In Chapter One we explore group differences. The results support predictions of minor sex and species differences and of major differences in behavioural profile due to enclosure type: i. individuals in less enriched enclosures exhibited a more diverse BPIS repertoire and a decreased probability of sequences of six Genus Normative Behaviours; ii. the number of most probable behavioural transitions including at least one BPIS was higher in less enriched enclosures; iii. prominence indexes indicate that BPIS function as dead ends of behavioural sequences, and the prominence of three BPIS (pacing, self-directed, active I) was higher in less enriched enclosures. Overall, these data do not support the view of BPIS as a repetitive pattern with a mantra-like calming effect. Rather, the picture that emerges is more supportive of BPIS as activities that disrupt the organization of behaviour, introducing "noise" that compromises an optimal activity budget. In Chapter Two we explored individual differences in stress-coping strategies. We classified individuals along six axes of exploratory behaviour.
These axes were only weakly correlated, indicating low correlation among behavioural indicators of syndromes. Nevertheless, the results are suggestive of two broad stress-coping strategies, similar to the bold/proactive and shy/reactive pattern: more exploratory capuchin monkeys exhibited increased prominence of pacing, aberrant sexual display and active I BPIS, while less active animals exhibited an increased probability of significant sequences involving at least one BPIS and increased prominence of their own stereotypy. Capuchin monkeys are known for their cognitive capacities and behavioural flexibility; therefore, the search for a consistent set of behavioural indicators of welfare and individual differences requires further studies and larger data sets. With this work we aim to contribute to the design of scientifically grounded and statistically sound protocols for the collection of behavioural data that permit comparability of results and meta-analyses, whatever theoretical perspective the interpretation may adopt.
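The Markov-chain and sequence analyses mentioned above start from empirical transition probabilities between observed behaviours. A minimal sketch, using an invented toy sequence (the behaviour labels are illustrative, not the study's data):

```python
from collections import Counter

def transition_probs(sequence):
    """Empirical first-order Markov transition probabilities estimated from
    an observed behaviour sequence: counts of consecutive pairs divided by
    the number of times the first behaviour occurs (excluding the last step)."""
    pairs = Counter(zip(sequence, sequence[1:]))
    totals = Counter(sequence[:-1])
    return {(a, b): c / totals[a] for (a, b), c in pairs.items()}

# toy observation record (illustrative labels)
seq = ["rest", "forage", "rest", "pacing", "pacing", "forage", "rest"]
probs = transition_probs(seq)
```

From such a matrix one can then ask which behaviours act as "dead ends", i.e. states that are frequently entered but rarely followed by transitions back into the normative repertoire.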
Abstract:
Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that these problems cannot be solved in polynomial time. Initially, such solutions were based on heuristics; currently, metaheuristics are used more often for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of a heuristic called "Operon", for the construction of the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, mainly using statistical methodology, namely cluster analysis and principal component analysis; and the use of statistical analyses adequate for evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains that promote an "intelligent" search of the solution space. The applications target the Traveling Salesman Problem (TSP), using a transgenetic algorithm known as ProtoG. A strategy is also proposed for the renewal of part of the chromosome population, triggered by the adoption of a minimum threshold on the coefficient of variation of the fitness function of the individuals, computed over the population. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms and a Simulated Annealing algorithm. Three performance analyses of these algorithms are proposed. The first is carried out through logistic regression, based on the probability of the algorithm under test finding an optimal solution for a TSP instance. The second is carried out through survival analysis, based on the distribution of the observed execution time until an optimal solution is reached.
The third is carried out by means of a non-parametric analysis of variance, considering the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted, applied to sixty-one instances of the Euclidean TSP with sizes of up to 1,655 cities. The first two experiments deal with the adjustment of four parameters used in the ProtoG algorithm in an attempt to improve its performance. The last four were undertaken to evaluate the performance of ProtoG in comparison with the three other algorithms. For these sixty-one instances, statistical tests provide evidence that ProtoG performs better than the three other algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three experiments, in which performance was evaluated through the PES, the average PES obtained with ProtoG was less than 1% in almost half of the instances, the largest average, equal to 3.52%, being reached for an instance of 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since it is not rare to find average PES values greater than 10% reported in the literature for instances of this size.
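The PES measure used in the third analysis is a simple ratio; as a sketch (the numbers below are illustrative, not results from the experiments):

```python
def pes(found, best_known):
    """Percent Error of the Solution: percentage by which the tour length
    found exceeds the best solution known in the literature."""
    return 100.0 * (found - best_known) / best_known
```

For instance, a tour of length 103.52 against a best-known value of 100.0 gives a PES of 3.52%.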
Abstract:
The objective is to analyze the relationship between risk and the number of stocks in a portfolio for an individual investor when stocks are chosen by the "naive strategy". To this end, we carried out an experiment in which individuals selected stocks in order to reproduce this relationship. The 126 participants were told that the risk of the first chosen stock would be the average of the standard deviations of all single-stock portfolios, and that the same procedure would be applied to portfolios composed of two, three and so on, up to 30 stocks. They selected the stocks they wanted in their portfolios without the support of any financial analysis. For comparison, we also ran a hypothetical simulation of 126 investors who selected stocks from the same universe by means of a random number generator; thus, each real participant is matched with a random hypothetical investor facing the same opportunity set. Patterns were observed in the portfolios of individual participants, characterizing the risk curves for subgroups of the sample. Because such groupings are somewhat arbitrary, a more objective measure of behavior was used: a simple linear regression for each participant, predicting the variance of the portfolio as a function of the number of stocks. In addition, we conducted a pooled cross-section regression on all observations. The expected pattern holds on average but not for most individuals, many of whom effectively "de-diversify" when adding seemingly random stocks; moreover, the results are only slightly worse when a random number generator is used. This finding challenges the belief that only a small number of stocks is needed for diversification, and shows that this belief applies only to large samples. The implications are important, since many individual investors hold few stocks in their portfolios.
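The random (hypothetical-investor) arm of the experiment can be sketched as follows. The simulated return universe and its parameters are illustrative assumptions; the point is only that the average risk of naively chosen, equally weighted portfolios falls as the number of stocks grows:

```python
import math, random

rng = random.Random(5)
# simulated universe: 30 independent stocks, 250 daily returns each (illustrative)
universe = [[rng.gauss(0.0, 0.02) for _ in range(250)] for _ in range(30)]

def series_sd(series):
    """Standard deviation of a return series."""
    m = sum(series) / len(series)
    return math.sqrt(sum((x - m) ** 2 for x in series) / len(series))

def avg_portfolio_sd(n_assets, n_draws=200):
    """Average risk (std of daily returns) of equally weighted portfolios of
    n_assets stocks drawn at random, mimicking the naive strategy."""
    total = 0.0
    for _ in range(n_draws):
        picks = rng.sample(universe, n_assets)
        port = [sum(day) / n_assets for day in zip(*picks)]   # daily portfolio return
        total += series_sd(port)
    return total / n_draws
```

With independent stocks the average risk decays roughly like 1/sqrt(n); the experiment's finding is that the curves of individual human participants often fail to follow this average pattern.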
Abstract:
In this work, Markov chains are the tool used for the modeling and convergence analysis of the genetic algorithm, both in its standard version and in the other versions the genetic algorithm admits. In addition, we intend to compare the performance of the standard version with a fuzzy version, believing that the latter gives the genetic algorithm a strong ability to find a global optimum, characteristic of global optimization algorithms. The choice of this algorithm is due to the fact that, over the past thirty years, it has become one of the most important tools for finding solutions to optimization problems. This choice is justified by its effectiveness in finding good-quality solutions, given that a good-quality solution is acceptable when there may be no algorithm able to obtain the optimal solution for many of these problems. The algorithm can be configured in different ways, depending not only on how the problem is represented but also on how some of its operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Hence, to achieve good performance with this algorithm, an adequate criterion for choosing its parameters is necessary, especially the mutation rate and the crossover rate, or even the population size. It is important to remember that in implementations in which the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain, whereas when the parameters are allowed to vary during the execution, the modeling Markov chain becomes non-homogeneous. Therefore, in an attempt to improve the performance of the algorithm, some studies have tried to set the parameters through strategies that capture intrinsic characteristics of the problem.
These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns related to good-quality solutions and, at the same time, discard low-quality patterns. Strategies for feature extraction can use either crisp or fuzzy techniques, the latter implemented through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm, both in its standard version and in the others. To evaluate the performance of the non-homogeneous algorithm, tests are carried out comparing the standard genetic algorithm with the fuzzy genetic algorithm, in which the mutation rate is adjusted by a fuzzy controller. To this end, we choose optimization problems whose number of solutions grows exponentially with the number of variables.
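The comparison described above pairs a fixed-rate GA with one whose mutation rate is set by a fuzzy controller. The sketch below replaces the fuzzy controller with a crude crisp rule on population diversity (an illustrative stand-in, not the dissertation's controller); even this simple rule makes the parameters time-varying, so the modeling Markov chain is non-homogeneous:

```python
import random

def adaptive_ga(f, pop, gens=300, seed=6):
    """GA whose mutation rate is adjusted each generation from population
    diversity: a crisp stand-in for a fuzzy controller. Varying the rate
    during execution makes the underlying Markov chain non-homogeneous."""
    rng = random.Random(seed)
    for _ in range(gens):
        diversity = len(set(pop)) / len(pop)
        mut_rate = 0.5 if diversity < 0.3 else 0.1   # low diversity -> mutate more
        pop = sorted(pop, key=f)[: len(pop) // 2]    # selection (elitist: parents kept)
        children = []
        for p in pop:
            c = p + rng.choice((-1, 1)) if rng.random() < mut_rate else p
            children.append(c)
        pop = pop + children
    return min(pop, key=f)
```

On a toy convex objective this elitist scheme settles on the minimizer while the mutation rate reacts to the diversity of the population; a genuine fuzzy controller would replace the threshold rule with membership functions and inference rules.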