979 results for variable length Markov chains
Abstract:
In this paper we consider hybrid (fast stochastic approximation and deterministic refinement) algorithms for Matrix Inversion (MI) and solving Systems of Linear Algebraic Equations (SLAE). Monte Carlo methods are used for the stochastic approximation, since they are known to be very efficient at finding a quick rough approximation of an element or a row of the inverse matrix, or a component of the solution vector. We show how the stochastic approximation of the MI can be combined with a deterministic refinement procedure to obtain the MI with the required precision, and how the SLAE can then be solved using the MI. We employ a splitting A = D - C of a given non-singular matrix A, where D is a diagonally dominant matrix and C is a diagonal matrix. In our algorithm for solving SLAE and MI, different choices of D can be considered in order to control the norm of the iteration matrix T = D^{-1}C of the resulting SLAE and to minimize the number of Markov chains required to reach a given precision. Finally, we run the algorithms on a mini-Grid and investigate their efficiency depending on the granularity. Corresponding experimental results are presented.
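For intuition, here is a minimal Python sketch of this kind of hybrid scheme; the walk counts, the toy matrix and the use of a Newton-Schulz iteration as the deterministic refinement are illustrative assumptions, not the authors' implementation. Random walks estimate (I - T)^{-1} through the Neumann series, and the rough inverse is then refined deterministically:

```python
import numpy as np

def mc_inverse(T, walks_per_row=5000, max_steps=100, seed=0):
    """Crude Monte Carlo estimate of (I - T)^{-1} via the Neumann series
    I + T + T^2 + ..., valid when ||T|| < 1.  Row i is estimated by
    random walks (Markov chains) started in state i."""
    rng = np.random.default_rng(seed)
    n = T.shape[0]
    P = np.abs(T)
    s = P.sum(axis=1, keepdims=True)
    P = np.divide(P, s, out=np.zeros_like(P), where=s > 0)
    G = np.zeros((n, n))
    for i in range(n):
        for _ in range(walks_per_row):
            k, w = i, 1.0
            G[i, k] += w                      # m = 0 (identity) term
            for _ in range(max_steps):
                if s[k, 0] == 0.0:
                    break                     # absorbing state
                j = rng.choice(n, p=P[k])
                w *= T[k, j] / P[k, j]        # importance weight
                k = j
                G[i, k] += w
    return G / walks_per_row

def refine(A, X, iters=10):
    """Deterministic refinement of a rough inverse X ~ A^{-1} by the
    Newton-Schulz iteration X <- X (2I - A X)."""
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

# Toy usage: choose D as the diagonal of A (one possible splitting).
A = np.array([[4.0, 1.0, 0.5],
              [1.0, 5.0, 1.0],
              [0.5, 1.0, 4.0]])
D = np.diag(np.diag(A))
T = np.eye(3) - np.linalg.solve(D, A)   # T = D^{-1}C with C = D - A
X0 = mc_inverse(T) @ np.linalg.inv(D)   # rough approximation of A^{-1}
X = refine(A, X0)                       # refined inverse; x = X @ b solves the SLAE
```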
Abstract:
A reconfigurable scalar quantiser capable of accepting n-bit input data is presented. The data length n can be varied in the range 1...N-1 under partial run-time reconfiguration (p-RTR). Issues such as the improvement in throughput obtained by using p-RTR rather than full RTR for this reconfigurable quantiser on variable-length data are considered. The quantiser design, referred to as the priority quantiser (PQ), is then compared against a direct design of the quantiser (DIQ). It is shown that, for practical quantiser sizes, the PQ achieves better area usage when both are targeted onto the same FPGA. Other benefits are also identified.
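As a behavioural sketch of the priority structure (in Python rather than hardware, with threshold spacing and level count invented for illustration): the code word is the index of the first decision level the sample falls below, which maps naturally onto a priority encoder, and the input width n can change at run time:

```python
def quantise(sample, thresholds):
    """Priority-style scalar quantiser: return the index of the first
    decision level the sample falls below (a priority-encoder structure
    when realised in hardware)."""
    for code, level in enumerate(thresholds):
        if sample < level:
            return code
    return len(thresholds)

def levels_for_width(n, n_levels=8):
    """Evenly spaced decision levels for n-bit input data, n in 1..N-1."""
    full_scale = 1 << n
    return [full_scale * (k + 1) // n_levels for k in range(n_levels - 1)]

# The input width n varies at run time, mirroring the p-RTR datapath.
for n in (4, 6, 8):
    print(n, quantise((1 << n) // 3, levels_for_width(n)))
```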
Abstract:
The UK has a target of an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion owing to their higher resolution. The weakness of existing bottom-up models in capturing individual green technology buying behaviour has been identified. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods to incorporate buying behaviour within a domestic energy forecast model. Among the three methods, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, which demonstrates the feasibility of an agent approach. The model shows that an agent-based approach is promising as a means of predicting the effectiveness of various policy measures.
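To make the agent-based idea concrete, here is a deliberately minimal sketch; the decision rule, the subsidy effect and all parameter values are invented for illustration and are not the authors' prototype:

```python
import random

def simulate(n_agents=1000, years=30, subsidy=0.1, influence=0.5, seed=1):
    """Each agent decides yearly whether to adopt a green technology; the
    adoption probability mixes a small baseline, a subsidy effect and
    social influence from the current adoption fraction."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    history = []
    for _ in range(years):
        frac = sum(adopted) / n_agents
        p = min(1.0, 0.01 + 0.2 * subsidy + 0.1 * influence * frac)
        adopted = [a or rng.random() < p for a in adopted]
        history.append(sum(adopted) / n_agents)
    return history

# Compare final adoption under two policy settings (illustrative numbers).
print(simulate(subsidy=0.0)[-1], simulate(subsidy=0.5)[-1])
```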
Abstract:
Crop specimens archived in herbaria and old seed collections represent valuable resources for the analysis of plant genetic diversity and crop domestication. The ability to extract ancient DNA (aDNA) from such samples has recently allowed molecular genetic investigations to be undertaken on ancient materials. While analyses of aDNA initially focused on markers that occur in multiple copies, such as the internal transcribed spacer (ITS) region within ribosomal DNA, and on those requiring amplification of short DNA regions of variable length, such as simple sequence repeats (SSRs), emphasis is now moving towards the genotyping of single nucleotide polymorphisms (SNPs), traditionally undertaken in aDNA by Sanger sequencing. Here, using a panel of barley aDNA samples previously surveyed by Sanger sequencing for putative causative SNPs within the flowering-time gene PPD-H1, we assess the utility of the Kompetitive Allele Specific PCR (KASP) genotyping platform for aDNA analysis. We find KASP to outperform Sanger sequencing in the genotyping of aDNA samples (78% versus 61% success, respectively), as well as being robust to contamination. The small template size (≥46 bp) and the one-step, closed-tube amplification/genotyping process make this platform ideally suited to the genotypic analysis of aDNA, a process often hampered by template DNA degradation and sample cross-contamination. Such attributes, as well as its flexibility of use and relatively low cost, make KASP particularly relevant to the genetic analysis of aDNA samples. Furthermore, KASP provides a common platform for the genotyping and analysis of corresponding SNPs in ancient, landrace and modern plant materials. The extended haplotype analysis of PPD-H1 undertaken here (allelic variation at which is thought to have been important for the spread of domestication and local adaptation) provides further resolution to the previously identified geographic cline of flowering-time allele distribution, illustrating how KASP can aid genetic analyses of aDNA from plant species. We further demonstrate the utility of KASP by genotyping ten additional genetic markers diagnostic for morphological traits in barley, shedding light on the phenotypic traits, alleles and allele combinations present in these unviable ancient specimens, as well as their geographic distributions.
Abstract:
The idea of considering imprecision in probabilities is old, beginning with the work of George Boole, who in 1854 sought to reconcile classical logic, which allows the modelling of complete ignorance, with probabilities. In 1921, John Maynard Keynes in his book made explicit use of intervals to represent imprecision in probabilities. But only with the work of Walley in 1991 were principles established that should be respected by a probability theory that deals with imprecision. With the emergence of the theory of fuzzy sets by Lotfi Zadeh in 1965, another way of dealing with uncertainty and imprecision of concepts appeared. Several ways of bringing Zadeh's ideas into probability were soon proposed, to deal with imprecision either in the events associated with the probabilities or in the probability values themselves. In particular, from 2003 James Buckley began to develop a probability theory in which the values of the probabilities are fuzzy numbers. This fuzzy probability follows principles analogous to Walley's imprecise probabilities. On the other hand, the use of real numbers between 0 and 1 as truth degrees, as originally proposed by Zadeh, has the drawback of using very precise values for dealing with uncertainty (how can one distinguish an element that satisfies a property to degree 0.423 from one that satisfies it to degree 0.424?). This motivated the development of several extensions of fuzzy set theory that incorporate some kind of imprecision. This work considers the extension proposed by Krassimir Atanassov in 1983, which adds an extra degree of uncertainty to model the hesitation in assigning the membership degree: one value indicates the degree to which the object belongs to the set, while the other indicates the degree to which it does not belong. In Zadeh's fuzzy set theory, this non-membership degree is, by default, the complement of the membership degree. In Atanassov's approach, the non-membership degree is somewhat independent of the membership degree, and the difference between the non-membership degree and the complement of the membership degree reveals the hesitation present at the moment of assigning the membership degree. This extension is today called Atanassov's intuitionistic fuzzy set theory. It is worth noting that the term "intuitionistic" here has no relation to the same term as used in the context of intuitionistic logic. In this work, two proposals for interval probability are developed: the restricted interval probability and the unrestricted interval probability. Two notions of fuzzy probability are also introduced: the constrained fuzzy probability and the unconstrained fuzzy probability. Finally, two notions of intuitionistic fuzzy probability are introduced: the restricted intuitionistic fuzzy probability and the unrestricted intuitionistic fuzzy probability.
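The hesitation idea can be stated compactly. A minimal sketch using the standard Atanassov constraint mu + nu <= 1 (the class name is ours, introduced for illustration):

```python
from dataclasses import dataclass

@dataclass
class IFSDegree:
    """Atanassov intuitionistic fuzzy membership: mu + nu <= 1, and the
    slack pi = 1 - mu - nu is the hesitation degree."""
    mu: float   # membership degree
    nu: float   # non-membership degree

    def __post_init__(self):
        if not (0 <= self.mu <= 1 and 0 <= self.nu <= 1
                and self.mu + self.nu <= 1):
            raise ValueError("require 0 <= mu, nu and mu + nu <= 1")

    @property
    def hesitation(self) -> float:
        return 1.0 - self.mu - self.nu

# In an ordinary fuzzy set nu = 1 - mu, so hesitation is 0:
print(IFSDegree(0.42, 0.58).hesitation)   # 0.0
# Allowing mu + nu < 1 makes the hesitation explicit:
print(IFSDegree(0.42, 0.40).hesitation)   # 0.18
```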
Abstract:
In this work we study a random strategy called MOSES, introduced in 1996 by François. Asymptotic results for this strategy, namely the behaviour of the stationary distributions of the Markov chain associated with the strategy, were derived by François in 1998 from the theory of Freidlin and Wentzell [8]. These results are detailed in this work. Moreover, we note that an alternative approach to the convergence of this strategy is possible without making use of the Freidlin-Wentzell theory, yielding the almost-sure visit of the strategy to uniform populations containing the minimum. Some Matlab simulations are also presented.
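For intuition only, here is a toy mutation-or-selection strategy in the spirit of MOSES; the update rule and all parameters are illustrative assumptions, not François' exact formulation. With a small mutation probability, the population spends most of its time uniform at the minimiser:

```python
import random

def moses_like(f, space, pop_size=10, p_mut=0.02, steps=5000, seed=0):
    """Each individual is independently either mutated to a random point
    of the finite search space (probability p_mut) or replaced by the
    current best of the population (selection)."""
    rng = random.Random(seed)
    pop = [rng.choice(space) for _ in range(pop_size)]
    for _ in range(steps):
        best = min(pop, key=f)
        pop = [rng.choice(space) if rng.random() < p_mut else best
               for _ in pop]
    return pop

# Minimise a toy function on a finite search space.
space = list(range(-50, 51))
print(moses_like(lambda x: (x - 7) ** 2, space))
# With small p_mut the returned population is, most of the time, uniform at 7.
```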
Analytical and Monte Carlo approaches to evaluate probability distributions of interruption duration
Abstract:
Regulatory authorities in many countries, in order to maintain an acceptable balance between appropriate customer service quality and costs, are introducing performance-based regulation. These regulations impose penalties (and, in some cases, rewards) that introduce a component of financial risk for an electric power utility, due to the uncertainty associated with preserving a specific level of system reliability. In Brazil, for instance, one of the reliability indices receiving special attention from the utilities is the maximum continuous interruption duration (MCID) per customer. This parameter is responsible for the majority of penalties in many electric distribution utilities. This paper describes analytical and Monte Carlo simulation approaches to evaluate probability distributions of interruption duration indices. Emphasis is given to the development of an analytical method to assess the probability distribution associated with the MCID parameter and the corresponding penalties. Case studies on a simple distribution network and on a real Brazilian distribution system are presented and discussed.
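A minimal sketch of the Monte Carlo side of such a study (the failure rate, repair time and penalty threshold below are invented for illustration): sample each simulated year's interruptions, record that year's MCID, and read the penalty probability off the empirical distribution:

```python
import numpy as np

def mcid_distribution(failures_per_year, mean_repair_h, years=10000, seed=0):
    """Monte Carlo sketch: for each simulated year, draw the number of
    interruptions (Poisson) and their durations (exponential repair), and
    record the Maximum Continuous Interruption Duration (MCID)."""
    rng = np.random.default_rng(seed)
    mcid = np.empty(years)
    for y in range(years):
        n = rng.poisson(failures_per_year)
        mcid[y] = rng.exponential(mean_repair_h, size=n).max() if n else 0.0
    return mcid

mcid = mcid_distribution(failures_per_year=4.0, mean_repair_h=2.5)
limit_h = 8.0                                  # assumed regulatory limit
print("P(MCID > limit) ~", (mcid > limit_h).mean())
```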
Abstract:
Recent studies have shown that adaptive X̄ control charts are quicker than traditional X̄ charts in detecting small to moderate shifts in a process. In this article, we propose a joint statistical design of adaptive X̄ and R charts in which all design parameters vary adaptively. The process is subject to two independent assignable causes: one changes the process mean and the other changes the process variance. However, the occurrence of one kind of assignable cause does not preclude the occurrence of the other. It is assumed that the quality characteristic is normally distributed and that the time the process remains in control has an exponential distribution. Performance measures of these adaptive control charts are obtained through a Markov chain approach.
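The Markov chain approach can be sketched generically: number the chart's transient states, collect the no-signal transition probabilities in a sub-stochastic matrix Q, and apply the standard absorbing-chain identity. The two-state numbers below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def average_time_to_signal(Q, time_in_state, start=0):
    """Absorbing Markov chain identity: with Q the no-signal transition
    probabilities between transient chart states (row sums < 1, the
    remainder being the signal probability), the expected time to signal
    starting from `start` solves (I - Q) t = time_in_state."""
    n = Q.shape[0]
    t = np.linalg.solve(np.eye(n) - Q, np.asarray(time_in_state, float))
    return t[start]

# Two illustrative states: relaxed (long sampling interval) and tightened
# (short interval), as in an adaptive chart that switches on a warning.
Q = np.array([[0.90, 0.07],
              [0.60, 0.35]])
print(average_time_to_signal(Q, time_in_state=[2.0, 0.5]))
```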
Abstract:
Motilin-immunoreactive cells in the duodenum, pyloric stomach and pancreas of Caiman latirostris and Caiman crocodilus were investigated using region-specific antisera against porcine and canine motilin molecules. Motilin-immunoreactive cells were found in the duodenum, pyloric stomach and pancreas of both caiman species. These cells were primarily open-type endocrine cells in the epithelium of the duodenum and pyloric stomach. Motilin-immunoreactive cells were observed in both the exocrine and endocrine portions of the pancreas, and frequently exhibited one or more cytoplasmic processes of variable length. Since the motilin-immunoreactive cells did not cross-react with antisera against serotonin or any of the other pancreatic and gut hormones, they are considered to be a cell type independent of the other known pancreatic and gut endocrine cells. The molecular similarity between caiman motilin and the porcine and canine motilins, and the heterogeneity of the motilin molecule in the caiman digestive system, are discussed.
Abstract:
Regulatory authorities in many countries, in order to maintain an acceptable balance between appropriate customer service quality and costs, are introducing performance-based regulation. These regulations impose penalties, and in some cases rewards, which introduce a component of financial risk for an electric power utility, due to the uncertainty associated with preserving a specific level of system reliability. In Brazil, for instance, one of the reliability indices receiving special attention from the utilities is the Maximum Continuous Interruption Duration per customer (MCID). This paper describes a chronological Monte Carlo simulation approach to evaluate probability distributions of reliability indices, including the MCID, and the corresponding penalties. In order to achieve the desired efficiency, modern computational techniques are used both for modelling (UML, Unified Modeling Language) and for programming (object-oriented programming). Case studies on a simple distribution network and on real Brazilian distribution systems are presented and discussed.
Abstract:
In this paper, a novel methodology to price the reactive power support ancillary service of Distributed Generators (DGs) subject to primary energy source uncertainty is presented. The proposed methodology prices the service based on the calculation of the Loss of Opportunity Costs (LOC). An algorithm is proposed to reduce the uncertainty present in these generators using Multiobjective Power Flows (MOPFs) evaluated over multiple probabilistic scenarios through Monte Carlo Simulations (MCS), with the time series associated with the active power generation of the DGs modelled through Markov Chains (MC).
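A sketch of the Markov chain step (the bin count, synthetic series and state coding are illustrative assumptions): estimate a transition matrix from the discretised power output, then draw scenarios from it for the Monte Carlo stage:

```python
import numpy as np

def fit_transition_matrix(series, n_states=5):
    """Estimate a first-order Markov chain for a power time series by
    binning the output into discrete states and counting transitions."""
    edges = np.linspace(series.min(), series.max(), n_states + 1)
    states = np.clip(np.digitize(series, edges) - 1, 0, n_states - 1)
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    rows = P.sum(axis=1, keepdims=True)
    return np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)

def sample_scenario(P, steps, state=0, seed=0):
    """Draw one generation scenario for the Monte Carlo stage."""
    rng = np.random.default_rng(seed)
    path = [state]
    for _ in range(steps - 1):
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

# Synthetic stand-in for a measured power output series.
rng = np.random.default_rng(1)
power = np.abs(np.sin(np.linspace(0, 20, 500))) + 0.1 * rng.random(500)
P = fit_transition_matrix(power)
print(sample_scenario(P, 24))   # one day of hourly power states
```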
Abstract:
Distributed generation, microgrid technologies, two-way communication systems, and demand response programs are issues that have been studied in recent years within the concept of smart grids. At a sufficient level of penetration, Distributed Generators (DGs) can provide benefits for sub-transmission and transmission systems through the so-called ancillary services. This work focuses on the ancillary service of reactive power support provided by DGs, specifically Wind Turbine Generators (WTGs), with a high level of impact on transmission systems. The main objective of this work is to propose an optimization methodology to price this service by determining the costs a DG incurs when it loses the opportunity to sell active power, i.e., by determining the Loss of Opportunity Costs (LOC). LOC occur when more reactive power is required than is available, and the active power generation has to be reduced in order to increase the reactive power capacity. In the optimization process, three objectives are considered: the active power generation costs of the DGs, the voltage stability margin of the system, and the losses in the lines of the network. Uncertainties of the WTGs are reduced by solving multi-objective optimal power flows in multiple probabilistic scenarios constructed by Monte Carlo simulations, with the time series associated with the active power generation of each WTG modelled via Fuzzy Logic and Markov Chains. The proposed methodology was tested using the IEEE 14-bus test system with two WTGs installed.
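The LOC definition admits a small worked sketch under an assumed circular capability curve P^2 + Q^2 <= S^2 and a flat energy price (both simplifications; the paper prices the service via multi-objective optimal power flows):

```python
import math

def loss_of_opportunity_cost(s_rating_mva, p_scheduled_mw, q_required_mvar,
                             energy_price, hours=1.0):
    """If the requested reactive power exceeds what the capability curve
    allows at the scheduled active output, active power is curtailed and
    the lost energy sales are priced.  Numbers below are illustrative."""
    p_max_at_q = math.sqrt(max(s_rating_mva**2 - q_required_mvar**2, 0.0))
    curtailment = max(p_scheduled_mw - p_max_at_q, 0.0)
    return curtailment * energy_price * hours

# A 2 MVA wind generator asked for 1.2 Mvar while scheduled at 1.8 MW:
# P_max = sqrt(4 - 1.44) = 1.6 MW, so 0.2 MW is curtailed.
print(loss_of_opportunity_cost(2.0, 1.8, 1.2, energy_price=50.0))  # 10.0
```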