246 results for automata


Relevance: 20.00%

Abstract:

We present OBDD transformations for representing finite labeled transition systems with respect to a given congruence relation. The transformations are oriented toward obtaining the OBDD of the transition system minimized under this congruence relation.
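
As a rough illustration of minimization under a congruence, the following Python sketch computes bisimulation classes of a small explicit-state labeled transition system by partition refinement; the paper performs the analogous minimization symbolically on OBDD-encoded transition relations, so the names and enumerative encoding below are purely illustrative.

```python
# Hedged sketch: explicit-state partition refinement computing bisimulation
# classes of a labeled transition system. The symbolic OBDD version operates
# on the same fixed-point idea over BDD-encoded state sets.
def bisimulation_classes(states, transitions):
    """transitions: dict mapping each state to a set of (label, target)."""
    blocks = [set(states)]
    while True:
        block_of = {s: i for i, b in enumerate(blocks) for s in b}
        # A state's signature: which blocks it can reach under which labels.
        sig = {s: frozenset((a, block_of[t]) for (a, t) in transitions[s])
               for s in states}
        new_blocks = []
        for b in blocks:
            groups = {}
            for s in b:
                groups.setdefault(sig[s], set()).add(s)
            new_blocks.extend(groups.values())
        if len(new_blocks) == len(blocks):   # refinement stabilized
            return new_blocks
        blocks = new_blocks

# States 0 and 3 (and likewise 1 and 4) behave identically and collapse.
ts = {0: {("a", 1)}, 1: {("b", 2)}, 2: set(), 3: {("a", 4)}, 4: {("b", 2)}}
classes = bisimulation_classes(list(ts), ts)
```

The minimized system has one state per class, so the five-state system above shrinks to three states.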

Relevance: 20.00%

Abstract:

A general technique is introduced for transforming a timed finite-state automaton into an equivalent automated planning domain based on a numerical-parameter model. Timed transition automata have many applications in control systems and agent models; they are used to describe sequential processes in which the actions labeling automaton transitions are subject to temporal constraints. The language of timed words accepted by a timed automaton, i.e., the possible sequences of system or agent behaviour, can be described in terms of an appropriate planning domain encapsulating the timed action patterns and constraints. The timed-word recognition problem is then posed as a planning problem whose goal is to reach a final state by a sequence of actions corresponding to the timed symbols labeling the automaton transitions. The transformation is proved correct and complete, and it is linear in the automaton size in both space and time. Experimental results show that the performance of the planning domain obtained by the transformation scales to real-world applications. A major advantage of the planning-based approach, besides solving the parsing problem, is that it represents plan recognition, plan synthesis, and plan optimisation problems in a single automated reasoning framework.
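
To make the recognition problem concrete, here is a minimal direct simulator of a one-clock timed automaton on a timed word of (symbol, absolute time) pairs. This is the problem the paper re-poses as planning; the encoding below is an illustrative sketch, not the paper's planning-domain construction.

```python
# Hedged sketch: does a one-clock timed automaton accept a timed word?
# Transitions are (source, symbol, guard, reset, target) tuples.
def accepts(transitions, initial, finals, timed_word):
    state, clock, now = initial, 0.0, 0.0
    for symbol, time in timed_word:
        clock += time - now              # the clock advances with elapsed time
        now = time
        for src, sym, guard, reset, dst in transitions:
            if src == state and sym == symbol and guard(clock):
                state = dst
                if reset:
                    clock = 0.0
                break
        else:
            return False                 # no enabled transition: reject
    return state in finals

# "a", then "b" no more than 2 time units after the "a".
trans = [
    ("q0", "a", lambda c: True,     True,  "q1"),  # reading "a" resets the clock
    ("q1", "b", lambda c: c <= 2.0, False, "q2"),
]
```

In the planning view, each tuple in `trans` becomes an action whose preconditions encode the guard and whose effects update the state and clock; accepting the word corresponds to finding a plan that reaches a final state.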

Relevance: 20.00%

Abstract:

The problem of finite automata minimization is important in software and hardware design. Different types of automata are used for modeling systems or machines with a finite number of states, and limiting the number of states saves resources and time. In this article we consider a specific type of probabilistic automaton, the reactive probabilistic finite automaton with accepting states (in brief, the reactive probabilistic automaton), together with definitions of the languages it accepts. We define a bisimulation relation on automaton states and a relation of indistinguishability of states, on the basis of which the automaton can be minimized. We then present a detailed minimization algorithm for reactive probabilistic automata, determine its complexity, and analyse an example solved with the help of this algorithm.
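
As a toy illustration of the indistinguishability relation that drives the minimization, the sketch below compares the acceptance probabilities of two states on all words up to a bounded length. This brute-force check is illustrative only; the article's algorithm computes the relation by refinement rather than enumeration.

```python
# Hedged sketch: bounded-length indistinguishability test for states of a
# reactive probabilistic automaton. delta[state][symbol] is a list of
# (target, probability) pairs.
from itertools import product

def accept_prob(delta, accepting, state, word):
    """Probability that the automaton accepts `word` starting from `state`."""
    if not word:
        return 1.0 if state in accepting else 0.0
    first, rest = word[0], word[1:]
    return sum(p * accept_prob(delta, accepting, target, rest)
               for (target, p) in delta[state][first])

def indistinguishable(delta, accepting, s1, s2, alphabet, max_len):
    return all(
        abs(accept_prob(delta, accepting, s1, w)
            - accept_prob(delta, accepting, s2, w)) < 1e-12
        for n in range(max_len + 1)
        for w in product(alphabet, repeat=n))

# States 1 and 2 loop on themselves and always accept: they are mergeable.
delta = {0: {"a": [(1, 0.5), (2, 0.5)]},
         1: {"a": [(1, 1.0)]},
         2: {"a": [(2, 1.0)]}}
accepting = {1, 2}
```

Merging states found indistinguishable (here, 1 and 2) is exactly the step that shrinks the automaton.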

Relevance: 20.00%

Abstract:

Since the 1950s, the theory of deterministic and nondeterministic finite automata (DFAs and NFAs, respectively) has been a cornerstone of theoretical computer science. In this dissertation, our main object of study is minimal NFAs. In contrast with minimal DFAs, minimal NFAs are computationally challenging: first, there can be more than one minimal NFA recognizing a given language; second, the problem of converting an NFA to a minimal equivalent NFA is NP-hard, even for NFAs over a unary alphabet. Our study is based on the development of two main theories, inductive bases and partials, which in combination form the foundation for an incremental algorithm, ibas, to find minimal NFAs. An inductive basis is a collection of languages with the property that it can generate (through union) each of the left quotients of its elements. We prove a fundamental characterization theorem which says that a language can be recognized by an n-state NFA if and only if it can be generated by an n-element inductive basis. A partial is an incompletely-specified language. We say that an NFA recognizes a partial if its language extends the partial, meaning that the NFA’s behavior is unconstrained on unspecified strings; it follows that a minimal NFA for a partial is also minimal for its language. We therefore direct our attention to minimal NFAs recognizing a given partial. Combining inductive bases and partials, we generalize our characterization theorem, showing that a partial can be recognized by an n-state NFA if and only if it can be generated by an n-element partial inductive basis. We apply our theory to develop and implement ibas, an incremental algorithm that finds minimal partial inductive bases generating a given partial. In the case of unary languages, ibas can often find minimal NFAs of up to 10 states in about an hour of computing time; with brute-force search this would require many trillions of years.
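
For a concrete feel for the objects involved, the sketch below computes left quotients of a finite language: these are what an inductive basis must generate through unions. The function name and set encoding are illustrative; the dissertation's constructions are more general.

```python
# Hedged sketch: left quotients of a finite language.
def left_quotient(language, u):
    """u^{-1} L = { w : u + w is in L }."""
    return {w[len(u):] for w in language if w.startswith(u)}

lang = {"ab", "aab", "b"}
by_a = left_quotient(lang, "a")   # completions of "a" in lang: {"b", "ab"}
```

By the characterization theorem quoted above, finding an n-element collection of languages whose unions generate every such quotient is equivalent to finding an n-state NFA.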

Relevance: 20.00%

Abstract:

Urban growth models have been used for decades to forecast urban development in metropolitan areas. Since the 1990s cellular automata, with simple computational rules and an explicitly spatial architecture, have been heavily utilized in this endeavor. One such cellular-automata-based model, SLEUTH, has been successfully applied around the world to better understand and forecast not only urban growth but also other forms of land-use and land-cover change; like other models, however, it must be fed important information about which particular lands in the modeled area are available for development. Some of these lands fall into growth-exclusion categories that are difficult to quantify, since their function is dictated by policy. One such category includes voluntary differential assessment programs, whereby farmers agree not to develop their lands in exchange for significant tax breaks. Since they are voluntary, today's excluded lands may be available for development at some point in the future. Mapping the shifting mosaic of parcels enrolled in such programs allows this information to be used in modeling and forecasting. In this study, we added information about California's Williamson Act into SLEUTH's excluded layer for Tulare County. Assumptions about the voluntary differential assessments were used to create a sophisticated excluded layer that was fed into SLEUTH's urban growth forecasting routine. The results not only demonstrate a successful execution of this method but also yield high goodness-of-fit metrics both for the calibration of enrollment termination and for the urban growth modeling itself.

Relevance: 20.00%

Abstract:

Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.

Relevance: 20.00%

Abstract:

Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.

Relevance: 10.00%

Abstract:

A susceptible-infective-recovered (SIR) epidemiological model based on a probabilistic cellular automaton (PCA) is employed for simulating the temporal evolution of the registered cases of chickenpox in Arizona, USA, between 1994 and 2004. At each time step, every individual is in one of the states S, I, or R. The parameters of this model are the probabilities of each individual (each cell forming the PCA lattice) passing from one state to another. Here, the values of these probabilities are identified by using a genetic algorithm. If the parameters are allowed to take nonrealistic values, the predictions agree better with the historical series than if the parameters are forced to be realistic. A discussion of how the size of the PCA lattice affects the quality of the model predictions is presented. Copyright (C) 2009 L. H. A. Monteiro et al.
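
The PCA update described above can be sketched in a few lines. The probabilities below are illustrative placeholders, not the values identified by the genetic algorithm in the study, and the lattice and neighbourhood are deliberately tiny.

```python
# Hedged sketch: one synchronous step of an SIR probabilistic cellular
# automaton on a toroidal square lattice with von Neumann neighbourhoods.
import random

S, I, R = 0, 1, 2

def step(grid, p_infect, p_recover, p_wane, rng):
    n = len(grid)
    new = [row[:] for row in grid]
    for x in range(n):
        for y in range(n):
            cell = grid[x][y]
            if cell == S:
                # Each infective neighbour independently tries to infect
                # the susceptible cell with probability p_infect.
                k = sum(grid[(x + dx) % n][(y + dy) % n] == I
                        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                if rng.random() < 1 - (1 - p_infect) ** k:
                    new[x][y] = I
            elif cell == I and rng.random() < p_recover:
                new[x][y] = R
            elif cell == R and rng.random() < p_wane:
                new[x][y] = S
    return new

rng = random.Random(0)
grid = [[S] * 3 for _ in range(3)]
grid[1][1] = I                          # seed one infective at the centre
after = step(grid, 1.0, 1.0, 0.0, rng)  # certain infection and recovery
```

With certain infection and recovery, one step turns the centre cell into R and its four neighbours into I, which makes the update easy to check by hand.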

Relevance: 10.00%

Abstract:

Identification, prediction, and control of a system are engineering subjects, regardless of the nature of the system. Here, the temporal evolution of the number of individuals with dengue fever recorded weekly in the city of Rio de Janeiro, Brazil, during 2007 is used to identify SIS (susceptible-infective-susceptible) and SIR (susceptible-infective-removed) models formulated in terms of cellular automata (CA). In the identification process, a genetic algorithm (GA) is utilized to find the probabilities of the state transition S -> I capable of reproducing, in the CA lattice, the historical series of 2007. These probabilities depend on the number of infective neighbors. Time-varying and non-time-varying probabilities, three different lattice sizes, and two kinds of coupling topology among the cells are taken into consideration. These epidemiological models, built by combining CA and GA, are then employed for predicting the cases of sick persons in 2008. Such models can be useful for forecasting and controlling the spreading of this infectious disease.
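
The identification step can be caricatured as follows: a genetic algorithm searches for the transition parameter that best reproduces a target series. To keep the sketch self-contained and fast, a deterministic mean-field SIS recursion stands in for the CA simulation; every name, operator, and parameter value here is illustrative, not taken from the study.

```python
# Hedged sketch: identifying a transmission parameter with a toy genetic
# algorithm against a synthetic "historical series".
import random

def series(beta, gamma, i0, steps):
    """Fraction of infectives over time: i' = i + beta*i*(1-i) - gamma*i."""
    i, out = i0, []
    for _ in range(steps):
        i = i + beta * i * (1 - i) - gamma * i
        out.append(i)
    return out

def fitness(beta, target, gamma, i0):
    pred = series(beta, gamma, i0, len(target))
    return -sum((p - t) ** 2 for p, t in zip(pred, target))

def identify(target, gamma, i0, rng, pop=30, gens=40):
    population = [rng.random() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda b: fitness(b, target, gamma, i0),
                        reverse=True)
        elite = population[:pop // 2]    # keep the fitter half
        # Offspring: midpoint crossover of two parents plus Gaussian mutation,
        # clamped to the unit interval.
        population = elite + [
            min(1.0, max(0.0, (rng.choice(elite) + rng.choice(elite)) / 2
                         + rng.gauss(0.0, 0.02)))
            for _ in range(pop - len(elite))]
    return max(population, key=lambda b: fitness(b, target, gamma, i0))

rng = random.Random(1)
target = series(0.6, 0.2, 0.01, 25)     # pretend this is the observed series
est = identify(target, 0.2, 0.01, rng)
```

In the paper the candidate solutions are neighbourhood-dependent (and possibly time-varying) probabilities and the fitness is agreement with the 2007 dengue series; the GA machinery is the same in spirit.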

Relevance: 10.00%

Abstract:

This paper contains a new proposal for the definition of the fundamental operation of query under the Adaptive Formalism, one capable of locating functional nuclei from descriptions of their semantics. To demonstrate the method's applicability, an implementation of the query procedure constrained to a specific class of devices is shown, and its asymptotic computational complexity is discussed.

Relevance: 10.00%

Abstract:

Susceptible-infective-removed (SIR) models are commonly used for representing the spread of contagious diseases. A SIR model can be described in terms of a probabilistic cellular automaton (PCA), where each individual (corresponding to a cell of the PCA lattice) is connected to others by a random network favoring local contacts. Here, this framework is employed for investigating the consequences of applying vaccine against the propagation of a contagious infection, by considering vaccination as a game, in the sense of game theory. In this game, the players are the government and the susceptible newborns. In order to maximize their own payoffs, the government attempts to reduce the costs for combating the epidemic, and the newborns may be vaccinated only when infective individuals are found in their neighborhoods and/or the government promotes an immunization program. As a consequence of these strategies supported by cost-benefit analysis and perceived risk, numerical simulations show that the disease is not fully eliminated and the government implements quasi-periodic vaccination campaigns. (C) 2011 Elsevier B.V. All rights reserved.
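
The newborn's side of the game can be caricatured as a one-line cost-benefit rule. The payoff structure and parameter names below are my own illustrative assumptions; the paper couples this decision with the government's campaign strategy inside the PCA simulation rather than using a closed-form rule.

```python
# Hedged sketch: a newborn's vaccination decision as a cost-benefit rule.
def newborn_vaccinates(infective_neighbors, campaign,
                       risk_per_neighbor, cost_disease, cost_vaccine):
    perceived_risk = min(1.0, infective_neighbors * risk_per_neighbor)
    # Vaccinate when the government runs a campaign, or when the expected
    # cost of catching the disease outweighs the cost of the vaccine.
    return campaign or perceived_risk * cost_disease > cost_vaccine
```

Rules of this shape are what produce the quasi-periodic dynamics described above: with no infectives nearby and no campaign, newborns skip vaccination, letting susceptibles accumulate until the next outbreak or campaign.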

Relevance: 10.00%

Abstract:

There are several ways of controlling the propagation of a contagious disease. For instance, to reduce the spreading of an airborne infection, individuals can be encouraged to remain in their homes and/or to wear face masks outside their domiciles. However, when a limited number of masks is available, who should use them: the susceptible subjects, the infective persons, or both populations? Here we employ susceptible-infective-recovered (SIR) models, described in terms of ordinary differential equations and probabilistic cellular automata, in order to investigate how the deletion of links in the random complex network representing the social contacts among individuals affects the dynamics of a contagious disease. The inspiration for this study comes from recent discussions about the impact of the measures usually recommended by public health organizations for preventing the propagation of the swine influenza A (H1N1) virus. Our answer to this question can be valid for other eco-epidemiological systems. (C) 2010 Elsevier B.V. All rights reserved.
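
The core mechanism, that deleting contact links can only shrink an outbreak, is easy to demonstrate in a degenerate case. In the sketch below transmission is certain and infectives recover after one step, so the SIR final size is simply the set of nodes reachable from the source; the graph model and parameters are illustrative, not those of the paper.

```python
# Hedged sketch: final outbreak size on a random contact network, before and
# after deleting half of the links (e.g. masks or staying home).
import random

def random_graph(n, p, rng):
    """Erdos-Renyi-style graph: each pair is linked with probability p."""
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def outbreak_size(n, edges, source):
    adj = {i: set() for i in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    # Certain transmission + one-step recovery: the outbreak is a BFS reach.
    infected, frontier = {source}, {source}
    while frontier:
        frontier = {k for v in frontier for k in adj[v]} - infected
        infected |= frontier
    return len(infected)

rng = random.Random(2)
edges = random_graph(60, 0.08, rng)
full = outbreak_size(60, edges, 0)
kept = {e for e in edges if rng.random() < 0.5}   # delete half the links
reduced = outbreak_size(60, kept, 0)
```

With probabilistic transmission the comparison is statistical rather than set-theoretic, which is why the paper needs the full ODE and PCA machinery, but the monotone effect of link deletion is the same.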

Relevance: 10.00%

Abstract:

Urbanization and the ability to manage for a sustainable future present numerous challenges for geographers and planners in metropolitan regions. Remotely sensed data are inherently suited to provide information on urban land cover characteristics, and their change over time, at various spatial and temporal scales. Data models for establishing the range of urban land cover types and their biophysical composition (vegetation, soil, and impervious surfaces) are integrated to provide a hierarchical approach to classifying land cover within urban environments. These data also provide an essential component for current simulation models of urban growth patterns, as both calibration and validation data. The first stages of the approach have been applied to examine urban growth between 1988 and 1995 for a rapidly developing area in southeast Queensland, Australia. Landsat Thematic Mapper image data provided accurate (83% adjusted overall accuracy) classification of broad land cover types and their change over time. The combination of commonly available remotely sensed data, image processing methods, and emerging urban growth models highlights an important application for current and next generation moderate spatial resolution image data in studies of urban environments.