931 results for Rule-based techniques


Relevance: 80.00%

Abstract:

Delineation of commuting regions has always been based on statistical units, often municipalities or wards. However, using these units has certain disadvantages, as their land areas differ considerably. Much information is lost in the larger spatial base units, and distortions occur in self-containment values, the main criterion in rule-based delineation procedures. Alternatively, one can start from relatively small standard-size units such as hexagons. In this way, much greater detail in spatial patterns is obtained. In this paper, regions are built by means of intrazonal maximization (Intramax) on the basis of hexagons. The use of geoprocessing tools, specifically developed for the processing of commuting data, speeds up processing time considerably. The results of the Intramax analysis are evaluated with travel-to-work area constraints, and comparisons are made with commuting fields, accessibility to employment, commuting flow density and network commuting flow size. From selected steps in the regionalization process, a hierarchy of nested commuting regions emerges, revealing the complexity of commuting patterns.
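
The abstract does not spell out the Intramax criterion, so the following Python sketch is only an assumed illustration of its core idea: greedily merging, at each step, the pair of zones whose mutual flows are largest relative to what their totals would predict. The flow matrix `T` and the function names are hypothetical.

```python
import numpy as np

def intramax_step(T):
    """One greedy Intramax-style merge: pick the zone pair whose mutual
    flow, relative to what the row/column totals predict, is largest."""
    n = T.shape[0]
    O, D = T.sum(axis=1), T.sum(axis=0)          # outflow / inflow totals
    best, pair = -np.inf, None
    for i in range(n):
        for j in range(i + 1, n):
            expected = O[i] * D[j] + O[j] * D[i]
            if expected > 0 and (T[i, j] + T[j, i]) / expected > best:
                best, pair = (T[i, j] + T[j, i]) / expected, (i, j)
    return pair

def merge(T, i, j):
    """Collapse zones i and j of the flow matrix into one zone."""
    T = T.copy()
    T[i, :] += T[j, :]
    T[:, i] += T[:, j]
    keep = [k for k in range(T.shape[0]) if k != j]
    return T[np.ix_(keep, keep)]
```

Repeating `intramax_step` and `merge` until a stopping rule fires yields the nested hierarchy of regions the abstract describes, with each merge step defining one level.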

Relevance: 80.00%

Abstract:

In all applications of clone detection it is important to have precise and efficient clone identification algorithms. This paper proposes and outlines KClone, a new clone detection algorithm that incorporates a novel combination of lexical and local dependence analysis to achieve precision while retaining speed. The paper also reports the initial results of a case study using an implementation of KClone with which we have been experimenting. The results indicate the ability of KClone to find type-1, type-2, and type-3 clones compared to token-based and PDG-based techniques. The paper also reports results of an initial empirical study of the performance of KClone compared to CCFinderX.
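
KClone's actual algorithm is not given in the abstract; as a rough illustration of the lexical half of such a combination, the toy sketch below fingerprints sliding windows of normalized tokens, which makes type-1 and type-2 clones collide, leaving a dependence-analysis pass (not shown) to handle type-3 clones. All names are invented.

```python
import hashlib

def lexical_fingerprints(tokens, window=5):
    """Hash every sliding window of tokens; windows shared between two
    files flag candidate clones."""
    return {
        hashlib.md5(" ".join(tokens[i:i + window]).encode()).hexdigest()
        for i in range(len(tokens) - window + 1)
    }

# Normalizing identifiers to a placeholder makes type-2 clones match:
normalize = lambda ts: ["ID" if t.isidentifier() else t for t in ts]

a = normalize("x = y + z ; return x".split())
b = normalize("u = y + z ; return u".split())
print(len(lexical_fingerprints(a) & lexical_fingerprints(b)), "shared windows")
```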

Relevance: 80.00%

Abstract:

HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
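
The Resource Data Model is only characterized at a high level here; a minimal sketch of its central idea, separating system metadata (common to all resources) from science metadata (type-specific), might look like the following. All field names are assumptions for illustration, not HydroShare's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class SystemMetadata:
    """Elements common to every resource (illustrative fields only)."""
    resource_id: str
    owner: str
    created: str

@dataclass
class ScienceMetadata:
    """Type-specific elements; a model resource might carry execution
    metadata here, a dataset resource variable and site information."""
    title: str
    keywords: list[str] = field(default_factory=list)
    type_specific: dict = field(default_factory=dict)

@dataclass
class Resource:
    """A HydroShare-style resource: shared system metadata, per-type
    science metadata, and the content files themselves."""
    system: SystemMetadata
    science: ScienceMetadata
    files: list[str] = field(default_factory=list)
```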

Relevance: 80.00%

Abstract:

Due to the increase in water demand and hydropower energy, it is becoming more important to operate hydraulic structures efficiently while sustaining multiple demands. In particular, companies, governmental agencies and consulting offices require effective, practical integrated tools and decision support frameworks to operate reservoirs, cascades of run-of-river plants and related elements such as canals, by merging hydrological and reservoir simulation/optimization models with various numerical weather predictions, radar and satellite data. Model performance depends strongly on the streamflow forecast, its associated uncertainty and how that uncertainty is considered in decision making. While deterministic weather predictions and their corresponding streamflow forecasts restrict the manager to single deterministic trajectories, probabilistic forecasts can be a key solution by including uncertainty in flow forecast scenarios for dam operation. The objective of this study is to compare deterministic and probabilistic streamflow forecasts on a previously developed basin/reservoir model for short-term reservoir management. The study is applied to the Yuvacık Reservoir and its upstream basin, the main water supply of Kocaeli City in northwestern Turkey. The reservoir is a typical example given its limited capacity, downstream channel restrictions and high snowmelt potential. Mesoscale Model 5 and Ensemble Prediction System (EPS) data are used as the main inputs, and the flow forecasts are produced for the year 2012 using HEC-HMS. A hydrometeorological rule-based reservoir simulation model is built with HEC-ResSim and integrated with the forecasts. Since the EPS-based hydrological model produces a large number of equally probable scenarios, it indicates how uncertainty spreads into the future, and thus provides the operator with risk ranges in terms of spillway discharges and reservoir level when compared with the deterministic approach. The framework is fully data driven, applicable and useful to the profession, and the knowledge can be transferred to other similar reservoir systems.
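
As a hedged sketch of how equally probable ensemble traces turn into operator-facing risk ranges, the snippet below reduces hypothetical scenario trajectories to percentile bands; the percentile choices and numbers are illustrative, not from the study.

```python
import numpy as np

def risk_range(ensemble, lo=10, hi=90):
    """Summarize equally probable ensemble traces (scenarios x time steps)
    of reservoir level or spillway discharge into percentile bands."""
    ensemble = np.asarray(ensemble)
    return (np.percentile(ensemble, lo, axis=0),
            np.percentile(ensemble, 50, axis=0),
            np.percentile(ensemble, hi, axis=0))

# 51 hypothetical EPS members over a 72-hour horizon:
rng = np.random.default_rng(0)
traces = 169.0 + np.cumsum(rng.normal(0.0, 0.05, (51, 72)), axis=1)
p10, p50, p90 = risk_range(traces)
# The operator reads the p10-p90 band as the risk range around the median.
```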

Relevance: 80.00%

Abstract:

In this article we estimate and simulate an open-economy rational expectations macroeconomic model (Batini and Haldane [4]) for the Brazilian economy, in order to identify the characteristics of optimal monetary rules and the short-run dynamics they generate. We work with a forward-looking version and a backward-looking version in order to compare the performance of three parameterizations of monetary rules, which differ with respect to the inflation variable: the traditional Taylor rule, based on past inflation; a rule that combines inflation and the real exchange rate (see Ball [5]); and a rule that uses inflation forecasts (see Bank of England [3]). We solve the model numerically and construct efficient frontiers with respect to the variances of output and inflation through stochastic simulations, for i.i.d. or correlated shocks. The sets of optimal rules for the two versions are qualitatively distinct. Given the uncertainty about the degree of forward-lookingness, we suggest choosing rules by the sum of the objective functions of the two versions. We conclude that the rules chosen on this criterion incur moderate losses relative to the optimal rules, but prevent the larger losses that would result from choosing a rule based on the wrong version. Finally, we compute impulse response functions of the two models for selected rules, in order to assess how different monetary rules change the short-run dynamics of the two models.
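
The abstract does not write out the three rules; as an assumed illustration only, they typically take forms like the following, where $i_t$ is the policy rate, $\pi_t$ inflation, $\pi^*$ the target, $y_t$ the output gap and $q_t$ the real exchange rate (the coefficients and timing conventions here are not the paper's):

```latex
i_t = \bar{\imath} + \alpha (\pi_{t-1} - \pi^{*}) + \beta y_t                 % Taylor rule: past inflation
i_t = \bar{\imath} + \alpha (\pi_{t-1} - \pi^{*}) + \beta y_t + \delta q_t    % Ball-type rule: adds the real exchange rate
i_t = \bar{\imath} + \alpha (E_t \pi_{t+j} - \pi^{*}) + \beta y_t             % inflation-forecast-based rule
```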

Relevance: 80.00%

Abstract:

Vague words and expressions are present throughout the standards that comprise the accounting and auditing professions. Vagueness is considered a significant source of inexactness in many accounting decision problems, and many authors have argued that neglecting this issue may make accounting information less useful. On the other hand, we can assume that the use of vague terms in accounting standards is inherent to principle-based standards (as opposed to rule-based standards) and that, to avoid vague terms, standard setters would have to incur excessive transaction costs. Auditors are required to exercise their own professional judgment throughout the audit process, and it has been argued that the inherent vagueness in accounting standards may influence their decision-making processes. The main objective of this paper is to analyze the decision-making process of auditors and to investigate whether vague accounting standards create a problem for that process or lead to a better outcome. This paper argues that vague standards prompt auditors to use System 2 type processing, allowing more comprehensive analytical thinking and therefore reducing the biases associated with System 1 heuristic processing. If our argument is valid, the repercussions of vague accounting standards are not as negative as presented in previous literature; instead, they are positive.

Relevance: 80.00%

Abstract:

In this paper we construct sunspot equilibria that arise from chaotic deterministic dynamics. These equilibria are robust and therefore observable. We prove that they may be learned by a simple rule based on the histograms of past state variables. This work gives a theoretical justification for deterministic models that might compete with stochastic models to explain real data.
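
The learning rule is only named here; as an assumed illustration, agents could form beliefs from a histogram of past states of a chaotic process. The logistic map below stands in for the paper's dynamics and is not taken from it.

```python
import numpy as np

# Generate a chaotic deterministic trajectory (logistic map, r = 4):
x = np.empty(5000)
x[0] = 0.123
for t in range(1, len(x)):
    x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])

# The 'simple rule': beliefs are the empirical histogram of past states.
beliefs, edges = np.histogram(x, bins=10, range=(0.0, 1.0), density=True)
# With enough data the histogram approaches the map's invariant density,
# so beliefs learned this way become self-confirming in the limit.
```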

Relevance: 80.00%

Abstract:

Market risk exposure plays a key role in financial institutions' risk management. A possible measure of this exposure is to evaluate the losses likely to be incurred when the price of the portfolio's assets declines, using Value-at-Risk (VaR) estimates, one of the most prominent measures of financial downside market risk. This paper suggests an evolving possibilistic fuzzy modeling (ePFM) approach for VaR estimation. The approach is based on an extension of possibilistic fuzzy c-means clustering and functional fuzzy rule-based modeling, which employs memberships and typicalities to update clusters and creates new clusters based on a statistical control distance-based criterion. ePFM also uses a utility measure to evaluate the quality of the current cluster structure. Computational experiments consider data from the main global equity market indexes of the United States, London, Germany, Spain and Brazil from January 2000 to December 2012 for VaR estimation using ePFM, traditional VaR benchmarks such as Historical Simulation, GARCH, EWMA and Extreme Value Theory, and state-of-the-art evolving approaches. The results show that ePFM is a potential candidate for VaR modeling, with better performance than the alternative approaches.
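
ePFM itself is too involved to reconstruct from the abstract, but one of the benchmarks it is compared against, Historical Simulation, is simple enough to sketch; the window length and confidence level below are conventional choices, not the paper's.

```python
import numpy as np

def historical_var(returns, alpha=0.99, window=250):
    """Historical-simulation VaR: the empirical (1 - alpha) quantile of
    the most recent returns, reported as a positive loss figure."""
    tail = np.asarray(returns)[-window:]
    return -np.quantile(tail, 1.0 - alpha)

rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0005, 0.012, 1000)   # hypothetical index returns
print(f"99% one-day VaR: {historical_var(daily_returns):.4f}")
```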

Relevance: 80.00%

Abstract:

This work examines the social representation of television held by mothers/educators as TV viewers, in order to understand the meaning of this medium in their everyday lives and the relations that occur between teachers and students in the classroom. The study focuses on the educational role of television from a social representation approach. Through the discourses of five educators engaged in pedagogical activities in public elementary schools in the city of Natal, it seeks to reveal a significant experience in the field of media education. It is also a way of understanding how the representations educators hold about television can contribute to critical analysis of the meaning of the media in teacher education. Some questions underlie the investigation: What is television for educators who are also TV viewers? How does it reach the classroom? Does their relation with the media interfere with pedagogical practice? Assuming that verbal technique is one of the formal ways to access representations, the methodological strategy employed was the open interview, guided by a broad and flexible schedule that left the interviewees free to expound their ideas, an attitude adopted to avoid imposing the interviewer's points of view, which resulted in rich material. Through this strategy it was possible to confirm or reject presumptions raised at the beginning of the investigation and to modify some lines of planning. The study takes as its theoretical presupposition the contribution of the Mexican researcher Guillermo Orozco Gómez, who, drawing on the ideas of Paulo Freire and Jesús Martín-Barbero, establishes a dialogue between popular education and communication theories, mainly television reception, developing an integral view focused on the audience, the model of multiple mediations. The school, like the family, is an important mediator of media information. The relationship teachers establish with television, and their representations of it in their lives, reflects directly on their professional practice and on the dialogue about media within the school, and can contribute to the critical reflection students develop about the media through the educators' mediation.

Relevance: 80.00%

Abstract:

The progressing cavity pump (PCP) artificial lift system is a major lift system used in the oil production industry. As applications of this artificial lift method grow, knowledge of its dynamic behavior, the application of automatic control and the development of expert systems for equipment selection become more useful. This work presents tools for dynamic analysis, control techniques and an expert system for selecting lift equipment for this artificial lift technology. The PCP artificial lift system consists of a progressing cavity pump installed downhole at the end of the production tubing. The pump consists of two parts, a stator and a rotor, and is set in motion by the rotation of the rotor, transmitted through a rod string installed in the tubing. The surface equipment generates and transmits the rotation to the rod string. First, the development of a complete mathematical dynamic model of the PCP system is presented. This model is simplified for use under several conditions, including steady state, for sizing PCP equipment such as the pump, rod string and drive head. The model is used to implement a computer simulator able to support system analysis and to act as a virtual well with a controller, allowing control algorithms to be tested and developed. Next, control techniques are applied to the PCP system to optimize pumping velocity for productivity and durability of downhole components. The mathematical model is linearized in order to apply conventional control techniques, including observability and controllability analysis, and to develop design rules for a PI controller. Stability conditions are stated for the operating point of the system. A fuzzy rule-based control system is developed from the PI controller using an inference machine based on Mamdani operators. Fuzzy logic is also applied to develop an expert system that selects PCP equipment. The simulation techniques developed and the linearized model were used in an actual well where a control system is installed, consisting of a pump intake pressure sensor, an industrial controller and a variable speed drive. PI and fuzzy controllers were applied to optimize simulated and actual well operation, and the results were compared. Simulated and actual open-loop responses were compared to validate the simulation. A case study was carried out to validate the equipment selection expert system.
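
A minimal sketch of the kind of discrete PI loop described, regulating pump intake pressure by adjusting rotation speed; the gains, units and signal names are assumptions, since the paper derives its own design rules from the linearized model.

```python
def pi_controller(kp, ki, dt):
    """Discrete PI controller: returns a step function mapping
    (setpoint, measurement) to a commanded speed change."""
    integral = 0.0
    def step(setpoint, measurement):
        nonlocal integral
        error = setpoint - measurement
        integral += error * dt
        return kp * error + ki * integral
    return step

ctrl = pi_controller(kp=2.0, ki=0.5, dt=1.0)
speed_cmd = ctrl(setpoint=50.0, measurement=47.5)   # hypothetical pressure values
```

A Mamdani fuzzy controller, as in the work, would replace the linear law inside `step` with rule-based inference over fuzzified error and error-change signals.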

Relevance: 80.00%

Abstract:

Operating industrial processes is becoming more complex every day, and one of the factors contributing to this growth in complexity is the integration of new technologies and smart solutions in industry, such as decision support systems. In this regard, this dissertation aims to develop a decision support system based on a computational tool called an expert system. The main goal is to make operation more reliable and secure while maximizing the amount of information relevant to each situation, by using an expert system based on rules designed for a particular area of expertise. For the modeling of such rules, a high-level environment has been proposed that allows rules to be created and manipulated more easily through visual programming. Despite its wide range of possible applications, this dissertation focuses on the context of real-time filtering of alarms during operation, validated in a case study based on a real scenario that occurred in an industrial plant of an oil and gas refinery.
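
The rule format is not specified in the abstract; as an invented illustration of real-time alarm filtering, each rule below pairs a condition over the set of active alarms with a set of alarms to suppress, e.g. hiding consequential low-flow alarms while a pump-trip alarm is active.

```python
def filter_alarms(active, rules):
    """Show only alarms not suppressed by any rule whose condition holds."""
    suppressed = set()
    for condition, suppress in rules:
        if condition(active):
            suppressed |= suppress(active)
    return active - suppressed

# Hypothetical rule: a tripped pump explains away downstream low-flow alarms.
rules = [(
    lambda a: "PUMP_101_TRIP" in a,
    lambda a: {x for x in a if x.startswith("FLOW_LOW")},
)]
shown = filter_alarms({"PUMP_101_TRIP", "FLOW_LOW_FT102", "TANK_HI_LEVEL"}, rules)
print(shown)   # {'PUMP_101_TRIP', 'TANK_HI_LEVEL'}
```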

Relevance: 80.00%

Abstract:

The purpose of this paper was to evaluate attributes derived from fully polarimetric PALSAR data to discriminate and map macrophyte species in the Amazon floodplain wetlands. Fieldwork was carried out almost simultaneously with the radar acquisition, and macrophyte biomass and morphological variables were measured in the field. Attributes were calculated from the covariance matrix [C] derived from the single-look complex data. Image attributes and macrophyte variables were compared and analyzed to investigate the sensitivity of the attributes for discriminating among species. Based on these analyses, a rule-based classification was applied to map macrophyte species. Other classification approaches were tested and compared to the rule-based method: a classification based on the Freeman-Durden and Cloude-Pottier decomposition models, a hybrid classification (Wishart classifier with the input classes based on the H/α plane), and a statistical-based classification (supervised classification using Wishart distance measures). The findings show that attributes derived from fully polarimetric L-band data have good potential for discriminating herbaceous plant species based on morphology and that estimation of plant biomass and productivity could be improved by using these polarimetric attributes.
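
The paper's decision rules are derived from its field data and are not reproduced in the abstract; the sketch below only illustrates the shape of such a rule-based classifier over polarimetric attributes, with thresholds and attribute names invented.

```python
def classify_pixel(attr):
    """Toy rule-based classification over covariance-derived attributes
    (all thresholds are placeholders, not the paper's)."""
    if attr["hh_vv_ratio"] > 2.0 and attr["total_power"] > 0.15:
        return "emergent_macrophyte"
    if attr["hv_power"] > 0.05:
        return "flooded_vegetation"
    return "other"

print(classify_pixel({"hh_vv_ratio": 2.4, "total_power": 0.20, "hv_power": 0.01}))
```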

Relevance: 80.00%

Abstract:

Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the process of verification and validation of a system. Model-based testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an industrial example of a B specification. Based on this case study, we obtained insights to improve the method. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process, and to use new coverage criteria. In addition, we implemented a tool to automate the method and applied it to more complex case studies.
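
As a small illustration of the underlying techniques, equivalence class partitioning and boundary value analysis, consider a hypothetical B operation whose precondition requires 0 <= x <= 100 (this example is invented, not taken from the method's case studies):

```python
def partition_tests(lo, hi):
    """Positive cases sample inside [lo, hi], with boundary value
    analysis picking values at and next to the edges; negative cases
    sample just outside the valid partition."""
    positive = [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]
    negative = [lo - 1, hi + 1]
    return positive, negative

pos, neg = partition_tests(0, 100)
print("positive tests:", pos)   # [0, 1, 50, 99, 100]
print("negative tests:", neg)   # [-1, 101]
```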

Relevance: 80.00%

Abstract:

In this work, we propose a two-stage algorithm for real-time fault detection and identification in industrial plants. Our proposal is based on the analysis of selected features using recursive density estimation and a new evolving classifier algorithm. More specifically, the proposed approach for the detection stage is based on the concept of density in the data space, which is not the same as a probability density function but is a very useful measure for abnormality/outlier detection. This density can be expressed by a Cauchy function and can be calculated recursively, which makes it memory and computationally efficient and therefore suitable for on-line applications. The identification/diagnosis stage is based on a self-developing (evolving) fuzzy rule-based classifier system proposed in this work, called AutoClass. An important property of AutoClass is that it can start learning "from scratch". Not only do the fuzzy rules not need to be prespecified, but neither does the number of classes (the number may grow, with new class labels being added by the on-line learning process), in a fully unsupervised manner. In the event that an initial rule base exists, AutoClass can evolve/develop it further based on newly arrived faulty-state data. In order to validate our proposal, we present experimental results from a didactic level-control process, where control and error signals are used as features for the fault detection and identification systems; the approach is generic, however, and the number of features can be large thanks to the computationally lean methodology, since covariance or more complex calculations, as well as storage of old data, are not required. The obtained results are significantly better than the traditional approaches used for comparison.
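
A minimal sketch of the recursive, Cauchy-shaped density the detection stage relies on, in the spirit of standard recursive density estimation; treat the update formulas as an assumption rather than the paper's exact implementation.

```python
import numpy as np

class RecursiveDensity:
    """Density of the current sample relative to all past samples,
    computed recursively: no covariance and no storage of old data."""
    def __init__(self, dim):
        self.k = 0
        self.mean = np.zeros(dim)   # running mean of samples
        self.sq = 0.0               # running mean of squared norms
    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.k += 1
        self.mean += (x - self.mean) / self.k
        self.sq += (x @ x - self.sq) / self.k
        # Cauchy-shaped density: 1 / (1 + mean squared distance to past data)
        return 1.0 / (1.0 + x @ x - 2.0 * x @ self.mean + self.sq)

rde = RecursiveDensity(dim=2)
for point in [(0.0, 0.0), (0.1, 0.1), (0.05, 0.0), (5.0, 5.0)]:
    density = rde.update(point)   # the last, outlying point gets low density
print(f"density of outlier: {density:.3f}")
```

A fault is flagged when a new sample's density drops below a threshold; the classifier stage then assigns or creates a class label for the faulty state.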

Relevance: 80.00%

Abstract:

This paper presents an approach to integrate an artificial intelligence (AI) technique, namely rule-based processing, into mobile agents. In particular, it focuses on the design and implementation of an appropriate inference engine of small size, so as to reduce migration costs. The main goal is to combine two lines of agent research: first, the engineering-oriented approach to mobile agent architectures, and, second, the AI-related approach to inference engines driven by rules expressed in a restricted subset of first-order predicate logic (FOPL). In addition to size reduction, the main functions of this type of engine were isolated, generalized and implemented as dynamic components, making possible not only their migration with the agent, but also their dynamic migration and loading on demand. A set of classes for representing and exchanging knowledge between rule-based systems is also proposed.
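
A minimal sketch of the kind of forward-chaining engine such rules imply, restricted here to propositional Horn clauses for brevity (the actual engine handles a restricted FOPL subset, and the rules and facts below are invented):

```python
def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until no new facts appear."""
    facts, changed = set(facts), True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical agent rules: defer migration on a slow link, then cache rules.
rules = [(("mobile", "low_bandwidth"), "defer_migration"),
         (("defer_migration",), "cache_rules_locally")]
print(forward_chain({"mobile", "low_bandwidth"}, rules))
```

Implementing the match/act loop as a separate dynamic component is what lets the engine itself migrate or be loaded on demand, as the abstract describes.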