146 results for Algoritmos


Relevance:

10.00%

Publisher:

Abstract:

The central interest of this thesis is to understand how public action drives the formation and transformation of tourist destinations. The research was based on the premise that public actions are the result of a mediation process among state and non-state actors regarded as important in a sector, who interact with the aim of making their interests and worldviews prevail over those of the others. The case of Porto de Galinhas beach, in Pernambuco, the locus of this thesis's investigation, allowed the analysis of the multiplicity of actors involved in the formation and implementation of local actions for the development of tourism between 1970 and 2010, and also permitted an understanding of how the frame of reference for the interventions was constructed. This thesis, of a qualitative nature, has as its theoretical support the cognitive approach to public policy analysis developed in France, whose main exponents are Bruno Jobert and Pierre Muller. This choice was made because of its emphasis on the cognitive and normative factors of policy, aspects little explored in public policy studies in Brazil. Documentary, bibliographic and field research served as data sources for the (re)constitution of the formation and transformation of the site concerned. The analysis techniques applied were content analysis and documentary analysis. Tracing the referential of public action began with the characterization of the boundaries of the tourism sector and of the images created by its main international body, the World Tourism Organization, whose meeting minutes revealed guidelines for member countries, including Brazil, that compose the global-sectoral frame of reference of the sector. From the analysis of the evolution of tourism in the country, it was identified that public policies in Brazil underwent transformations in their organization over the years, indicating changes in the referential that guided the interventions. These guidelines and transformations were identified in the construction of the tourist destination of Porto de Galinhas, whose data were systematized and presented in four historical periods, in which the values, norms, algorithms, images and important mediators were discussed. It was revealed that the State played different roles in local tourism over the decades analyzed. From the 1990s onward, however, new actors entered the formulation and implementation of the policies developed, especially the local hoteliers. Through their association, they established a leadership position in the local tourism sector and were thereby able to assert their hegemony and propagate their own interests. The leadership acquired by a group of actors in the case of Porto de Galinhas does not mean that disputes within the sector were neutralized, but rather that there is a cognitive framework within which the actors involved confront one another. In spite of the advances achieved by the mediators' work in recent decades, which resulted in the amplification and diversification of the activity in the area, as well as the consolidation of the beach as a tourist destination of national prominence, the destination's competitive position is unstable, given the prevailing situation of social and environmental unsustainability.

Relevance:

10.00%

Publisher:

Abstract:

In the present study we developed algorithms, using concepts from percolation theory, that analyze the connectivity conditions in geological models of petroleum reservoirs. From petrophysical parameters such as permeability, porosity, transmissivity and others, which may be generated by any statistical process, it is possible to determine the portion of the model with the most connected cells, which wells are interconnected, and the critical path between injector and producer wells. This makes it possible to classify the reservoir according to the modeled petrophysical parameters, and also to determine the percentage of the reservoir to which each well is connected. In general, the connected regions and the respective minima and/or maxima in the occurrence of the petrophysical parameters studied constitute a good way to characterize a reservoir volumetrically. The algorithms therefore make it possible to optimize the positioning of wells by offering a preview of the general connectivity conditions of a given model. The intent is not to evaluate geological models, but to show how to interpret the deposits, how their petrophysical characteristics are spatially distributed, and how the connections between the several parts of the system are resolved, exposing their critical paths and backbones. Running these algorithms lets us know the connectivity properties of the model before work on reservoir flow simulation is started.
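
A minimal sketch of the percolation idea, assuming a synthetic 2D permeability field and hypothetical injector/producer positions (the thesis's actual models and thresholds are not reproduced here):

```python
import numpy as np
from scipy import ndimage

def connected_fraction(perm, threshold, injector, producer):
    """Label clusters of cells whose permeability exceeds `threshold` and
    report whether two well locations fall in the same cluster.

    perm     : 2D array of cell permeabilities (from any statistical model)
    injector, producer : (row, col) well positions (hypothetical examples)
    """
    open_cells = perm >= threshold            # cells considered "conducting"
    labels, n = ndimage.label(open_cells)     # 4-connected cluster labeling
    same = labels[injector] != 0 and labels[injector] == labels[producer]
    # fraction of the model to which the injector is connected
    frac = float(np.mean(labels == labels[injector])) if labels[injector] else 0.0
    return same, frac

rng = np.random.default_rng(0)
perm = rng.lognormal(mean=3.0, sigma=1.0, size=(100, 100))  # synthetic field
print(connected_fraction(perm, threshold=20.0, injector=(0, 0), producer=(99, 99)))
```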

Relevance:

10.00%

Publisher:

Abstract:

From their early days, Electrical Submersible Pumping (ESP) units have excelled at lifting much greater liquid rates than most other types of artificial lift, and they have performed well in wells with high BSW, in both onshore and offshore environments. For any artificial lift system, the lifetime and the frequency of interventions are of paramount importance, given the high costs of rigs and equipment, plus the losses caused by a halt in production. Extending the system's life requires operating efficiently and safely within the limits of the equipment, which implies the need for periodic adjustments, monitoring and control. As the drive to minimize direct human intervention grows, these adjustments should increasingly be made through automation. An automated system provides not only a longer life but also greater control over the well's production. The controller is the brain of most automation systems: it embeds the logic and strategies of the work process so that the process runs efficiently. Control is so important to any automation system that, with a better understanding of the ESP system and further research, many controllers can be expected to be proposed for this artificial lift method. Once a controller is proposed, it must be tested and validated before it can be taken as efficient and functional. Using a producing well or a test well could support such testing, but with the serious risk that flaws in the controller's design would damage oil well equipment, much of it expensive. Given this reality, the main objective of the present work is to present an environment for evaluating fuzzy controllers for wells equipped with the ESP system, using a computer simulator representing a virtual oil well, software for designing fuzzy controllers, and a PLC. The proposed environment reduces the time required for testing and adjusting the controller and enables a rapid diagnosis of its efficiency and effectiveness. The control algorithms are implemented both in a high-level language, through the controller design software, and in a language specific to PLC programming, the Ladder Diagram.
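
As a toy illustration of the kind of fuzzy control logic such an environment would exercise, here is a minimal rule-based sketch; the membership ranges, rule set and singleton consequents are invented for illustration and are not the thesis's controllers:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_speed_adjust(pressure_error):
    """Map an intake-pressure error (bar) to a pump-frequency adjustment (Hz)
    with three rules and weighted-average defuzzification (all values
    hypothetical)."""
    # rule firing strengths: negative / near-zero / positive error
    w = np.array([tri(pressure_error, -10, -5, 0),
                  tri(pressure_error,  -5,  0, 5),
                  tri(pressure_error,   0,  5, 10)])
    # singleton consequents: slow down / hold / speed up (Hz)
    out = np.array([-2.0, 0.0, 2.0])
    return float(np.dot(w, out) / w.sum()) if w.sum() > 0 else 0.0

print(fuzzy_speed_adjust(3.0))   # small positive error -> mild speed-up
```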

Relevance:

10.00%

Publisher:

Abstract:

With the growth of energy consumption worldwide, conventional reservoirs, the so-called "easy exploration and production" reservoirs, are no longer meeting global energy demand. This has led many researchers to develop projects addressing these needs, and companies in the oil sector have invested in techniques that help in locating and drilling wells. One of the techniques employed in the oil exploration process is Reverse Time Migration (RTM), a seismic imaging method that produces excellent images of the subsurface. It is an algorithm based on the numerical solution of the wave equation, and it is considered one of the most advanced seismic imaging techniques. The economic value of the oil reserves whose location requires RTM is very high, which means that the development of these algorithms becomes a competitive differentiator for seismic processing companies. However, RTM requires great computational power, which still hampers its practical success. The objective of this work is to explore the implementation of this algorithm on unconventional architectures, specifically GPUs using CUDA, analyzing the difficulties of its development as well as the performance of the algorithm in its sequential and parallel versions.
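
The computational core of RTM is repeated propagation with a discretized wave equation. Below is a minimal NumPy sketch of one explicit finite-difference step of the 2D acoustic wave equation, the kind of kernel that would be ported to CUDA; grid size, velocity model and source are illustrative:

```python
import numpy as np

def acoustic_step(p_prev, p_curr, vel, dt, dx):
    """One explicit finite-difference step of the 2D acoustic wave equation,
    the kernel at the heart of RTM forward/backward propagation.
    p_prev, p_curr : pressure fields at t-dt and t
    vel            : velocity model (m/s); dt, dx: time and space steps
    """
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr) / dx**2
    return 2.0 * p_curr - p_prev + (vel * dt)**2 * lap

# toy propagation: 200x200 grid, constant velocity, impulsive source at center
nx, dt, dx = 200, 1e-3, 10.0                     # CFL number = 0.2, stable
vel = np.full((nx, nx), 2000.0)
p0, p1 = np.zeros((nx, nx)), np.zeros((nx, nx))
p1[nx // 2, nx // 2] = 1.0
for _ in range(300):
    p0, p1 = p1, acoustic_step(p0, p1, vel, dt, dx)
```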

Relevance:

10.00%

Publisher:

Abstract:

In this paper, artificial neural networks (ANNs) based on supervised and unsupervised algorithms were investigated for use in the study of rheological parameters of solid pharmaceutical excipients, in order to develop computational tools for manufacturing solid dosage forms. Among the four supervised neural networks investigated, the best learning performance was achieved by a feedforward multilayer perceptron whose architecture comprised eight neurons in the input layer, sixteen neurons in the hidden layer and one neuron in the output layer. Learning and predictive performance for the angle of repose was poor, while for the Carr index and the Hausner ratio (CI and HR, respectively) the networks showed very good fitting and learning capacity; HR and CI were therefore considered suitable descriptors for the next stage of development of the supervised ANNs. Clustering capacity was evaluated for five unsupervised strategies. Networks based on purely competitive unsupervised strategies, the classic "Winner-Take-All", "Frequency-Sensitive Competitive Learning" and "Rival-Penalized Competitive Learning" (WTA, FSCL and RPCL, respectively), were able to cluster the database, but the classification was very poor, showing severe errors by grouping data with conflicting properties into the same cluster or even the same neuron; moreover, the criteria these networks adopted for clustering could not be established. Self-Organizing Maps (SOM) and Neural Gas (NG) networks showed better clustering capacity. Both recognized the two major groupings of data, corresponding to lactose (LAC) and cellulose (CEL). However, the SOM made some errors in classifying data from the minority excipients: magnesium stearate (EMG), talc (TLC) and attapulgite (ATP). The NG network, in turn, performed a very consistent classification and resolved the misclassifications of the SOM, making it the most appropriate network for classifying the study data. The use of NG networks in pharmaceutical technology was hitherto unreported. NG therefore has great potential for use in developing software for automated classification systems for pharmaceutical powders and as a new tool for mining and clustering data in drug development.
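
A minimal sketch of the Neural Gas update rule (rank-based neighborhood with exponentially decaying rates), run here on a synthetic two-cluster dataset standing in for the excipient data; unit count and schedules are illustrative:

```python
import numpy as np

def neural_gas_fit(data, n_units=6, epochs=50, eps=(0.5, 0.05), lam=(10.0, 0.5)):
    """Minimal Neural Gas sketch: for each sample, every unit is ranked by
    distance and moved toward the sample with a rank-decaying step."""
    rng = np.random.default_rng(0)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    t, t_max = 0, epochs * len(data)
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / t_max
            e = eps[0] * (eps[1] / eps[0]) ** frac      # learning-rate decay
            l = lam[0] * (lam[1] / lam[0]) ** frac      # neighborhood decay
            rank = np.argsort(np.argsort(np.linalg.norm(w - x, axis=1)))
            w += (e * np.exp(-rank / l))[:, None] * (x - w)
            t += 1
    return w

# toy data: two clusters standing in for, e.g., lactose- and cellulose-like samples
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(neural_gas_fit(data))
```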

Relevance:

10.00%

Publisher:

Abstract:

Telecommunications play a key role in contemporary society. However, as new technologies are put on the market, demand also grows for new products and services that depend on the offered infrastructure, making the problems of planning telecommunication networks, despite the advances in technology, increasingly large and complex. Many of these problems, however, can be formulated as combinatorial optimization models, and the use of heuristic algorithms can help solve them in the planning phase. In this project, two pure metaheuristic implementations were developed, a Genetic Algorithm (GA) and a Memetic Algorithm (MA), plus a third, hybrid implementation, a Memetic Algorithm with Vocabulary Building (MA+VB), for a problem in telecommunications known in the literature as the SONET Ring Assignment Problem (SRAP). The SRAP arises during the planning stage of the physical network and consists of selecting connections between a number of locations (customers) so as to meet a series of constraints at the lowest possible cost. This problem is NP-hard, so efficient exact algorithms (of polynomial complexity) are not known and may, indeed, not even exist.
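
A minimal sketch of the pure GA ingredient, applied to a toy ring-assignment-style instance; the encoding, demand matrix, capacity and penalty are invented for illustration and do not reproduce the thesis's SRAP formulation:

```python
import random

# Hypothetical toy encoding: gene i = ring assigned to customer i.
DEMAND = [[0, 4, 1, 3], [4, 0, 2, 1], [1, 2, 0, 5], [3, 1, 5, 0]]
N, RINGS, CAP = 4, 2, 9

def cost(assign):
    """Rings used plus a heavy penalty when a ring's traffic exceeds CAP."""
    load = [0] * RINGS
    for i in range(N):
        for j in range(i + 1, N):
            if assign[i] == assign[j]:
                load[assign[i]] += DEMAND[i][j]
    return len(set(assign)) + sum(max(0, l - CAP) * 100 for l in load)

def ga(pop_size=30, gens=200):
    pop = [[random.randrange(RINGS) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]              # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < 0.2:             # mutation
                child[random.randrange(N)] = random.randrange(RINGS)
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

print(ga())
```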

Relevance:

10.00%

Publisher:

Abstract:

Telecommunications play a fundamental role in contemporary society, one of their main roles being to allow people to connect with one another and integrate into the society in which they live and, thereby, to accelerate development through knowledge. As new technologies are introduced to the market, however, demand grows for new products and services that depend on the offered infrastructure, making telecommunication network planning problems increasingly large and complex. Many of these problems, though, can be formulated as combinatorial optimization models, and the use of heuristic algorithms can help solve them in the planning phase. This work proposes the development of a Parallel Evolutionary Algorithm for the telecommunications problem known in the literature as the SONET Ring Assignment Problem (SRAP). This problem is NP-hard; it arises during the physical planning of a telecommunication network and consists of determining the connections between locations (customers) while satisfying a series of constraints at the lowest possible cost. Experimental results illustrate the effectiveness of the parallel Evolutionary Algorithm, compared with other methods, in obtaining solutions that are either optimal or very close to optimal.
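
A sketch of one common way to parallelize an evolutionary algorithm, the island model with ring migration, here run sequentially on a toy one-max objective standing in for SRAP; whether the thesis uses this exact topology is not stated, so treat it as an assumption:

```python
import random

def island_model(n_islands=4, island_size=20, genome=16, epochs=10, migrants=2):
    """Island-model EA sketch: each island evolves independently and
    periodically sends its best individuals to the next island in a ring."""
    fit = lambda g: sum(g)                       # one-max: maximize the ones
    islands = [[[random.randint(0, 1) for _ in range(genome)]
                for _ in range(island_size)] for _ in range(n_islands)]
    for _ in range(epochs):
        for pop in islands:                      # independent evolution
            for _ in range(25):
                a, b = random.sample(pop, 2)
                cut = random.randrange(1, genome)
                child = a[:cut] + b[cut:]        # crossover
                if random.random() < 0.1:        # bit-flip mutation
                    i = random.randrange(genome)
                    child[i] = 1 - child[i]
                pop.sort(key=fit)
                pop[0] = child                   # replace current worst
        for i, pop in enumerate(islands):        # ring migration of the best
            dest = islands[(i + 1) % n_islands]
            dest.sort(key=fit)
            dest[:migrants] = sorted(pop, key=fit)[-migrants:]
    return max((max(pop, key=fit) for pop in islands), key=fit)

print(island_model())
```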

Relevance:

10.00%

Publisher:

Abstract:

This paper presents metaheuristic strategies based on the framework of evolutionary algorithms (Genetic and Memetic), with the addition of the Vocabulary Building technique, for solving the problem of optimizing the use of Multiple Mobile Units for Oil Recovery (MRO units). Because the problem is NP-hard, a mathematical model is formulated for it, allowing the construction of the test instances used to validate the evolutionary metaheuristics developed.

Relevance:

10.00%

Publisher:

Abstract:

The progressing cavity pumping (PCP) artificial lift system is one of the main lift systems used in the oil production industry. As applications of this artificial lift method grow, knowledge of its dynamic behavior, the application of automatic control, and the development of expert systems for equipment selection become ever more useful. This work presents tools for dynamic analysis, control techniques and an expert system for selecting lift equipment for this artificial lift technology. The PCP artificial lift system consists of a progressing cavity pump installed downhole at the end of the production tubing. The pump consists of two parts, a stator and a rotor, and is set in motion by the rotation of the rotor, transmitted through a rod string installed in the tubing; the surface equipment generates and transmits the rotation to the rod string. First, the development of a complete mathematical dynamic model of the PCP system is presented. This model is simplified for use under several conditions, including steady state, for sizing PCP equipment such as the pump, rod string and drive head. The model is used to implement a computer simulator able to assist in system analysis and to act as a well under control, allowing the testing and development of control algorithms. Next, control techniques are applied to the PCP system to optimize pumping speed so as to achieve both productivity and durability of the downhole components. The mathematical model is linearized so that conventional control techniques can be applied, including observability and controllability analysis, and design rules for a PI controller are developed; stability conditions are stated for the system's operating point. A fuzzy rule-based control system is developed from the PI controller, using an inference machine based on Mamdani operators, and fuzzy logic is also applied to develop the expert system that selects PCP equipment. The developed simulation techniques and the linearized model were applied to an actual well where a control system is installed, consisting of a pump intake pressure sensor, an industrial controller and a variable speed drive. The PI and fuzzy controllers were applied to optimize the operation of both the simulated and the actual well, and the results were compared; the simulated and actual open-loop responses were also compared to validate the simulation. Finally, a case study was carried out to validate the equipment selection expert system.
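
A minimal sketch of the discrete PI loop described above, regulating a crude first-order stand-in for pump intake pressure by adjusting drive speed; gains, limits and the toy well dynamics are illustrative, not the thesis's model:

```python
def make_pi(kp, ki, dt, u_min, u_max):
    """Discrete PI controller with output clamping (the integrator is also
    clamped as a simple anti-windup). Returns error -> drive speed."""
    state = {"i": 0.0}
    def step(error):
        state["i"] = min(max(state["i"] + ki * error * dt, u_min), u_max)
        return min(max(kp * error + state["i"], u_min), u_max)
    return step

# toy closed loop: first-order intake-pressure response to pump speed
# (all numbers hypothetical, for illustration only)
setpoint, p, speed = 30.0, 80.0, 0.0     # bar, bar, Hz
pi = make_pi(kp=0.5, ki=0.1, dt=1.0, u_min=0.0, u_max=60.0)
for t in range(60):
    speed = pi(p - setpoint)             # pressure above target -> speed up
    p += (-0.5 * speed + 0.2 * (100.0 - p)) * 1.0   # crude well dynamics
print(round(p, 1), round(speed, 1))      # settles near the setpoint
```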

Relevance:

10.00%

Publisher:

Abstract:

The so-called Dual Mode Adaptive Robust Control (DMARC) is proposed. DMARC is a control strategy that interpolates between Model Reference Adaptive Control (MRAC) and Variable Structure Model Reference Adaptive Control (VS-MRAC). The main idea is to combine the transient performance advantages of the VS-MRAC controller with the smooth steady-state control signal of the MRAC controller. Two basic algorithms are developed for the DMARC controller. In the first, the controller is adjusted in real time by varying a parameter in the adaptation law. In the second, the control law is generated using fuzzy logic with the Takagi-Sugeno model, to obtain a combination of the MRAC and VS-MRAC control laws. In both cases, the combined control structure is shown to be robust to parametric uncertainties and external disturbances, with fast transient performance, practically without oscillations, and a smooth steady-state control signal.
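
A conceptual sketch of the interpolation idea in Python; the sigmoidal blending weight and its scale are illustrative assumptions, not the thesis's actual adaptation law:

```python
import math

def dmarc_blend(u_mrac, u_vsmrac, error, e0=1.0):
    """Weight the VS-MRAC action more when the tracking error is large
    (fast transient) and the MRAC action more near steady state (smooth
    signal). The tanh weight and scale e0 are illustrative assumptions."""
    alpha = math.tanh(abs(error) / e0)   # ~0 near steady state, ->1 in transients
    return (1.0 - alpha) * u_mrac + alpha * u_vsmrac

print(dmarc_blend(u_mrac=0.4, u_vsmrac=1.0, error=2.5))
```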

Relevance:

10.00%

Publisher:

Abstract:

Optimization techniques known as metaheuristics have achieved success in solving many problems classified as NP-hard. These methods use nondeterministic approaches that find very good solutions but do not guarantee the determination of the global optimum. Beyond the inherent difficulties related to the complexity that characterizes optimization problems, metaheuristics still face the exploration/exploitation dilemma, which consists of choosing between a greedy search and a wider exploration of the solution space. One way to guide such algorithms while searching for better solutions is to supply them with more knowledge of the problem through an intelligent agent able to recognize promising regions and to identify when the direction of the search should be diversified. Accordingly, this work proposes the use of a reinforcement learning technique, the Q-learning algorithm, as the exploration/exploitation strategy for the GRASP (Greedy Randomized Adaptive Search Procedure) and Genetic Algorithm metaheuristics. The GRASP metaheuristic uses Q-learning instead of the traditional greedy-random algorithm in its construction phase. This replacement aims to improve the quality of the initial solutions used in GRASP's local search phase, and it also provides the metaheuristic with an adaptive memory mechanism that allows the reuse of good previous decisions while avoiding the repetition of bad ones. In the Genetic Algorithm, the Q-learning algorithm was used to generate an initial population of high fitness and, after a determined number of generations in which the population's diversity rate falls below a certain limit L, it was also applied to supply one of the parents used in the genetic crossover operator. Another significant change in the hybrid genetic algorithm is the proposal of a mutually interactive cooperation process between the genetic operators and the Q-learning algorithm: in this interactive/cooperative process, the Q-learning algorithm receives an additional update to its matrix of Q-values based on the current best solution of the Genetic Algorithm. The computational experiments presented in this thesis compare the results obtained with traditional versions of the GRASP metaheuristic and the Genetic Algorithm against those obtained using the proposed hybrid methods. Both algorithms were successfully applied to the symmetric Traveling Salesman Problem, which was modeled as a Markov decision process.
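
A minimal sketch of the hybrid's key move, an epsilon-greedy Q-learning construction phase for a toy symmetric TSP instance, used in place of GRASP's classic greedy-randomized constructor; the reward shaping and parameters are illustrative:

```python
import random

def q_construct(dist, Q, eps=0.2):
    """Build a tour city by city: with probability 1-eps pick the unvisited
    city with the highest Q-value (exploitation), otherwise pick at random
    (exploration)."""
    n = len(dist)
    tour = [0]
    while len(tour) < n:
        s = tour[-1]
        candidates = [c for c in range(n) if c not in tour]
        if random.random() < eps:
            tour.append(random.choice(candidates))
        else:
            tour.append(max(candidates, key=lambda c: Q[s][c]))
    return tour

def q_update(Q, tour, dist, alpha=0.1, gamma=0.9):
    """One Q-value backup along the constructed tour, rewarding short edges."""
    for s, a in zip(tour, tour[1:]):
        r = -dist[s][a]
        Q[s][a] += alpha * (r + gamma * max(Q[a]) - Q[s][a])

# toy symmetric instance (distances are illustrative)
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
Q = [[0.0] * 4 for _ in range(4)]
for _ in range(200):
    q_update(Q, q_construct(dist, Q), dist)
print(q_construct(dist, Q, eps=0.0))   # greedy tour after training
```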

Relevance:

10.00%

Publisher:

Abstract:

Internet applications such as media streaming, collaborative computing and massively multiplayer games are on the rise. This creates a need for multicast communication, but group communication support based on IP multicast has unfortunately not been widely adopted, due to a combination of technical and non-technical problems. Therefore, a number of different application-layer multicast schemes have been proposed in the recent literature to overcome these drawbacks. In addition, these applications often behave as both providers and clients of services, being called peer-to-peer applications, and their participants come and go very dynamically. Thus, server-centric architectures for membership management have well-known problems related to scalability and fault tolerance, and even traditional peer-to-peer solutions need some mechanism that takes members' volatility into account. The idea of location awareness is to distribute the participants in the overlay network according to their proximity in the underlying network, allowing better performance. Given this context, this thesis proposes an application-layer multicast protocol, called LAALM, which takes the actual network topology into account when assembling the overlay network. The membership algorithm uses a new metric, IPXY, to provide location awareness through the processing of local information, and it was implemented using a distributed, shared, bi-directional tree. The algorithm also has a sub-optimal heuristic to minimize the cost of the membership process. The protocol was evaluated in two ways. First, through a simulator developed in this work, the quality of the distribution tree was evaluated by metrics such as out-degree and path length. Second, real-life scenarios were built in the ns-3 network simulator, where the protocol's performance was evaluated by metrics such as stress, stretch, time to first packet and group reconfiguration time.
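
A toy sketch of location-aware parent selection during the membership (join) process; since the IPXY metric's definition is not reproduced here, plain RTT values stand in for it, and the out-degree cap is illustrative:

```python
def choose_parent(candidates, proximity, max_outdegree=4):
    """A newcomer attaches to the candidate node that is closest in the
    underlying network and still has spare out-degree. `proximity` stands
    in for the thesis's IPXY metric; lower means closer."""
    eligible = [c for c in candidates if len(c["children"]) < max_outdegree]
    return min(eligible, key=lambda c: proximity[c["id"]], default=None)

nodes = [{"id": "A", "children": ["x", "y"]},
         {"id": "B", "children": []},
         {"id": "C", "children": ["z"]}]
rtt_ms = {"A": 12.0, "B": 48.0, "C": 7.5}   # illustrative proximity values
parent = choose_parent(nodes, rtt_ms)
print(parent["id"])                          # -> "C", the closest eligible node
```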

Relevance:

10.00%

Publisher:

Abstract:

We propose a new paradigm for collective learning in multi-agent systems (MAS) as a solution to the problem in which several agents acting on the same environment must learn, simultaneously, how to perform tasks based on feedback given by each of the other agents. We introduce the proposed paradigm in the form of a reinforcement learning algorithm, naming it reinforcement learning with influence values. While learning from rewards, each agent evaluates the relationship between its current state and/or the action executed in that state (its current belief) together with the reward obtained after all interacting agents perform their actions; the reward is thus a result of the interference of the others. Each agent considers the opinions of all its colleagues when attempting to change the values of its states and/or actions. The idea is that the system as a whole must reach an equilibrium in which all agents are satisfied with the results obtained, meaning that the values of the state/action pairs match the reward obtained by each agent. This dynamic way of setting the values of states and/or actions makes this new reinforcement learning paradigm the first to naturally include the fact that the presence of other agents in the environment makes it a dynamic model. As a direct result, the internal state, the actions and the rewards of all other agents are implicitly included in the internal state of each agent. This makes our proposal the first complete solution to the conceptual problem that arises when applying reinforcement learning to multi-agent systems, caused by the difference between the environment and agent models. Based on the proposed model, we create the IVQ-learning algorithm, which is exhaustively tested in repeated games with two, three and four agents and in stochastic games that require cooperation or collaboration. The algorithm proves to be a good option for obtaining solutions that guarantee convergence to the optimal Nash equilibrium in cooperative problems. The experiments performed clearly show that the proposed paradigm is theoretically and experimentally superior to traditional approaches. Moreover, the creation of this new paradigm enlarges the set of reinforcement learning applications in MAS: besides applying the algorithm to traditional MAS learning problems, such as task coordination in multi-robot systems, it becomes possible to apply reinforcement learning to problems that are essentially collaborative.
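
A minimal sketch of what an influence-value backup could look like; the linear influence term and its weight are assumptions made for illustration, not the published IVQ-learning update:

```python
def ivq_update(Q, state, action, reward, next_state, influences,
               alpha=0.1, gamma=0.9, beta=0.3):
    """Sketch of an influence-value Q-learning backup: the usual Q-learning
    target is shifted by a weighted sum of the other agents' opinions about
    this state/action pair. The linear term and beta are illustrative
    assumptions about the exact form of the update."""
    target = reward + gamma * max(Q[next_state].values())
    influence = beta * sum(influences)        # opinions sent by colleagues
    Q[state][action] += alpha * (target + influence - Q[state][action])

# toy usage: one backup for a two-action agent
Q = {"s0": {"a": 0.0, "b": 0.0}, "s1": {"a": 0.0, "b": 0.0}}
ivq_update(Q, "s0", "a", reward=1.0, next_state="s1", influences=[0.2, -0.1])
print(Q["s0"]["a"])
```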

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a new multi-model identification technique in ANFIS for nonlinear systems. In this technique, the structure used is the Takagi-Sugeno fuzzy structure, whose consequents are local linear models that represent the system at different operating points and whose antecedents are membership functions adjusted by the learning phase of the neuro-fuzzy ANFIS technique. The models that represent the system at the different operating points can be found with linearization techniques such as, for example, the Least Squares method, which is robust against noise and simple to apply. The fuzzy system is responsible for indicating, through the membership functions, the proportion of each model that should be used. The membership functions can be adjusted by ANFIS using neural network algorithms, such as error backpropagation, so that the models found for each region are correctly interpolated, defining the contribution of each model for the possible system inputs. In multi-model systems, this definition of each model's contribution is known as the metric; since this work is based on ANFIS, it is here called the ANFIS metric. The ANFIS metric is thus used to interpolate the various models composing the system to be identified. Differing from traditional ANFIS, the proposed technique necessarily represents the system in several well-defined regions by unaltered local models whose activation is weighted by the membership functions. The selection of the regions for applying the Least Squares method is done manually, from graphical analysis of the system's behavior or from the physical characteristics of the plant; this selection serves as the basis for the linear-model definition technique and generates the initial configuration of the membership functions. The experiments are conducted in a teaching tank with multiple sections, designed and built to highlight the characteristics of the technique. The results from this tank illustrate the performance achieved by the technique in the identification task using ANFIS configurations, comparing the developed technique with several simple-metric models and with the NNARX technique, also adapted for identification.
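
A minimal sketch of the interpolation at the heart of the technique: local linear models (as would be fitted by least squares around chosen operating points) combined through normalized membership degrees; all parameters are illustrative:

```python
import numpy as np

def ts_multimodel(x, models, centers, sigma=1.0):
    """Takagi-Sugeno-style interpolation sketch: each local linear model
    y_i = a_i*x + b_i is weighted by a Gaussian membership centered on its
    operating point; the output is the normalized weighted sum."""
    w = np.exp(-0.5 * ((x - centers) / sigma) ** 2)   # membership degrees
    y_local = np.array([a * x + b for a, b in models])
    return float(np.dot(w, y_local) / w.sum())

models = [(2.0, 0.0), (0.5, 3.0)]        # local models around x=0 and x=4
centers = np.array([0.0, 4.0])
for x in (0.0, 2.0, 4.0):
    print(x, round(ts_multimodel(x, models, centers), 3))
```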

Relevance:

10.00%

Publisher:

Abstract:

With the rapid growth of databases of various types (text, multimedia, etc.), there is a need for methods to order, access and retrieve data in a simple and fast way. Image databases, in addition to these needs, require a representation of the images in which the semantic content characteristics are considered. Accordingly, several proposals have been made, such as retrieval based on textual annotations. In the annotation approach, retrieval is based on comparing the textual description a user gives of the images with the descriptions of the images stored in the database. Among its drawbacks, the textual description is very dependent on the observer, in addition to the computational effort required to describe all the images in the database. Another approach is content-based image retrieval (CBIR), where each image is represented by low-level features such as color, shape and texture. Results in the CBIR area have been very promising; however, the representation of image semantics by low-level features remains an open problem. New feature extraction algorithms as well as new indexing methods have been proposed in the literature, but these algorithms become increasingly complex. It is therefore natural to ask: is there a relationship between the semantics and the low-level features extracted from an image? If so, which descriptors best represent the semantics? And, in turn, how should descriptors be used to represent the content of the images? The work presented in this thesis proposes a method to analyze the relationship between low-level descriptors and semantics in an attempt to answer these questions. It was further observed that there are three possibilities for indexing images: using composite feature vectors; using parallel, independent index structures (one per descriptor or set of descriptors); and using feature vectors sorted in a sequential order. The first two forms have been widely studied and applied in the literature, but there was no record of the third ever having been explored. This thesis therefore also proposes indexing with a sequential structure of descriptors, in which the order of the descriptors should be based on the relationship between each descriptor and the users' semantics. Finally, the index proposed in this thesis proved better than the traditional approaches, and it was shown experimentally that the order in the sequence is important and that there is a direct relationship between this order and the relationship of the low-level descriptors with the users' semantics.
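
A toy sketch of the third indexing possibility, a sequential structure of descriptors: each descriptor, taken in a semantics-driven order, prunes the candidate set before the next one is consulted; descriptor names, dimensions and the pruning fraction are illustrative:

```python
import numpy as np

def sequential_search(query, db, order, keep=0.3):
    """Apply descriptors one after another in the given order, each stage
    keeping only the `keep` fraction of candidates closest to the query
    before the next descriptor is consulted."""
    candidates = np.arange(len(db[order[0]]))
    for name in order:
        feats = db[name][candidates]
        d = np.linalg.norm(feats - query[name], axis=1)
        k = max(1, int(len(candidates) * keep))
        candidates = candidates[np.argsort(d)[:k]]   # prune before next stage
    return candidates

rng = np.random.default_rng(0)
db = {"color": rng.random((1000, 8)), "texture": rng.random((1000, 16))}
query = {"color": rng.random(8), "texture": rng.random(16)}
print(sequential_search(query, db, order=["color", "texture"]))
```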