65 results for "Algoritmo de Colisão de Partículas" (Particle Collision Algorithm)
Abstract:
This work develops a methodology for defining the maximum active power that can be injected at predefined nodes of the studied distribution networks, considering the possibility of multiple generating units accessing the system. These maximum values are obtained from an optimization study in which the resulting losses must not exceed those of the base case, i.e., the network without distributed generation, while the branch loading and voltage constraints of the system are respected. To address the problem, an algorithm is proposed based on the numerical method known as particle swarm optimization, applied both to conventional AC load flow studies and to optimal load flow for maximizing the penetration of distributed generation. The Newton-Raphson method was also incorporated for solving the load flow. The computer program was implemented in the SCILAB software. The proposed algorithm is tested with data from the 14-node IEEE network and from a 25-node, high-voltage (69 kV) network in the State of Rio Grande do Norte. The algorithm defines admissible values of nominal active power of distributed generation, expressed as a percentage of the network demand, starting from reference values.
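The optimizer at the core of the approach, particle swarm optimization, can be sketched generically as follows. This is a minimal PSO minimizer in Python, not the thesis's SCILAB implementation; the objective, bounds and parameter values are illustrative assumptions, and in the thesis's setting the objective would encode network losses with the loading and voltage constraints handled, e.g., through penalties.

```python
import random

def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO minimizer. bounds is a list of (lo, hi) per dimension."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

A constrained load-flow objective would replace the toy quadratic used in testing; the structure of the update loop is unchanged.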
Abstract:
Heat transfer between a plasma and a solid occurs mostly through radiation and the collision of particles on the material surface, heating the material from the surface toward the bulk. The thermal gradient inside the sample depends on the rate of particle collisions and on the thermal conductivity of the solid. To study this effect, AISI M35 steel samples of 9.5 mm x 3.0 mm (diameter x thickness) were quenched in a resistive furnace and tempered in plasma using the planar configuration and the hollow cathode, working at pressures of 4 and 10 mbar, respectively. By analyzing the sample microstructure and measuring the hardness along the transverse profile, it was possible to associate a tempering temperature and thereby evaluate the thermal profile indirectly. This relation was established through microstructural analysis and through the curve of hardness versus tempering temperature for samples tempered in a resistive furnace at 500, 550, 600, 650 and 700 °C. Microstructural characterization of the samples was carried out by scanning electron microscopy, optical microscopy and X-ray diffraction. All samples treated in plasma presented a superficial layer, denominated the affected shelling zone, which was not present in the samples treated in the resistive furnace. Moreover, the samples presenting the largest thermal gradient were those treated in the hollow cathode at a pressure of 4 mbar.
Abstract:
Separation methods see limited application as a result of operational costs, low throughput and the long time required to separate the fluids. These treatment methods are nonetheless important because unwanted contaminants must be extracted during oil production: the concentration of oil in the water must be minimal (around 20 to 40 ppm) before it can be discharged to the sea. Given this need for primary treatment, the objective of this project is to study and implement algorithms for the closed-loop identification of polynomial NARX (Nonlinear Auto-Regressive with Exogenous Input) models, to implement structural identification, and to compare strategies using PI control and predictive control with NARX models updated on-line, applied to a three-phase separator in series with three hydrocyclone batteries. The main goals are: to obtain an optimized phase-separation process that keeps the system regulated even in the presence of oil gushes; to show that it is possible to obtain optimized controller tunings by analyzing the loop as a whole; and to evaluate and compare the PI and predictive control strategies applied to the process. To accomplish these goals, a simulator representing the three-phase separator and hydrocyclones was used. Algorithms were developed for system identification (NARX) using RLS (Recursive Least Squares), along with methods for model structure detection. Predictive control algorithms with the NARX model updated on-line were also implemented, as well as optimization algorithms using PSO (Particle Swarm Optimization). The project concludes with a comparison of the results obtained with the PI and predictive controllers (both tuned through the particle swarm algorithm) in the simulated system. 
The conclusion is that the optimizations performed make the system less sensitive to external perturbations, and that, once optimized, the two controllers show similar results, with the predictive controller somewhat less sensitive to disturbances.
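The RLS estimator named in the abstract can be sketched in a few lines. This is a generic recursive least-squares routine with a forgetting factor, not the thesis's implementation; for a polynomial NARX model the regressor rows would contain lagged outputs, lagged inputs and their products, whereas the test below uses a plain linear ARX example for clarity.

```python
import numpy as np

def rls(phi, y, lam=0.99, delta=1000.0):
    """Recursive Least Squares with forgetting factor `lam`.
    phi: (N, p) regressor matrix; y: (N,) measured output.
    Returns the final parameter estimate theta (p,)."""
    p = phi.shape[1]
    theta = np.zeros(p)        # parameter estimate
    P = delta * np.eye(p)      # large initial covariance (weak prior)
    for k in range(len(y)):
        x = phi[k]
        Px = P @ x
        gain = Px / (lam + x @ Px)          # Kalman-style gain vector
        theta = theta + gain * (y[k] - x @ theta)  # correct by prediction error
        P = (P - np.outer(gain, Px)) / lam  # discount old information
    return theta
```

Updating the model on-line, as the abstract describes, amounts to running this same update once per new sample inside the control loop.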
Abstract:
Recently, several evolutionary computation techniques have been used in areas such as parameter estimation of linear and nonlinear dynamic processes, including processes subject to uncertainties. This motivates the use of algorithms such as the particle swarm optimizer (PSO) in these fields. However, little is known about the convergence of this algorithm, and the analyses and studies carried out have concentrated mainly on experimental results. The objective of this work is therefore to propose a new structure for PSO that allows a better analytical study of the algorithm's convergence. To this end, PSO is restructured into a matrix form and reformulated as a piecewise linear system. The parts are analyzed separately, and the insertion of a forgetting factor is proposed to guarantee that the most significant part of this system has eigenvalues inside the unit circle. The convergence of the algorithm as a whole is also analyzed, using an almost-sure convergence criterion applicable to switched systems. Experimental tests are then carried out to verify the behavior of the eigenvalues after the insertion of the forgetting factor. Subsequently, traditional parameter identification algorithms are combined with the matrix PSO, so that the identification results become as good as or better than identification with PSO alone or with the traditional algorithms alone. The results show the convergence of the particles within a bounded region, and that the functions obtained after combining the matrix PSO with the conventional algorithms generalize better for the system considered. 
The conclusion is that the hybridization, despite limiting the search for a fitter PSO particle, guarantees a minimum performance for the algorithm and still makes it possible to improve on the result obtained with the traditional algorithms, allowing the approximated system to be represented over a larger range of frequencies.
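The thesis's matrix reformulation and forgetting factor are its own contribution, but the textbook linear analysis that underlies this kind of convergence study can be illustrated. Fixing the random coefficients at a deterministic value phi, each particle coordinate obeys a 2x2 linear system, and convergence requires its eigenvalues to lie inside the unit circle. A sketch with illustrative parameter values:

```python
import numpy as np

def pso_dynamics_eigs(w, phi):
    """Eigenvalues of the deterministic per-particle PSO dynamics
        x[k+1] = x[k] + v[k+1],
        v[k+1] = w * v[k] + phi * (p - x[k]),
    written (up to a constant input) as [x; v][k+1] = A [x; v][k]."""
    A = np.array([[1.0 - phi, w],
                  [-phi,      w]])
    return np.linalg.eigvals(A)
```

Note that det(A) = w, so the product of the eigenvalue magnitudes equals the inertia weight, which is one reason w < 1 is needed for stability.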
Abstract:
This work proposes a new autonomous navigation strategy for terrestrial mobile robots, assisted by a genetic algorithm with dynamic planning, called DPNA-GA (Dynamic Planning Navigation Algorithm optimized with Genetic Algorithm). The strategy was applied in environments, both static and dynamic, in which the location and shape of the obstacles are not known in advance. At each displacement event, a control algorithm replans the route, minimizing the distance between the robot and the target while maximizing the distance from the obstacles. Using a spatial localization sensor and a set of distance sensors, the proposed navigation strategy is able to dynamically plan optimal collision-free paths. Simulations performed in different environments demonstrated that the technique provides a high degree of flexibility and robustness; to this end, several variations of the genetic parameters were applied, such as crossover rate and population size, among others. Finally, the simulation results successfully demonstrate the effectiveness and robustness of the DPNA-GA technique, validating it for real applications in terrestrial mobile robots.
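A minimal real-coded genetic algorithm makes the role of the parameters the abstract varies (crossover rate, population size) concrete. This is a generic sketch, not DPNA-GA itself: the actual fitness would combine robot-target distance and obstacle clearance, which is not reproduced here, and the operator choices (tournament selection, arithmetic crossover, Gaussian mutation, elitism) are illustrative assumptions.

```python
import random

def ga_minimize(fitness, bounds, pop_size=40, gens=100,
                crossover_rate=0.8, mutation_rate=0.1, sigma=0.3):
    """Minimal real-coded GA minimizer over box bounds [(lo, hi), ...]."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def tournament():
        a, b = random.sample(pop, 2)
        return a if fitness(a) < fitness(b) else b

    for _ in range(gens):
        new_pop = [min(pop, key=fitness)]            # elitism: keep the best
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < crossover_rate:     # arithmetic crossover
                a = random.random()
                child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            else:
                child = p1[:]
            for d in range(dim):                     # Gaussian mutation
                if random.random() < mutation_rate:
                    lo, hi = bounds[d]
                    child[d] = min(max(child[d] + random.gauss(0, sigma), lo), hi)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)
```

Raising the crossover rate speeds up exploitation of good parents, while a larger population improves exploration, which is the trade-off the simulations in the work investigate.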
Abstract:
Metal powder sintering appears to be a promising option for achieving new physical and mechanical properties, combining raw materials with new processing improvements. It has attracted interest over many years and continues to gain wide industrial application. Stainless steel is a widely accepted material because of its high corrosion resistance; however, stainless steels have poor sinterability and poor wear resistance due to their low hardness. Metal matrix composites (MMCs), combining a soft metallic matrix reinforced with carbides or oxides, have attracted considerable attention from researchers aiming to improve the density and hardness of the bulk material. This thesis focuses on processing 316L stainless steel with the addition of 3 wt% niobium carbide to control grain growth and improve densification and hardness. The starting powders were water-atomized stainless steel manufactured by Höganäs (D50 = 95.0 μm) and NbC produced at UFRN and supplied by Alfa Aesar Johnson Matthey Company, with mean crystallite sizes of 16.39 nm and 80.35 nm, respectively. Samples with additions of up to 3% NbC were mixed and mechanically milled by three routes: route 1 (R1) was milled in a planetary mill for 2 hours, while routes 2 (R2) and 3 (R3) were milled in a conventional mill for 24 and 48 hours, respectively. Each milled sample, as well as the unreinforced sample, was uniaxially cold compacted in a cylindrical steel die (Ø 5.0 mm) at 700 MPa and sintered in a vacuum furnace at 1290 °C, with a heating rate of 20 °C/min and holding times of 30 and 60 minutes. The samples containing NbC presented higher densities and hardness than those without reinforcement. The results show that nanosized NbC particles precipitate on the grain boundaries, promoting densification by eliminating pores, controlling grain growth and increasing the hardness values.
Abstract:
The discovery of new oil reservoirs in onshore and ultra-deepwater offshore fields, together with complex well trajectories, requires optimized procedures to reduce operational stops during well drilling, especially because of the high cost of platforms and equipment and the risks inherent to the operation. Among the most important aspects, the design of drilling fluids and their behavior under the different situations that may occur during the process stand out. By means of sedimentation experiments, a correlation was validated to determine the settling velocity of particles in fluids whose viscosity varies over time, applying a correction for the effective viscosity, which is a function of shear rate and time. The evolution of viscosity over time was obtained through rheological tests at a fixed shear rate, small enough not to interfere with the gelling process of the fluid. With the equations for particle settling velocity and for the fluid viscosity over time, an iterative procedure was proposed to determine the particle displacement over time. These equations were implemented in a case study simulating the sedimentation of cuttings generated during oil well drilling while operations are stopped, especially during connections and tripping, supporting the design of drilling fluids that keep the cuttings in suspension and avoid risks such as stuck pipe and, in more drastic conditions, the loss of the well.
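The iterative procedure described, settling velocity recomputed as the effective viscosity evolves, can be sketched as a simple time march. The abstract does not give its validated correlation, so Stokes' law stands in for it here; all parameter values and the viscosity law are illustrative assumptions.

```python
def settling_displacement(d, rho_s, rho_f, mu_of_t, t_end, dt=1.0, g=9.81):
    """Integrate particle drop while the fluid viscosity evolves (e.g. gelling).
    Stokes settling velocity v(t) = g d^2 (rho_s - rho_f) / (18 mu(t)) is used
    as a placeholder for the thesis's validated correlation.
    d in m, densities in kg/m^3, mu in Pa.s, times in s; returns drop in m."""
    s, t = 0.0, 0.0
    while t < t_end - 1e-12:
        v = g * d ** 2 * (rho_s - rho_f) / (18.0 * mu_of_t(t))
        s += v * dt      # displacement accumulated at the current velocity
        t += dt
    return s

# Illustrative use: viscosity growing linearly as the fluid gels.
drop = settling_displacement(d=1e-4, rho_s=2600.0, rho_f=1200.0,
                             mu_of_t=lambda t: 0.05 + 1e-3 * t, t_end=600.0)
```

As the gel builds viscosity, each step's velocity shrinks, which is exactly the mechanism the fluid design exploits to keep cuttings suspended during stops.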
Abstract:
With the growth of energy consumption worldwide, conventional reservoirs, those of so-called "easy exploration and production", are no longer meeting the global energy demand. This has led many researchers to develop projects addressing these needs, and companies in the oil sector have invested in techniques that help in locating and drilling wells. One of the techniques employed in the oil exploration process is reverse time migration (RTM), a seismic imaging method, based on the computation of the wave equation, that produces excellent images of the subsurface. RTM is considered one of the most advanced seismic imaging techniques. The economic value of the oil reserves that require RTM to be located is very high, which makes the development of these algorithms a competitive differentiator for seismic processing companies. However, RTM requires great computational power, which still hampers its practical use. The objective of this work is to explore the implementation of this algorithm on unconventional architectures, specifically on GPUs using CUDA, analyzing the difficulties of its development as well as the performance of the algorithm in its sequential and parallel versions.
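The computational kernel of RTM is the repeated time-stepping of the acoustic wave equation on a grid, which is what gets parallelized on the GPU. The sketch below shows the basic second-order finite-difference step in Python (the work itself targets CUDA; a full RTM additionally back-propagates receiver data and cross-correlates source and receiver wavefields). Grid size, velocity and the periodic boundary handling via np.roll are illustrative simplifications.

```python
import numpy as np

def acoustic_step(p_prev, p_cur, c, dt, dx):
    """One explicit finite-difference step of the 2D acoustic wave equation
    p_tt = c^2 (p_xx + p_zz): returns p at time k+1 from p at times k and k-1.
    np.roll gives a 5-point Laplacian with periodic boundaries (a toy choice;
    production codes use absorbing boundaries)."""
    lap = (np.roll(p_cur, 1, 0) + np.roll(p_cur, -1, 0)
           + np.roll(p_cur, 1, 1) + np.roll(p_cur, -1, 1)
           - 4.0 * p_cur) / dx ** 2
    return 2.0 * p_cur - p_prev + (c * dt) ** 2 * lap
```

Each grid point updates independently from its neighbors, which is why this stencil maps so naturally onto one CUDA thread per point. The explicit scheme is stable only when c*dt/dx stays below the 2D CFL limit of 1/sqrt(2).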
Abstract:
Many challenges arise in the petroleum industry. One of them is preventing fluid influx during drilling and cementing. Gas migration can occur as a result of pressure imbalance inside the well, when the well pressure becomes lower than the gas zone pressure; in cementing operations this occurs during the transition period of the cement slurry (from fluid to solid). In this work, a methodology was developed to evaluate gas migration during drilling and cementing operations. Based on the gel strength concept, the initial time of gas migration was determined through experimental tests. A mechanistic model was developed to obtain an equation that evaluates the displacement of a bubble through the fluid while it gels. Since this is time-dependent behavior, dynamic rheological measurements were made to evaluate the viscosity over time. For the drilling fluids analyzed, it was verified that fast, non-progressive gelation is desirable in order to reduce gas migration without affecting the operational window (the difference between pore and fracture pressures). For the cement slurries analyzed, the most appropriate behavior is to remain fluid for longer below the critical gel strength, maintaining the hydrostatic pressure above the gas zone pressure, and afterwards to gel quickly, reducing gas migration. The model developed simulates the operational conditions in advance and allows changes in the operational and fluid designs to obtain a safer condition for well construction.
Abstract:
Discrepancies between classical model predictions and experimental data for deep bed filtration have been reported by various authors. In order to understand these discrepancies, an analytic continuum model for deep bed filtration is proposed. In this model, a filter coefficient is attributed to each distinct retention mechanism (straining, diffusion, gravity interception, etc.). It was shown that these coefficients generally cannot be merged into an effective filter coefficient, as assumed in the classical model. Furthermore, the analytic solutions derived for the proposed model were applied to fit experimental data, and very good agreement between the experimental data and the proposed model predictions was obtained. Comparison of the results with empirical correlations allowed the dominant retention mechanisms to be identified. In addition, it was shown that the larger the ratio of particle to pore size, the more intensive the straining mechanism and the larger the discrepancies between the experimental data and classical model predictions. The classical and proposed models were compared via statistical analysis, and the p-values obtained allow concluding that the proposed model should be preferred, especially when straining plays an important role. This work also studies deep bed filtration through porous media with a finite retention capacity. In this case, it was observed that changes in the boundary conditions over time must be considered. A solution for such a model was obtained using different filtration coefficient functions, and it was shown how to build a solution for any filtration coefficient. 
Even when the same filtration coefficient is considered, the classical model and the one proposed here give different predictions for the concentration of particles retained in the porous medium and for the suspended particles at the outlet of the medium.
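The classical baseline against which the thesis argues is easy to state: with constant filter coefficients, the steady suspended-concentration profile obeys dC/dx = -(sum of lambda_i) C, so the mechanisms do collapse into a single effective coefficient and the profile is exponential. The proposed model's point is that this merging fails once the coefficients vary. A sketch of the classical baseline only (not the thesis's model):

```python
import math

def suspended_profile(c0, lambdas, x):
    """Classical deep bed filtration profile for constant filter coefficients
    lambdas (1/m) at depth x (m):
        dC/dx = -(sum of lambda_i) * C  =>  C(x) = C0 * exp(-sum(lambda_i) * x)
    Here the per-mechanism coefficients merge into one effective coefficient,
    the very assumption the proposed model shows to fail in general."""
    return c0 * math.exp(-sum(lambdas) * x)
```

Fitting breakthrough data with this single exponential is what produces the reported discrepancies when straining dominates.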
Abstract:
The virtuosity that might be expected of globalization and neoliberalism has shown signs of deterioration in contractual relations, especially in mass consumption contracts, generating innumerable situations that offend the basic rights and constitutionally protected goods of the contracting parties. In today's world, even against his own wishes, the individual is practically compelled to contract, driven by needs and customs that are entirely imposed, mainly in the face of the essentiality of the services or goods contracted. In view of so many unexpected changes in civil and consumer relations dictated by globalization, the question arises whether private law, and more specifically civil law, is adequately prepared to deal with these new parameters of the economy. This dissertation investigates whether globalization and the consequent neoliberalism, at the beginning of the third millennium, will imply a revival of the principles and basic paradigms of contract that consolidated and sustained the liberal State for more than two centuries. The study of this phenomenon gains importance as the decline of the social State (Welfare State) deepens, with the weakening and loss of autonomy of state authority, above all in countries of late modernity, as is the case of Brazil, which shows deep deficiencies in providing or promoting, with a minimum of quality and efficiency, public services considered essential to the collectivity and enshrined in the Federal Constitution as basic rights or constitutionally protected goods, such as health, education, housing, security, social security, insurance, and the protection of maternity, childhood, the elderly and the disabled. 
In the end, it is concluded that the incidence of the basic rights enshrined in the Constitution, in the interpretation of contractual conflicts whose object involves constitutionally protected rights or goods, in the universe of the globalized economy and of neoliberalism, constitutes one of the few ways, perhaps the only one, that remain to deal more adequately with contractual relations in the face of private holders of social-economic power, even considering the presence of general clauses in the infraconstitutional civil and consumer legislation. Such power necessarily implies an imbalance between the parties, whose realignment depends on the effect and the weight to be conferred on the basic right at stake in the private relation. The Constitution, by allowing basic rights to bind private relations, would assume the contours of a basic statute of the whole collectivity, protecting the individual against power, whether public or private.
Abstract:
The domination of violence by the rule of law awakened a tension between the exercise of the punitive power and the right to counsel. Throughout the recent history of criminal law, however, this clash of forces has been decided in favor of the punitive power. In this perspective, the present work intends to submit the guarantee of defense to critical judgment, seeking to reconcile its content with the constitutional State of law. To do so, it is necessary to recognize the imbalance of the situation, but without asserting the superiority of either of these elements: the State must both punish the guilty and acquit the innocent. Although the law is far from achieving a harmonious discourse, the guarantee of defense must coexist with the punitive power as part of a single public interest, namely, the doing of criminal justice. Thus, the existence of a sustainable balance between the punitive power and the guarantee of defense depends on the minimal intervention of criminal law and also on the judicial stance in the concrete case. The present work therefore confronts the moment of crisis of criminal law, consolidated with the advent of a new way of thinking shaped by procedural guarantees, which demands the overcoming of old concepts. The constitutional State of law not only gives effectiveness to the right to counsel, but likewise seeks to realize the right of action and criminal justice as a whole. Knowing that the philosophy of language raises doubts about certainty, truth and judgment, it must be understood that the guarantee of defense is no longer a simple idea; rather, along the winding paths of communication, we seek to find what the judge's role is when facing this new reality.
Abstract:
Telecommunications play a key role in contemporary society. As new technologies reach the market, the demand for new products and services that depend on the offered infrastructure also grows, making the problems of planning telecommunication networks, despite the advances in technology, increasingly large and complex. Many of these problems, however, can be formulated as combinatorial optimization models, and heuristic algorithms can help solve them in the planning phase. In this project, two pure metaheuristic implementations were developed, a Genetic Algorithm (GA) and a Memetic Algorithm (MA), plus a third, hybrid implementation, a Memetic Algorithm with Vocabulary Building (MA+VB), for a telecommunications problem known in the literature as the SONET Ring Assignment Problem (SRAP). The SRAP arises during the planning stage of the physical network and consists in selecting connections between a number of locations (customers) so as to satisfy a series of restrictions at the lowest possible cost. This problem is NP-hard, so efficient (polynomial-time) exact algorithms are not known and may, indeed, not even exist.
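Whatever metaheuristic explores the SRAP search space, it needs a fast evaluation of a candidate assignment of customers to rings. The sketch below computes per-ring traffic loads under a simplified load model (every demand loads the rings of both endpoints, and inter-ring demands also transit the federal ring); the exact cost and feasibility model used in the project may differ, so treat this as an illustrative assumption.

```python
def srap_ring_loads(assignment, demand):
    """Per-ring traffic loads for a candidate SRAP solution.
    assignment: dict customer -> ring id; demand: dict (u, v) -> traffic.
    Returns (loads per ring, traffic transiting the federal ring).
    Feasibility would then require every load and the federal traffic
    to fit within the ring bandwidth."""
    loads = {}
    federal = 0.0
    for (u, v), d in demand.items():
        ru, rv = assignment[u], assignment[v]
        loads[ru] = loads.get(ru, 0.0) + d
        if rv != ru:                 # inter-ring demand
            loads[rv] = loads.get(rv, 0.0) + d
            federal += d             # crosses via the federal ring
    return loads, federal
```

In a GA or MA for SRAP, a chromosome is simply the assignment dict (or an equivalent vector), and this evaluator, plus penalties for capacity violations, serves as the fitness function.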