160 results for THIRD DIMENSION (COMPUTATIONAL MODELING)
LINS, Filipe C. A. et al. Modelagem dinâmica e simulação computacional de poços de petróleo verticais e direcionais com elevação por bombeio mecânico. In: CONGRESSO BRASILEIRO DE PESQUISA E DESENVOLVIMENTO EM PETRÓLEO E GÁS, 5., 2009, Fortaleza, CE. Anais... Fortaleza: CBPDPetro, 2009.
Abstract:
This work presents the development of a model and a computer simulation of a sucker rod pumping system. The system takes into account the well geometry, the flow through the tubing, the dynamic behavior of the rod string, and the use of an induction motor model. The rod string was modeled using concentrated parameters, allowing the use of systems of ordinary differential equations to simulate its behavior.
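As an illustration of the concentrated-parameter approach the abstract describes, the rod string can be sketched as a chain of mass-spring-damper segments driven at the surface. The sketch below is minimal and all numerical values, the segment count, and the sinusoidal surface motion are hypothetical stand-ins for the thesis's full model (which also couples well geometry, tubing flow and the induction motor):

    # Lumped-parameter rod-string sketch (illustrative parameters only).
    import numpy as np
    from scipy.integrate import solve_ivp

    N = 5            # number of rod segments (hypothetical)
    m = 50.0         # segment mass [kg]
    k = 8.0e4        # segment stiffness [N/m]
    c = 200.0        # viscous damping [N s/m]

    def surface_motion(t):
        # simplified sinusoidal polished-rod motion standing in for the pumping unit
        return 1.0 * np.sin(2 * np.pi * 0.2 * t)

    def rhs(t, y):
        x, v = y[:N], y[N:]
        a = np.zeros(N)
        for i in range(N):
            up = surface_motion(t) if i == 0 else x[i - 1]
            a[i] += (k * (up - x[i]) - c * v[i]) / m   # pull from segment above
            if i < N - 1:
                a[i] += k * (x[i + 1] - x[i]) / m      # pull from segment below
        return np.concatenate([v, a])

    sol = solve_ivp(rhs, (0.0, 30.0), np.zeros(2 * N), max_step=0.01)
    print("plunger displacement at t = 30 s:", sol.y[N - 1, -1])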
Abstract:
A numerical study on the behavior of tied-back retaining walls in sand, using the finite element method (FEM), is presented. The analyses were performed using the software Plaxis 2D, and were focused on the development of horizontal displacements, horizontal stresses, shear forces and bending moments in the structure during the construction process. Emphasis was placed on evaluating the effects of wall embedment, tie-back horizontal spacing, wall thickness, and free anchor length on wall behavior. A representative soil profile of a specific region of the City of Natal, Brazil, was used in the numerical analyses. New facilities built in this region often include retaining structures of the same type studied herein. Soil behavior was modeled using the Mohr-Coulomb constitutive model, whereas the structural elements were modeled using the linear elastic model. Shear strength parameters of the soil layers were obtained from direct shear tests conducted on samples collected at the studied site. Deformation parameters were obtained from empirical correlations with SPT test results carried out at the studied site. The results of the numerical analyses revealed that the effect of wall embedment on the investigated parameters is virtually negligible. Conversely, the tie-back horizontal spacing plays an important role in the investigated parameters. The results also demonstrated that the wall thickness significantly affects the wall horizontal displacements and the shear forces and bending moments within the retaining structure. However, wall thickness was not found to influence horizontal stresses in the structure.
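The Mohr-Coulomb model named above bounds the shear stress on any plane by a linear function of the effective normal stress. A minimal sketch of that criterion, with illustrative parameters rather than the Natal site values:

    # Mohr-Coulomb shear strength: tau_f = c + sigma_n * tan(phi).
    import math

    def mohr_coulomb_tau_f(sigma_n, c, phi_deg):
        """Shear strength [kPa] on a plane with effective normal stress sigma_n [kPa]."""
        return c + sigma_n * math.tan(math.radians(phi_deg))

    c, phi = 5.0, 32.0   # cohesion [kPa] and friction angle [deg] (illustrative)
    for sigma_n in (50.0, 100.0, 200.0):
        print(sigma_n, "kPa ->", round(mohr_coulomb_tau_f(sigma_n, c, phi), 1), "kPa")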
Abstract:
In industrial informatics, several attempts have been made to develop notations and semantics for classifying and describing different kinds of system behavior, particularly in the modeling phase. Such attempts provide the infrastructure to solve real engineering problems and to construct practical systems that aim mainly to increase the productivity, quality, and security of the process. Despite the many studies that have attempted to develop friendly methods for industrial controller programming, controllers are still programmed by conventional trial-and-error methods and, in practice, there is little written documentation on these systems. The ideal solution would be a computational environment that allows industrial engineers to implement the system using a high-level language that follows international standards. Accordingly, this work proposes a methodology for plant and control modelling of discrete event systems that include sequential, parallel and timed operations, using a formalism based on Statecharts, denominated Basic Statechart (BSC). The methodology also permits automatic procedures to validate and implement these systems. To validate the methodology, two case studies with typical examples from the manufacturing sector are presented. The first example shows a sequential control for a tagged machine, which is used to illustrate dependences between the devices of the plant. In the second example, more than one strategy for controlling a manufacturing cell is discussed. The model with no control has 72 states (distinct configurations); the model with sequential control generated 20 different states, but acts on only 8 distinct configurations. The model with parallel control generated 210 different states, but acts on only 26 distinct configurations, and is therefore a less restrictive control strategy than the previous one. Lastly, an example is presented to highlight the modular characteristic of the methodology, which is very important for the maintenance of applications. In this example, the sensors for identifying pieces in the plant were removed, so changes in the control model are needed to transmit the information from the input buffer sensor to the other positions of the cell.
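The sequential control described above can be pictured, in its flattest form, as a table-driven state machine. The sketch below is hypothetical (the BSC formalism itself is richer, with hierarchy, parallelism and timing); states and events are invented for illustration:

    # Minimal sequential state-machine sketch in the spirit of a BSC controller.
    TRANSITIONS = {
        ("idle", "piece_at_input"): "loading",
        ("loading", "loaded"): "tagging",
        ("tagging", "tag_done"): "unloading",
        ("unloading", "unloaded"): "idle",
    }

    def step(state, event):
        # events with no transition from the current state are simply ignored
        return TRANSITIONS.get((state, event), state)

    state = "idle"
    for ev in ["piece_at_input", "loaded", "tag_done", "unloaded"]:
        state = step(state, ev)
        print(ev, "->", state)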
Abstract:
Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that they are unsolvable in polynomial time. Initially, these solutions were focused on heuristics; currently, metaheuristics are used more for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of a heuristic called the "Operon" for the construction of the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, mainly using statistical methodology (Cluster Analysis and Principal Component Analysis); and the utilization of statistical analyses that are adequate for evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains to promote an "intelligent" search in the space of solutions. The Traveling Salesman Problem (TSP) is used as the application for a transgenetic algorithm known as ProtoG. A strategy is also proposed for the renewal of part of the chromosome population, triggered by adopting a minimum limit on the coefficient of variation of the fitness function of the individuals, calculated over the population. Statistical methodology is used for the evaluation of the performance of four algorithms: the proposed ProtoG, two memetic algorithms and a Simulated Annealing algorithm. Three performance analyses of these algorithms are proposed. The first is accomplished through Logistic Regression, based on the probability of the algorithm under test finding an optimal solution for a TSP instance. The second is accomplished through Survival Analysis, based on the probability distribution of the execution time observed until an optimal solution is achieved. The third is accomplished by means of a non-parametric Analysis of Variance, considering the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one instances of the Euclidean TSP with sizes of up to 1,655 cities. The first two experiments deal with the adjustment of four parameters used in the ProtoG algorithm in an attempt to improve its performance. The last four were undertaken to evaluate the performance of ProtoG in comparison to the three other algorithms. For these sixty-one instances, it was concluded on the grounds of statistical tests that there is evidence that ProtoG performs better than these three algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three trials, in which the performance of the algorithms was evaluated through PES, the average PES obtained with ProtoG was less than 1% in almost half of these instances, reaching its greatest average, 3.52%, for an instance of 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since it is not rare to find average PES values greater than 10% reported in the literature for instances of this size.
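Two of the quantities above are simple enough to pin down in code: the PES measure and the coefficient-of-variation trigger for population renewal. A minimal sketch (the threshold value is hypothetical; the abstract does not give it):

    # PES and a CV-based population-renewal trigger.
    import statistics

    def pes(found_cost, best_known_cost):
        """Percent Error of the Solution relative to the best known solution."""
        return 100.0 * (found_cost - best_known_cost) / best_known_cost

    def should_renew(fitness_values, cv_min=0.05):
        cv = statistics.stdev(fitness_values) / statistics.mean(fitness_values)
        return cv < cv_min   # population too homogeneous -> renew part of it

    print(pes(10352.0, 10000.0))                     # 3.52, cf. the 1,173-city instance
    print(should_renew([100.0, 101.0, 99.5, 100.3]))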
Abstract:
This study uses a computational model that considers the statistical characteristics of the wind and the reliability characteristics of a wind turbine, such as failure and repair rates, representing the wind farm by a Markov process in order to determine the estimated annual energy generated, and compares it with a real case. This model can also be used in reliability studies, and provides some performance indicators that help in analyzing the feasibility of setting up a wind farm, provided the power curve and wind speed measurements are available. To validate this model, simulations were done using the database of the PETROBRAS wind farm in Macau. The results were very close to the real ones, thereby confirming that the model successfully reproduced the behavior of all components involved. Finally, the results of this model were compared with the estimated annual energy obtained by modeling the wind distribution with a Weibull statistical distribution.
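The core arithmetic of such an estimate can be sketched with the simplest possible pieces: a two-state Markov availability model (failure rate lam, repair rate mu, steady-state availability mu/(lam + mu)) combined with a Weibull wind distribution and a crude power curve. All numbers below are illustrative, not the Macau wind farm data, and the real model's Markov chain is over the whole farm, not a single turbine:

    # Estimated annual energy: Markov availability x Weibull wind x power curve.
    import math

    lam, mu = 1.0 / 1500.0, 1.0 / 80.0   # per-hour failure and repair rates (hypothetical)
    availability = mu / (lam + mu)       # steady-state two-state Markov availability

    k, c = 2.0, 8.0                      # Weibull shape and scale [m/s] (hypothetical)
    def weibull_pdf(v):
        return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

    def power_kw(v):                     # crude curve: cut-in 3, rated 12, cut-out 25 m/s
        if v < 3 or v > 25:
            return 0.0
        return 2000.0 if v >= 12 else 2000.0 * ((v - 3) / 9) ** 3

    dv = 0.1                             # numerical integration over wind speeds
    mean_power = sum(power_kw(i * dv) * weibull_pdf(i * dv) * dv for i in range(1, 400))
    print("estimated annual energy [MWh]:", availability * mean_power * 8760 / 1000)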
Abstract:
This work presents a modelling and identification method for a wheeled mobile robot, including the actuator dynamics. Instead of the classic modelling approach, where the robot position coordinates (x, y) are utilized as state variables (resulting in a nonlinear model), the proposed discrete model is based on the travelled distance increment Delta_l. Thus, the resulting model is linear and time invariant, and it can be identified through classical methods such as Recursive Least Squares. This approach has a problem: Delta_l cannot be directly measured. In this paper, this problem is solved using an estimate of Delta_l based on a second-order polynomial approximation. Experimental data were collected and the proposed method was used to identify the model of a real robot.
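A minimal sketch of the identification step follows, using synthetic data in place of the robot measurements. The first-order model structure, the true parameters and the noise level are all hypothetical; the real work additionally estimates Delta_l via the polynomial approximation:

    # Recursive least-squares identification of a linear, time-invariant model
    # Delta_l[k] = a * Delta_l[k-1] + b * u[k-1], with synthetic data.
    import numpy as np

    theta = np.zeros(2)                 # parameter estimates (a, b)
    P = np.eye(2) * 1000.0              # covariance, large initial uncertainty
    a_true, b_true = 0.8, 0.05

    dl, rng = 0.0, np.random.default_rng(0)
    for k in range(200):
        u = rng.uniform(-1, 1)                      # wheel-motor input
        phi = np.array([dl, u])                     # regressor
        dl = a_true * dl + b_true * u + rng.normal(0, 1e-4)
        K = P @ phi / (1.0 + phi @ P @ phi)         # RLS gain
        theta = theta + K * (dl - phi @ theta)      # parameter update
        P = P - np.outer(K, phi @ P)                # covariance update

    print("estimated (a, b):", theta)               # should approach (0.8, 0.05)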
Abstract:
The main objective of this work is the application of Artificial Neural Networks (ANN) to the solution of problems involving RF/microwave devices, such as the prediction of the frequency response of some structures in a region of interest. Artificial Neural Networks are presently an alternative to the current methods of analysis of microwave structures, since they are capable of learning and, more importantly, of generalizing the acquired knowledge from any type of available data, keeping the precision of the original technique while adding the low computational cost of neural models. For this reason, artificial neural networks are increasingly being used for modeling microwave devices. Multilayer Perceptron and Radial Basis Function models are used in this work. The advantages and disadvantages of these models and the corresponding training algorithms of each one are described. Planar microwave devices, such as Frequency Selective Surfaces and microstrip antennas, are in evidence due to the increasing need for filtering and separation of electromagnetic waves and for the miniaturization of RF devices. Therefore, the study of the structural parameters of these devices in a fast and accurate way is of fundamental importance. The presented results show the capability of the neural techniques for modeling both Frequency Selective Surfaces and antennas.
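The neuro-modelling workflow above (train on sampled responses, then query cheaply) can be sketched with an off-the-shelf MLP. The target curve below is a synthetic resonance, not a real FSS or antenna response, and the network size is arbitrary:

    # MLP learning a device frequency response (synthetic |S21| resonance).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    f = np.linspace(1.0, 10.0, 300).reshape(-1, 1)          # frequency [GHz]
    s21 = 1.0 / (1.0 + ((f.ravel() - 5.0) / 0.5) ** 2)      # synthetic resonance

    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
    net.fit(f, s21)                                          # training phase
    print("predicted |S21| at 5 GHz:", net.predict([[5.0]])[0])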
Abstract:
Currently there is still a high demand for quality control in the manufacturing of mechanical parts. This keeps alive the need for the inspection of final products, ranging from dimensional analysis to the chemical composition of products. Usually this task may be done through various nondestructive and destructive methods that ensure the integrity of the parts. The results generated by these modern inspection tools end up not being able to geometrically define the real damage and, therefore, cannot be properly displayed on a computer screen. Virtual 3D visualization may help identify damage that would hardly be detected by other methods. There are commercial software packages that seek to address the stages of design and simulation of mechanical parts in order to predict possible damage and diminish potential undesirable events. However, the challenge of developing software capable of integrating the various activities of design, product inspection, non-destructive testing results, and damage simulation still needs the attention of researchers. This was the motivation to conduct a methodological study for the implementation of a versatile CAD/CAE computational kernel capable of helping programmers develop software applied to the design and simulation of mechanical parts under stress. Interesting results obtained with the developed kernel are presented, showing that it was successfully applied to design case studies including parts with specific geometries, namely: mechanical prostheses, heat exchangers, and oil and gas piping. Finally, conclusions are presented regarding the experience of merging CAD and CAE theories to develop the kernel, so as to result in a tool adaptable to various applications in the metalworking industry.
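One way such a kernel can tie CAD geometry to CAE analysis is a shared meshing interface that every part type implements. The sketch below is entirely hypothetical (the abstract does not detail the kernel's architecture, and all class and method names are invented):

    # Hypothetical kernel interface: geometry types expose a mesh() for CAE use.
    from abc import ABC, abstractmethod

    class Geometry(ABC):
        @abstractmethod
        def mesh(self, element_size: float) -> list:
            """Discretize the part into nodes/elements for downstream analysis."""

    class Pipe(Geometry):
        def __init__(self, length, radius):
            self.length, self.radius = length, radius

        def mesh(self, element_size):
            # 1-D axial discretization: (axial position, radius) node pairs
            n = max(1, int(self.length / element_size))
            return [(i * self.length / n, self.radius) for i in range(n + 1)]

    print(len(Pipe(2.0, 0.1).mesh(0.25)), "nodes")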
Abstract:
This work aims at the implementation and adaptation of a computational model for the study of the Fischer-Tropsch reaction in a slurry bed reactor fed with synthesis gas (CO + H2) for the selective production of hydrocarbons (CnHm), with emphasis on evaluating the influence of operating conditions on the distribution of products formed during the reaction. The present model takes into account rigorous phase-equilibrium effects in a reactive flash drum, a detailed kinetic model capable of predicting the formation of each chemical species of the reaction system, as well as control loops for the process variables pressure and slurry-phase level. The resulting system of Differential-Algebraic Equations was solved using the computational code DASSL (Petzold, 1982). The consistent initialization of the problem was based on the phase equilibrium formed by the components existing in the reactor. In addition, the index of the system was reduced to 1 by the introduction of control laws that govern the output of the reactor products. The results were compared qualitatively with experimental data collected at the Fischer-Tropsch Synthesis plant installed at the Laboratório de Processamento de Gás - CTGÁS-ER, Natal/RN.
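The thesis uses a detailed kinetic model; as a much simpler stand-in for the product-distribution idea, the classical Anderson-Schulz-Flory (ASF) relation gives the mass fraction of chains with n carbons as W_n = n (1 - alpha)^2 alpha^(n-1), where alpha is the chain-growth probability. A sketch with an illustrative alpha:

    # Anderson-Schulz-Flory product distribution (illustrative, not the thesis model).
    alpha = 0.85   # chain-growth probability (hypothetical)

    def asf_mass_fraction(n, alpha):
        return n * (1 - alpha) ** 2 * alpha ** (n - 1)

    for n in (1, 5, 10, 20):
        print(f"C{n}: {asf_mass_fraction(n, alpha):.4f}")
    print("total:", sum(asf_mass_fraction(n, alpha) for n in range(1, 500)))  # ~1.0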
Abstract:
Digital Elevation Models (DEM) are numerical representations of a portion of the earth's surface. Among the several factors which affect the quality of a DEM, attention should be given to the input data and the choice of the interpolating algorithm. On the other hand, several numerical models are used nowadays to characterize nearshore hydrodynamics and morphological changes in coastal areas, whose validation is based on field data collection. Independently of the complexity of the physical processes which are modeled, little attention has been given to the intrinsic bathymetric interpolation built within the numerical models of the specific application. Therefore, this study aims to investigate and quantify the influence of the bathymetry, as obtained by a DEM, on the hydrodynamic circulation model of a coastal stretch off the coast of the State of Rio Grande do Norte, Northeast Brazil. This coastal region is characterized by strong hydrodynamic and littoral processes, resulting in a very dynamic morphology with shallow coastal bathymetry. Important economic activities, such as oil exploitation and production, fisheries, salt ponds, shrimp farms and tourism, also bring impacts upon the local ecosystems and themselves influence the local hydrodynamics. This makes the region one of the most important for the development of the State, but also enhances the possibility of serious environmental accidents. As hydrodynamic model, SisBaHiA® - Environmental Hydrodynamics System (Sistema Básico de Hidrodinâmica Ambiental) was chosen, for it has been successfully employed at several locations along the Brazilian coast. This model was developed at the Coastal and Oceanographical Engineering Group of the Ocean Engineering Program at the Federal University of Rio de Janeiro. Several interpolating methods were tested for the construction of the DEM, namely Natural Neighbor, Kriging, Triangulation with Linear Interpolation, Inverse Distance to a Power, Nearest Neighbor, and Minimum Curvature, all implemented within the software Surfer®. The bathymetry used as reference for the DEM was obtained from nautical charts provided by the Brazilian Hydrographic Service of the Brazilian Navy and from a field survey conducted in 2005. Changes in flow velocity and free surface elevation were evaluated under three aspects: a spatial view along three profiles perpendicular to the coast and one profile longitudinal to the coast; a temporal view from three central nodes of the grid during 30 days; and a hodograph analysis of the U and V velocity components over different tidal cycles. Small, negligible variations in sea surface elevation were identified. However, the differences in flow and direction of velocities were significant, depending on the DEM.
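One of the interpolators compared above, Inverse Distance to a Power (IDW), is simple enough to sketch directly; the bathymetric soundings below are synthetic, not the surveyed data:

    # Inverse Distance to a Power (IDW) interpolation of scattered depths.
    import numpy as np

    def idw(x, y, xs, ys, zs, power=2.0):
        d = np.hypot(xs - x, ys - y)
        if np.any(d < 1e-12):                  # query coincides with a sounding
            return zs[np.argmin(d)]
        w = 1.0 / d ** power                   # closer soundings weigh more
        return np.sum(w * zs) / np.sum(w)

    xs = np.array([0.0, 1.0, 0.0, 1.0])
    ys = np.array([0.0, 0.0, 1.0, 1.0])
    zs = np.array([-5.0, -7.0, -6.0, -9.0])    # depths [m] (synthetic)
    print("interpolated depth at (0.5, 0.5):", idw(0.5, 0.5, xs, ys, zs))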
Abstract:
In this thesis we study some problems related to petroleum reservoirs using methods and concepts of Statistical Physics. The thesis is divided into two parts. The first one introduces a study of the percolation problem in a random multifractal support, motivated by its potential application in modelling oil reservoirs. We developed a heterogeneous and anisotropic grid that follows a random multifractal distribution of its sites. We then determine the percolation threshold for this grid, the fractal dimension of the percolating cluster, and the critical exponents β and ν. In the second part, we propose an alternative systematic approach to modelling and simulating oil reservoirs. We introduce a statistical model based on a stochastic formulation of Darcy's Law. In this model, the distribution of permeabilities is locally equivalent to the basic model of bond percolation.
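The spanning-cluster test behind a percolation threshold can be sketched on a regular lattice (the thesis works on a random multifractal support; the uniform grid here is only for illustration):

    # Site percolation: estimate the fraction of samples with a spanning cluster.
    import numpy as np
    from scipy.ndimage import label

    def spans(p, L, rng):
        occupied = rng.random((L, L)) < p
        labels, _ = label(occupied)                       # 4-connected clusters
        top, bottom = set(labels[0, :]) - {0}, set(labels[-1, :]) - {0}
        return bool(top & bottom)                         # cluster touches both edges

    rng, L = np.random.default_rng(1), 64
    for p in (0.50, 0.59, 0.70):
        frac = np.mean([spans(p, L, rng) for _ in range(50)])
        print(f"p = {p:.2f}: spanning fraction = {frac:.2f}")  # jumps near p_c ~ 0.5927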
Abstract:
The Quadratic Minimum Spanning Tree Problem (QMST) is a version of the Minimum Spanning Tree Problem in which, besides the traditional linear costs, there is a quadratic cost structure. This quadratic structure models interaction effects between pairs of edges. Linear and quadratic costs are added up to constitute the total cost of the spanning tree, which must be minimized. When these interactions are restricted to adjacent edges, the problem is named the Adjacent Only Quadratic Minimum Spanning Tree Problem (AQMST). AQMST and QMST are NP-hard problems that model several problems of transport and distribution network design. In general, AQMST arises as a more suitable model for real problems. Although, in the literature, linear and quadratic costs are added, in real applications they may be conflicting, in which case it may be interesting to consider these costs separately. In this sense, Multiobjective Optimization provides a more realistic model for QMST and AQMST. A review of the state of the art found no papers regarding these problems under a biobjective point of view. Thus, the objective of this thesis is the development of exact and heuristic algorithms for the Biobjective Adjacent Only Quadratic Spanning Tree Problem (bi-AQST). As theoretical foundation, other NP-hard problems directly related to bi-AQST are discussed: the QMST and AQMST problems. Backtracking and branch-and-bound exact algorithms are proposed for the target problem of this investigation. The heuristic algorithms developed are: Pareto Local Search, Tabu Search with ejection chain, a Transgenetic Algorithm, NSGA-II, and a hybridization of the last two proposals called NSTA. The proposed algorithms are compared to each other through performance analysis on computational experiments with instances adapted from the QMST literature. With regard to the exact algorithms, the analysis considers, in particular, the execution time. In the case of the heuristic algorithms, besides execution time, the quality of the generated approximation sets is evaluated by means of quality indicators. Appropriate statistical tools are used to measure the performance of the exact and heuristic algorithms. Considering the set of instances adopted as well as the criteria of execution time and quality of the generated approximation sets, the experiments showed that the Tabu Search with ejection chain obtained the best results, with the transgenetic algorithm ranking second. The PLS algorithm obtained good-quality solutions, but at a very high computational time compared to the other (meta)heuristics, taking third place. The NSTA and NSGA-II algorithms took the last positions.
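The AQMST cost structure itself is easy to make concrete: linear edge costs plus quadratic terms only for pairs of edges sharing a vertex. A minimal sketch with made-up data:

    # Total AQMST cost of a spanning tree: linear + adjacent-pair quadratic costs.
    from itertools import combinations

    tree = [(0, 1), (1, 2), (1, 3)]                      # edges of a spanning tree
    linear = {(0, 1): 4.0, (1, 2): 3.0, (1, 3): 5.0}
    quad = {((0, 1), (1, 2)): 2.0, ((0, 1), (1, 3)): 1.0, ((1, 2), (1, 3)): 0.5}

    def aqmst_cost(tree, linear, quad):
        cost = sum(linear[e] for e in tree)
        for e, f in combinations(tree, 2):
            if set(e) & set(f):                          # adjacent edges only
                cost += quad.get((e, f), quad.get((f, e), 0.0))
        return cost

    print("total tree cost:", aqmst_cost(tree, linear, quad))  # 12 + 3.5 = 15.5

In the biobjective version studied in the thesis, the two sums would be kept as separate objectives instead of being added.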
Abstract:
This thesis presents the results of the application of the third-generation SWAN (Simulating WAves Nearshore) numerical model, which simulates the propagation and dissipation of sea wave energy, to the northern continental shelf of Rio Grande do Norte, in order to determine the wave climate, calibrate and validate the model, and assess its potential and limitations for the region of interest. After validation of the wave climate, the results were integrated with information on the submarine relief and the plan-view morphology of the beach and barrier island systems. In the second phase, the objective was to analyze the evolution of the waves and their interaction with the shallow seabed, along three transverse profiles oriented from N to S, distributed along the longitudinal parallels X = 774000 W, 783000 W and 800000 W. Subsequently, directional wave and wind values were extracted for all months between November 2010 and November 2012, to analyze the impact of these forcings on the study area and thus understand the behavior of the morphological variations according to the temporal variability over the year. Based on the results of the modeling and its integration with correlated data and with the planimetric variations of the Soledade and Minhoto beach systems and the Ponta do Tubarão and Barra do Fernandes barrier island systems, the following conclusions were obtained: SWAN could reproduce and determine the wave climate on the northern continental shelf of RN; the results show a trend similar to the measurements for the temporal variations of significant height (Hs, m) and mean wave period (Tmed, s); however, the parametric statistics were low for the estimates of the maximum values in most of the analyzed periods when compared with data from PT 1 and PT 2 (measurement points), with alternation of significant wave heights, at times overestimated, and occasional overlap of swell episodes. By analyzing the spatial distribution of the wave climate and its interaction with the underwater compartmentalization, it was concluded that wave propagation interacts with the seafloor, with changes in significant heights whenever the waves interact with seafloor features (beachrocks, symmetric and asymmetric longitudinal dunes, paleochannels, among others) in the outer, middle and inner shelf regions. Finally, it is concluded that the study of stability areas allows identification of the most unstable regions, confirming that the greatest range of variation indicates greater instability and consequent sensitivity to the hydrodynamic processes operating in the coastal region, with positive or negative variation, especially at the Ponta do Tubarão and Barra do Fernandes barrier island systems, which are more susceptible to wave impacts, as evidenced by the retreat of the shoreline.
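The two validated statistics, Hs and Tmed, have standard definitions that can be sketched from a surface elevation record; the record below is synthetic, not the PT 1/PT 2 measurements:

    # Significant wave height Hs = 4*sqrt(m0) and a zero-crossing mean period.
    import numpy as np

    dt = 0.5                                             # sampling interval [s]
    t = np.arange(0, 1800, dt)
    rng = np.random.default_rng(2)
    eta = 0.5 * np.sin(2 * np.pi * t / 8.0) + 0.1 * rng.standard_normal(t.size)

    m0 = np.var(eta)                                     # zeroth spectral moment
    hs = 4.0 * np.sqrt(m0)
    crossings = np.sum((eta[:-1] < 0) & (eta[1:] >= 0))  # upward zero crossings
    tmed = t[-1] / max(crossings, 1)
    print(f"Hs = {hs:.2f} m, Tmed = {tmed:.1f} s")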
Abstract:
In Percolation Theory, functions like the probability that a given site belongs to the infinite cluster, the average size of clusters, etc. are described through power laws and critical exponents. This dissertation uses a method called Finite Size Scaling to provide estimates of those exponents. The dissertation is divided in four parts. The first briefly presents the main results of Site Percolation Theory in d = 2 dimensions. Besides, some quantities important for the determination of the critical exponents and for the understanding of phase transitions are defined. The second gives an introduction to the fractal concept, dimension, and classification. With the basis of our study concluded, the third part presents Scaling Theory, which relates critical exponents to the quantities described in Chapter 2. In the last part, through the Finite Size Scaling method, we determine the critical exponents β and ν. Based on them, we use the scaling relations of the previous chapter in order to determine the remaining critical exponents.
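The key idea of Finite Size Scaling is that finite-lattice quantities deviate from the infinite-system power laws in a size-dependent way; for example, the width of the percolation transition shrinks as Delta_p(L) ~ L^(-1/ν), so a log-log fit against L yields ν. A sketch with synthetic widths generated from the exact 2-D value ν = 4/3 plus noise:

    # Finite-size-scaling fit: estimate nu from transition widths at several L.
    import numpy as np

    L = np.array([16, 32, 64, 128, 256])
    nu_true = 4.0 / 3.0
    rng = np.random.default_rng(3)
    widths = L ** (-1.0 / nu_true) * np.exp(rng.normal(0, 0.02, L.size))

    slope, _ = np.polyfit(np.log(L), np.log(widths), 1)  # slope = -1/nu
    print("estimated nu:", -1.0 / slope)                 # close to 1.333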