147 results for Algoritmos computacionales


Relevance:

10.00%

Publisher:

Abstract:

This dissertation presents a new approach to the Direction of Arrival (DOA) estimation problem for multiple signals impinging simultaneously on an antenna array with linear or planar geometry, using intelligent algorithms. The DOA estimator is developed using Conventional Beamforming (CBF), Blind Source Separation (BSS), and the MRBF neural estimator (Modular structure of Radial Basis Functions). The MRBF estimator has its capability extended through its interaction with the BSS technique: BSS estimates the steering vectors of the multiple plane waves that reach the array at the same frequency, that is, it separates the mixed signals without a priori information. The technique developed in this work makes it possible to identify the directions of multiple sources and to identify and exclude interference sources.
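
As a minimal illustration of the conventional beamforming stage, the sketch below scans a Bartlett spectrum over a uniform linear array; it assumes numpy and half-wavelength element spacing, and does not reproduce the dissertation's BSS or MRBF stages.

    import numpy as np

    def cbf_doa(snapshots, n_elements, spacing=0.5):
        # Conventional (Bartlett) beamforming DOA scan on a uniform linear
        # array; snapshots is an (n_elements, n_snapshots) complex matrix
        # and spacing is the element spacing in wavelengths.
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
        angles = np.linspace(-90.0, 90.0, 361)
        spectrum = np.empty_like(angles)
        for i, theta in enumerate(np.deg2rad(angles)):
            a = np.exp(-2j * np.pi * spacing * np.arange(n_elements) * np.sin(theta))
            spectrum[i] = np.real(a.conj() @ R @ a)  # beamformer output power
        return angles, spectrum  # spectrum peaks indicate the DOAs

    # Example with random snapshots for an 8-element array:
    rng = np.random.default_rng(0)
    X = rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200))
    angles, p = cbf_doa(X, 8)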

Relevance:

10.00%

Publisher:

Abstract:

Among the results of the AutPoc Project (Well Automation), established between UFRN and Petrobras with the support of CNPq, FINEP, CTPETRO and FUNPEC, a simulator was developed for oil wells equipped with continuous gas-lift artificial lift. Gas-lift is a lift method widely used in offshore production; its basic concept is to inject gas at the bottom of the producing well to make the oil column less dense and thus ease its displacement from the reservoir to the surface. Based on tables and equations that condense the largest amount of information on the characteristics of the reservoir, the well and the gas injection valves, the simulator uses successive interpolations to reproduce the curves that represent the physical behavior of the relevant variables. With a simulator that brings the physical conditions of a real oil well to the computer, peculiar behaviors can be analyzed at much higher speeds, since the time constants of such systems are large, and the cost of field trials can be reduced. The simulator is highly versatile, notably in analyzing the influence of parameters such as static pressure, gas-liquid ratio, wellhead pressure and BSW (Basic Sediments and Water) on the bottomhole demand curves, and in obtaining the well performance curve, on which control and optimization rules can be simulated. Regarding control rules, the simulator supports two modes of operation: control via software included in the simulator itself, and the use of external controllers. This means the simulator can be used as a tool for validating control algorithms. Given these capabilities, another powerful application naturally emerges: the didactic use of the tool in training and refresher courses for engineers.
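
A minimal sketch of the table-plus-interpolation idea, assuming scipy; the axes, units and values below are hypothetical placeholders for the reservoir/well/valve tables the simulator condenses.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical lookup table: bottomhole flowing pressure as a function
    # of liquid rate and gas injection rate (axes and values are invented).
    liquid_rate = np.array([100.0, 300.0, 500.0])          # m3/d
    gas_rate = np.array([0.0, 50.0, 100.0, 150.0])         # Mm3/d
    pwf_table = np.array([[180.0, 160.0, 150.0, 148.0],
                          [200.0, 175.0, 163.0, 160.0],
                          [225.0, 195.0, 180.0, 176.0]])   # kgf/cm2

    pwf = RegularGridInterpolator((liquid_rate, gas_rate), pwf_table)
    # Successive interpolation at an arbitrary operating point:
    print(pwf([[350.0, 80.0]]))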

Relevance:

10.00%

Publisher:

Abstract:

This paper presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA can be seen as a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed to seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and a single optimization objective. Today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are those that allow many components of the algorithm to change dynamically (self-organizing algorithms). Combinations of GAs with machine learning techniques, intended to improve their performance and usability, are also common. In this work, a GA with a machine learning technique was analyzed and applied to an antenna design. We used a variant of the bicubic interpolation technique, called 2D Spline, as the machine learning technique to estimate the behavior of a dynamic fitness function, based on knowledge obtained from a set of laboratory experiments. This fitness function, also called the evaluation function, is responsible for determining the fitness degree of a candidate solution (individual) relative to others in the same population. The algorithm can be applied in many areas, including telecommunications, such as the design of antennas and frequency selective surfaces. In this particular work, the presented algorithm was developed to optimize the design of a microstrip antenna used in wireless communication systems for Ultra-Wideband (UWB) applications. The algorithm optimized two antenna geometry variables, the length (Ls) and width (Ws) of a slit in the ground plane, with respect to three objectives: radiated signal bandwidth, return loss and central frequency deviation. These two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one spline per optimization objective, which compose a multiobjective, aggregate fitness function. The final result proposed by the algorithm was compared with the result of the simulation program and with the measured result of a physical prototype of the antenna built in the laboratory. The algorithm was analyzed with respect to its degree of success regarding four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability and accuracy. At the end of the study, an increase in execution time was observed in comparison to a common GA, due to the time required by the machine learning process. On the plus side, we noticed an appreciable gain in flexibility and accuracy of the results, and a promising path that indicates how to extend the algorithm to optimization problems with "η" variables.
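
The 2D spline surrogate can be sketched with scipy's bicubic RectBivariateSpline as below; the grid, measured values and the single objective shown are placeholders, whereas the paper aggregates three such splines (bandwidth, return loss, frequency deviation) into one fitness.

    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    # Assumed laboratory grid over the slit length Ls and width Ws (mm),
    # with a placeholder measured objective (e.g., return loss in dB).
    Ls = np.linspace(5.0, 15.0, 6)
    Ws = np.linspace(1.0, 5.0, 5)
    rng = np.random.default_rng(0)
    measured = rng.uniform(-25.0, -8.0, size=(Ls.size, Ws.size))

    surrogate = RectBivariateSpline(Ls, Ws, measured)  # bicubic by default

    def fitness(ls, ws):
        # One objective only; the aggregate fitness would combine three
        # splines, one per optimization objective.
        return -surrogate(ls, ws)[0, 0]  # lower return loss -> higher fitness

    print(fitness(10.0, 2.5))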

Relevance:

10.00%

Publisher:

Abstract:

Reinforcement learning is a machine learning technique that, although it has found a large number of applications, may not yet have reached its full potential. One of the inadequately explored possibilities is the use of reinforcement learning in combination with other methods for solving pattern classification problems. The problems that support vector machine ensembles face in terms of generalization capacity are well documented in the literature. Algorithms such as AdaBoost do not deal appropriately with the imbalances that arise in those situations, and several alternatives have been proposed with varying degrees of success. This dissertation presents a new approach to building committees of support vector machines: the presented algorithm combines the AdaBoost algorithm with a reinforcement learning layer that adjusts committee parameters, so that imbalances among the committee components do not degrade the generalization performance of the final hypothesis. Comparisons were made between ensembles with and without the reinforcement learning layer, on benchmark data sets widely known in the pattern classification area.
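
A committee of SVMs boosted with AdaBoost can be sketched with scikit-learn as below; this is only the baseline ensemble, without the reinforcement learning layer the dissertation adds, and parameter names may vary across scikit-learn versions.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=10, random_state=0)

    # Baseline AdaBoost committee of SVM weak learners (the RL layer that
    # re-tunes committee parameters is not reproduced here).
    committee = AdaBoostClassifier(
        estimator=SVC(kernel="rbf", gamma="scale", probability=True),
        n_estimators=10,
        random_state=0,
    )
    committee.fit(X, y)
    print(committee.score(X, y))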

Relevance:

10.00%

Publisher:

Abstract:

This work uses feature-based computer vision algorithms to identify medicine boxes for the visually impaired. The system targets people whose vision is compromised by disease, hindering the identification of the correct medicine to be taken. We use the camera, available in several popular devices such as computers, televisions and phones, to identify the box of the correct medicine from the image and to provide, through audio, the information the user cannot read, such as the dosage, indications and contraindications of the medication. We use an object detection model whose algorithms identify features on the medicine boxes and play the audio at the moment those features are detected. Experiments carried out with 15 people show that 93% consider the system useful and very helpful in identifying medicines by their boxes. This technology can therefore help many people with visual impairments to take the right medicine at the time prescribed by the physician.
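
The feature-detection step can be sketched with OpenCV as below; ORB is used here as a stand-in for whatever feature algorithm the work actually employs, and the file names are hypothetical.

    import cv2

    # Hypothetical images: a registered box of the correct medicine and a
    # frame captured by the user's camera.
    reference = cv2.imread("medicine_box_reference.png", cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    kp_frm, des_frm = orb.detectAndCompute(frame, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    good = [m for m in matcher.match(des_ref, des_frm) if m.distance < 40]

    # Declare a detection when enough features agree; a real system would
    # then play the audio with dosage, indications and contraindications.
    if len(good) > 25:
        print("medicine box detected -> play audio description")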

Relevance:

10.00%

Publisher:

Abstract:

The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. Acquisition, processing and interpretation of seismic data are the stages of a seismic study. Seismic processing in particular is focused on imaging the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that brought higher storage and digital processing capabilities, enabling the development of more sophisticated processing algorithms such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, a high-quality, accurate migration can be very time-consuming, owing to the heuristics of the mathematical algorithm and the extensive amount of input and output data involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed, and the degree of algorithmic scalability was assessed with respect to the technological advances expected in future processors.
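
The computational core that RTM repeats forward and time-reversed is an explicit wave-equation stencil; the numpy sketch below shows one 2D time step, where the spatial loop nest is exactly the kind of region an OpenMP "parallel for" would split among threads (vectorization plays that role here).

    import numpy as np

    def wave_step(p_prev, p_curr, vel2_dt2, dx2):
        # One explicit time step of the 2D acoustic wave equation; the
        # Laplacian stencil over the whole grid is the hot loop that the
        # OpenMP parallelization targets. np.roll implies periodic
        # boundaries, used here only for brevity.
        lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
               np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) -
               4.0 * p_curr) / dx2
        return 2.0 * p_curr - p_prev + vel2_dt2 * lap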

Relevance:

10.00%

Publisher:

Abstract:

This work proposes a computer simulator for sucker-rod pumped vertical wells. The simulator represents the dynamic behavior of the system and computes several important parameters, allowing easy visualization of the relevant phenomena. Using the simulator, many tests can be run at lower cost and in less time than experiments on real wells. The simulation uses a model based on the dynamic behavior of the rod string, represented by a second-order partial differential equation, through which several common field situations can be verified. Moreover, the simulation includes 3D animations, facilitating physical understanding of the process through better visual interpretation of the phenomena. Another important characteristic is the emulation of the main sensors used in sucker-rod pumping automation. The sensor emulation is implemented through a microcontrolled interface between the simulator and industrial controllers; by means of this interface, the controllers treat the simulator as a real well. A fault module was included in the simulator, incorporating the six most important faults found in sucker-rod pumping. The analysis and verification of these problems through the simulator allows the user to recognize situations that could otherwise be observed only in the field. The simulation of these faults receives special treatment, due to the different boundary conditions they impose on the numerical solution of the problem. Possible applications of the simulator include the design and analysis of wells, training of technicians and engineers, testing of controllers and supervisory systems, and validation of control algorithms.
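
A minimal finite-difference sketch of the rod-string model: one explicit step of a damped second-order wave equation u_tt = a²·u_xx − c·u_t, with placeholder boundary handling; in the actual simulator the boundary conditions come from the surface unit motion and the downhole pump load.

    import numpy as np

    def rod_step(u_prev, u_curr, a, c, dt, dx):
        # One explicit step of the damped wave equation governing the rod
        # string: u_tt = a^2 * u_xx - c * u_t (interior nodes only).
        r = (a * dt / dx) ** 2
        u_next = np.empty_like(u_curr)
        u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                        + r * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2])
                        - c * dt * (u_curr[1:-1] - u_prev[1:-1]))
        u_next[0], u_next[-1] = u_curr[0], u_curr[-1]  # placeholder boundaries
        return u_next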

Relevance:

10.00%

Publisher:

Abstract:

This paper analyzes the performance of a parallel implementation of Coupled Simulated Annealing (CSA) for the unconstrained optimization of continuous-variable problems. Parallel processing is an efficient form of information processing that emphasizes the exploitation of simultaneous events in the execution of software. It arises primarily from the high demand for computational performance and the difficulty of increasing the speed of a single processing core. Although multicore processors are easily found nowadays, several algorithms are not yet suited to running on parallel architectures. The CSA algorithm is characterized by a group of Simulated Annealing (SA) optimizers working together on refining the solution, each SA optimizer running on a thread executed by a different processor. In the analysis of parallel performance and scalability, the following metrics were investigated: the execution time; the speedup of the algorithm with respect to the increasing number of processors; and the efficiency in the use of processing elements with respect to the increasing size of the problem. Furthermore, the quality of the final solution was verified. For this study, the paper proposes a parallel version of CSA and an equivalent serial version, both analyzed on 14 benchmark functions. For each of these functions, CSA is evaluated using 2 to 24 optimizers. The results obtained are presented and discussed in light of these metrics. The conclusions characterize CSA as a good parallel algorithm, both in the quality of the solutions and in parallel scalability and efficiency.
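
A serial sketch of one common CSA acceptance variant, assuming numpy; in the parallel version each optimizer in the inner loop would run on its own thread, synchronizing only on the shared coupling term.

    import numpy as np

    def rastrigin(x):
        return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

    rng = np.random.default_rng(1)
    m, dim, t_acc, t_gen = 5, 2, 1.0, 1.0            # 5 coupled SA optimizers
    states = rng.uniform(-5.12, 5.12, size=(m, dim))
    energies = np.array([rastrigin(s) for s in states])

    for _ in range(2000):
        e_max = energies.max()                        # for numerical stability
        gamma = np.sum(np.exp((energies - e_max) / t_acc))  # coupling term
        for i in range(m):                            # one thread per optimizer
            cand = states[i] + t_gen * rng.standard_cauchy(dim)
            e = rastrigin(cand)
            accept = np.exp((energies[i] - e_max) / t_acc) / gamma
            if e < energies[i] or rng.random() < accept:
                states[i], energies[i] = cand, e
        t_gen *= 0.999                                # generation-temperature decay
    print(energies.min())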

Relevance:

10.00%

Publisher:

Abstract:

Self-organizing maps (SOM) are artificial neural networks widely used in the data mining field, mainly because they constitute a dimensionality reduction technique through the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through the neurons, relevant characteristics of the data set. In general, applying such processing to the network neurons, instead of the entire database, reduces the computational cost thanks to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. These methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and distance among neurons, both associated with the positions the neurons occupy in the data space after training the network. The goal is thus to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using various artificially generated data sets, as well as real-world data sets, and the results were compared with those of several well-known methods from the literature.
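
A post-processing baseline can be sketched with the third-party minisom library: the U-matrix gives neighbour distances in the output space and the hit counts approximate pattern density in the input space; the gravitational-force and best-rewarded-path methods proposed in the work are not reproduced.

    import numpy as np
    from minisom import MiniSom  # third-party SOM implementation

    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(0.0, 0.3, (100, 4)),
                      rng.normal(2.0, 0.3, (100, 4))])  # two synthetic clusters

    som = MiniSom(8, 8, 4, sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(data, 2000)

    u_matrix = som.distance_map()          # neighbour distances (output space)
    hits = som.activation_response(data)   # pattern density per neuron (input space)
    print(u_matrix.shape, hits.sum())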

Relevance:

10.00%

Publisher:

Abstract:

Bayesian networks are powerful tools, as they represent probability distributions as graphs and can model the uncertainties of real systems. Since the last decade there has been special interest in learning network structures from data. However, learning the best network structure is an NP-hard problem, so many heuristic algorithms for generating network structures from data have been created, and many of them use score metrics to evaluate candidate network models. This thesis compares three of the most used score metrics. The K2 algorithm and two benchmark networks, ASIA and ALARM, were used to carry out the comparison. The results show that score metrics whose hyperparameters strengthen the tendency to select simpler network structures perform better than those with a weaker such tendency, for both Heckerman-Geiger and modified MDL. The Heckerman-Geiger Bayesian score metric works better than MDL with large datasets, and MDL works better than Heckerman-Geiger with small datasets. The modified MDL gives results similar to Heckerman-Geiger for large datasets and close to MDL for small datasets, with a stronger tendency to select simpler network structures.
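
A hand-rolled sketch of an MDL-style family score, assuming pandas; the exact penalty and hyperparameters differ between the Heckerman-Geiger, MDL and modified-MDL metrics compared in the thesis, and only observed parent configurations are counted here for brevity.

    import numpy as np
    import pandas as pd

    def mdl_family_score(data, node, parents):
        # Log-likelihood of `node` given a candidate parent set, minus an
        # MDL-style penalty; a K2-style search maximizes this family by family.
        ll, q = 0.0, 0
        groups = data.groupby(parents) if parents else [((), data)]
        for _, g in groups:
            q += 1  # observed parent configurations only (simplification)
            counts = g[node].value_counts().to_numpy(dtype=float)
            ll += float(np.sum(counts * np.log(counts / counts.sum())))
        r = data[node].nunique()
        penalty = 0.5 * np.log2(len(data)) * q * (r - 1)  # parameter cost
        return ll - penalty

    df = pd.DataFrame({"A": [0, 0, 1, 1, 1, 0], "B": [0, 0, 1, 1, 0, 0]})
    print(mdl_family_score(df, "B", ["A"]))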

Relevance:

10.00%

Publisher:

Abstract:

Oil spills on the sea, accidental or not, generate enormous negative consequences for the affected area. The damage is environmental and economic, especially when the slicks are close to preservation areas and/or coastal zones. The development of automatic techniques for identifying oil slicks on the sea surface, captured in radar images, assists in the complete monitoring of the oceans and seas. However, slicks of different origins can appear in this type of imaging, which makes identification a very difficult task. The system proposed in this work, based on digital image processing techniques and artificial neural networks, aims to identify the analyzed slick and to distinguish oil from other slick-generating phenomena. Tests on the functional blocks that compose the proposed system allow different algorithms to be implemented and analyzed promptly and in detail. The digital image processing algorithms (speckle filtering and gradient) and the classifier algorithms (Multilayer Perceptron, Radial Basis Function, Support Vector Machine and Committee Machine) are presented and discussed. The final performance of the system, with the different classifiers, is presented through ROC curves, and the true positive rates obtained agree with those reported in the literature on oil slick detection in SAR images.
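
The processing chain can be sketched with scipy and scikit-learn as below: a median filter stands in for the speckle filter, a Sobel gradient follows, and an MLP stands in for the compared classifiers; all data here is synthetic placeholder.

    import numpy as np
    from scipy.ndimage import median_filter, sobel
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    patch = rng.gamma(1.0, 1.0, (64, 64))        # placeholder SAR-like patch

    # Stage 1: digital image processing (speckle filtering and gradient).
    filtered = median_filter(patch, size=3)
    grad = np.hypot(sobel(filtered, 0), sobel(filtered, 1))
    feature = [filtered.mean(), filtered.std(), grad.mean()]

    # Stage 2: neural classification of slick features (oil vs look-alike),
    # trained here on synthetic two-class data for illustration.
    X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
    y = np.array([0] * 20 + [1] * 20)
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
    clf.fit(X, y)
    print(clf.predict([feature]))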

Relevance:

10.00%

Publisher:

Abstract:

The main objective of this work is the application of Artificial Neural Networks (ANN) to problems involving RF/microwave devices, such as the prediction of the frequency response of certain structures in a region of interest. Artificial neural networks are currently an alternative to conventional methods for the analysis of microwave structures: they are able to learn and, more importantly, to generalize the acquired knowledge from any type of available data, keeping the precision of the original technique while adding the low computational cost of neural models. For this reason, artificial neural networks are increasingly used to model microwave devices. Multilayer Perceptron and Radial Basis Function models are used in this work; the advantages and disadvantages of these models, and the corresponding training algorithms for each, are described. Planar microwave devices, such as Frequency Selective Surfaces and microstrip antennas, are in evidence due to the increasing need to filter and separate electromagnetic waves and to miniaturize RF devices. It is therefore of fundamental importance to study the structural parameters of these devices in a fast and accurate way. The presented results show the capacity of neural techniques to model both Frequency Selective Surfaces and antennas.
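
A surrogate of this kind can be sketched with scikit-learn's MLPRegressor; the geometry parameters and the target response below are invented placeholders for the training data an EM simulator or measurements would provide.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Hypothetical training set: two geometry parameters (e.g., patch length
    # and width in mm) mapped to a response value (e.g., |S11| in dB).
    geometry = rng.uniform([5.0, 0.5], [15.0, 3.0], size=(200, 2))
    response = -20.0 + 2.0 * np.sin(geometry[:, 0]) + geometry[:, 1]  # placeholder

    model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
    model.fit(geometry, response)
    print(model.predict([[10.0, 1.5]]))  # fast surrogate of the full analysis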

Relevance:

10.00%

Publisher:

Abstract:

Navigation based on visual feedback for robots working in a closed environment can be obtained by mounting a camera on each robot (local vision system). However, this solution requires a camera and local processing capacity for each robot. When possible, a global vision system is a cheaper solution to this problem: one camera, or a small number of cameras covering the whole workspace, can be shared by the entire team of robots, saving the cost of many cameras and the associated processing hardware needed in a local vision system. This work presents the implementation and experimental results of a global vision system for mobile mini-robots, using robot soccer as the test platform. The proposed vision system consists of a camera, a frame grabber and a computer (PC) for image processing. The PC is responsible for the team's motion control based on the visual feedback, sending commands to the robots through a radio link. So that the system can unequivocally recognize each robot, each one carries a label on its top consisting of two colored circles. Image processing algorithms were developed for the efficient computation, in real time, of the position of all objects (robots and ball) and the orientation of each robot. A major difficulty was labeling the color of each colored point of the image, in real time, under time-varying illumination conditions. To overcome this problem, an automatic camera calibration based on the K-means clustering algorithm was implemented; this method guarantees that similar pixels are clustered around a single color class. The experimental results show that the position and orientation of each robot can be obtained with a precision of a few millimeters. Position and orientation were updated in real time, processing 30 frames per second.
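
The automatic colour calibration can be sketched with scikit-learn's K-means, clustering all frame pixels so that similar colours share a class even as illumination drifts; the frame here is a random placeholder.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(120, 160, 3))  # placeholder RGB frame

    # Cluster every pixel colour; each robot-label colour is then identified
    # by its cluster index instead of fixed, illumination-sensitive thresholds.
    pixels = frame.reshape(-1, 3).astype(float)
    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pixels)
    color_class = kmeans.labels_.reshape(frame.shape[:2])
    print(color_class.shape)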

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study is to create an artificial neural network (ANN) capable of modeling the transverse elasticity modulus (E2) of unidirectional composites. To that end, we used a dataset divided into two parts, one for training and the other for testing the ANN. Three network architectures were developed: one with only two inputs, one with three inputs, and a third with a mixed architecture combining an ANN with the Halpin-Tsai model. After training, the results show that the use of ANNs is quite promising: when compared with the Halpin-Tsai mathematical model, the networks exhibited higher correlation coefficients and lower root mean square error values.
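
For reference, the Halpin-Tsai estimate of the transverse modulus that the mixed architecture builds on can be written directly; the fibre/matrix values in the usage line are typical glass/epoxy figures, not the study's data.

    def halpin_tsai_e2(e_fiber, e_matrix, v_fiber, xi=2.0):
        # Halpin-Tsai transverse modulus of a unidirectional composite;
        # xi is the empirical reinforcement factor (xi = 2 is a common
        # choice for circular fibres).
        eta = (e_fiber / e_matrix - 1.0) / (e_fiber / e_matrix + xi)
        return e_matrix * (1.0 + xi * eta * v_fiber) / (1.0 - eta * v_fiber)

    print(halpin_tsai_e2(e_fiber=74.0, e_matrix=3.5, v_fiber=0.6))  # GPa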

Relevance:

10.00%

Publisher:

Abstract:

This dissertation, developed within the Graduate Program in Teaching of Natural Sciences and Mathematics at UFRN, studies the training needs of undergraduate chemistry students regarding the use of the new information and communication technologies (NICT), in particular those related to the use of computer programs for teaching chemistry. In today's knowledge society, the emergence of new ways of learning and teaching, which demand new conceptions of pedagogical work, becomes imperative; accordingly, teachers are required to develop new skills and competencies. The present work is committed to the search for elements that can guide the training processes of chemistry teachers, with the aim of contributing to a better initial preparation that takes their training needs into account. A questionnaire was used as the research instrument to diagnose and characterize the training needs, seeking correlations among the different variables that characterize the study in order to establish similarities and discrepancies between the teaching skills under study and the training needs. The data analysis drew on descriptive and inferential statistics, with multivariate analyses, which made it possible to identify the training needs. The results show that, in general, the undergraduate students rate as low their degree of development of the skills needed to teach using NICT, and that they report training needs in all the skills related to this sphere of teaching work.