877 results for Cascade Artificial Neural Networks
Abstract:
This work proposes an intelligent system for the analysis of digital mammograms, capable of detecting and classifying masses and microcalcifications. The digital mammograms are pre-processed with digital image processing techniques in order to adapt the image to the system that detects and automatically classifies the calcifications present in the breast. The model adopted for the detection and classification of the mammograms uses the Kohonen neural network trained with the Self-Organizing Map (SOM) algorithm. The K-means vector quantization algorithm is also used for the same purpose, and an analysis of the performance of the two algorithms in the automatic classification of digital mammograms is carried out. The developed system is intended to aid the radiologist in the diagnosis and in monitoring the development of abnormalities.
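The abstract compares two vector-quantization approaches, a Kohonen SOM and K-means. A minimal sketch of both is given below; the 1-D map size, learning schedule, and data are illustrative assumptions, not parameters taken from the dissertation.

```python
import numpy as np

def train_som(data, n_units=4, epochs=50, lr0=0.5, radius0=2.0, seed=0):
    """Train a 1-D Self-Organizing Map; returns the codebook (unit weights)."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                     # decaying learning rate
        radius = max(radius0 * (1 - t / epochs), 0.5)   # decaying neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
            d = np.abs(np.arange(n_units) - bmu)            # distance on the 1-D grid
            h = np.exp(-(d ** 2) / (2 * radius ** 2))       # neighborhood kernel
            w += lr * h[:, None] * (x - w)                  # pull units toward x
    return w

def kmeans(data, k=4, iters=20, seed=0):
    """Plain Lloyd's K-means; returns the k centroids."""
    rng = np.random.default_rng(seed)
    c = data[rng.choice(len(data), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(data[:, None] - c[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                c[j] = data[labels == j].mean(axis=0)
    return c
```

Both routines learn a codebook of prototype vectors; the SOM additionally imposes a topology on the units, which is what makes it attractive for visual inspection of the learned classes.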
Abstract:
In this work, the SOM (Self-Organizing Map) algorithm, or Kohonen neural network, is implemented in the form of hierarchical structures and applied to image compression. The main objective of this approach is to develop one hierarchical SOM algorithm with a static structure and another with a dynamic structure, to generate codebooks for image Vector Quantization (VQ), reducing processing time and obtaining a good image compression rate with minimal degradation of quality relative to the original image. The two self-organizing neural networks developed here are named HSOM, for the static case, and DHSOM, for the dynamic case. In the first, the hierarchical structure is defined beforehand; in the second, the structure grows automatically according to heuristic rules that explore the data of the training set without the use of external parameters. For this network, the heuristic rules determine the growth dynamics, the criteria for pruning branches, and the flexibility and size of the child maps. The LBG (Linde-Buzo-Gray) algorithm, or K-means, one of the most widely used algorithms for building VQ codebooks, was used together with the Kohonen algorithm in its basic (non-hierarchical) form as a reference for comparing the performance of the algorithms proposed here. A performance analysis between the two hierarchical structures is also carried out in this work. The efficiency of the proposed approach is verified by the reduction in computational complexity compared to the traditional algorithms, as well as by quantitative analysis of the reconstructed images in terms of the peak signal-to-noise ratio (PSNR) and mean squared error (MSE).
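The reconstructed images are evaluated with the two standard metrics the abstract names, MSE and PSNR. A minimal sketch of both for 8-bit images (the test images below are hypothetical):

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two images of the same shape."""
    diff = original.astype(float) - reconstructed.astype(float)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(original, reconstructed)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```

Higher PSNR (lower MSE) means the decoded image is closer to the original; VQ papers typically report PSNR as a function of codebook size.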
Abstract:
Since equipment maintenance is the major cost factor in industrial plants, the development of fault prediction techniques is very important. Three-phase induction motors are key electrical equipment in industrial applications, mainly because of their low cost and great robustness; they are not, however, protected from fault types such as winding short circuits and broken rotor bars. Several acquisition, processing, and signal analysis approaches are applied to improve their diagnosis; the most efficient techniques use current sensors and the analysis of the current signature. In this dissertation, the signals from these sensors are analyzed through Park's vector, which provides good visualization capability. Because acquiring fault data is an arduous task, a methodology for building a database is developed: Park's transformation to the stationary reference frame is applied to model the machine by solving its differential equations. Fault detection requires a detailed analysis of the variables and their influences, which makes the diagnosis more complex. Pattern recognition techniques allow systems to be generated automatically, based on patterns and concepts in the data that are in most cases undetectable by specialists, thus supporting decision-making. Classification algorithms with diverse learning paradigms, namely k-Nearest Neighbors, Neural Networks, Decision Trees, and Naïve Bayes, are used to recognize machine fault patterns. Multi-classifier systems are used to reduce classification errors; the homogeneous algorithms Bagging and Boosting and the heterogeneous algorithms Vote, Stacking, and StackingC were examined. The results show the effectiveness of the constructed model for fault modeling, as well as the feasibility of using multi-classifier algorithms for fault classification.
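The Park's vector approach mentioned in the abstract maps the three instantaneous phase currents to a two-dimensional (i_D, i_Q) plane; for a healthy, balanced motor the locus is a circle, and faults distort its shape. A sketch with the standard transformation (the supply amplitude and sampling below are illustrative assumptions, not the machines studied in the dissertation):

```python
import math

def parks_vector(ia, ib, ic):
    """Map instantaneous three-phase currents to the (i_D, i_Q) plane."""
    i_d = math.sqrt(2 / 3) * ia - ib / math.sqrt(6) - ic / math.sqrt(6)
    i_q = ib / math.sqrt(2) - ic / math.sqrt(2)
    return i_d, i_q

def park_locus(amplitude=10.0, samples=100):
    """Sample one electrical cycle of an ideal balanced three-phase supply."""
    locus = []
    for n in range(samples):
        wt = 2 * math.pi * n / samples
        ia = amplitude * math.cos(wt)
        ib = amplitude * math.cos(wt - 2 * math.pi / 3)
        ic = amplitude * math.cos(wt + 2 * math.pi / 3)
        locus.append(parks_vector(ia, ib, ic))
    return locus
```

For the balanced case the locus is a circle of radius sqrt(3/2) times the phase amplitude; stator faults typically turn it into an ellipse, which is the visual cue the diagnosis exploits.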
Abstract:
Embedded systems are widespread nowadays. An example is the Digital Signal Processor (DSP), a device with high processing power. The contribution of this work is a DSP implementation of the logic of a system for detecting pipeline leaks in real time. Among the various leak detection methods available today, this work uses a technique based on the analysis of pipe pressure, combining the Wavelet Transform and Neural Networks. In this context, the DSP, in addition to performing the digital processing of the pressure signal, also communicates with a Global Positioning System (GPS), which helps locate the leak, and with a SCADA system, with which it shares information. To ensure robustness and reliability in the communication between the DSP and the SCADA system, the Modbus protocol is used. As this is a real-time application, special attention is given to the response time of each of the tasks performed by the DSP. Tests and leak simulations were carried out using the structure of the Laboratory of Evaluation of Measurement in Oil (LAMP) at the Federal University of Rio Grande do Norte (UFRN).
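The detection technique is based on wavelet analysis of the pressure signal: a sudden pressure drop (the signature of a leak) produces a spike in the detail coefficients. A minimal one-level Haar sketch; the pressure trace below is simulated, not data from LAMP:

```python
import math

def haar_details(signal):
    """One-level Haar detail coefficients of an even-length signal."""
    return [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2)
            for i in range(len(signal) // 2)]

# Simulated pressure trace: steady at 10, leak-like drop to 7 at sample 31.
pressure = [10.0] * 31 + [7.0] * 33
details = haar_details(pressure)
# The pair straddling the drop carries the only large detail coefficient,
# so its index localizes the transient in time.
alarm_at = max(range(len(details)), key=lambda i: abs(details[i]))
```

A real detector would use several decomposition levels and feed the coefficients to the neural network, but the localization principle is the same.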
Abstract:
This study seeks a more viable alternative for computing disparity maps in stereo vision, using a reduction factor that decreases the number of points considered in the captured image, and a neural network based on radial basis functions to interpolate the results. The objective is to produce an approximate disparity image using algorithms with low computational cost, unlike the classical algorithms.
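The abstract's idea is to compute disparity only at a sparse set of points and let a radial-basis-function network fill in the rest. A minimal sketch of Gaussian RBF interpolation with exact fit at the sampled points; the sample locations, values, and sigma are hypothetical:

```python
import numpy as np

def rbf_fit(centers, values, sigma=1.0):
    """Solve for weights that make the RBF surface pass through the samples."""
    d = np.linalg.norm(centers[:, None] - centers[None], axis=2)
    phi = np.exp(-(d ** 2) / (2 * sigma ** 2))   # Gaussian kernel matrix
    return np.linalg.solve(phi, values)

def rbf_eval(points, centers, weights, sigma=1.0):
    """Evaluate the interpolant at arbitrary query points."""
    d = np.linalg.norm(points[:, None] - centers[None], axis=2)
    phi = np.exp(-(d ** 2) / (2 * sigma ** 2))
    return phi @ weights
```

With pixel coordinates as centers and the sparse disparities as values, evaluating at every remaining pixel yields the dense approximate disparity map at a fraction of the matching cost.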
Abstract:
This work presents an analysis of a control law based on an indirect hybrid scheme using a neural network, originally proposed by O. Adetona, S. Sathanathan and L. H. Keel. Implementations of this control law for a second-order level plant resulted in oscillatory behavior, even when the neural identifier had converged. These results motivated an investigation of the applicability of the law. From that starting point, a mathematical stability analysis and several implementations, with both simulated and real plants, were carried out to analyze the problem. The analysis showed that the law was designed neglecting some components of the dynamics of the plant to be controlled; thus, for plants in which these components have a significant influence on the dynamics, the law tends to fail.
Abstract:
One of the most important goals of bioinformatics is the ability to identify genes in uncharacterized DNA sequences in worldwide databases. Gene expression in prokaryotes initiates when the RNA polymerase enzyme interacts with DNA regions called promoters, where the main regulatory elements of the transcription process are located. Despite the improvement of in vitro techniques for molecular biology analysis, characterizing and identifying a great number of promoters in a genome is a complex task. Moreover, the main drawback is the absence of a large set of promoters with which to identify patterns conserved among species; hence, an in silico method to predict promoters in any species is a challenge. Improved promoter prediction methods can be one step towards developing more reliable ab initio gene prediction methods. In this work, we present an empirical comparison of Machine Learning (ML) techniques such as Naïve Bayes, Decision Trees, Support Vector Machines, Neural Networks, Voted Perceptron, PART, k-NN, and ensemble approaches (Bagging and Boosting) applied to the task of predicting promoters in Bacillus subtilis. To do so, we first built two data sets of promoter and non-promoter sequences, one for B. subtilis and a hybrid one. To evaluate the ML methods, a cross-validation procedure was applied. Good results were obtained with ML methods such as SVM and Naïve Bayes on the B. subtilis data set; however, we did not reach good results on the hybrid database.
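The classifiers are compared with a cross-validation procedure. A minimal sketch of k-fold index generation, independent of any particular classifier (the fold count and seed are illustrative; the promoter sequences themselves are not reproduced here):

```python
import random

def k_fold_indices(n_samples, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # shuffle once, then deal into folds
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

Each example is used for testing exactly once, so averaging an accuracy score over the k folds gives the kind of estimate the comparison in the abstract relies on.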
Abstract:
Recent years have seen an increase in the acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A common point between distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that acts as an extension of the programming languages used to write parallel applications, such as C, C++, and Fortran. A fundamental aspect of developing parallel applications is the analysis of their performance. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and the scalability of the application with respect to the increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms that allow this analysis can be a rather complicated task, considering the parameters and degrees of freedom involved in the implementation of a parallel application. An alternative is the use of tools for collecting and visualizing performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. For an efficient visualization, it is necessary to identify and collect data about the execution of the application, a stage called instrumentation.
This work presents, initially, a study of the main techniques used to collect performance data, followed by a detailed analysis of the main available tools that can be used on parallel architectures of the Beowulf cluster type, running Linux on the x86 platform and using communication libraries based on MPI (Message Passing Interface), such as LAM and MPICH. This analysis is validated on parallel applications that deal with the training of perceptron neural networks using back-propagation. The conclusions show the potential and ease of use of the analyzed tools.
Abstract:
The stability of synchronous generators connected to the power grid has been an object of study and research for years. The interest in this matter is justified by the fact that much of the electricity produced worldwide is obtained with synchronous generators. In this respect, studies have proposed conventional and unconventional control techniques, such as fuzzy logic, neural networks, and adaptive controllers, to increase the stability margin of the system during sudden failures and transient disturbances. This master's thesis presents a robust unconventional control strategy for maintaining the stability of power systems and regulating the output voltage of synchronous generators connected to the grid. The proposed control strategy comprises the integration of a sliding surface with a linear controller. This control structure is designed to prevent the power system from losing synchronism after a sudden failure and to regulate the terminal voltage of the generator after the fault. The feasibility of the proposed control strategy was experimentally tested on a 5 kVA salient-pole synchronous generator in a laboratory structure.
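The strategy combines a sliding surface with a linear control term. As a generic illustration of that combination (on a toy double-integrator plant, not the generator model from the thesis; all gains and the boundary-layer width are assumptions), a sliding-mode sketch:

```python
def simulate(k=5.0, lam=2.0, dt=1e-3, steps=5000, x0=1.0, v0=0.0):
    """Drive the state of the toy plant x'' = u to zero.

    The control u = -lam*v - k*sat(s) makes s' = -k*sat(s), so the state
    reaches the sliding surface s = v + lam*x in finite time and then
    decays exponentially along it.
    """
    x, v = x0, v0
    for _ in range(steps):
        s = v + lam * x                        # sliding surface s = e' + lam*e
        sat = max(-1.0, min(1.0, s / 0.05))    # boundary layer to limit chattering
        u = -lam * v - k * sat                 # linear term + switching term
        v += u * dt                            # explicit Euler integration
        x += v * dt
    return x, v
```

The saturation (boundary layer) plays the role of smoothing the discontinuous switching term, a common practical choice; the thesis's actual structure for the generator is more elaborate.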
Abstract:
This work presents an auxiliary method of bone density measurement based on the attenuation of electromagnetic waves. To that end, an arrangement of two rectangular microstrip antennas was used, operating at a frequency of 2.49 GHz and fed by a microstrip line on a fiberglass substrate with permittivity of 4.4 and height of 0.9 cm. Simulations were performed with samples of silica, bone meal, and silica and gypsum blocks to demonstrate the variation in attenuation level for different combinations. Samples of bovine bone were used because they reproduce well the relevant characteristics of human bone; they were weighed, measured, and irradiated with microwaves. The samples then had their masses altered by decharacterization, and the process was repeated. The data obtained were fed into a neural network, whose training achieved, at best, correct classification of 100% of the samples. It is concluded that with a single non-ionizing wave in the 2.49 GHz range it is possible to evaluate the attenuation level in bone tissue, and that a neural network fed with the characteristics obtained in the experiment can classify a sample as having low or high bone density.
Intelligent system for detecting oil slicks on the sea surface using SAR images
Abstract:
An oil spill at sea, accidental or not, generates enormous negative consequences for the affected area; the damage is environmental and economic, especially when the slicks are close to preservation areas and/or coastal zones. The development of automatic techniques for identifying oil slicks on the sea surface in radar images assists in the comprehensive monitoring of the oceans and seas. However, slicks of different origins can appear in this type of imaging, which makes identification a very difficult task. The system proposed in this work, based on digital image processing techniques and artificial neural networks, aims to identify the analyzed slick and to distinguish oil from other slick-generating phenomena. Tests of the functional blocks that compose the proposed system allow the implementation of different algorithms, as well as their detailed and prompt analysis. The digital image processing algorithms (speckle filtering and gradient) and the classification algorithms (Multilayer Perceptron, Radial Basis Function, Support Vector Machine, and Committee Machine) are presented and discussed. The final performance of the system, with the different kinds of classifiers, is presented by means of ROC curves; the true positive rates obtained agree with those reported in the literature on oil slick detection in SAR images.
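Classifier performance is reported with ROC curves. A minimal sketch of how (false positive rate, true positive rate) points are computed from classifier scores; the labels and scores below are hypothetical:

```python
def roc_points(labels, scores):
    """Return (fpr, tpr) pairs, one per distinct score used as threshold."""
    pos = sum(labels)                 # number of positive (oil) examples
    neg = len(labels) - pos           # number of negative (look-alike) examples
    pts = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= thr)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= thr)
        pts.append((fp / neg, tp / pos))
    return pts
```

Sweeping the decision threshold traces the curve from the strictest operating point toward (1, 1); a classifier that separates oil from look-alikes well hugs the upper-left corner.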
Abstract:
The main objective of the present thesis was the seismic interpretation and seismic attribute analysis of the 3D seismic data from the Siririzinho high, located in the Sergipe Sub-basin (southern portion of the Sergipe-Alagoas Basin). This study has enabled a better understanding of the stratigraphy and of the structure that the Siririzinho high experienced during its development. In a first analysis, two types of filters were used: a dip-steered median filter, to remove random noise and increase the lateral continuity of reflections, and a fault-enhancement filter, to enhance the reflection discontinuities. After this filtering step, similarity and curvature attributes were applied in order to identify and enhance the distribution of faults and fractures. The use of attributes and filtering greatly contributed to the identification of faults and the enhancement of their continuity. Besides these typical attributes (similarity and curvature), neural network and fingerprint techniques, which generate meta-attributes, were also used with the aim of highlighting the faults; however, the results were not satisfactory. In a subsequent step, well log and seismic data analysis were performed, which allowed an understanding of the distribution and arrangement of the sequences that occur on the Siririzinho high, as well as of how these units are affected by the main structures in the region. The Siririzinho high comprises a structure elongated in the N-S direction, capped by four seismic sequences (informally named, from bottom to top, sequences I to IV, above the top of the basement). It was possible to recognize the main N-S-oriented faults, which especially affect sequences I and II, and NE-SW-oriented faults, which reach the younger sequences, III and IV.
Finally, with the interpretation of the seismic horizons corresponding to each of these sequences, it was possible to reach a better understanding of the geometry, deposition, and structural relations in the area.
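The first processing step in the workflow is median filtering to suppress random noise. The dip-steered variant requires local dip estimates; as a plain stand-in, a 1-D sliding-median sketch over a single trace (the data below are synthetic):

```python
def median_filter(trace, half_width=1):
    """Sliding-window median; edge samples are kept unchanged."""
    out = list(trace)
    for i in range(half_width, len(trace) - half_width):
        window = sorted(trace[i - half_width:i + half_width + 1])
        out[i] = window[len(window) // 2]     # middle of the sorted window
    return out
```

Unlike a mean filter, the median rejects isolated spikes without smearing reflection edges, which is why it is favored for improving lateral continuity before attribute computation.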
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Graduate Program in Electrical Engineering - FEIS