18 results for Parallel computing. Multilayer perceptron. OpenMP
Abstract:
This thesis presents a methodological proposal for the development of an intelligent system capable of automatically estimating the effective porosity of sedimentary layers from a database built with information from Ground Penetrating Radar (GPR). The intelligent system models the relation between porosity (the response variable) and electromagnetic attributes derived from the GPR (the explanatory variables); with it, porosity was estimated using both an artificial neural network (a Multilayer Perceptron, MLP) and multiple linear regression. Data for the response and explanatory variables were obtained in the laboratory and from GPR surveys carried out at controlled sites, in the field and in the laboratory. The proposed intelligent system can estimate porosity from any available database containing the same variables used in this thesis, and the architecture of the neural network can be modified as needed to suit the available data. The multiple linear regression model allowed the influence (effect size) of each explanatory variable on the porosity estimate to be identified and quantified. The proposed methodology can revolutionize the use of GPR, not only for imaging sedimentary geometry and facies, but mainly for the automatic estimation of porosity, one of the most important parameters for the characterization of reservoir rocks (of petroleum or water).
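As a minimal sketch of the regression half of such a system: multiple linear regression fitted by the normal equations can both predict porosity and expose the coefficient (influence) of each explanatory variable. The variable names and toy data below are assumptions for illustration, not values from the thesis.

```python
def fit_linear(X, y):
    """Least-squares fit y ~ b0 + b1*x1 + b2*x2 via the normal equations."""
    n = len(X)
    A = [[1.0] + list(row) for row in X]      # design matrix with intercept
    k = len(A[0])
    # normal equations: (A^T A) b = A^T y
    M = [[sum(A[i][r] * A[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]
    v = [sum(A[i][r] * y[i] for i in range(n)) for r in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        p = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        v[col], v[p] = v[p], v[col]
        for r in range(col + 1, k):
            f = M[r][col] / M[col][col]
            for c in range(col, k):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (v[r] - sum(M[r][c] * b[c] for c in range(r + 1, k))) / M[r][r]
    return b

# toy GPR attributes (e.g. amplitude, frequency) and porosity generated
# as an exact linear combination, so the fit recovers the coefficients
X = [(0.2, 0.1), (0.4, 0.3), (0.6, 0.2), (0.8, 0.5), (1.0, 0.4)]
y = [0.1 + 0.5 * a - 0.2 * f for a, f in X]
coeffs = fit_linear(X, y)
print([round(c, 3) for c in coeffs])  # recovers [0.1, 0.5, -0.2]
```

The fitted coefficients play the role of the "level of effect" the thesis extracts: the larger a coefficient's magnitude, the stronger that attribute's influence on the porosity estimate.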
Abstract:
In this work, a parallel cooperative genetic algorithm with different evolution behaviors was developed to train Multilayer Perceptron neural networks and to define their architectures. Multilayer Perceptrons are very powerful tools whose use has spread widely due to their ability to deliver good results across a broad range of applications. Combining genetic algorithms with parallel processing can be very effective when applied to the learning process of the neural network, as well as to the definition of its architecture, since this procedure can be very slow, usually requiring substantial computational time. Research combining evolutionary computation with the design of neural networks is also useful, since most learning algorithms developed to train neural networks only adjust the synaptic weights and do not consider the design of the network architecture. Furthermore, the use of cooperation in the genetic algorithm allows different populations to interact, avoiding local minima, helping the search for a promising solution and accelerating the evolutionary process. Finally, the individuals and the evolution behavior can be exclusive to each copy of the genetic algorithm running in each task, enhancing the diversity of the populations.
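The cooperative scheme described above can be sketched as an island-model genetic algorithm: each island evolves its own population with its own behavior (here, a different mutation strength), and the best individuals migrate between islands. To keep the sketch short and deterministic enough to run, the fitness below is a toy quadratic distance to a target weight vector; in the thesis the fitness would be the MLP's training error, and the islands would run as parallel tasks.

```python
import random

random.seed(0)

TARGET = [0.5, -1.0, 2.0]  # stand-in for a good MLP weight vector (assumed)

def fitness(ind):
    # toy fitness: squared distance to TARGET (lower is better)
    return sum((w - t) ** 2 for w, t in zip(ind, TARGET))

def evolve_island(pop, mutation_sigma, generations):
    """Truncation selection, one-point crossover, Gaussian mutation."""
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: len(pop) // 2]       # elitist: best half kept intact
        children = []
        while len(survivors) + len(children) < len(pop):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(a))
            child = a[:cut] + b[cut:]          # one-point crossover
            child = [w + random.gauss(0, mutation_sigma)
                     if random.random() < 0.3 else w for w in child]
            children.append(child)
        pop[:] = survivors + children

# two islands with different evolution behaviors (mutation strengths)
islands = [[[random.uniform(-3, 3) for _ in range(3)] for _ in range(20)]
           for _ in range(2)]
sigmas = [0.5, 0.05]   # one explorative island, one fine-tuning island

for epoch in range(10):
    for pop, sigma in zip(islands, sigmas):
        evolve_island(pop, sigma, generations=10)
    # cooperation: the best individual of each island migrates to the other
    best = [min(pop, key=fitness) for pop in islands]
    islands[0].append(best[1][:])
    islands[1].append(best[0][:])

overall = min((min(pop, key=fitness) for pop in islands), key=fitness)
print(round(fitness(overall), 4))
```

Because survivors are carried over unchanged, the best fitness on each island never worsens, and migration lets the fine-tuning island refine solutions discovered by the explorative one.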
Abstract:
This study presents the implementation and embedding of an Artificial Neural Network (ANN) in hardware, on a programmable device such as a field-programmable gate array (FPGA). The work explored different implementations, described in VHDL, of multilayer perceptron ANNs. Because of the parallelism inherent in ANNs, software implementations suffer from the sequential nature of Von Neumann architectures; a hardware implementation, by contrast, can exploit all the parallelism implicit in this model. FPGAs are increasingly used as a platform for implementing neural networks in hardware, exploiting their high processing power, low cost, ease of programming and circuit reconfigurability, which allows the network to adapt to different applications. In this context, the aim is to develop neural network arrays in hardware with a flexible architecture, in which it is possible to add or remove neurons and, above all, to modify the network topology, so as to enable a modular fixed-point network on an FPGA. Five VHDL descriptions were synthesized: two for a neuron with one or two inputs, and three for different ANN architectures. The architecture descriptions are highly modular, easily allowing the number of neurons to be increased or decreased. As a result, several complete neural networks were implemented on an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
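A software model helps make the fixed-point datapath concrete before writing VHDL. The sketch below simulates a two-input neuron in an assumed Q4.12 format (4 integer bits, 12 fractional bits) with a hard-limit activation; the actual word width and activation in the thesis may differ.

```python
FRAC_BITS = 12
SCALE = 1 << FRAC_BITS          # Q4.12: 12 fractional bits (assumed format)

def to_fixed(x):
    """Quantize a real value to a Q4.12 integer."""
    return int(round(x * SCALE))

def fixed_mul(a, b):
    # the product of two Q4.12 values has 24 fractional bits;
    # shift right to return to 12, as a hardware multiplier would
    return (a * b) >> FRAC_BITS

def neuron(inputs, weights, bias):
    """Fixed-point multiply-accumulate followed by a step activation."""
    acc = to_fixed(bias)
    for x, w in zip(inputs, weights):
        acc += fixed_mul(to_fixed(x), to_fixed(w))
    return 1 if acc >= 0 else 0  # hard limiter keeps the datapath simple

# two-input neuron computing a logical AND (weights chosen by hand)
print(neuron([1.0, 1.0], [0.6, 0.6], -1.0))  # 1
print(neuron([1.0, 0.0], [0.6, 0.6], -1.0))  # 0
```

Each `fixed_mul` corresponds to one hardware multiplier plus a shift, and the accumulation to an adder tree, which is where the FPGA's neuron-level parallelism comes from.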