100 results for Artificial Neural Networks
Abstract:
In recent decades, neural networks have become established as a major tool for the identification of nonlinear systems. Among the various types of networks used in identification, one that stands out is the wavelet neural network (WNN). This network combines the characteristics of wavelet multiresolution theory with the learning and generalization abilities of neural networks, usually providing more accurate models than those obtained with traditional networks. An extension of WNNs combines the neuro-fuzzy ANFIS (Adaptive Network Based Fuzzy Inference System) structure with wavelets, giving rise to the Fuzzy Wavelet Neural Network (FWNN) structure. This network is very similar to ANFIS networks, the difference being that the traditional polynomials in the consequents are replaced by WNNs. This work proposes the identification of nonlinear dynamical systems using a modified FWNN. In the proposed structure, only wavelet functions are used in the consequents, which simplifies the structure and reduces the number of adjustable parameters of the network. To evaluate the performance of the FWNN with this modification, an analysis of network performance is carried out, examining advantages, disadvantages and cost-effectiveness in comparison with other FWNN structures in the literature. The evaluations are carried out through the identification of two simulated systems traditionally found in the literature and of a real nonlinear system, consisting of a nonlinear multi-section tank. Finally, the network is used to infer temperature and humidity values inside a neonatal incubator. These analyses are based on several criteria, such as mean squared error, number of training epochs, number of adjustable parameters and the variation of the mean squared error, among others. The results show the generalization ability of the modified structure, despite the simplification performed.
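As a hedged illustration of the wavelet consequents described above, the sketch below computes the output of a small bank of wavelet nodes in Python. The Mexican-hat mother wavelet and all parameter values are assumptions for illustration; the abstract does not specify the wavelet family or the training procedure.

```python
import numpy as np

def mexican_hat(z):
    """Mexican-hat (Ricker) mother wavelet, a common choice for WNN nodes."""
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

def wavelet_consequent(x, translations, dilations, weights):
    """Weighted sum of dilated/translated wavelets over the input vector x.

    In a wavelet-only consequent, a value like this would replace the
    polynomial output of a conventional ANFIS rule.
    """
    z = (x[None, :] - translations) / dilations        # one row per wavelet node
    psi = np.prod(mexican_hat(z), axis=1)              # multidimensional wavelet
    return float(np.dot(weights, psi))

# toy usage: 2 inputs, 3 wavelet nodes with hypothetical parameters
rng = np.random.default_rng(0)
x = np.array([0.4, -0.2])
t = rng.normal(size=(3, 2))      # translations
d = np.ones((3, 2))              # dilations
w = rng.normal(size=3)           # linear output weights (adjusted by training)
print(wavelet_consequent(x, t, d, w))
```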
Abstract:
In this dissertation, new propagation path loss prediction models are proposed, based on recent optimization techniques and on power-level measurements for the urban and suburban areas of Natal, a city in northeastern Brazil. The proposed models are: (i) a statistical model, implemented by adding second-order statistics of the power and of the terrain altimetry to a linear loss model; (ii) an artificial neural network model, trained with the backpropagation algorithm, to obtain the propagation loss equation; (iii) a model based on the random walker technique, which accounts for the randomness of absorption and the disorder of the environment, and whose unknown parameters in the propagation loss equation are determined by a neural network. The terrain of the urban and suburban areas of Natal was digitized using specifically developed computer programs and maps available from the Brazilian Institute of Geography and Statistics. The proposed propagation models were validated through comparisons with measurements and with classic propagation models, and good numerical agreement was observed. These new models can be applied to any urban or suburban scenario with architectural characteristics similar to those of Natal.
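A minimal sketch of the ANN-based approach (model ii) is shown below, using scikit-learn's MLPRegressor in place of the dissertation's own backpropagation implementation. The features (log-distance and relief altimetry) and all numerical values are hypothetical placeholders for the measured data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical features: log10(distance in km) and relief altimetry (m);
# target: measured path loss (dB). Real data would come from the drive tests.
X = np.array([[np.log10(d), h] for d, h in [(0.5, 12), (1.0, 20), (2.0, 35), (4.0, 18)]])
y = np.array([102.0, 115.0, 128.0, 140.0])    # illustrative values only

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1),
)
model.fit(X, y)                                # backpropagation-based training
print(model.predict([[np.log10(3.0), 25]]))    # predicted loss for a new point
```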
Abstract:
This study presents the implementation and embedding of an Artificial Neural Network (ANN) in hardware, that is, in a programmable device such as a field-programmable gate array (FPGA). The work explored different implementations, described in VHDL, of multilayer perceptron ANNs. Because of the parallelism inherent in ANNs, software implementations are at a disadvantage due to the sequential nature of Von Neumann architectures. Hardware implementation is an alternative that makes it possible to exploit all the parallelism implicit in this model. FPGAs are increasingly used as a platform to implement neural networks in hardware, exploiting their high processing power, low cost, ease of programming and ability to reconfigure the circuit, which allows the network to adapt to different applications. In this context, the aim is to develop neural network arrays in hardware with a flexible architecture, in which neurons can be added or removed and, above all, the network topology can be modified, enabling a modular fixed-point network in an FPGA. Five VHDL descriptions were synthesized: two for neurons with one or two inputs, and three for different ANN architectures. The architecture descriptions are highly modular, easily allowing the number of neurons to be increased or decreased. As a result, complete neural networks were implemented in an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
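To make the fixed-point arithmetic concrete, the sketch below emulates a single hardware neuron's multiply-accumulate using Python integers. The Q8 scaling, the hard-limit activation and all values are assumptions for illustration, not the formats used in the study.

```python
FRAC_BITS = 8            # assumed Q8 fractional precision; the study's format may differ
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return int(round(x * SCALE))

def fixed_neuron(inputs, weights, bias):
    """Multiply-accumulate in integer (fixed-point) arithmetic, as a neuron
    synthesized in an FPGA would do, followed by a hard-limit activation."""
    acc = to_fixed(bias)
    for x, w in zip(inputs, weights):
        acc += (to_fixed(x) * to_fixed(w)) >> FRAC_BITS   # rescale the product
    return 1 if acc > 0 else 0

# two-input neuron, mirroring the two-input VHDL description
print(fixed_neuron([0.75, -0.25], [0.5, 1.0], bias=-0.1))
```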
Abstract:
A neuro-fuzzy system combines two or more control techniques in a single structure. The main characteristic of this structure is that it joins the strengths of each technique to form a hybrid controller. Such a controller can be based on fuzzy systems, artificial neural networks, genetic algorithms or reinforcement learning techniques. Neuro-fuzzy systems have proved to be a promising technique in industrial applications. Two neuro-fuzzy models were developed, an ANFIS model and a NEFCON model. Both were applied to control a ball-and-beam system, and their results and the required modifications are discussed. The choice of controller inputs and the learning algorithms used, among other aspects of the hybrid systems, are also discussed. The results show the structural changes after learning and the conditions under which each controller should be used, based on its characteristics.
Abstract:
This work aims to predict the total maximum demand of a transformer to be used in power systems to supply a Multiple Unit Consumption (MUC) under design. In 1987, COSERN observed that the calculation of the maximum total demand of a building should differ from the one used to size the entrance protection, so as not to overestimate the transformer power. Since then there have been many changes, both in the consumption habits of the population and in electrical appliances, so this work seeks to improve the estimation of peak demand. For the survey, identification data and electrical design data were collected from different MUCs located in Natal. In some of them, demand was measured for 7 consecutive days and adjusted to a 30-minute integration interval. The maximum demand was estimated using mathematical models that calculate the desired response from a set of previously known information about the MUCs. The models tested were simple linear regressions, multiple linear regressions and artificial neural networks. The results obtained throughout the study were compared and, finally, the best one was compared with the previously proposed model.
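A comparison of such models could look like the sketch below, which fits a multiple linear regression and a small ANN with scikit-learn and reports the in-sample RMSE. The feature choices and all numbers are invented placeholders, not data from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Hypothetical MUC features: number of apartments and installed load (kVA);
# target: measured maximum demand (kVA). Values are illustrative only.
X = np.array([[16, 120], [24, 180], [32, 260], [48, 390], [64, 520]], dtype=float)
y = np.array([45.0, 62.0, 80.0, 118.0, 150.0])

linear = LinearRegression().fit(X, y)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

for name, model in [("multiple linear regression", linear), ("ANN", ann)]:
    rmse = mean_squared_error(y, model.predict(X)) ** 0.5
    print(f"{name}: RMSE = {rmse:.2f} kVA")
```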
Abstract:
Nowadays, classifying proteins into structural classes, which involves inferring patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason is that the function of a protein is intrinsically related to its spatial conformation; however, such conformations are very difficult to obtain experimentally in the laboratory. Thus, this problem has drawn the attention of many researchers in Bioinformatics. Given the great difference between the number of protein sequences already known and the number of three-dimensional structures determined experimentally, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbor, Naive Bayes, Support Vector Machine and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these individual classifiers, homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work presents imbalanced classes, class-balancing techniques (Random Undersampling, Tomek Links, CNN, NCL and OSS) are used to minimize this problem. In order to evaluate the ML methods, a cross-validation procedure is applied, in which the accuracy of the classifiers is measured as the mean classification error rate on independent test sets. These means are compared pairwise by hypothesis tests to assess whether the differences between them are statistically significant. Among the individual classifiers, the Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as the meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem addressed in this work. The class-balancing techniques, on the other hand, did not produce a significant improvement in the overall classification error; nevertheless, they did improve the classification error for the minority class. In this context, the NCL technique proved to be the most appropriate.
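As a rough illustration of this kind of evaluation, the sketch below cross-validates a few of the individual classifiers and two ensemble schemes on a synthetic imbalanced data set with scikit-learn. The data, parameters and choice of ensembles are assumptions; they mimic only the structure of the comparison, not the work's actual protein data or results.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier, VotingClassifier

# Synthetic stand-in for the protein descriptor data (the real set is imbalanced).
X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           weights=[0.6, 0.3, 0.1], random_state=42)

classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "Bagging(Tree)": BaggingClassifier(DecisionTreeClassifier(), random_state=0),
    "Voting": VotingClassifier([("dt", DecisionTreeClassifier(random_state=0)),
                                ("knn", KNeighborsClassifier()),
                                ("nb", GaussianNB())]),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)          # mean error = 1 - mean accuracy
    print(f"{name}: mean error = {1 - scores.mean():.3f}")
```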
Abstract:
Artificial neural networks are usually applied to solve complex problems. For more complex problems, increasing the number of layers and neurons can yield greater functional efficiency, but it also demands greater computational effort. Response time is an important factor in the decision to use neural networks in some systems. Many argue that the computational cost is higher during the training period; however, this phase is carried out only once. Once the network has been trained, it is necessary to use the available computational resources efficiently. In the multicore era, the problem boils down to the efficient use of all available processing cores, while taking into account the overhead of parallel computing. In this sense, this work proposes a modular structure that proved more suitable for parallel implementations. The feedforward phase of an MLP-type ANN is parallelized with OpenMP on a shared-memory computer architecture. The research consists of testing and analyzing execution times; speedup, efficiency and parallel scalability are analyzed. In the proposed approach, reducing the number of connections between remote neurons decreases the response time of the network and, consequently, the total execution time. The time required for communication and synchronization is directly linked to the number of remote neurons in the network, so it is necessary to investigate the best distribution of remote connections.
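The work implements this with OpenMP in a compiled language; purely as an illustration of the neuron-partitioning idea, the Python/numpy sketch below splits one layer's neurons into blocks that are evaluated concurrently and then concatenated. Layer sizes and the tanh activation are assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def forward_layer_parallel(W, b, x, n_workers=4):
    """Feedforward of one MLP layer with its neurons split into blocks.

    Each block of rows of W (a group of neurons) is computed independently,
    mirroring how parallel threads can each evaluate a slice of the layer.
    """
    blocks = np.array_split(np.arange(W.shape[0]), n_workers)

    def block_output(idx):
        return np.tanh(W[idx] @ x + b[idx])

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(block_output, blocks))
    return np.concatenate(parts)

rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(64, 32)), rng.normal(size=64), rng.normal(size=32)
# the partitioned result matches the plain sequential evaluation
assert np.allclose(forward_layer_parallel(W, b, x), np.tanh(W @ x + b))
```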
Abstract:
Self-organizing maps (SOM) are artificial neural networks widely used in data mining, mainly because they constitute a dimensionality reduction technique, given the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through its neurons, relevant characteristics of the data set. In general, applying such processing to the network neurons, instead of to the entire database, reduces the computational cost due to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. These methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and distances among neurons, both associated with the positions that the neurons occupy in the data space after training the network. Thus, the goal is to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using several artificially generated data sets as well as real-world data sets. The results obtained were compared with those of a number of well-known methods from the literature.
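The gravitational and shortest-path methods are the work's own contribution and are not reproduced here; as a baseline example of standard SOM post-processing in the output space, the sketch below computes a U-matrix (average distance from each neuron to its grid neighbours) over an assumed grid of already-trained codebook vectors.

```python
import numpy as np

def u_matrix(codebooks):
    """Average distance from each neuron to its grid neighbours (a standard
    SOM post-processing view); high values suggest cluster borders."""
    rows, cols, _ = codebooks.shape
    u = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            neigh = [codebooks[a, b] for a, b in
                     [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                     if 0 <= a < rows and 0 <= b < cols]
            u[i, j] = np.mean([np.linalg.norm(codebooks[i, j] - v) for v in neigh])
    return u

# hypothetical 5x5 SOM assumed already trained on 3-dimensional data
rng = np.random.default_rng(1)
codebooks = rng.normal(size=(5, 5, 3))
print(u_matrix(codebooks).round(2))
```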
Abstract:
Optical fiber is nowadays one of the most widely used communication media, mainly because the data transmission rates of these systems exceed those of all other means of digital communication. Despite this great advantage, some problems prevent full utilization of the optical channel: as the transmission speed and the distances involved increase, the data is subjected to nonlinear intersymbol interference caused by dispersion phenomena in the fiber. Adaptive equalizers can be used to solve this problem: they compensate for the non-ideal response of the channel in order to restore the transmitted signal. This work proposes an equalizer based on artificial neural networks and evaluates its performance in optical communication systems. The proposal is validated on a simulated optical channel and through comparison with other adaptive equalization techniques.
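In the same spirit, though not the work's actual simulator, the sketch below trains a small MLP as an equalizer over a tapped delay line of samples from a toy nonlinear dispersive channel and measures the error rate on held-out data. The channel model, tap count and network size are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=2000)                  # transmitted binary symbols

# Toy nonlinear channel with memory: linear dispersion plus a cubic term and noise.
received = (0.9 * symbols
            + 0.4 * np.roll(symbols, 1)
            - 0.2 * symbols**3
            + 0.05 * rng.normal(size=symbols.size))

# Sliding window of received samples as the equalizer input (tapped delay line).
taps = 5
X = np.array([received[i:i + taps] for i in range(len(received) - taps)])
y = symbols[taps // 2: len(received) - taps + taps // 2]      # centre-tap alignment

eq = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
eq.fit(X[:1500], y[:1500])
ser = np.mean(np.sign(eq.predict(X[1500:])) != y[1500:])      # error rate on held-out data
print(f"symbol error rate after equalization: {ser:.3f}")
```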
Abstract:
The petroleum industry, as a consequence of intense exploration and production activity, is responsible for a large share of the residues generated, which are considered toxic and polluting to the environment. Among these is oil sludge, produced during the production, transportation and refining phases. The purpose of this work was to develop a process to recover the oil present in oil sludge, so that the recovered oil can be used as fuel or returned to the refining plant. From preliminary tests, the most important independent variables were identified: temperature, contact time, and solvent and acid volumes. Initially, a series of parameters was determined to characterize the oil sludge. A special extractor was designed to work with oily waste. Two experimental designs were applied: fractional factorial and Doehlert. The tests were carried out in batch mode under the conditions of the applied experimental designs. The average efficiency obtained in the oil extraction process was 70%. The oil sludge is composed of 36.2% oil, 16.8% ash, 40% water and 7% volatile constituents. However, the statistical analysis showed that the quadratic model did not fit the process well, with a relatively low determination coefficient (60.6%). This occurred due to the complexity of the oil sludge. To obtain a model able to represent the experiments, an artificial neural network (ANN) model was used, generated initially with 2, 4, 5, 6, 7 and 8 neurons in the hidden layer, 64 experimental results and 10,000 presentations (iterations). The smallest dispersion between experimental and calculated values was obtained with 4 neurons, considering the ratio of experimental points to estimated parameters. The analysis of the average test deviations divided by the respective training deviations showed that 2,150 presentations yielded the best parameter values. For the new model, the determination coefficient was 87.5%, which is quite satisfactory for the studied system.
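A hedged sketch of this kind of hidden-layer screening is shown below: a small MLP is trained with each candidate number of hidden neurons on synthetic stand-in data and the held-out error is compared. The data generator and split are invented; only the list of hidden-layer sizes follows the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the 64 design-of-experiments runs:
# inputs = temperature, contact time, solvent volume, acid volume; output = extraction efficiency.
rng = np.random.default_rng(0)
X = rng.uniform(size=(64, 4))
y = 0.7 + 0.2 * X[:, 0] - 0.1 * X[:, 1] * X[:, 2] + 0.02 * rng.normal(size=64)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
for n_hidden in (2, 4, 5, 6, 7, 8):                       # sizes screened in the study
    ann = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=10000,
                       random_state=1).fit(X_tr, y_tr)
    err = mean_squared_error(y_te, ann.predict(X_te))
    print(f"{n_hidden} hidden neurons: test MSE = {err:.4f}")
```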
Abstract:
Photo-oxidation processes for toxic organic compounds have been widely studied. This work addresses the application of the photo-Fenton process to the degradation of hydrocarbons in water. Refinery gasoline, without additives or alcohol, was used as the model pollutant. The effects of the concentrations of the following substances were evaluated: hydrogen peroxide (100-200 mM), iron ions (0.5-1 mM) and sodium chloride (200-2000 ppm). The experiments were carried out in a reactor with a UV lamp and in a falling-film solar reactor. The photo-oxidation process was monitored through measurements of the absorption spectra, total organic carbon (TOC) and chemical oxygen demand (COD). Experimental results demonstrated that the photo-Fenton process is feasible for the treatment of wastewaters containing aliphatic hydrocarbons, even in the presence of salts. These conditions are similar to those of the produced water from petroleum fields, generated during the extraction and production of petroleum. A neural network model of the process correlated well with the observed data for the photo-oxidation of hydrocarbons.
Abstract:
The use of non-human primates in scientific research has contributed significantly to the biomedical area and, in the case of Callithrix jacchus, has provided important evidence on physiological mechanisms that help explain its biology, making the species a valuable experimental model for different pathologies. However, raising non-human primates in captivity for long periods is accompanied by behavioral disorders and chronic diseases, as well as progressive weight loss in most of the animals. The Primatology Center of the Universidade Federal do Rio Grande do Norte (UFRN) has housed a colony of C. jacchus for nearly 30 years, and during this period the animals have been weighed systematically to detect possible alterations in their clinical condition. This procedure has generated a large volume of data on the weight of animals at different age ranges, and these data are of great importance for studying this variable from different perspectives. Accordingly, this work presents three studies using weight data collected over 15 years (1985-2000) as a way of verifying the health status and development of the animals. The first study produced the first article, which describes the histopathological findings of animals with a probable diagnosis of permanent wasting marmoset syndrome (WMS). All of these animals were carriers of trematode parasites (Platynosomum spp) and had obstruction of the hepatobiliary system; it is suggested that this agent is one of the etiological factors of the syndrome. In the second article, the analysis focused on comparing the environmental profile and cortisol levels of animals with a normal weight curve and those with WMS. We observed a marked decrease in locomotion, increased use of the lower cage strata and hypocortisolemia, the latter likely associated with an adaptation of the mechanisms that make up the hypothalamus-hypophysis-adrenal axis, as observed in other mammals under chronic malnutrition. Finally, in the third study, animals with weight alterations were excluded from the sample and, using computational tools (K-means and SOM) in an unsupervised manner, new ontogenetic development classes for C. jacchus are suggested. These were redimensioned from five to eight classes: infant I, infant II, infant III, juvenile I, juvenile II, sub-adult, young adult and elderly adult, providing a more suitable classification for detailed studies that require better control over animal development.
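Purely to illustrate the unsupervised grouping step (K-means; the work also uses SOM), the sketch below clusters synthetic age-weight records into eight groups, the number of ontogenetic classes suggested in the third study. The growth-curve generator and all values are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical longitudinal records: age in days and body weight in grams.
rng = np.random.default_rng(0)
ages = rng.uniform(0, 4000, size=500)
weights = 30 + 320 * (1 - np.exp(-ages / 300)) + rng.normal(0, 15, size=500)
records = np.column_stack([ages, weights])

# Unsupervised partition into eight groups, matching the number of
# ontogenetic classes proposed in the third study.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(records)
for label in range(8):
    age_range = ages[km.labels_ == label]
    print(f"class {label}: {age_range.min():.0f}-{age_range.max():.0f} days")
```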
Abstract:
In this work, the quantitative analysis of glucose, triglycerides and cholesterol (total and HDL) in both rat and human blood plasma was performed without any kind of sample pretreatment, using near-infrared (NIR) spectroscopy combined with multivariate methods. For this purpose, different techniques and algorithms for data pre-processing, variable selection and multivariate regression model building were compared, such as partial least squares regression (PLS), nonlinear regression by artificial neural networks (ANN), interval partial least squares regression (iPLS), the genetic algorithm (GA) and the successive projections algorithm (SPA), among others. For the determinations in rat blood plasma samples, the variable selection algorithms showed satisfactory results, both for the correlation coefficients (R²) and for the root mean square error of prediction (RMSEP) values, for the three analytes, especially triglycerides and HDL cholesterol. The RMSEP values for glucose, triglycerides and HDL cholesterol obtained with the best PLS model were 6.08, 16.07 and 2.03 mg dL-1, respectively. For the determinations in human blood plasma, on the other hand, the predictions obtained by the PLS models were unsatisfactory, with a nonlinear tendency and the presence of bias. ANN regression was then applied as an alternative to PLS, given its ability to model data from nonlinear systems. The root mean square errors of monitoring (RMSEM) for glucose, triglycerides and total cholesterol, for the best ANN models, were 13.20, 10.31 and 12.35 mg dL-1, respectively. Statistical tests (F and t) suggest that NIR spectroscopy combined with multivariate regression methods (PLS and ANN) is capable of quantifying these analytes (glucose, triglycerides and cholesterol) even when they are present in highly complex biological fluids such as blood plasma.
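As a hedged illustration of the PLS workflow and the RMSEP figure of merit, the sketch below fits a PLS model to synthetic stand-in spectra with scikit-learn and evaluates it on a held-out set. The spectra, the analyte relationship and the number of latent variables are all assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NIR spectra (200 wavelengths) and a reference analyte
# concentration in mg/dL; real data would come from the spectrometer and assays.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(80, 200))
conc = spectra[:, 50] * 20 + spectra[:, 120] * 10 + 100 + rng.normal(0, 2, 80)

X_cal, X_val, y_cal, y_val = train_test_split(spectra, conc, test_size=0.3, random_state=1)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((pred - y_val) ** 2))             # root mean square error of prediction
r2 = np.corrcoef(pred, y_val)[0, 1] ** 2
print(f"RMSEP = {rmsep:.2f} mg/dL, R² = {r2:.3f}")
```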