790 results for ARTIFICIAL NEURAL NETWORK


Relevance:

100.00%

Publisher:

Abstract:

The aim is to obtain computationally more powerful, neurophysiologically founded artificial neurons and neural nets. Artificial Neural Nets (ANN) of the Perceptron type evolved from the original proposal in McCulloch and Pitts' classical paper [1]. Essentially, they keep the computing structure of a linear machine followed by a nonlinear operation. The McCulloch-Pitts formal neuron (which was never considered by its authors to be a model of real neurons) is the simplest case: a linear computation on the inputs followed by a threshold. Networks of one layer cannot compute any logical function of the inputs, but only those which are linearly separable. Thus, even the simple exclusive OR (contrast detector) function of two inputs requires two layers of formal neurons.
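
By way of illustration (not taken from the paper), a McCulloch-Pitts style threshold unit and a hand-wired two-layer XOR network might look as follows; the weights and thresholds are chosen by hand for this example.

```python
import numpy as np

def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts formal neuron: weighted sum followed by a hard threshold."""
    return int(np.dot(inputs, weights) >= threshold)

def xor_two_layers(x1, x2):
    """XOR is not linearly separable, so it needs two layers of formal neurons."""
    # First layer: "x1 AND NOT x2" and "x2 AND NOT x1"
    h1 = mp_neuron([x1, x2], [1, -1], 1)
    h2 = mp_neuron([x1, x2], [-1, 1], 1)
    # Second layer: OR of the two hidden units
    return mp_neuron([h1, h2], [1, 1], 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_two_layers(a, b))   # prints the XOR truth table
```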

Relevance:

100.00%

Publisher:

Abstract:

The training algorithm studied in this paper is inspired by the biological metaplasticity property of neurons. Tested on different multidisciplinary applications, it achieves more efficient training and improves Artificial Neural Network performance. The algorithm has recently been proposed for Artificial Neural Networks in general, although a Multilayer Perceptron is used here for the purpose of discussing its biological plausibility. During the training phase, the artificial metaplasticity multilayer perceptron can be considered a new probabilistic version of the presynaptic rule, since the algorithm assigns higher weight-update values to the less probable activations than to those with higher probability.
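
The following is only a rough sketch of the idea, not the authors' algorithm: the weight update for each training pattern is scaled by the inverse of an assumed (Gaussian) probability of its activation, so that rare activations produce larger updates. The probability model and the constants A and B are illustrative assumptions.

```python
import numpy as np

def metaplastic_scale(x, A=39.5, B=0.5):
    """Scale factor that grows for less probable (rare) input activations.

    Assumes a Gaussian-like model of the activation probability, so the factor
    is proportional to 1 / p(x); the constants A and B are illustrative only.
    """
    p = A / np.sqrt(2.0 * np.pi) * np.exp(-B * np.sum(x ** 2))
    return 1.0 / p

def update_weights(w, x, error, lr=0.01):
    """Delta-rule style update with the metaplasticity scaling applied."""
    return w + lr * metaplastic_scale(x) * error * x

w = np.zeros(3)
x = np.array([0.2, -1.3, 0.7])        # one training pattern
w = update_weights(w, x, error=0.4)   # improbable patterns get larger steps
```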

Relevance:

100.00%

Publisher:

Abstract:

Neutron spectrum unfolding and dose equivalent calculation are complicated tasks in radiation protection; they are highly dependent on the neutron energy, and precise knowledge of neutron spectrometry is essential for all dosimetry-related studies as well as many nuclear physics experiments. Previous works have reported neutron spectrometry and dosimetry results obtained with ANN technology as an alternative solution, starting from the count rates of a Bonner spheres system with a LiI(Eu) thermal neutron detector, 7 polyethylene spheres and the UTA4 response matrix with 31 energy bins. In this work, an ANN was designed and optimized using the RDANN methodology for the Bonner spheres system used at CIEMAT (Spain), which is composed of a He neutron detector, 12 moderator spheres and a response matrix with 72 energy bins. For the ANN design process, a neutron spectra catalogue compiled by the IAEA was used. From this compilation, the neutron spectra were converted from lethargy to energy spectra. Then, the resulting energy fluence spectra were re-binned, using the MCNP code, to the energy bins of the He response matrix mentioned above. With the response matrix and the re-binned spectra, the count rates of the Bonner spheres system were calculated, and the resulting re-binned neutron spectra and calculated count rates were used as the ANN training data set.
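
The construction of the training set described above amounts, in essence, to a matrix-vector product between the response matrix and each re-binned fluence spectrum. A minimal sketch, in which all array sizes and values are placeholders:

```python
import numpy as np

# Assumed shapes: 12 moderator spheres x 72 energy bins (illustrative values only)
n_spheres, n_bins = 12, 72
response_matrix = np.random.rand(n_spheres, n_bins)   # placeholder for the He response matrix
spectra = np.random.rand(200, n_bins)                  # placeholder re-binned fluence spectra

# Count rate of each sphere = sum over energy bins of response * fluence
count_rates = spectra @ response_matrix.T              # shape (200, 12)

# ANN training set: inputs are the count rates, targets are the spectra
X_train, y_train = count_rates, spectra
```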

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes an optimization relaxation approach based on the analogue Hopfield Neural Network (HNN) for cluster refinement of pre-classified Polarimetric Synthetic Aperture Radar (PolSAR) image data. We consider the initial classification provided by the maximum-likelihood classifier based on the complex Wishart distribution, which is then supplied to the HNN optimization approach. The goal is to improve the classification results obtained by the Wishart approach. The classification improvement is verified by computing a cluster separability coefficient and a measure of homogeneity within the clusters. During the HNN optimization process, for each iteration and for each pixel, two consistency coefficients are computed, taking into account two types of relations between the pixel under consideration and its corresponding neighbors. Based on these coefficients and on the information coming from the pixel itself, the pixel under study is re-classified. Different experiments are carried out to verify that the proposed approach outperforms other strategies, achieving the best results in terms of separability together with a good trade-off with homogeneity, while preserving relevant structures in the image. The performance is also measured in terms of computational central processing unit (CPU) times.
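
A rough, simplified sketch of this kind of neighbourhood relaxation (not the paper's exact consistency coefficients or energy function): each pixel is re-classified from a weighted combination of its own label and the labels of its four neighbours, iterating for a fixed number of passes.

```python
import numpy as np

def relax_labels(labels, n_classes, n_iter=10, alpha=0.5):
    """Iteratively re-classify each pixel from its own label and its 4-neighbours.

    labels: 2-D integer array of initial (e.g. Wishart) class assignments.
    alpha:  weight of the pixel's own evidence versus neighbourhood consistency.
    """
    rows, cols = labels.shape
    current = labels.copy()
    for _ in range(n_iter):
        updated = current.copy()
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                votes = np.zeros(n_classes)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    votes[current[i + di, j + dj]] += (1 - alpha)
                votes[current[i, j]] += alpha * 4     # the pixel's own evidence
                updated[i, j] = int(np.argmax(votes))
        current = updated
    return current

initial = np.random.randint(0, 3, size=(16, 16))   # toy pre-classified image
refined = relax_labels(initial, n_classes=3)
```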

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we describe the development of a control system for Demand-Side Management in the residential sector with Distributed Generation. The electrical system under study incorporates local PV energy generation, an electricity storage system, connection to the grid and a home automation system. The distributed control system is composed of two modules: a scheduler and a coordinator, both implemented with neural networks. The control system enhances the local energy performance, scheduling the tasks demanded by the user and maximizing the use of local generation.
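
As a simple illustration of the scheduling goal, rather than of the paper's neural scheduler and coordinator, a greedy heuristic can shift deferrable tasks towards the hours with the most forecast PV generation; all task and PV figures below are invented.

```python
def schedule_tasks(pv_forecast, tasks):
    """Greedy sketch: place each deferrable task at the hour with the highest
    remaining forecast PV generation, to maximise use of local generation.

    pv_forecast: list of forecast PV power per hour (kW), length 24.
    tasks:       list of (name, power_kW) deferrable tasks, one hour each.
    """
    remaining = list(pv_forecast)
    plan = {}
    for name, power in sorted(tasks, key=lambda t: -t[1]):
        hour = max(range(len(remaining)), key=lambda h: remaining[h])
        plan[name] = hour
        remaining[hour] -= power
    return plan

pv = [0]*7 + [0.5, 1.5, 2.5, 3.0, 3.2, 3.1, 2.6, 1.8, 0.9] + [0]*8
print(schedule_tasks(pv, [("washer", 2.0), ("dishwasher", 1.2)]))
```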

Relevance:

100.00%

Publisher:

Abstract:

Over the last ten years, Salamanca has been considered among the most polluted cities in México. This paper presents a Self-Organizing Map (SOM) Neural Network application to classify pollution data and automate the determination of the air pollution level for Sulphur Dioxide (SO2) in Salamanca. Meteorological parameters are well known to be important factors contributing to air quality estimation and prediction. In order to observe the behavior and clarify the influence of wind parameters on the SO2 concentrations, a SOM Neural Network has been applied to one year of data. The main advantage of the SOM is that it allows data from different sensors to be integrated and provides readily interpretable results. In particular, it is a powerful mapping and classification tool, which presents information in an accessible way and facilitates the task of establishing an order of priority among the distinguished groups of concentrations, depending on their need for further research or remediation actions in subsequent management steps. The results show a significant correlation between pollutant concentrations and some environmental variables.
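
A compact self-organizing map trained on vectors of pollutant and wind measurements can be sketched as follows; the grid size, learning schedule and the assumed feature vector are illustrative, not those of the study.

```python
import numpy as np

def train_som(data, grid=(8, 8), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM: each sample pulls the best-matching unit and its grid
    neighbours towards itself, with shrinking learning rate and radius."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = grid
    dim = data.shape[1]
    weights = rng.random((n_rows, n_cols, dim))
    coords = np.dstack(np.meshgrid(np.arange(n_rows), np.arange(n_cols), indexing="ij"))
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
        h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))        # neighbourhood function
        weights += lr * h[..., None] * (x - weights)
    return weights

# Assumed feature vector: [SO2, wind_speed, wind_direction_sin, wind_direction_cos]
data = np.random.rand(500, 4)
som = train_som(data)
```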

Relevance:

100.00%

Publisher:

Abstract:

This work evaluates a spline-based smoothing method applied to the output of a glucose predictor. Methods: Our on-line prediction algorithm is based on a neural network model (NNM). We trained/validated the NNM with a prediction horizon of 30 minutes using 39/54 profiles of patients monitored with the Guardian® Real-Time continuous glucose monitoring system. The NNM output is smoothed by fitting a causal cubic spline. The assessment parameters are the root-mean-square error (RMSE), the mean delay (MD) and the high-frequency noise (HFCrms). The HFCrms is the root-mean-square value of the high-frequency components isolated with a zero-delay non-causal filter. HFCrms is 2.90±1.37 mg/dl for the original profiles.
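
A sketch of causal cubic-spline smoothing of a predicted glucose trace: at each time step the spline is fitted only to samples already observed, so no future values are used. scipy's UnivariateSpline is used here, and the window length and smoothing factor are assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def causal_spline_smooth(t, y, window=12, s=50.0):
    """Smooth y(t) with a cubic spline fitted, at each step, only to the last
    `window` samples (causal: no future samples are used)."""
    smoothed = np.array(y, dtype=float)
    for i in range(window, len(y)):
        lo = i - window
        spline = UnivariateSpline(t[lo:i + 1], y[lo:i + 1], k=3, s=s)
        smoothed[i] = spline(t[i])
    return smoothed

t = np.arange(0, 300, 5.0)                                    # minutes, 5-min CGM sampling
y = 120 + 30*np.sin(t/60) + np.random.normal(0, 4, t.size)    # noisy glucose-like trace
y_smooth = causal_spline_smooth(t, y)
```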

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a new method to extract knowledge from existing data sets, that is, to extract symbolic rules from the weights of an Artificial Neural Network. The method has been applied to a neural network with a special architecture named Enhanced Neural Network (ENN). This architecture improves on the results obtained with a multilayer perceptron (MLP). The relationship among the knowledge stored in the weights, the performance of the network and the newly implemented algorithm to acquire rules from the weights is explained. The method itself provides a model to follow for knowledge acquisition with ENN.
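
A generic, decompositional-style sketch (not the ENN-specific algorithm of the paper): for each output unit, inputs whose weights exceed a threshold become the antecedents of a symbolic IF-THEN rule.

```python
import numpy as np

def extract_rules(weights, feature_names, output_names, threshold=0.5):
    """Very simple rule extraction: for every output unit, keep the inputs whose
    absolute weight exceeds `threshold` and emit an IF-THEN rule whose
    antecedents are those inputs (negated when the weight is negative)."""
    rules = []
    for j, out in enumerate(output_names):
        antecedents = []
        for i, name in enumerate(feature_names):
            w = weights[i, j]
            if abs(w) >= threshold:
                antecedents.append(name if w > 0 else f"NOT {name}")
        if antecedents:
            rules.append(f"IF {' AND '.join(antecedents)} THEN {out}")
    return rules

W = np.array([[0.9, -0.1],
              [-0.7, 0.2],
              [0.1, 0.8]])
print(extract_rules(W, ["x1", "x2", "x3"], ["class_A", "class_B"]))
```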

Relevance:

100.00%

Publisher:

Abstract:

Worldwide, breast cancer is the most frequent type of cancer among women and one of the leading causes of female mortality, and its early detection remains key to improving prognosis and survival. Currently, the most reliable and practical method for early detection of breast lesions is mammography, which contributes decisively to the early diagnosis of a disease that, if detected in time, has a very high probability of cure. One of the main and most frequent findings in a mammogram is the presence of microcalcifications, which are considered an important indicator of malignant types of breast cancer; their detection and classification are important to prevent and treat the disease. However, detecting and classifying microcalcifications remains difficult because mammograms offer poor contrast between microcalcifications and the surrounding tissue. Factors such as visualization conditions, fatigue or the experience of the radiologist increase the risk of missing lesions that are present. To reduce this risk it is important to have alternatives such as a second opinion from another specialist or a double reading by the same one; the first option raises the cost, and both prolong the diagnosis time. This is a strong motivation for the development of decision-support and assistance systems. This thesis proposes, develops and justifies a system capable of detecting microcalcifications in regions of interest extracted from digitized mammograms, in order to contribute to the early detection of breast cancer. The system is based on digital image processing, pattern recognition and artificial intelligence techniques. Its development takes the following considerations into account: 1. In order to train and test the proposed system, an image database is created from regions of interest extracted from digitized mammograms. 2. The top-hat transform, a digital image processing technique based on mathematical morphology operations, is applied in order to improve the contrast between the microcalcifications and the tissue present in the image. 3. A novel algorithm called sub-segmentation is proposed, based on pattern recognition techniques and an unsupervised clustering algorithm, the PFCM (Possibilistic Fuzzy c-Means). The aim is to find the regions corresponding to microcalcifications and distinguish them from healthy tissue. To show the advantages and disadvantages of the proposed algorithm, it is compared with two algorithms of the same type, k-means and FCM (Fuzzy c-Means). It is also worth noting that in this work sub-segmentation is used for the first time to detect regions belonging to microcalcifications in mammography images. 4. Finally, a classifier based on an artificial neural network, specifically an MLP (Multilayer Perceptron), is used to discriminate in a binary way the patterns built from the gray-level intensities of the original image, distinguishing between microcalcification and healthy tissue.
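
A sketch of the contrast-enhancement step using a white top-hat transform from scipy.ndimage, followed by a crude threshold to mark candidate pixels; the structuring-element size and the threshold are assumptions, and the thesis uses PFCM sub-segmentation rather than this simple thresholding.

```python
import numpy as np
from scipy import ndimage

def enhance_microcalcifications(roi, selem_size=9):
    """White top-hat: original minus its morphological opening, which boosts
    small bright details (candidate microcalcifications) over the background."""
    return ndimage.white_tophat(roi, size=(selem_size, selem_size))

def candidate_mask(roi, selem_size=9, k=3.0):
    """Crude candidate map: keep pixels more than k standard deviations above
    the mean of the top-hat image (illustration only, not the PFCM step)."""
    th = enhance_microcalcifications(roi, selem_size)
    return th > th.mean() + k * th.std()

roi = np.random.rand(128, 128)   # placeholder for a mammogram region of interest
mask = candidate_mask(roi)
```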

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the accurate characterization of the reflection coefficients of a multilayered reflectarray element by means of artificial neural networks. The procedure has been tested with different reflectarray (RA) elements related to actual specifications. Up to 9 parameters were considered, and the complete reflection coefficient matrix, including cross-polar reflection coefficients, was accurately obtained. Results show good agreement between simulations carried out with the Method of Moments and the ANN model outputs at the RA element level, as well as with the performance of the complete RA antenna designed.
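
A sketch of this kind of surrogate model: a multilayer perceptron regressor mapping the element parameters (up to 9) to the real and imaginary parts of the reflection coefficients. scikit-learn is used here, and all shapes and data are placeholders rather than Method-of-Moments results.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training data: 9 element parameters -> 8 outputs
# (e.g. real/imag parts of a 2x2 reflection coefficient matrix).
X = np.random.rand(2000, 9)
y = np.random.rand(2000, 8)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X, y)

# Once trained, the ANN replaces costly full-wave runs inside design loops.
coeffs = surrogate.predict(np.random.rand(1, 9))
```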

Relevance:

100.00%

Publisher:

Abstract:

One of the barriers to applying Structural Health Monitoring (SHM) techniques based on guided elastic waves (GLW) to aircraft structures in operation is the pernicious influence of environmental and operational conditions (EOC) on damage diagnosis. This thesis studies that influence and its compensation, focusing on variations of the loading state and the temperature. The compensation is based on Artificial Neural Networks (ANN) fed with experimental data processed with the Chirplet Transform. Changes in geometry and material properties with respect to the initial state of the structure (damage) modify the guided wave form; the feature extracted from it is known as the damage sensitive feature (DSF). Signal processing techniques can relate these variations to damage, which is the basis of SHM. However, variations in the EOC also change the acquired GLW data (the DSF) and cause errors in damage diagnosis algorithms, because the signatures of damage and of the EOC in the DSF are of the same order, which makes them difficult to separate. It is therefore necessary to quantify and compensate the effect of the EOC on the GLW. Several approaches exist to compensate EOC effects, such as Optimal Baseline Selection (OBS) and Baseline Signal Stretching (BSS), but they are used almost exclusively for temperature compensation. The method proposed in this thesis combines experimental data analysis, as in the OBS method, with Artificial Neural Network models that replace the physical modelling required by the BSS method. The data analysis consists of applying the Chirplet Transform (CT) to extract the EOC signature on the DSF. With this information, obtained under a range of EOC, an ANN is trained. The ANN then acts as a baseline interpolator of the undamaged structure, generating reference information for any EOC. Comparing real DSF measurements against the values simulated by the ANN yields the damage signature in the DSF, enabling an accurate damage diagnosis. This scheme has been applied and verified, over a range of EOC, on a one-dimensional structure with a single damage path and on a structure representative of an aircraft fuselage, with curvature and multiple stiffening elements, subjected to a complex loading state and containing multiple damage paths. The EOC effects were studied in detail on the one-dimensional structure and generalized to the fuselage, demonstrating that the method is independent of the structural configuration and of the type of sensors used for GLW data acquisition. Moreover, the methodology can be used for the simultaneous compensation of any measurable set of EOC that affects the guided wave data acquisition. The main result of this thesis is, among others, the CT-ANN methodology for the compensation of EOC in GLW-based SHM techniques for damage diagnosis.
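
A condensed sketch of the CT-ANN idea: an MLP is trained to predict the undamaged-baseline DSF from the measured EOC (temperature and load), and the damage signature is the residual between a new measurement and the interpolated baseline. The Chirplet-domain feature extraction is represented here only by placeholder vectors.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Baseline records from the pristine structure under many EOC.
# Inputs: [temperature, load]; targets: DSF features (Chirplet-domain, placeholders here).
eoc_baseline = np.random.rand(300, 2)
dsf_baseline = np.random.rand(300, 16)

interpolator = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
interpolator.fit(eoc_baseline, dsf_baseline)

def damage_signature(eoc_now, dsf_measured):
    """Residual between the measured DSF and the ANN-interpolated baseline;
    large residuals suggest damage rather than EOC variation."""
    dsf_expected = interpolator.predict(eoc_now.reshape(1, -1))[0]
    return dsf_measured - dsf_expected

residual = damage_signature(np.array([23.5, 0.4]), np.random.rand(16))
```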

Relevance:

100.00%

Publisher:

Abstract:

Jean Piaget's theory of the development of intelligence has been used in computational intelligence as inspiration for models of cognitive agents. Although the proposed models implement important basic aspects of Piaget's theory, such as the structure of the cognitive schema, they do not consider the symbol grounding problem and therefore do not address the aspects of the theory that lead to the autonomous acquisition of the basic semantics needed for the cognitive organization of the external world, such as the acquisition of the notion of object. In this work we present a computational model of a cognitive schema, inspired by Piaget's theory of sensorimotor intelligence, that develops autonomously by building mechanisms through computational principles guided by the symbol grounding problem. The proposed schema model is based on the classification of sensorimotor situations used for the perception, capture and storage of deterministic causal relations of the smallest granularity. These causalities are then expanded spatio-temporally by more complex structures that make use of the earlier ones and that are also designed so that other, more complex autonomous computational structures can make use of them. The proposed model is implemented by a feed-forward artificial neural network whose output-layer elements self-organize to generate an objectified sensorimotor graph. Some existing computational mechanisms from the computational intelligence field were modified to fit the paradigms of null semantics and autonomous mental development, adopted here as the basis for dealing with the symbol grounding problem. The self-organizing sensorimotor graph that implements a schema model inspired by Piaget's theory, proposed in this work, together with the computational principles used in its conception, is a step towards the autonomous artificial cognitive development of the notion of object.

Relevance:

100.00%

Publisher:

Abstract:

Short-term load forecasting for power systems has been a classic problem for a long time. Not only has it been researched extensively and intensively, but a variety of forecasting methods has also been proposed. This thesis outlines some aspects and functions of smart meters. It also presents the policies, current statuses, future projects and objectives of smart grid (SG) development in several countries. The thesis then compares the main aspects of the latest smart meter products from different companies. Lastly, three types of prediction models are established in MATLAB to emulate the functions of the smart grid in short-term load forecasting, and their results are compared and analyzed in terms of accuracy. In this thesis, additional variables such as the dew point temperature are used in the Neural Network model to achieve better short-term load forecasting accuracy.
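
A sketch of a short-term load forecasting model along the lines described: an MLP fed with the previous 24 hourly loads plus the current temperature and dew-point temperature. All data below are synthetic placeholders (the thesis models were built in MATLAB; Python is used here only for illustration).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

n_hours = 24 * 90   # 90 days of hourly data (synthetic)
hour_idx = np.arange(n_hours)
load = 50 + 20*np.sin(hour_idx * 2*np.pi/24) + np.random.normal(0, 2, n_hours)
temperature = 15 + 8*np.sin(hour_idx * 2*np.pi/24 - 1.0)
dew_point = temperature - 4 + np.random.normal(0, 0.5, n_hours)

# Features: load at t-1..t-24 plus current temperature and dew point; target: load at t.
X, y = [], []
for t in range(24, n_hours):
    X.append(np.concatenate([load[t-24:t], [temperature[t], dew_point[t]]]))
    y.append(load[t])
X, y = np.array(X), np.array(y)

model = MLPRegressor(hidden_layer_sizes=(48,), max_iter=1000, random_state=0)
model.fit(X[:-168], y[:-168])                 # hold out the last week
mae = np.mean(np.abs(model.predict(X[-168:]) - y[-168:]))
print(f"one-week hold-out MAE: {mae:.2f}")
```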

Relevance:

100.00%

Publisher:

Abstract:

Rocks used as construction aggregate in temperate climates deteriorate to differing degrees because of repeated freezing and thawing. The magnitude of the deterioration depends on the rock's properties. Aggregate, including crushed carbonate rock, is required to have minimum geotechnical qualities before it can be used in asphalt and concrete. In order to reduce the chances of premature and expensive repairs, extensive freeze-thaw tests are conducted on potential construction rocks. These tests typically involve 300 freeze-thaw cycles and can take four to five months to complete. Less time-consuming tests that (1) predict durability as well as the extended freeze-thaw test or that (2) reduce the number of rocks subject to the extended test could save considerable amounts of money. Here we use a probabilistic neural network to try to predict durability, as determined by the freeze-thaw test, from four rock properties measured on 843 limestone samples from the Kansas Department of Transportation. Modified freeze-thaw tests and less time-consuming specific gravity (dry), specific gravity (saturated), and modified absorption tests were conducted on each sample. Durability factors of 95 or more, as determined from the extensive freeze-thaw tests, are viewed as acceptable; rocks with values below 95 are rejected. If only the modified freeze-thaw test is used to predict which rocks are acceptable, about 45% are misclassified. When 421 randomly selected samples and all four standardized and scaled variables were used to train a probabilistic neural network, the rate of misclassification of 422 independent validation samples dropped to 28%. The network was trained so that each class (group) and each variable had its own coefficient (sigma). In an attempt to reduce errors further, an additional class was added to the training data to predict durability values greater than 84 and less than 98, resulting in only 11% of the samples being misclassified. About 43% of the test data was classed by the neural net into the middle group; these rocks should be subject to full freeze-thaw tests. Thus, use of the probabilistic neural network would mean that the extended test would only need to be applied to 43% of the samples, and 11% of the rocks classed as acceptable would fail early.
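
A sketch of a Parzen-window probabilistic neural network with a class-specific smoothing parameter (sigma), applied to standardized rock-property vectors; the classes, sigma values and data below are illustrative only.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigmas):
    """Probabilistic neural network: each class score is the average Gaussian
    kernel between x and that class's training samples, with a class-specific
    smoothing parameter sigma; the class with the largest score wins."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2 * sigmas[int(c)] ** 2))))
    return classes[int(np.argmax(scores))]

# Placeholder: 4 standardized properties (modified freeze-thaw, two specific
# gravities, modified absorption) and 3 durability classes.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 4))
y_train = rng.integers(0, 3, size=400)      # 0: reject, 1: middle band, 2: accept
sigmas = {0: 0.8, 1: 0.6, 2: 0.7}           # per-class smoothing (illustrative)
label = pnn_predict(X_train, y_train, rng.normal(size=4), sigmas)
```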

Relevance:

100.00%

Publisher:

Abstract:

Background: The identification and characterization of genes that influence the risk of common, complex multifactorial disease primarily through interactions with other genes and environmental factors remains a statistical and computational challenge in genetic epidemiology. We have previously introduced a genetic programming optimized neural network (GPNN) as a method for optimizing the architecture of a neural network to improve the identification of gene combinations associated with disease risk. The goal of this study was to evaluate the power of GPNN for identifying high-order gene-gene interactions. We were also interested in applying GPNN to a real data analysis in Parkinson's disease. Results: We show that GPNN has high power to detect even relatively small genetic effects (2–3% heritability) in simulated data models involving two and three locus interactions. The limits of detection were reached under conditions with very small heritability (