770 results for Neural network method
Abstract:
The paper focuses on the analysis of radial-gated spillways, carried out by solving a numerical model based on the finite element method (FEM). The Oliana Dam is considered as a case study, and its discharge capacity is predicted both with a level-set-based free-surface solver and with traditional empirical formulations. The results of the analysis are then used to train an artificial neural network that allows real-time prediction of the discharge for any combination of energy head and gate opening within the operating range of the reservoir. The comparison of the results obtained with the different methods shows that numerical models such as the FEM can be useful predictive tools for analysing the hydraulic performance of radial-gated spillways.
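As a minimal sketch of the last step, assuming the FEM runs have been collected into arrays of energy head, gate opening and computed discharge (the variable names, placeholder data and network size below are illustrative, not taken from the paper), a small feed-forward regressor can act as the real-time surrogate:

```python
# Minimal sketch: surrogate ANN for discharge prediction (illustrative only).
# Assumes `heads`, `openings` (m) and `discharges` (m^3/s) come from prior FEM runs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
heads = rng.uniform(1.0, 10.0, 200)        # placeholder FEM input: energy head
openings = rng.uniform(0.5, 5.0, 200)      # placeholder FEM input: gate opening
discharges = 2.1 * openings * np.sqrt(2 * 9.81 * heads)  # placeholder FEM output

X = np.column_stack([heads, openings])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                                   random_state=0))
model.fit(X, discharges)

# Real-time query for an arbitrary operating point within the training range.
print(model.predict([[6.0, 2.5]]))
```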
Abstract:
Seepage flow is an important behavior indicator that provides information about dam performance. The main objective of this study is to analyze seepage by means of an artificial neural network model. The model is trained and validated with data measured at a case-study dam. The model reproduces the dam's response to different water level changes, and a hysteresis phenomenon is detected and studied. Artificial neural network models are shown to be a powerful tool for predicting and understanding the seepage phenomenon.
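A hedged sketch of such a model, assuming seepage is predicted from the current reservoir level plus a few lagged levels so that the network can express the hysteresis mentioned above (the lag depth, layer size and placeholder data are assumptions, not values from the study):

```python
# Sketch: seepage prediction from current and lagged reservoir levels (assumed features).
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged_matrix(levels, lags=3):
    """Stack level(t), level(t-1), ..., level(t-lags) as one input row per time step."""
    rows = [levels[lags - k: len(levels) - k] for k in range(lags + 1)]
    return np.column_stack(rows)

levels = np.sin(np.linspace(0, 20, 500)) * 5 + 100      # placeholder reservoir levels (m)
seepage = np.roll(levels, 5) * 0.02 + 1.0               # placeholder seepage (l/s) with delay

X = lagged_matrix(levels, lags=3)
y = seepage[3:]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.score(X, y))   # in-sample fit; a real study would hold out a validation period
```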
Abstract:
In general, the allocation of a fleet of vehicles that travel fixed routes is not carried out entirely on the basis of objective criteria; other features that are harder to quantify tend to prevail. A proper analysis should take into account the variability among the different routes within the same city, in order to determine which technology best fits the characteristics of each itinerary. This work presents a methodology for optimizing the assignment of a fleet of vehicles to its routes so as to reduce fuel consumption and pollutant emissions. The proposed method is organized according to the following procedure: - Recording of the kinematic characteristics of the vehicles that travel a representative set of routes. - Grouping of the lines into clusters of similar routes using a hierarchical algorithm that optimizes a similarity index between routes, obtained by hypothesis testing on the representative variables. - Construction of a specific kinematic cycle for each cluster. - Selection of macroscopic variables that allow the remaining lines to be classified with a neural network trained on the information gathered from the measured routes. - Characterization of the available fleet. - Availability of a model that estimates, according to the vehicle technology, the fuel consumption and the emissions associated with the kinematic variables of the cycles. - Development of a vehicle reassignment algorithm that optimizes an objective function depending on the emissions. Two scenarios of great relevance for environmental assessment are considered in the fleet optimization process: minimizing carbon dioxide emissions, because of their impact as a greenhouse gas (GHG), and, alternatively, minimizing the production of nitrogen oxides, because of their influence on acid rain and on the formation of tropospheric ozone in urban areas. In both cases, additional constraints are introduced to prevent the emissions of the remaining substances from exceeding the values corresponding to the fleet organization currently implemented by the operator. The methodology has been applied to 160 bus lines of the EMT of Madrid, with kinematic data available for 25 routes. The results show that, in both scenarios, it is feasible to obtain a redistribution of the fleet that significantly reduces the emissions of most pollutant substances, while preventing the emission of any other pollutant from increasing in return.
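A minimal sketch of the route-clustering and line-classification steps described above, assuming each route is summarized by a small vector of kinematic/macroscopic variables (the feature names, the number of clusters and the network size are placeholders, not values from the study):

```python
# Sketch: hierarchical clustering of measured routes + NN classification of the rest.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder features for 25 measured routes:
# [mean speed (km/h), stops per km, mean positive acceleration (m/s^2)]
measured = rng.uniform([10, 1, 0.3], [30, 6, 1.2], size=(25, 3))

# 1) Group the measured routes into clusters of similar driving patterns.
Z = linkage(measured, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")   # assumed 4 clusters

# 2) Train a neural network to assign the remaining lines, described by their
#    macroscopic variables, to one of the clusters.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
clf.fit(measured, labels)

# Placeholder descriptors of an unmeasured line.
print(clf.predict([[22.0, 3.5, 0.7]]))
```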
Abstract:
The aim of this project is to implement a system able to analyze body movement from a set of kinematic points. These kinematic points are obtained with a previous program and captured with the Kinect camera. The first step is a study of the existing techniques and knowledge related to human movement. Rudolf Laban was one of the greatest exponents of this field, and thanks to his observations a relation was established between personality, mood and the way a person moves. Laban coined the term effort, which refers to the way the energy that generates a movement is managed and how it is modulated through the sequence; it is a way of describing the inner intention behind the expression. Effort is divided into four categories: weight, space, time and flow, and each category has two polarities called effort elements. These eight effort elements characterize a movement. In order to quantify them, movements representing each element are recorded with the Kinect camera and their values are saved in a CSV file. A neural network is chosen to process these data because of its flexibility and its ability to handle non-linear inputs. Its implementation requires a broad study covering topologies, activation functions, types of learning and training algorithms, among others. The network is designed with two hidden layers for better processing of the data; it is static, follows a feedforward computation and is trained with the backpropagation algorithm. In a static network the inputs must be fixed values, i.e. they cannot vary in time, so an intermediate program is implemented to compute the arithmetic mean of the values. A second test with the same network checks whether it is able to recognize movements characterized by more than one effort element; for this purpose the movements are recorded again, this time in pairs of two elements, and the rest of the process remains the same.
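A minimal sketch of the network described above, assuming the time-averaged kinematic values of each recording form one input vector and the target is one of the eight effort elements (feature dimensionality, layer sizes and labels are illustrative assumptions):

```python
# Sketch: static feedforward net (2 hidden layers, backpropagation) on averaged features.
import numpy as np
from sklearn.neural_network import MLPClassifier

def average_features(recording):
    """Collapse a (frames x features) recording into one fixed-size input vector."""
    return recording.mean(axis=0)

rng = np.random.default_rng(0)
n_recordings, n_frames, n_features = 80, 120, 60   # placeholder sizes
recordings = rng.normal(size=(n_recordings, n_frames, n_features))
labels = rng.integers(0, 8, size=n_recordings)     # one of the 8 effort elements

X = np.array([average_features(r) for r in recordings])

# MLPClassifier trains a feedforward network by backpropagation (two hidden layers here).
clf = MLPClassifier(hidden_layer_sizes=(32, 16), activation="tanh",
                    max_iter=2000, random_state=0)
clf.fit(X, labels)
print(clf.predict(X[:3]))
```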
Abstract:
Worldwide, breast cancer is the most frequent type of cancer and one of the main causes of death among women. Currently, the most effective method for detecting breast lesions at an early stage is mammography, which contributes decisively to the early diagnosis of a disease that, if caught in time, has a very high probability of cure. One of the main and most frequent findings in a mammogram are microcalcifications, which are considered an important indicator of breast cancer. When analyzing mammograms, however, there is poor contrast between microcalcifications and the surrounding tissue, and factors such as visualization conditions, fatigue or the professional experience of the radiologist increase the risk of missing lesions. To reduce this risk it is important to have alternatives such as a second opinion from another specialist or a double reading by the same one; the first option raises the cost and both lengthen the diagnosis time. This is a strong motivation for developing decision-support systems. This thesis proposes, develops and justifies a system able to detect microcalcifications in regions of interest extracted from digitized mammograms, in order to contribute to the early detection of breast cancer. The system is based on digital image processing, pattern recognition and artificial intelligence techniques, and its development takes the following points into account: 1. In order to train and test the proposed system, a database of images is created, consisting of regions of interest extracted from digitized mammograms. 2. The Top-Hat transform, a digital image processing technique based on mathematical morphology operations, is applied to improve the contrast between the microcalcifications and the tissue present in the image. 3. A novel algorithm called sub-segmentation is proposed, based on pattern recognition techniques and an unsupervised clustering algorithm, the PFCM (Possibilistic Fuzzy c-Means). The aim is to find the regions corresponding to microcalcifications and to distinguish them from healthy tissue. To show the advantages and disadvantages of the proposed algorithm, it is compared with two algorithms of the same type, k-means and FCM (Fuzzy c-Means). It is worth noting that in this work sub-segmentation is used for the first time to detect regions belonging to microcalcifications in mammography images. 4. Finally, a classifier based on an artificial neural network, specifically an MLP (Multi-layer Perceptron), is proposed. The purpose of the classifier is to discriminate, in a binary way, the patterns built from the gray-level intensities of the original image, distinguishing between microcalcification and healthy tissue.
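A hedged sketch of the contrast-enhancement step (point 2), using the white top-hat morphological operator on a region of interest; the structuring-element size and the toy image are assumptions for illustration, not the thesis's parameters:

```python
# Sketch: Top-Hat contrast enhancement of small bright spots (e.g., microcalcifications).
import numpy as np
from skimage.morphology import white_tophat, disk

rng = np.random.default_rng(0)
roi = rng.normal(0.5, 0.05, size=(64, 64))      # placeholder tissue background
roi[20, 30] += 0.4                               # placeholder bright spot
roi[40, 12] += 0.35

# The white top-hat keeps structures smaller than the structuring element,
# suppressing the slowly varying tissue background.
enhanced = white_tophat(roi, footprint=disk(3))

print(float(enhanced.max()), float(enhanced.mean()))
```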
Abstract:
In this paper a glucose-insulin regulator for type 1 diabetes based on artificial neural networks (ANN) is proposed. A discrete recurrent high-order neural network is used to identify and control a nonlinear dynamical system that represents the pancreatic beta-cell behavior of a virtual patient. The ANN that reproduces and identifies the dynamical behavior of the system is configured in a series-parallel scheme and trained online with the extended Kalman filter algorithm, to achieve fast convergence of the in silico identification. The control objective is to regulate the glucose-insulin level under different glucose inputs and is based on a nonlinear neural block control law. A safety block is included between the control output signal and the virtual patient with type 1 diabetes mellitus. Simulations cover a period of three days. Simulation results are compared during the overnight fasting period in open loop (OL) versus closed loop (CL). Semi-closed-loop (SCL) tests add a feedforward term in order to give information to the control algorithm. We conclude that the controller is able to drive the glucose to target during overnight periods and that the feedforward action is necessary to control the postprandial period.
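A minimal sketch of online extended-Kalman-filter training for a small neural identifier, assuming the network weights are treated as the EKF state and a scalar measurement is processed at each step (the toy plant, network size and noise covariances are assumptions, not values from the paper):

```python
# Sketch: online EKF training of a tiny one-hidden-layer network used as an identifier.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 2, 5
n_w = n_hid * n_in + n_hid          # hidden weights + linear output weights
w = rng.normal(scale=0.1, size=n_w) # network weights = EKF state
P = np.eye(n_w) * 1.0               # weight covariance
Q = np.eye(n_w) * 1e-5              # process noise (allows slow weight drift)
R = 1e-2                            # measurement noise variance

def forward(w, x):
    """Scalar output y = v^T tanh(W x); also return the Jacobian dy/dw."""
    W = w[:n_hid * n_in].reshape(n_hid, n_in)
    v = w[n_hid * n_in:]
    h = np.tanh(W @ x)
    y = v @ h
    dW = np.outer(v * (1 - h**2), x).ravel()   # dy/dW
    dv = h                                     # dy/dv
    return y, np.concatenate([dW, dv])

def plant(x):
    """Toy nonlinear system standing in for the virtual patient's dynamics."""
    return 0.8 * x[0] - 0.3 * x[1] ** 2

x_prev = 0.0
for t in range(500):
    u = np.sin(0.05 * t)                  # excitation input
    x = np.array([x_prev, u])
    y_meas = plant(x) + rng.normal(scale=np.sqrt(R))

    # EKF measurement update with H = dy/dw evaluated at the current weights.
    y_hat, H = forward(w, x)
    P = P + Q
    S = H @ P @ H + R
    K = (P @ H) / S
    w = w + K * (y_meas - y_hat)
    P = P - np.outer(K, H @ P)

    x_prev = y_meas                        # crude feedback for the toy example

print("final prediction error:",
      abs(forward(w, np.array([x_prev, 0.5]))[0] - plant(np.array([x_prev, 0.5]))))
```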
Abstract:
The purpose of the research work resulting from various studies undertaken in the CEDEX, as summarized in this article, is to make a comparative analysis of methods for calculating overtopping rates developed by different authors. To this end, existing formulas for estimating the overtopping rate on rubble mound and vertical breakwaters were first summarised and analysed. The formulas were then compared using the results obtained in a series of hydraulic model tests at the CEDEX; the results obtained in the Ferrol outer harbour breakwater and Melilla harbour breakwater tests are presented here. To complete this research, a calculation method based on neural network theory, developed in the European CLASH Project, was applied to a series of sloping breakwater tests, and the results obtained in the Ferrol outer harbour breakwater test are presented in this article. A series of additional tests was also carried out on a physical model of the standard cross section of the Bilbao harbour sloping breakwater; these results are being studied using the empirical formulas applicable to that cross section, as well as the NN-OVERTOPPING neural network.
Abstract:
The purpose of the research work resulting from various studies undertaken in the CEDEX, as summarized in this article, is to make a comparative analysis of methods for calculating overtopping rates developed by different authors. To this effect, in the first place, existing formulae for estimating the overtopping rate on rubble mound and vertical breakwaters were summarised and analysed. Later, the above mentioned formulae were compared using the results obtained in a series of hydraulic model tests at the CEDEX (the Center of Studies of Ports and Coasts of the CEDEX, Madrid, Spain). A calculation method based on the neural network theory, developed in the European CLASH Project, was applied to a series of sloping breakwater tests in order to complete this research. The results obtained in the Ferrol, Ciervana and Alicante breakwaters tests are presented here.
Abstract:
This paper reports extensive tests of empirical equations developed by different authors for harbour breakwater overtopping. First, the existing equations are compiled and evaluated as tools for estimating overtopping rates on sloping and vertical breakwaters. These equations are then tested using the data obtained in a number of laboratory studies performed at the Centre for Harbours and Coastal Studies of the CEDEX, Spain. It was found that the recommended application ranges of the empirical equations typically deviate from those revealed by the experimental tests. In addition, a neural network model developed within the European CLASH Project is tested. The wind effects on overtopping are also assessed using a reduced-scale physical model.
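As a hedged illustration of the kind of empirical equation being tested, a dimensionless exponential relation of the general form used in overtopping studies can be evaluated against measured rates; the coefficients a and b and the test data below are placeholders, not the values of any particular author's formula or of the CEDEX tests:

```python
# Sketch: comparing a generic empirical overtopping formula with measured rates.
# q / sqrt(g * Hm0^3) = a * exp(-b * Rc / Hm0)   (a, b are placeholder coefficients)
import math

G = 9.81

def empirical_q(hm0, rc, a=0.2, b=2.6):
    """Mean overtopping discharge (m^3/s per m) from wave height Hm0 and freeboard Rc."""
    return a * math.exp(-b * rc / hm0) * math.sqrt(G * hm0 ** 3)

# Placeholder test conditions: (Hm0 [m], Rc [m], measured q [m^3/s per m])
tests = [(4.0, 6.0, 3.0e-3), (5.0, 6.0, 2.5e-2), (3.0, 7.0, 1.0e-4)]

for hm0, rc, q_meas in tests:
    q_calc = empirical_q(hm0, rc)
    print(f"Hm0={hm0}, Rc={rc}: calculated {q_calc:.2e}, measured {q_meas:.2e}, "
          f"ratio {q_calc / q_meas:.2f}")
```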
Abstract:
One of the barriers to applying structural health monitoring (SHM) techniques based on guided elastic waves (GLW) in aircraft is the pernicious influence of the environmental and operational conditions (EOC). This thesis studies that influence and its compensation, focusing on variations of the loading state and the temperature. The compensation is based on Artificial Neural Networks (ANN) fed with experimental data processed with the Chirplet Transform. Changes in the geometry and in the material properties with respect to the initial state of the structure (i.e. damage) produce changes in the GLW waveform, which are captured in what is called the damage sensitive feature (DSF). Signal processing techniques can be used to relate those variations to damage, which is the essence of SHM. However, variations in the EOC also change the acquired GLW data (the DSF) and cause errors in the damage diagnosis algorithms, because the signatures of damage and of the EOC in the DSF are of the same order. It is therefore necessary to quantify and compensate the effect of the EOC on the GLW. Although several methodologies exist to compensate EOC effects, such as Optimal Baseline Selection (OBS) or Baseline Signal Stretching (BSS), they are used almost exclusively to compensate thermal effects. The method proposed in this thesis combines experimental data analysis, as in the OBS method, with Artificial Neural Network (ANN) models that replace the physical modelling required by the BSS method. The experimental data analysis consists of applying the Chirplet Transform (CT) to extract the signature of the EOC on the DSF. With this information, obtained under a range of EOC, an ANN is trained. The ANN then acts as an interpolator of baselines of the undamaged structure, generating reference information for any EOC. Comparing the real DSF measurements with the values simulated by the ANN yields the damage signature in the DSF, enabling damage diagnosis. This scheme has been applied and verified, under a range of EOC, on a one-dimensional structure with a single damage path and on a structure representative of an aircraft fuselage, with curvature and multiple stiffeners, subjected to a complex loading state and containing multiple damage paths. The EOC effects were studied in detail in the one-dimensional structure and generalized to the fuselage, demonstrating that the method is independent of the structural configuration and of the type of sensors used for GLW data acquisition. Moreover, the methodology can be used for the simultaneous compensation of any measurable set of EOC affecting the guided wave data acquisition. The main result of this thesis, among others, is the CT-ANN methodology for the compensation of EOC in GLW-based SHM techniques for damage diagnosis.
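A minimal sketch of the baseline-interpolation idea, assuming a CT-derived feature of the undamaged structure has been measured over a grid of temperature and load values (the feature definition, grid and numbers are illustrative assumptions, not the thesis's data):

```python
# Sketch: ANN as a baseline interpolator over environmental/operational conditions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Placeholder baseline data: DSF value measured on the pristine structure
# for combinations of temperature (deg C) and applied load (kN).
temps = rng.uniform(-20, 60, 300)
loads = rng.uniform(0, 50, 300)
dsf_baseline = 1.0 + 0.002 * temps - 0.001 * loads   # placeholder EOC dependence

X = np.column_stack([temps, loads])
interp = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
interp.fit(X, dsf_baseline)

# New measurement at some EOC: the residual with respect to the interpolated baseline
# is the damage-sensitive part that remains after EOC compensation.
dsf_measured, temp_now, load_now = 1.12, 25.0, 10.0
residual = dsf_measured - interp.predict([[temp_now, load_now]])[0]
print("damage indicator (residual):", residual)
```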
Abstract:
A technique for systematic peptide variation by a combination of rational and evolutionary approaches is presented. The design scheme consists of five consecutive steps: (i) identification of a “seed peptide” with a desired activity, (ii) generation of variants selected from a physicochemical space around the seed peptide, (iii) synthesis and testing of this biased library, (iv) modeling of a quantitative sequence-activity relationship by an artificial neural network, and (v) de novo design by a computer-based evolutionary search in sequence space using the trained neural network as the fitness function. This strategy was successfully applied to the identification of novel peptides that fully prevent the positive chronotropic effect of anti-β1-adrenoreceptor autoantibodies from the serum of patients with dilated cardiomyopathy. The seed peptide, comprising 10 residues, was derived by epitope mapping from an extracellular loop of human β1-adrenoreceptor. A set of 90 peptides was synthesized and tested to provide training data for neural network development. De novo design revealed peptides with desired activities that do not match the seed peptide sequence. These results demonstrate that computer-based evolutionary searches can generate novel peptides with substantial biological activity.
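A hedged sketch of step (v), assuming a trained model is available as the fitness function and mutation simply swaps residues in the 10-mer; the scoring function below is a stand-in for the trained neural network, not the paper's model:

```python
# Sketch: evolutionary search in peptide sequence space with a surrogate fitness function.
import random

AA = "ACDEFGHIKLMNPQRSTVWY"
random.seed(0)

def fitness(seq):
    """Stand-in for the trained neural network; a real run would call that model here."""
    return sum((ord(a) * (i + 1)) % 7 for i, a in enumerate(seq)) / 70.0

def mutate(seq, n_mut=1):
    seq = list(seq)
    for _ in range(n_mut):
        seq[random.randrange(len(seq))] = random.choice(AA)
    return "".join(seq)

# (1+lambda)-style evolutionary search starting from a placeholder seed peptide.
seed = "AAAAAAAAAA"
best, best_fit = seed, fitness(seed)
for generation in range(200):
    for child in (mutate(best) for _ in range(20)):
        f = fitness(child)
        if f > best_fit:
            best, best_fit = child, f
print(best, best_fit)
```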
Abstract:
Dynamic importance weighting is proposed as a Monte Carlo method that has the capability to sample relevant parts of the configuration space even in the presence of many steep energy minima. The method relies on an additional dynamic variable (the importance weight) to help the system overcome steep barriers. A non-Metropolis theory is developed for the construction of such weighted samplers. Algorithms based on this method are designed for simulation and global optimization tasks arising from multimodal sampling, neural network training, and the traveling salesman problem. Numerical tests on these problems confirm the effectiveness of the method.
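A heavily hedged sketch of the mechanism: the sampler carries an importance weight alongside the state, and the weight is updated so that barrier crossings can be absorbed by the weight rather than blocked. The specific update below follows one commonly cited variant of a dynamically weighted move (weight multiplied by the Metropolis ratio, acceptance probability R/(R+theta)); it is an illustration of the idea, not necessarily the paper's exact scheme:

```python
# Sketch: dynamically weighted Monte Carlo move on a bounded 1-D double-well landscape.
import math, random

random.seed(0)

def energy(x):
    # Double well with minima near x = +-2, capped so the toy example stays numerically tame.
    return min(20.0, (x * x - 4.0) ** 2 / 2.0)

theta = 1.0                                     # tuning constant of the weighted move
x, w = -2.0, 1.0
samples = []

for step in range(5000):
    x_prop = x + random.gauss(0.0, 0.5)
    r = math.exp(energy(x) - energy(x_prop))    # Metropolis ratio
    R = w * r                                   # dynamically weighted ratio
    if random.random() < R / (R + theta):
        x, w = x_prop, R + theta                # accept: move state, update weight
    else:
        w = w * (R + theta) / theta             # reject: inflate weight to keep balance
    samples.append((x, w))

# Importance-weighted estimate of the probability of the right-hand well.
num = sum(wi for xi, wi in samples if xi > 0)
den = sum(wi for xi, wi in samples)
print("weighted fraction in right-hand well:", num / den)
```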
Abstract:
Deciphering the information that eyes, ears, and other sensory organs transmit to the brain is important for understanding the neural basis of behavior. Recordings from single sensory nerve cells have yielded useful insights, but single neurons generally do not mediate behavior; networks of neurons do. Monitoring the activity of all cells in a neural network of a behaving animal, however, is not yet possible. Taking an alternative approach, we used a realistic cell-based model to compute the ensemble of neural activity generated by one sensory organ, the lateral eye of the horseshoe crab, Limulus polyphemus. We studied how the neural network of this eye encodes natural scenes by presenting to the model movies recorded with a video camera mounted above the eye of an animal that was exploring its underwater habitat. Model predictions were confirmed by simultaneously recording responses from single optic nerve fibers of the same animal. We report here that the eye transmits to the brain robust “neural images” of objects having the size, contrast, and motion of potential mates. The neural code for such objects is not found in ambiguous messages of individual optic nerve fibers but rather in patterns of coherent activity that extend over small ensembles of nerve fibers and are bound together by stimulus motion. Integrative properties of neurons in the first synaptic layer of the brain appear well suited to detecting the patterns of coherent activity. Neural coding by this relatively simple eye helps explain how horseshoe crabs find mates and may lead to a better understanding of how more complex sensory organs process information.
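A hedged sketch of the kind of cell-based computation involved: the classical Hartline-Ratliff lateral-inhibition equations for the Limulus eye, solved iteratively for a small one-dimensional array of ommatidia. The array size, coupling strengths and stimulus are illustrative, not the parameters of the model used in the paper:

```python
# Sketch: steady-state lateral inhibition, r_i = max(0, e_i - sum_j k_ij * r_j).
import numpy as np

n = 30                                       # number of ommatidia (illustrative)
positions = np.arange(n)

# Excitation: a dark "object" passing over a bright background.
e = np.full(n, 5.0)
e[12:18] = 2.0

# Inhibitory coupling falls off with distance; no self-inhibition in this sketch.
dist = np.abs(positions[:, None] - positions[None, :])
k = 0.1 * np.exp(-dist / 4.0)
np.fill_diagonal(k, 0.0)

# Solve the self-consistent equations by fixed-point iteration.
r = e.copy()
for _ in range(200):
    r = np.maximum(0.0, e - k @ r)

print(np.round(r, 2))   # edges of the dark object are enhanced relative to its interior
```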
Abstract:
Although much of the brain’s functional organization is genetically predetermined, it appears that some noninnate functions can come to depend on dedicated and segregated neural tissue. In this paper, we describe a series of experiments that have investigated the neural development and organization of one such noninnate function: letter recognition. Functional neuroimaging demonstrates that letter and digit recognition depend on different neural substrates in some literate adults. How could the processing of two stimulus categories that are distinguished solely by cultural conventions become segregated in the brain? One possibility is that correlation-based learning in the brain leads to a spatial organization in cortex that reflects the temporal and spatial clustering of letters with letters in the environment. Simulations confirm that environmental co-occurrence does indeed lead to spatial localization in a neural network that uses correlation-based learning. Furthermore, behavioral studies confirm one critical prediction of this co-occurrence hypothesis, namely, that subjects exposed to a visual environment in which letters and digits occur together rather than separately (postal workers who process letters and digits together in Canadian postal codes) do indeed show less behavioral evidence for segregated letter and digit processing.
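A minimal sketch of the correlation-based-learning argument the simulations rely on: items that co-occur in the input develop stronger mutual associations under a simple Hebbian rule than items that never appear together. The toy item labels, co-occurrence statistics and learning rate are assumptions for illustration:

```python
# Sketch: Hebbian (correlation-based) learning groups co-occurring items together.
import numpy as np

rng = np.random.default_rng(0)
n_items = 8                 # items 0-3 play the role of "letters", 4-7 of "digits"
W = np.zeros((n_items, n_items))
lr = 0.01

for _ in range(5000):
    x = np.zeros(n_items)
    if rng.random() < 0.5:
        x[rng.choice(4, size=2, replace=False)] = 1.0        # letters co-occur with letters
    else:
        x[4 + rng.choice(4, size=2, replace=False)] = 1.0    # digits co-occur with digits
    W += lr * np.outer(x, x)                                  # Hebbian update

np.fill_diagonal(W, 0.0)
# Within-category associations end up much stronger than between-category ones.
print("letter-letter mean:", W[:4, :4].mean(), " letter-digit mean:", W[:4, 4:].mean())
```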
Self-organized phase transitions in neural networks as a neural mechanism of information processing.
Abstract:
Transitions between dynamically stable activity patterns imposed on an associative neural network are shown to be induced by self-organized infinitesimal changes in synaptic connection strength and to be a kind of phase transition. A key event for the neural process of information processing in a population coding scheme is the transition between the activity patterns encoding the usual entities. We propose that infinitesimal and short-term synaptic changes based on the Hebbian learning rule are the driving force for the transition. The phase transition between the following two dynamically stable states is studied in detail: the state in which the firing pattern changes over time so as to itinerate among several patterns, and the state in which the firing pattern is fixed to one of those patterns. The phase transition from the pattern-itinerant state to a pattern-fixed state may be induced by the Hebbian learning process under a weak input relevant to the fixed pattern. The reverse transition may be induced by the Hebbian unlearning process without input. The former transition is interpreted as recognition of the input stimulus, while the latter is interpreted as clearing of the used input data to get ready for new input. To ensure that information processing based on the phase transition can be carried out by infinitesimal and short-term synaptic changes, it is essential that the network always stays near the critical state corresponding to the phase-transition point.
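A hedged sketch of the two ingredients combined in the abstract: an associative network with Hebbian storage, plus a small Hebbian "learning" increment applied on top of the stored couplings (an "unlearning" step would subtract a similar small term). Pattern sizes, the increment and the update counts are illustrative, not the paper's model:

```python
# Sketch: associative network with a small Hebbian perturbation of the couplings.
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 3
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage of the patterns (Hopfield-style couplings).
J = (patterns.T @ patterns) / N
np.fill_diagonal(J, 0.0)

def run(J, s, steps=2000):
    """Asynchronous updates; returns the final state."""
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if J[i] @ s >= 0 else -1
    return s

s0 = rng.choice([-1, 1], size=N)               # random initial activity

# Small Hebbian increment toward pattern 0 (a weak, short-term synaptic change).
eps = 0.02
J_learn = J + eps * np.outer(patterns[0], patterns[0]) / N

final = run(J_learn, s0)
overlaps = patterns @ final / N
print("overlaps with stored patterns after the weak Hebbian bias:", np.round(overlaps, 2))
```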