936 results for Kohonen neural networks


Relevance: 100.00%

Abstract:

This paper studies several applications of genetic algorithms (GAs) within the neural networks field. After a robust GA engine was built, the system was used to generate neural network circuit architectures. This was accomplished by using the GA to determine the weights in a fully interconnected network. The importance of the internal genetic representation was shown by testing different approaches. The effect of varying the constraints imposed upon the desired network on optimization speed was also studied. It was observed that relatively loose constraints provided results comparable to those of a fully constrained system. The neural network circuits generated were recurrent competitive fields, as described by Grossberg (1982).
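A minimal sketch of the weight-evolution idea described above, assuming a toy task (XOR) and illustrative GA settings (population size, mutation rate, feedforward topology) rather than the paper's actual recurrent competitive fields or fitness function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: evolve the weights of a tiny network (flattened into one
# genome vector) so its output matches XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_HIDDEN = 4
N_GENES = 2 * N_HIDDEN + N_HIDDEN  # input->hidden plus hidden->output weights

def forward(genome, x):
    W1 = genome[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    w2 = genome[2 * N_HIDDEN:]
    h = np.tanh(x @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ w2)))

def fitness(genome):
    pred = forward(genome, X)
    return -np.mean((pred - y) ** 2)  # higher is better

# Plain generational GA: tournament selection, uniform crossover,
# Gaussian mutation, and elitism so the best genome is never lost.
pop = rng.normal(0, 1, size=(60, N_GENES))
for gen in range(300):
    scores = np.array([fitness(g) for g in pop])
    new_pop = [pop[np.argmax(scores)].copy()]          # elitism
    while len(new_pop) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if scores[i] > scores[j] else pop[j]
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if scores[i] > scores[j] else pop[j]
        child = np.where(rng.random(N_GENES) < 0.5, a, b)  # uniform crossover
        child = child + rng.normal(0, 0.3, N_GENES) * (rng.random(N_GENES) < 0.2)
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(g) for g in pop])]
print(np.round(forward(best, X)))  # target is [0, 1, 1, 0]
```

The same loop applies unchanged to a recurrent topology: only `forward` and the genome layout would change.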

Relevance: 100.00%

Abstract:

Genetic Algorithms (GAs) use an internal representation of a given system in order to perform optimization. The structural layout of this representation, called a genome, has a crucial impact on the outcome of the optimization process. The purpose of this paper is to study the effects of different internal representations in a GA that generates neural networks. A second GA was used to optimize the genome structure; the optimized structure produced an optimized system in a shorter time.
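The design choice at issue, how a genome encodes the network parameters, can be illustrated with two generic encodings; both are textbook examples, not the representations actually studied in the paper:

```python
import numpy as np

# Two illustrative genome encodings for the same 4-weight network.
weights = np.array([0.5, -1.25, 2.0, 0.75])

# (a) Direct real-valued genome: the phenotype is the genome itself,
# and mutation perturbs weights directly.
genome_real = weights.copy()

# (b) Fixed-point binary genome: each weight quantized to 8 bits over
# an assumed range [-4, 4); crossover and mutation act on bit strings.
def encode_binary(w, bits=8, lo=-4.0, hi=4.0):
    levels = (1 << bits) - 1
    idx = np.round((w - lo) / (hi - lo) * levels).astype(int)
    return [np.binary_repr(i, width=bits) for i in idx]

def decode_binary(genes, bits=8, lo=-4.0, hi=4.0):
    levels = (1 << bits) - 1
    idx = np.array([int(g, 2) for g in genes])
    return lo + idx / levels * (hi - lo)

genome_bin = encode_binary(weights)
recovered = decode_binary(genome_bin)
err = float(np.max(np.abs(recovered - weights)))
print(err)  # quantization error bounded by half a quantization step
```

A meta-GA of the kind described would search over choices like these (bit width, range, gene ordering) and score each by how quickly the inner GA converges.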

Relevance: 100.00%

Abstract:

Posttraumatic stress disorder (PTSD) alters the functional recruitment of, and connectivity between, neural regions during autobiographical memory (AM) retrieval, regions that overlap with the default and control networks. Whether such univariate changes relate to potential differences in the contributions of the large-scale neural networks supporting cognition in PTSD is unknown. In the present functional MRI study, we employed independent-component analysis to examine the engagement of neural networks during the recall of personal memories in a PTSD group (15 participants) as compared to non-trauma-exposed healthy controls (14 participants). We found that the PTSD group recruited similar neural networks to the controls during AM recall, including default-network subsystems and control networks, but group differences emerged in the spatial and temporal characteristics of these networks. First, we found spatial differences in the contributions of the anterior and posterior midline across the networks, and of the amygdala in particular, for the medial temporal subsystem of the default network. Second, we found temporal differences within the medial prefrontal subsystem of the default network, with less temporal coupling of this network during AM retrieval in PTSD relative to controls. These findings suggest that the spatial and temporal characteristics of the default and control networks potentially differ in a PTSD group versus healthy controls and contribute to altered recall of personal memory.
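The core decomposition step, independent-component analysis, can be sketched on synthetic data; the signals, mixing matrix, and scikit-learn's FastICA below stand in for the actual group-ICA pipeline used on fMRI data:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Synthetic stand-in for ICA on fMRI: three latent "network" time
# courses linearly mixed into observed voxel signals.
t = np.linspace(0, 8, 400)
sources = np.c_[np.sin(7 * t),                 # oscillatory component
                np.sign(np.sin(3 * t)),        # blocky task-like component
                rng.laplace(size=t.size)]      # noise-like component
mixing = rng.normal(size=(3, 10))
observed = sources @ mixing                    # 400 timepoints x 10 "voxels"

ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(observed)        # estimated component time courses
spatial_maps = ica.mixing_                     # per-"voxel" loadings

print(recovered.shape, spatial_maps.shape)
```

In the fMRI setting, the "temporal characteristics" reported above correspond to the recovered time courses and the "spatial characteristics" to the loading maps.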

Relevance: 100.00%

Abstract:

How do separate neural networks interact to support complex cognitive processes such as remembrance of the personal past? Autobiographical memory (AM) retrieval recruits a consistent pattern of activation that potentially comprises multiple neural networks. However, it is unclear how such large-scale neural networks interact and are modulated by properties of the memory retrieval process. In the present functional MRI (fMRI) study, we combined independent component analysis (ICA) and dynamic causal modeling (DCM) to understand the neural networks supporting AM retrieval. ICA revealed four task-related components consistent with the previous literature: 1) medial prefrontal cortex (PFC) network, associated with self-referential processes, 2) medial temporal lobe (MTL) network, associated with memory, 3) frontoparietal network, associated with strategic search, and 4) cingulo-opercular network, associated with goal maintenance. DCM analysis revealed that the medial PFC network drove activation within the system, consistent with the importance of this network to AM retrieval. Additionally, memory accessibility and recollection uniquely altered connectivity between these neural networks. Recollection modulated the influence of the medial PFC on the MTL network during elaboration, suggesting that greater connectivity among subsystems of the default network supports greater re-experience. In contrast, memory accessibility modulated the influence of frontoparietal and MTL networks on the medial PFC network, suggesting that ease of retrieval involves greater fluency among the multiple networks contributing to AM. These results show the integration between neural networks supporting AM retrieval and the modulation of network connectivity by behavior.
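The bilinear state equation underlying DCM, dx/dt = (A + Σ u_j B_j) x + C u, can be illustrated with a two-region simulation; the connectivity values below are invented for illustration and only loosely echo the medial-PFC-to-MTL modulation reported:

```python
import numpy as np

# Two regions; a modulatory input u strengthens the 1 -> 2 connection,
# loosely analogous to recollection modulating PFC -> MTL coupling.
A = np.array([[-1.0, 0.0],
              [ 0.3, -1.0]])     # intrinsic connectivity (self-decay on diagonal)
B = np.array([[0.0, 0.0],
              [0.6, 0.0]])       # modulation of the 1 -> 2 connection
C = np.array([1.0, 0.0])         # driving input enters region 1 only

def simulate(modulated, T=2000, dt=0.01):
    """Euler-integrate dx/dt = (A + u*B) x + C u with a sustained input."""
    x = np.zeros(2)
    u = 1.0
    for _ in range(T):
        J = A + (u * B if modulated else 0.0)
        x = x + dt * (J @ x + C * u)
    return x

x_off = simulate(False)   # steady state approx [1.0, 0.3]
x_on = simulate(True)     # steady state approx [1.0, 0.9]
print(x_off, x_on)
```

Region 2's steady-state response triples when the modulatory input is on, which is the kind of connectivity change DCM estimates from data rather than simulates.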

Relevance: 100.00%

Abstract:

Artificial neural network (ANN) models for water loss (WL) and solid gain (SG) were evaluated as a potential alternative to multiple linear regression (MLR) for the osmotic dehydration of apple, banana and potato. A radial basis function (RBF) network with a Gaussian function was used in this study; the RBF network was trained with the orthogonal least squares learning method. When predictions of the experimental data were compared, the ANN models agreed with the data better than the MLR models, particularly for SG. The coefficient of determination (R2) for SG was 0.31 for the MLR model and 0.91 for the ANN model; for WL, R2 was 0.89 for MLR and 0.84 for ANN. Osmotic dehydration experiments found that the amounts of WL and SG occurred in the following descending order: Golden Delicious apple > Cox apple > potato > banana. The effects of the temperature and concentration of the osmotic solution on WL and SG of the plant materials followed the descending orders 55 > 40 > 32.2C and 70 > 60 > 50 > 40%, respectively.
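A Gaussian RBF network with linearly solved output weights can be sketched as follows; plain least squares stands in for the orthogonal least squares method, and the toy curve is invented, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a saturating nonlinear response loosely mimicking water
# loss over normalized time, with a little measurement noise.
t = np.linspace(0, 1, 40)
y = 1 - np.exp(-3 * t) + 0.02 * rng.normal(size=t.size)

# Gaussian RBF hidden layer: fixed centers and width (illustrative
# choices; OLS would instead select centers one at a time).
centers = np.linspace(0, 1, 8)
width = 0.15
Phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Output weights are linear in the model, so a single least-squares
# solve fits them exactly for the chosen hidden layer.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w

r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(float(r2), 3))
```

The same R2 statistic computed here is the figure of merit the abstract reports when comparing the ANN and MLR models.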

Relevance: 100.00%

Abstract:

This article presents a novel classification of wavelet neural networks based on the orthogonality or non-orthogonality of the neurons and the type of nonlinearity employed. On the basis of this classification, different network types are studied and their characteristics illustrated by means of simple one-dimensional nonlinear examples. For multidimensional problems, which are affected by the curse of dimensionality, the idea of spherical wavelet functions is considered. The behaviour of these networks is also studied for the modelling of a low-dimensional map.
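A one-dimensional wavelet network of the non-orthogonal kind discussed can be sketched with Mexican-hat neurons; the target map, dilation, and network size are illustrative choices, not the article's examples:

```python
import numpy as np

# Wavelet neuron: the Mexican-hat (Ricker) wavelet.
def mexican_hat(u):
    return (1 - u ** 2) * np.exp(-u ** 2 / 2)

x = np.linspace(-3, 3, 200)
target = np.sin(2 * x) * np.exp(-x ** 2 / 4)   # simple 1-D nonlinear map

# Hidden layer: one wavelet per translation, shared dilation. These
# wavelets overlap, so the basis is non-orthogonal.
translations = np.linspace(-3, 3, 12)
dilation = 0.5
Phi = mexican_hat((x[:, None] - translations[None, :]) / dilation)

# Output weights by least squares.
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
pred = Phi @ w
mse = float(np.mean((pred - target) ** 2))
print(round(mse, 5))
```

In higher dimensions the translation grid grows exponentially, which is the curse of dimensionality that motivates the spherical (radial) wavelet functions mentioned above.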

Relevance: 100.00%

Abstract:

This paper investigates the learning of a wide class of single-hidden-layer feedforward neural networks (SLFNs) with two sets of adjustable parameters, i.e., the nonlinear parameters in the hidden nodes and the linear output weights. The main objective is both to speed up the convergence of second-order learning algorithms such as Levenberg-Marquardt (LM) and to improve the network performance. This is achieved here by reducing the dimension of the solution space and by introducing a new Jacobian matrix. Unlike conventional supervised learning methods which optimize these two sets of parameters simultaneously, the linear output weights are first converted into dependent parameters, thereby removing the need for their explicit computation. Consequently, the neural network (NN) learning is performed over a solution space of reduced dimension. A new Jacobian matrix is then proposed for use with the popular second-order learning methods in order to achieve a more accurate approximation of the cost function. The efficacy of the proposed method is shown through an analysis of the computational complexity and by presenting simulation results from four different examples.
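The separation of the two parameter sets can be sketched as follows: for any candidate hidden-node parameters, the linear output weights are solved exactly by least squares, so the second-order optimizer searches only the reduced nonlinear space. SciPy's LM driver and the toy regression task below stand in for the paper's algorithm, Jacobian construction, and benchmarks:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Toy regression task for a single-hidden-layer network.
x = np.linspace(-1, 1, 60)[:, None]
y = np.sin(3 * x).ravel()

N_HIDDEN = 6

def hidden(params, x):
    """Hidden-layer outputs for given nonlinear parameters."""
    W = params[:N_HIDDEN].reshape(1, N_HIDDEN)   # input weights
    b = params[N_HIDDEN:]                        # biases
    return np.tanh(x @ W + b)

def residuals(params):
    # Output weights are dependent parameters: for this hidden layer,
    # the optimal linear weights follow from one least-squares solve.
    H = hidden(params, x)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return H @ beta - y

# LM runs over the 12 nonlinear parameters only, not 12 + 6.
p0 = rng.normal(0, 1, 2 * N_HIDDEN)
sol = least_squares(residuals, p0, method="lm")
final_mse = float(np.mean(sol.fun ** 2))
print(round(final_mse, 5))
```

Here the Jacobian is obtained by finite differences; the paper's contribution is an analytic Jacobian for this reduced problem, which this sketch does not implement.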