821 results for "bidirectional associative memory neural networks"


Relevance: 100.00%

Abstract:

This paper proposes a novel hybrid forward algorithm (HFA) for the construction of radial basis function (RBF) neural networks with tunable nodes. The main objective is to efficiently and effectively produce a parsimonious RBF neural network that generalizes well. In this study, this is achieved through simultaneous network structure determination and parameter optimization on the continuous parameter space. This is a mixed-integer hard problem, and the proposed HFA tackles it using an integrated analytic framework, leading to significantly improved network performance and reduced memory usage during network construction. The computational complexity analysis confirms the efficiency of the proposed algorithm, and the simulation results demonstrate its effectiveness.
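
As a rough illustration of the forward-construction idea described above (not the HFA itself, whose analytic framework is not reproduced here), the Python sketch below adds one Gaussian node at a time and tunes each node's centre and width on the continuous parameter space before accepting it; the node limit, optimiser and stopping tolerance are assumptions.

import numpy as np
from scipy.optimize import minimize

def gaussian(x, c, w):
    return np.exp(-((x - c) ** 2) / (2.0 * w ** 2))

def fit_rbf_forward(x, y, max_nodes=10, tol=1e-3):
    centres, widths = [], []
    residual = y.copy()
    for _ in range(max_nodes):
        def sse(p):                      # tune centre and width of one candidate node
            phi = gaussian(x, p[0], np.exp(p[1]))
            a = phi @ residual / (phi @ phi + 1e-12)      # its optimal linear weight
            return np.sum((residual - a * phi) ** 2)
        x0 = [x[np.argmax(np.abs(residual))], 0.0]        # start at the largest residual
        p = minimize(sse, x0, method="Nelder-Mead").x
        c, w = p[0], np.exp(p[1])
        phi = gaussian(x, c, w)
        a = phi @ residual / (phi @ phi + 1e-12)
        if np.sum((a * phi) ** 2) < tol * np.sum(y ** 2): # negligible gain: stop growing
            break
        centres.append(c); widths.append(w)
        residual = residual - a * phi
    Phi = np.column_stack([gaussian(x, c, w) for c, w in zip(centres, widths)])
    weights = np.linalg.lstsq(Phi, y, rcond=None)[0]      # joint re-estimation of weights
    return centres, widths, weights

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) + 0.05 * rng.standard_normal(200)
centres, widths, weights = fit_rbf_forward(x, y)
print(f"{len(centres)} nodes selected")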

Relevance: 100.00%

Abstract:

A continuous forward algorithm (CFA) is proposed for nonlinear modelling and identification using radial basis function (RBF) neural networks. The problem considered here is simultaneous network construction and parameter optimization, well known to be a mixed-integer hard one. The proposed algorithm performs these two tasks within an integrated analytic framework and offers two important advantages. First, the model performance can be significantly improved through continuous parameter optimization. Second, the neural representation can be built without generating and storing all candidate regressors, leading to significantly reduced memory usage and computational complexity. Computational complexity analysis and simulation results confirm the algorithm's effectiveness.
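
The distinctive claim here is that no full pool of candidate regressors needs to be generated and stored. The short Python sketch below only illustrates that point, with assumed problem sizes and a standard error-reduction-ratio test for a single candidate generated on the fly; it is not the CFA itself.

import numpy as np

N, M = 10_000, 5_000              # samples and candidate regressors (assumed sizes)
pool_bytes = N * M * 8            # float64 pool held in memory by classical subset selection
column_bytes = N * 8              # one candidate generated on demand by a CFA-style pass
print(f"full candidate pool: {pool_bytes / 1e6:.0f} MB, one column: {column_bytes / 1e6:.2f} MB")

def error_reduction_ratio(phi, r):
    # contribution of a single candidate regressor phi to the current residual r,
    # computable without reference to any stored pool
    return (phi @ r) ** 2 / ((phi @ phi) * (r @ r) + 1e-12)

x = np.linspace(-1, 1, N)
r = np.sin(3 * x)                                  # current residual (toy)
phi = np.exp(-(x - 0.2) ** 2 / 0.1)                # candidate regressor built on the fly
print("ERR of this candidate:", error_reduction_ratio(phi, r))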

Relevance: 100.00%

Abstract:

This paper proposes the deployment of a neural network computing environment on Active Networks. Active Networks are packet-switched computer networks in which packets can contain code fragments that are executed on the intermediate nodes. This feature allows small pieces of code that deal with computer network problems to be injected directly into the network core, and the adoption of new computing techniques to solve networking problems. The goal of our project is the adoption of a distributed neural network for approaching tasks that are specific to the computer network environment. Dynamically reconfigurable neural networks are spread over an experimental wide-area backbone of active nodes (the ABone) to show the feasibility of the proposed approach.
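
A very rough sketch of the active-packet idea, in Python for illustration only: the packet carries a small code fragment that each intermediate node executes. The handler name and dispatch scheme are hypothetical; real active networks such as the ABone use dedicated execution environments rather than plain function calls.

def congestion_probe(node, packet):
    # code fragment executed at each hop: record the local queue length
    packet["samples"].append((node["name"], node["queue_len"]))

def forward(packet, route):
    for node in route:
        packet["code"](node, packet)       # the node executes the embedded fragment
    return packet

route = [{"name": "A", "queue_len": 3},
         {"name": "B", "queue_len": 17},
         {"name": "C", "queue_len": 5}]
packet = {"code": congestion_probe, "samples": []}
print(forward(packet, route)["samples"])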

Relevance: 100.00%

Abstract:

A connection between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach is established. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and other rule-based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second construction algorithm is based on a new parallel learning algorithm in which each model rule is trained independently, and the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. These two construction methods are equally effective in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples that illustrate their efficacy on some difficult data-based modelling problems.
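
To make the mixture-of-experts linkage concrete, the toy Python sketch below uses an RBF-style gate over two local linear experts that see only a reduced slice of the regression vector, and fits each expert independently by weighted least squares (the step that can be parallelised). The input subsets, gate centres and data are assumptions for illustration; this is not the proposed construction algorithm.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 4))                  # 4-dimensional regression vector
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]            # target depends on two inputs only

cols = [0, 1]                                          # reduced input set per expert (assumed)
centres = np.array([-0.5, 0.5])                        # gate centres on input 0 (assumed)

def gate(X):
    d = -(X[:, [0]] - centres) ** 2 / 0.5              # RBF-style gate on input 0
    e = np.exp(d - d.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Each expert/rule is a local linear model fitted independently by weighted least
# squares on its own reduced inputs -- the independently trainable, parallelisable step.
G = gate(X)
A = np.column_stack([X[:, cols], np.ones(len(X))])
W = []
for k in range(len(centres)):
    sw = np.sqrt(G[:, [k]])
    W.append(np.linalg.lstsq(A * sw, y * sw[:, 0], rcond=None)[0])

def predict(X):
    g = gate(X)
    A = np.column_stack([X[:, cols], np.ones(len(X))])
    outs = np.column_stack([A @ w for w in W])
    return (g * outs).sum(axis=1)

print("train MSE:", np.mean((predict(X) - y) ** 2))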

Relevance: 100.00%

Abstract:

Artificial neural networks are usually applied to solve complex problems. For more complex problems, it is possible to achieve greater functional efficiency by increasing the number of layers and neurons; nevertheless, this leads to greater computational effort. The response time is an important factor in the decision to use neural networks in some systems. Many argue that the computational cost is higher in the training period, but this phase is performed only once. Once the network is trained, it is necessary to use the existing computational resources efficiently. In the multicore era, the problem boils down to the efficient use of all available processing cores, while taking into account the overhead of parallel computing. In this sense, this paper proposes a modular structure that proved to be more suitable for parallel implementations. It is proposed to parallelize the feedforward process of an MLP-type artificial neural network, implemented with OpenMP on a shared-memory computer architecture. The research consists of testing and analyzing execution times; speedup, efficiency, and parallel scalability are analyzed. In the proposed approach, reducing the number of connections between remote neurons decreases the response time of the network and, consequently, the total execution time. The time required for communication and synchronization is directly linked to the number of remote neurons in the network, so it is necessary to investigate the best distribution of remote connections.
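
The sketch below conveys the modular decomposition in Python/NumPy for illustration only; the paper itself works with OpenMP in a compiled language on shared memory. Module sizes and the 10% remote-connection density are assumptions, not the paper's figures. The point is that phase 1 is fully independent per module, while the synchronisation before phase 2 costs in proportion to the number of remote connections.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(1)
n_modules, n_hidden, n_in, n_out = 4, 64, 32, 8        # per-module sizes (assumed)

W1 = [rng.standard_normal((n_hidden, n_in)) for _ in range(n_modules)]
W2_local = [rng.standard_normal((n_out, n_hidden)) for _ in range(n_modules)]
# sparse remote connections to hidden neurons owned by the other modules
mask = [rng.random((n_out, n_hidden * (n_modules - 1))) < 0.1 for _ in range(n_modules)]
W2_remote = [rng.standard_normal(m.shape) * m for m in mask]

def hidden(k, x):                       # phase 1: fully independent per module/core
    return np.tanh(W1[k] @ x)

def output(k, h_all):                   # phase 2: needs a few remote hidden values
    h_remote = np.concatenate([h_all[j] for j in range(n_modules) if j != k])
    return np.tanh(W2_local[k] @ h_all[k] + W2_remote[k] @ h_remote)

x = rng.standard_normal(n_in)
with ThreadPoolExecutor(max_workers=n_modules) as pool:
    h_all = list(pool.map(lambda k: hidden(k, x), range(n_modules)))
    y = list(pool.map(lambda k: output(k, h_all), range(n_modules)))
# The synchronisation between the two phases is what remote connections cost:
# the fewer hidden values a module reads from its peers, the cheaper this step.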

Relevance: 100.00%

Abstract:

An accurate switched-current (SI) memory cell suitable for low-voltage low-power (LVLP) applications is proposed. Information is memorized as the gate voltage of the input transistor in a tunable gain-boosting triode transconductor. Additionally, four-quadrant multiplication between the input voltage to the transconductor regulation amplifier (X operand) and the stored voltage (Y operand) is provided. A simplified 2 x 2 memory array was prototyped in a standard 0.8 µm n-well CMOS process with a 1.8-V supply. The measured current-reproduction error is less than 0.26% for 0.25 µA ≤ I_SAMPLE ≤ 0.75 µA. Standby consumption is 6.75 µW per cell at I_SAMPLE = 0.75 µA. At room temperature, the leakage rate is 1.56 nA/ms. The four-quadrant multiplier (4QM) full-scale operands are 2x_max = 320 mVpp and 2y_max = 448 mVpp, yielding a maximum output swing of 0.9 µApp. The 4QM worst-case nonlinearity is 7.9%.

Relevance: 100.00%

Abstract:

An RBFN implemented with quantized parameters is proposed and its relative, or limited, approximation property is presented. Simulation results for sinusoidal function approximation with various quantization levels are shown. The results indicate that the network retains good approximation capability even under severe quantization. Parameter quantization decreases the memory size and circuit complexity required to store the network parameters, leading to compact mixed-signal circuits suitable for low-power applications. ©2008 IEEE.
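
A minimal sketch of the kind of experiment reported above: fit a Gaussian RBF network to a sinusoid, uniformly quantize the stored parameters to a given bit width, and compare the approximation error. Grid sizes, centre placement and bit widths are illustrative and not the paper's settings.

import numpy as np

x = np.linspace(0, 2 * np.pi, 400)
y = np.sin(x)

centres = np.linspace(0, 2 * np.pi, 12)               # fixed RBF centres (assumed)
width = centres[1] - centres[0]
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))
w = np.linalg.lstsq(Phi, y, rcond=None)[0]            # full-precision weights

def quantize(v, bits):
    lo, hi = v.min(), v.max()
    levels = 2 ** bits - 1
    return lo + np.round((v - lo) / (hi - lo) * levels) / levels * (hi - lo)

for bits in (8, 6, 4, 3):
    wq = quantize(w, bits)
    cq = quantize(centres, bits)
    Phiq = np.exp(-((x[:, None] - cq[None, :]) ** 2) / (2 * width ** 2))
    rmse = np.sqrt(np.mean((Phiq @ wq - y) ** 2))
    print(f"{bits}-bit parameters: RMSE = {rmse:.4f}")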

Relevance: 100.00%

Abstract:

The effects upon memory of normal aging and two age-related neurodegenerative diseases, Alzheimer disease (AD) and Parkinson disease, are analyzed in terms of memory systems, specific neural networks that mediate specific mnemonic processes. An occipital memory system mediating implicit visual-perceptual memory appears to be unaffected by aging or AD. A frontal system that may mediate implicit conceptual memory is affected by AD but not by normal aging. Another frontal system that mediates aspects of working and strategic memory is affected by Parkinson disease and, to a lesser extent, by aging. The aging effect appears to occur during all ages of the adult life-span. Finally, a medial-temporal system that mediates declarative memory is affected by the late onset of AD. Studies of intact and impaired memory in age-related diseases suggest that normal aging has markedly different effects upon different memory systems.

Relevance: 100.00%

Abstract:

Fast Classification (FC) networks were inspired by a biologically plausible mechanism for short-term memory in which learning occurs instantaneously. Both the weights and the topology of an FC network are mapped directly from the training samples by a prescriptive training scheme, and only two presentations of the training data are required for training. Compared with iterative learning algorithms such as back-propagation (which may require many hundreds of presentations of the training data), the training of FC networks is extremely fast and learning convergence is always guaranteed. Thus FC networks may be suitable for applications where real-time classification is needed. In this paper, FC networks are applied to the real-time extraction of gene expressions from Chlamydia microarray data. Both the classification performance and the learning time of FC networks are compared with Multi-Layer Perceptron (MLP) networks and support vector machines (SVM) on the same classification task. The FC networks are shown to have extremely fast learning times and comparable classification accuracy.
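
The abstract does not spell out the FC construction rule, so the sketch below only illustrates what prescriptive, non-iterative training looks like: one hidden unit is stamped per training sample and the class of the most active unit is returned. The class name and parameters are hypothetical, and this is not the FC algorithm itself.

import numpy as np

class PrescriptiveClassifier:
    def fit(self, X, y, width=1.0):
        # weights and topology are copied straight from the samples: no iteration
        self.X, self.y, self.width = np.asarray(X), np.asarray(y), width
        return self

    def predict(self, Xq):
        d2 = ((np.asarray(Xq)[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        act = np.exp(-d2 / (2 * self.width ** 2))      # activation of each stamped unit
        return self.y[act.argmax(axis=1)]

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = PrescriptiveClassifier().fit(X, y)
print("train accuracy:", (clf.predict(X) == y).mean())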

Relevance: 100.00%

Abstract:

Neuroimaging studies have consistently shown that working memory (WM) tasks engage a distributed neural network that primarily includes the dorsolateral prefrontal cortex, the parietal cortex, and the anterior cingulate cortex. The current challenge is to provide a mechanistic account of the changes observed in regional activity. To achieve this, we characterized neuroplastic responses in effective connectivity between these regions at increasing WM loads using dynamic causal modeling of functional magnetic resonance imaging data obtained from healthy individuals during a verbal n-back task. Our data demonstrate that increasing memory load was associated with (a) right-hemisphere dominance, (b) increasing forward (i.e., posterior to anterior) effective connectivity within the WM network, and (c) reduction in individual variability in WM network architecture resulting in the right-hemisphere forward model reaching an exceedance probability of 99% in the most demanding condition. Our results provide direct empirical support that task difficulty, in our case WM load, is a significant moderator of short-term plasticity, complementing existing theories of task-related reduction in variability in neural networks. Hum Brain Mapp, 2013. © 2013 Wiley Periodicals, Inc.

Relevance: 100.00%

Abstract:

The problem of multi-agent routing in static telecommunication networks with a fixed configuration is considered. The problem is formulated in two ways: for a centralized routing schema with a coordinator agent (global routing) and for a distributed routing schema with independent agents (local routing). For both schemas, appropriate Hopfield neural networks (HNN) are constructed.
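
For readers unfamiliar with the approach, the sketch below shows generic discrete Hopfield dynamics: in a routing formulation, the objective and its constraints would be folded into the weight matrix W and bias b, and asynchronous updates descend the energy E(s) = -1/2 s^T W s - b^T s. The W and b used here are random placeholders, not a routing encoding.

import numpy as np

rng = np.random.default_rng(2)
n = 16
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)                   # symmetric, zero-diagonal weights
b = rng.standard_normal(n)
s = rng.choice([-1, 1], size=n)

def energy(s):
    return -0.5 * s @ W @ s - b @ s

for _ in range(50):                      # asynchronous updates: energy never increases
    for i in rng.permutation(n):
        s[i] = 1 if W[i] @ s + b[i] >= 0 else -1
print("final energy:", energy(s))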

Relevance: 100.00%

Abstract:

On the basis of the convolutional (Hamming) version of the recent Neural Network Assembly Memory Model (NNAMM), optimal receiver operating characteristics (ROCs) have been derived analytically for an intact two-layer autoassociative Hopfield network. A method for explicitly taking into account the a priori probabilities of alternative hypotheses on the structure of the information initiating memory trace retrieval is introduced, together with modified ROCs (mROCs: a posteriori probabilities of correct recall vs. false-alarm probability). The comparison of empirical and calculated ROCs (or mROCs) demonstrates that they coincide quantitatively, and in this way the intensities of the cues used in the corresponding experiments may be estimated. It has been found that basic ROC properties, which are among the experimental findings underpinning dual-process models of recognition memory, can be explained within our one-factor NNAMM.
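
As a point of comparison for the analytically derived ROCs, the sketch below computes an empirical ROC for a plain Hamming-distance (maximum-overlap) memory: cues derived from stored traces give the correct-recall rate and novel cues give the false-alarm rate as a decision threshold is swept. Pattern size, noise level and trial counts are arbitrary, and this is not the NNAMM derivation.

import numpy as np

rng = np.random.default_rng(3)
n, n_stored, trials, flip = 64, 20, 2000, 0.15
patterns = rng.choice([-1, 1], size=(n_stored, n))

def best_match_score(cue):
    return (patterns @ cue).max() / n              # normalised overlap with the nearest trace

old_scores, new_scores = [], []
for _ in range(trials):
    p = patterns[rng.integers(n_stored)].copy()
    p[rng.random(n) < flip] *= -1
    old_scores.append(best_match_score(p))                              # cue from a stored trace
    new_scores.append(best_match_score(rng.choice([-1, 1], size=n)))    # novel cue

thresholds = np.linspace(0, 1, 101)
hit = [(np.array(old_scores) >= t).mean() for t in thresholds]   # correct-recall rate
fa = [(np.array(new_scores) >= t).mean() for t in thresholds]    # false-alarm rate
# The (fa, hit) pairs traced over the threshold give the empirical ROC.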

Relevance: 100.00%

Abstract:

The experiment discussed in this paper is a direct replication of Finkbeiner (2005) and an indirect replication of Jiang and Forster (2001) and Witzel and Forster (2012). The paper explores the use of episodic memory in L2 vocabulary processing. By administering an L1 episodic recognition task with L2 masked translation primes, reduced reaction times would suggest L2 vocabulary storage in episodic memory. The methodology follows Finkbeiner (2005), who argued that a blank screen introduced after the prime in Jiang and Forster (2001) led to a ghosting effect, compromising the imperceptibility of the prime. The results here mostly corroborate Finkbeiner (2005), with no significant priming effects. While Finkbeiner discusses his findings in terms of the dissociability of episodic and semantic memory and attributes Jiang and Forster's (2001) results to participants' strategic responding, I add a layer of analysis based on declarative and procedural constituents. From this perspective, Jiang and Forster's (2001) and Witzel and Forster's (2012) results can be seen as possible episodic memory activation, while Finkbeiner's (2005) and my own lack of priming effects might be due to the sole activation of procedural neural networks. Priming effects are found for concrete and abstract words but require verification through further experimentation.

Relevance: 100.00%

Abstract:

The synchronization of oscillatory activity in networks of neural networks is usually implemented by coupling the state variables that describe the neuronal dynamics. In this study we discuss another, complementary mechanism based on a learning process with memory. A driver network motif, acting as a teacher, exhibits winner-less competition (WLC) dynamics, while a driven motif, the learner, tunes its internal couplings according to the oscillations observed in the teacher. We show that under appropriate training the learner motif can dynamically copy the coupling pattern of the teacher and thus synchronize its oscillations with the teacher. We then demonstrate that the replication of the WLC dynamics occurs only for intermediate memory lengths. In a unidirectional chain of N motifs coupled through the teacher-learner paradigm, the time interval required for pattern replication grows linearly with the chain size, hence the learning process does not blow up, and at the end we observe phase-synchronized oscillations along the chain. We also show that in a learning chain closed into a ring the network motifs come to a consensus, i.e. to a state with the same connectivity pattern, corresponding to the mean initial pattern averaged over all network motifs.
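
Winner-less competition is commonly modelled with generalized Lotka-Volterra rate equations; the sketch below simulates such a three-node driver motif so the cycling "winner" can be seen. The coupling values are a standard textbook choice, not the paper's, and the learner's coupling-adaptation rule is not reproduced here.

import numpy as np

rho = np.array([[1.0, 0.5, 2.0],        # asymmetric inhibition -> heteroclinic WLC cycle
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])
sigma = np.ones(3)

def step(x, dt=0.01):
    # Euler step of dx_i/dt = x_i * (sigma_i - sum_j rho_ij x_j)
    return x + dt * x * (sigma - rho @ x)

x = np.array([0.6, 0.3, 0.1])
trajectory = []
for _ in range(20000):
    x = step(x)
    trajectory.append(x.copy())
trajectory = np.array(trajectory)
print("winner at a few sample times:", trajectory[::4000].argmax(axis=1))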