992 results for neural architecture
Abstract:
This work describes a neural-network-based architecture that represents and estimates object motion in videos. The architecture addresses multiple computer vision tasks such as image segmentation, object representation or characterization, motion analysis and tracking. The use of a neural network architecture allows for the simultaneous estimation of global and local motion and the representation of deformable objects. It also avoids the problem of finding corresponding features while tracking moving objects. Due to the parallel nature of neural networks, the architecture has been implemented on GPUs, which allows the system to meet a set of requirements such as time-constraint management, robustness, high processing speed and reconfigurability. Experiments are presented that demonstrate the validity of our architecture for mobile-agent tracking and motion analysis.
Abstract:
The explosive growth of traffic in computer systems has made it clear that traditional control techniques are not adequate to provide system users with fast access to network resources and to prevent unfair use. In this paper, we present a reconfigurable digital hardware implementation of a specific neural model for intrusion detection. It uses a characterization vector of the network packets (intrusion vector) built from information obtained during the access attempt; this vector is then processed by the system. Our approach is adaptive and detects these intrusions using an artificial intelligence method known as the multilayer perceptron. The implementation has been developed and tested on reconfigurable hardware (FPGA) for embedded systems. Finally, the intrusion detection system was tested in a real-world simulation to gauge its effectiveness and real-time response.
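As a rough illustration of the kind of classifier described above (a software sketch only, not the paper's FPGA design), the snippet below trains a small multilayer perceptron on a hypothetical "intrusion vector" of packet-derived features; the feature count, synthetic labels, and hyperparameters are illustrative assumptions.

```python
# Minimal MLP sketch for a hypothetical intrusion vector (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 8 packet-derived features per access attempt, binary label (intrusion or not).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(float).reshape(-1, 1)  # synthetic labelling rule

# One hidden layer of 16 units, trained with plain gradient descent.
W1, b1 = rng.normal(scale=0.1, size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.1, size=(16, 1)), np.zeros(1)

lr = 0.5
for epoch in range(500):
    h = sigmoid(X @ W1 + b1)                    # hidden activations
    p = sigmoid(h @ W2 + b2)                    # predicted probability of intrusion
    grad_out = (p - y) / len(X)                 # output-layer error term
    grad_h = (grad_out @ W2.T) * h * (1 - h)    # back-propagated hidden error
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print("training accuracy:", float(((p > 0.5) == y).mean()))
```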
Abstract:
Measuring the perceptions of customers can be a major problem for marketers of tourism and travel services. Much of the problem is to determine which attributes carry most weight in the purchasing decision. Older travellers weigh many travel features before making their travel decisions. This paper presents a descriptive analysis of neural network methodology and provides a research technique that assesses the weighting of different attributes and uses an unsupervised neural network model to describe a consumer-product relationship. The development of this rich class of models was inspired by the neural architecture of the human brain. These models mathematically emulate the neurophysical structure and decision making of the human brain and, from a statistical perspective, are closely related to generalised linear models. Artificial neural networks are, however, nonlinear and do not require the same restrictive assumptions about the relationship between the independent and dependent variables. Using neural networks is one way to determine what trade-offs older travellers make as they decide their travel plans. The sample for this study comes from a syndicated data source of 200 valid cases from Western Australia. Among the senior groups, the segments 'active learner', 'relaxed family body', 'careful participants' and 'elementary vacation' were identified and discussed. (C) 2003 Published by Elsevier Science Ltd.
Abstract:
This work provides a framework for the approximation of a dynamic system of the form ẋ = f(x) + g(x)u by a dynamic recurrent neural network. This extends previous work in which the approximate realisation of autonomous dynamic systems was proven. Given certain conditions, the first p output neural units of a dynamic n-dimensional neural model approximate, to a desired degree of accuracy, a p-dimensional dynamic system with n > p. The neural architecture studied is then successfully implemented in a nonlinear multivariable system identification case study.
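To make the setting concrete, here is a hedged sketch of approximating a control-affine system ẋ = f(x) + g(x)u with a small neural model and then integrating the learned vector field. The paper's construction uses a dynamic recurrent network; this sketch instead fits a one-hidden-layer map from (x, u) to ẋ with random features, and the particular f and g are assumptions chosen only for illustration.

```python
# Hedged sketch: fit a neural approximation of xdot = f(x) + g(x) u, then integrate it.
import numpy as np

rng = np.random.default_rng(1)

def true_dynamics(x, u):
    # Example scalar system: f(x) = -x^3, g(x) = 1 + x^2 (an assumption for illustration).
    return -x**3 + (1.0 + x**2) * u

# Sample (x, u, xdot) triples from the true system.
x = rng.uniform(-2, 2, size=2000)
u = rng.uniform(-1, 1, size=2000)
xdot = true_dynamics(x, u)

# One-hidden-layer network mapping (x, u) -> xdot, with random hidden features and a
# least-squares output layer (an extreme-learning-machine style shortcut).
W_in = rng.normal(size=(2, 64))
b_in = rng.normal(size=64)
H = np.tanh(np.c_[x, u] @ W_in + b_in)
w, *_ = np.linalg.lstsq(H, xdot, rcond=None)

def neural_dynamics(x, u):
    return np.tanh(np.array([x, u]) @ W_in + b_in) @ w

# Integrate both systems under the same input signal (simple Euler steps) and compare.
dt, x_true, x_nn = 0.01, 0.5, 0.5
for k in range(500):
    u_k = np.sin(0.05 * k)
    x_true += dt * true_dynamics(x_true, u_k)
    x_nn += dt * neural_dynamics(x_nn, u_k)
print(f"final states: true={x_true:.3f}, neural approximation={x_nn:.3f}")
```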
Abstract:
In this paper we consider the optimisation of Shannon mutual information (MI) in the context of two model neural systems. The first is a stochastic pooling network (population) of McCulloch-Pitts (MP) type neurons (logical threshold units) subject to stochastic forcing; the second is (in a rate coding paradigm) a population of neurons that each display Poisson statistics (the so-called 'Poisson neuron'). The mutual information is optimised as a function of a parameter that characterises the 'noise level': in the MP array this parameter is the standard deviation of the noise; in the population of Poisson neurons it is the window length used to determine the spike count. In both systems we find that the emergent neural architecture and, hence, the code that maximises the MI is strongly influenced by the noise level. Low noise levels lead to a heterogeneous distribution of neural parameters (diversity), whereas medium to high noise levels result in the clustering of neural parameters into distinct groups that can be interpreted as subpopulations. In both cases the number of subpopulations increases with a decrease in noise level. Our results suggest that subpopulations are a generic feature of an information-optimal neural population.
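The following hedged sketch (not the paper's exact model) illustrates the first setting: the mutual information between a Gaussian input and the spike count of a pool of McCulloch-Pitts threshold units with independent Gaussian noise, evaluated at several noise standard deviations. The pool size, the common threshold, and the sample count are illustrative assumptions.

```python
# Hedged sketch: MI between a Gaussian input and the spike count of a noisy threshold pool.
import numpy as np
from scipy.stats import binom, norm

N = 15                                       # number of threshold units in the pool (assumption)
theta = 0.0                                  # common threshold (assumption)
x = norm.rvs(size=20000, random_state=0)     # Gaussian input samples

def mutual_information(noise_std):
    # P(a unit fires | x): probability that x plus its private noise exceeds the threshold.
    p = norm.sf(theta - x, scale=noise_std)
    counts = np.arange(N + 1)
    pmf = binom.pmf(counts[None, :], N, p[:, None])        # P(Y = k | x), shape (samples, N+1)
    p_y = pmf.mean(axis=0)                                 # marginal P(Y = k)
    h_y = -np.sum(p_y * np.log2(np.clip(p_y, 1e-300, None)))
    h_y_given_x = -np.mean(np.sum(pmf * np.log2(np.clip(pmf, 1e-300, None)), axis=1))
    return h_y - h_y_given_x                               # I(X; Y) = H(Y) - H(Y|X)

for sigma in [0.1, 0.5, 1.0, 2.0]:
    print(f"noise std {sigma:.1f}: I(X;Y) ≈ {mutual_information(sigma):.3f} bits")
```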
Abstract:
Recent efforts to develop large-scale neural architectures have paid relatively little attention to the use of self-organizing maps (SOMs). Part of the reason is that most conventional SOMs use a static encoding representation: Each input is typically represented by the fixed activation of a single node in the map layer. This not only carries information in an inefficient and unreliable way that impedes building robust multi-SOM neural architectures, but it is also inconsistent with rhythmic oscillations in biological neural networks. Here I develop and study an alternative encoding scheme that instead uses limit cycle attractors of multi-focal activity patterns to represent input patterns/sequences. Such a fundamental change in representation raises several questions: Can this be done effectively and reliably? If so, will map formation still occur? What properties would limit cycle SOMs exhibit? Could multiple such SOMs interact effectively? Could robust architectures based on such SOMs be built for practical applications? The principal results of examining these questions are as follows. First, conditions are established for limit cycle attractors to emerge in a SOM through self-organization when encoding both static and temporal sequence inputs. It is found that under appropriate conditions a set of learned limit cycles are stable, unique, and preserve input relationships. In spite of the continually changing activity in a limit cycle SOM, map formation continues to occur reliably. Next, associations between limit cycles in different SOMs are learned. It is shown that limit cycles in one SOM can be successfully retrieved by another SOM’s limit cycle activity. Control timings can be set quite arbitrarily during both training and activation. Importantly, the learned associations generalize to new inputs that have never been seen during training. Finally, a complete neural architecture based on multiple limit cycle SOMs is presented for robotic arm control. This architecture combines open-loop and closed-loop methods to achieve high accuracy and fast movements through smooth trajectories. The architecture is robust in that disrupting or damaging the system in a variety of ways does not completely destroy it. I conclude that limit cycle SOMs have great potential for use in constructing robust neural architectures.
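For contrast with the limit cycle scheme developed in this work, the sketch below implements only the conventional static-encoding SOM that the abstract takes as its starting point: each input is represented by the fixed activation of a single winning node, and the winner's neighbourhood is pulled toward the input. Map size, learning rate, and neighbourhood schedules are illustrative assumptions.

```python
# Hedged sketch of a conventional (static-encoding) SOM, the baseline the abstract contrasts with.
import numpy as np

rng = np.random.default_rng(2)
grid = np.array([(i, j) for i in range(10) for j in range(10)])   # 10x10 map layer coordinates
weights = rng.uniform(size=(100, 3))                              # codebook vectors for 3-D inputs
data = rng.uniform(size=(1000, 3))                                # toy input patterns

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))                 # decaying learning rate
    sigma = 3.0 * (1 - t / len(data)) + 0.5        # decaying neighbourhood radius
    winner = np.argmin(((weights - x) ** 2).sum(axis=1))          # best-matching unit
    d2 = ((grid - grid[winner]) ** 2).sum(axis=1)                 # grid distance to winner
    h = np.exp(-d2 / (2 * sigma ** 2))                            # Gaussian neighbourhood function
    weights += lr * h[:, None] * (x - weights)                    # move codebooks toward the input

# After training, each input is encoded by the fixed activation of its winning node.
print("winner for first input:", np.argmin(((weights - data[0]) ** 2).sum(axis=1)))
```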
Abstract:
This study examines the links between human perceptions, cognitive biases and neural processing of symmetrical stimuli. While preferences for symmetry have largely been examined in the context of disorders such as obsessive-compulsive disorder and autism spectrum disorders, we examine these phenomena in non-clinical subjects and suggest that such preferences are distributed throughout the typical population as part of our cognitive and neural architecture. In Experiment 1, 82 young adults reported on the frequency of their obsessive-compulsive spectrum behaviors. Subjects also performed an emotional Stroop-like variant of an Implicit Association Task (the OC-CIT) developed to assess cognitive biases for symmetry. The data reveal not only that subjects evidence a cognitive conflict when asked to match images of positive affect with asymmetrical stimuli, and disgust with symmetry, but also that their slowed reaction times when doing so were predicted by reports of OC behavior, particularly checking behavior. In Experiment 2, 26 participants were administered an oddball Event-Related Potential task specifically designed to assess sensitivity to symmetry, as well as the OC-CIT. These data revealed that reaction times on the OC-CIT were strongly predicted by frontal electrode sites indicating faster processing of an asymmetrical stimulus (non-parallel lines) relative to a symmetrical stimulus (parallel lines). The results point to an overall cognitive bias linking disgust with asymmetry and suggest that such cognitive biases are reflected in neural responses to symmetrical/asymmetrical stimuli.
Abstract:
Neuromorphic computing has become an emerging field with a wide range of applications. Its challenge lies in developing a brain-inspired architecture that can emulate the human brain and work in real-time applications. In this report a flexible neural architecture is presented which consists of a 128 x 128 SRAM crossbar memory and 128 spiking neurons. A digital integrate-and-fire model is used for the neurons. All components are designed in a 45 nm technology node. The core can be configured for certain neuron parameters, axon types and synapse states, and is fully digitally implemented. Learning for this architecture is done offline. To train this circuit, the well-known Restricted Boltzmann Machine (RBM) algorithm is used, and linear classifiers are trained on the RBM output. Finally, the circuit was tested on a handwritten digit recognition application. Future prospects for this architecture are also discussed.
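A hedged software sketch of the general mechanism (not the reported 45 nm design): a binary synaptic crossbar standing in for the SRAM array drives simple digital accumulate-and-fire neurons. The array size matches the 128 x 128 figure above, but the thresholds, weights, leak, and input statistics are illustrative assumptions.

```python
# Hedged sketch: binary crossbar driving digital integrate-and-fire neurons (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
N_AXONS, N_NEURONS = 128, 128
crossbar = rng.integers(0, 2, size=(N_AXONS, N_NEURONS))   # binary synapse states (SRAM stand-in)
weights = rng.integers(1, 4, size=N_NEURONS)               # per-neuron integer weight (assumption)
threshold, leak = 64, 1                                    # illustrative parameters

membrane = np.zeros(N_NEURONS, dtype=int)
fired_ever = np.zeros(N_NEURONS, dtype=bool)
for t in range(100):
    spikes_in = rng.random(N_AXONS) < 0.05                 # random input spikes on the axons
    membrane += spikes_in @ crossbar * weights             # integrate crossbar contributions
    membrane = np.maximum(membrane - leak, 0)              # constant leak, clipped at zero
    fired = membrane >= threshold                          # fire when the threshold is crossed
    membrane[fired] = 0                                    # reset fired neurons
    fired_ever |= fired

print("neurons that fired at least once:", int(fired_ever.sum()))
```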
Abstract:
The response to pain involves a non-conscious, reflexive action and a conscious perception. According to Key (2016), consciousness — and thus pain perception — depends on a neuronal correlate that has a “unique neural architecture” as realized in the human cortex. On the basis of the “bioengineering principle that structure determines function,” Key (2016) concludes that animal species such as fish, which lack the requisite cortex-like neuroanatomical structure, are unable to feel pain. This commentary argues that the relationship between brain structure and brain function is less straightforward than suggested in Key’s target article.
Abstract:
To investigate the nature of plasticity in the adult visual system, perceptual learning was measured in a peripheral orientation discrimination task with systematically varying amounts of external (environmental) noise. The signal contrasts required to achieve threshold were reduced by a factor of two or more after training at all levels of external noise. The strong quantitative regularities revealed by this novel paradigm ruled out changes in multiplicative internal noise, changes in transducer nonlinearities, and simple attentional trade-offs. Instead, the regularities specify the mechanisms of perceptual learning at the behavioral level as a combination of external noise exclusion and stimulus enhancement via additive internal noise reduction. The findings also constrain the neural architecture of perceptual learning. Plasticity in the weights between basic visual channels and the decision unit is sufficient to account for perceptual learning without requiring the retuning of visual mechanisms.
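The logic of the threshold-versus-external-noise argument can be sketched with a simplified observer model (no multiplicative noise or transducer nonlinearity, so this is not the paper's full model): threshold contrast grows with the combined external and additive internal noise, and learning is modelled as external noise exclusion plus additive internal noise reduction. All parameter values below are illustrative assumptions, not the paper's fits.

```python
# Hedged sketch: threshold-vs-external-noise (TvC) curves before and after learning.
import numpy as np

d_prime, N_add = 1.5, 0.08                              # target sensitivity, additive internal noise
N_ext = np.array([0.0, 0.02, 0.04, 0.08, 0.16, 0.33])   # external noise contrasts (assumption)

def threshold(Af, Aa):
    # Simplified model: threshold ~ d' * sqrt((Af*Next)^2 + (Aa*Nadd)^2).
    # Af < 1 models external noise exclusion, Aa < 1 models additive internal noise reduction.
    return d_prime * np.sqrt((Af * N_ext) ** 2 + (Aa * N_add) ** 2)

before = threshold(Af=1.0, Aa=1.0)
after = threshold(Af=0.5, Aa=0.5)   # halving both factors halves threshold at every noise level
for n, b, a in zip(N_ext, before, after):
    print(f"external noise {n:.2f}: threshold {b:.3f} -> {a:.3f}")
```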
Abstract:
Little is known about the functional and neural architecture of social reasoning, one major obstacle being that we crucially lack the relevant tools to test potentially different social reasoning components. In the case of belief reasoning, previous studies tried to separate the processes involved in belief reasoning per se from those involved in handling the high incidental demands, such as the working memory demands, of typical belief tasks (e.g., Stone et al., 1998; Samson et al., 2004). In this study, we developed new belief tasks in order to disentangle, for the first time, two perspective taking components involved in belief reasoning: (1) the ability to inhibit one’s own perspective (self-perspective inhibition) and (2) the ability to infer someone else’s perspective as such (other-perspective taking). The two tasks had similar demands in other-perspective taking as they both required the participant to infer that a character has a false belief about an object’s location. However, the tasks varied in their self-perspective inhibition demands. In the task with the lowest self-perspective inhibition demands, at the time the participant had to infer the character’s false belief, he or she had no idea what the new object location was. In contrast, in the task with the highest self-perspective inhibition demands, at the time the participant had to infer the character’s false belief, he or she knew where the object was actually located (and this knowledge thus had to be inhibited). The two tasks were presented to a stroke patient, WBA, with right prefrontal and temporal damage. WBA performed well in the low-inhibition false belief task but showed striking difficulty in the task placing high self-perspective inhibition demands, showing a selective deficit in inhibiting self-perspective. WBA also made egocentric errors in other social and visual perspective taking tasks, indicating a difficulty that extends beyond belief attribution to the attribution of emotions, desires and visual experiences to other people. The case of WBA, together with the recent report of three patients impaired in belief reasoning even when self-perspective inhibition demands were reduced (Samson et al., 2004), provides the first neuropsychological evidence that (a) the inhibition of one’s own point of view and (b) the ability to infer someone else’s point of view rely on distinct neural and functional processes.
Abstract:
The first topic analyzed in the thesis will be Neural Architecture Search (NAS). I will focus on two different tools that I developed: one to optimize the architecture of Temporal Convolutional Networks (TCNs), a convolutional model for time-series processing that has recently emerged, and one to optimize the data precision of tensors inside CNNs. The first NAS proposed explicitly targets the optimization of the most peculiar architectural parameters of TCNs, namely dilation, receptive field, and the number of features in each layer. Note that this is the first NAS that explicitly targets these networks. The second NAS proposed instead focuses on finding the most efficient data format for a target CNN, at the granularity of the layer filter. Note that applying these two NASes in sequence allows an "application designer" to minimize the structure of the neural network employed, reducing the number of operations or the memory usage of the network. After that, the second topic described is the optimization of neural network deployment on edge devices. Importantly, exploiting the scarce resources of edge platforms is critical for efficient NN execution on MCUs. To do so, I will introduce DORY (Deployment Oriented to memoRY) -- an automatic tool to deploy CNNs on low-cost MCUs. DORY, in different steps, can automatically manage the different levels of memory inside the MCU, offload the computation workload (i.e., the different layers of a neural network) to dedicated hardware accelerators, and automatically generate ANSI C code that orchestrates off- and on-chip transfers alongside the computation phases. On top of this, I will introduce two optimized computation libraries that DORY can exploit to deploy TCNs and Transformers efficiently at the edge. I conclude the thesis with two applications in bio-signal analysis, i.e., heart rate tracking and sEMG-based gesture recognition.
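As a small, hedged illustration of the TCN parameters the first NAS searches over (not the thesis tooling itself), the snippet below shows how per-layer kernel sizes and dilations determine the receptive field of a stack of causal dilated convolutions; the example layer configuration is an assumption for illustration.

```python
# Hedged sketch: receptive field of a dilated temporal convolutional network.

def tcn_receptive_field(kernel_sizes, dilations):
    """Receptive field (in time steps) of a stack of causal dilated 1-D convolutions."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d          # each layer extends the receptive field by (k-1)*d
    return rf

# Example search point: 4 layers, kernel size 3, exponentially growing dilation.
kernel_sizes = [3, 3, 3, 3]
dilations = [1, 2, 4, 8]
print("receptive field:", tcn_receptive_field(kernel_sizes, dilations))  # -> 31
```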
Abstract:
Traditional content-based image retrieval (CBIR) systems use low-level features such as the colors, shapes, and textures of images. However, users make queries based on semantics, which are not easily related to such low-level characteristics. Recent works on CBIR confirm that researchers have been trying to map between visual low-level characteristics and high-level semantics. The relation between low-level characteristics and image textual information has motivated this article, which proposes a model for the automatic classification and categorization of words associated with images. The proposal considers a self-organizing neural network architecture, which classifies textual information without previous learning. Experimental results compare the performance of the text-based approach with that of an image retrieval system based on low-level features. (c) 2008 Wiley Periodicals, Inc.
Abstract:
A number of researchers have investigated the impact of network architecture on the performance of artificial neural networks. Particular attention has been paid to the impact of architectural issues on the performance of the multi-layer perceptron, and to the use of various strategies to attain an optimal network structure. However, there are still perceived limitations with the multi-layer perceptron, and networks that employ a different architecture have gained in popularity in recent years, particularly networks that implement a more localised solution, where the solution in one area of the problem space has no impact, or minimal impact, on other areas of the space. In this study, we discuss the major architectural issues affecting the performance of a multi-layer perceptron, before examining in detail the performance of a new localised network, namely the bumptree. The work presented here examines the impact on the performance of artificial neural networks of employing alternatives to the long-established multi-layer perceptron, in particular networks in which each parameter of the final architecture has a localised impact on the problem space being modelled. The alternatives examined are the radial basis function and bumptree neural networks, and the impact of architectural issues on their performance is examined. Particular attention is paid to the bumptree, with new techniques examined both for developing the bumptree structure and for employing this structure to classify patterns.
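To illustrate the 'localised' idea discussed above, the hedged sketch below uses a radial basis function network (the bumptree itself is not implemented here): each Gaussian basis unit responds only near its centre, so adjusting one output weight mainly affects one region of the input space. The centres, widths, and toy regression task are illustrative assumptions.

```python
# Hedged sketch: radial basis function network as an example of a localised architecture.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)   # noisy 1-D regression task (illustrative)

centres = np.linspace(-3, 3, 12).reshape(-1, 1)    # fixed RBF centres spread over the input range
width = 0.6                                        # common Gaussian width (assumption)

def design_matrix(X):
    # Each column is one local basis unit; it is near zero far from its centre.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

Phi = design_matrix(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # linear output layer, solved in closed form

x_test = np.array([[0.5]])
print("prediction at x=0.5:", (design_matrix(x_test) @ w).item())
```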