785 results for Multi layer perceptron backpropagation neural network
Abstract:
As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies. We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
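As an illustrative aside (not the paper's actual network), the core computation it describes, shifting a retinotopic activity map by the corollary discharge vector and then combining the result with an eye position signal, can be sketched in a few lines of Python; the map size, function names and toy numbers below are assumptions.

```python
import numpy as np

def remap_activity(fef_map, saccade_vector):
    """Shift a 2D map of visual activity by the corollary-discharge
    (saccade) vector, approximating presaccadic remapping."""
    dx, dy = saccade_vector
    # Shift the map by minus the saccade vector so it predicts the
    # post-saccadic retinal location of the target.
    return np.roll(fef_map, shift=(-dy, -dx), axis=(0, 1))

def target_in_space(remapped_map, eye_position):
    """Combine the retinotopic map peak with the eye position signal to
    recover a head/body-centred target location for pointing."""
    peak = np.unravel_index(np.argmax(remapped_map), remapped_map.shape)
    return (int(peak[1] + eye_position[0]), int(peak[0] + eye_position[1]))

# Toy usage: a target at retinal location (20, 12) before a (5, -3) saccade.
fef = np.zeros((64, 64))
fef[12, 20] = 1.0
remapped = remap_activity(fef, saccade_vector=(5, -3))
print(target_in_space(remapped, eye_position=(5, -3)))  # stays at (20, 12)
```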
Abstract:
The Oscillating Water Column (OWC) is a promising type of wave energy device due to its obvious advantage over many other wave energy converters: no moving components in sea water. Two types of OWCs (bottom-fixed and floating) have been widely investigated, and bottom-fixed OWCs have been very successful in several practical applications. Recently, the proposal of massive wave energy production and the availability of wave energy have pushed OWC applications from near-shore to deeper water regions, where floating OWCs are a better choice. For an OWC under sea waves, the air flow driving the air turbine to generate electricity is a random process. In such a working condition, a single design/operation point does not exist. To improve energy extraction and to optimise the performance of the device, a system capable of controlling the air turbine rotation speed is desirable. To achieve that, this paper presents a short-term prediction of the random process by an artificial neural network (ANN), which can provide near-future information to the control system. In this research, the ANN is explored and tuned for a better prediction of the airflow (as well as of the device motions, for wider application). It is found that, by carefully constructing the ANN platform and optimising the relevant parameters, the ANN is capable of predicting the random process a few steps ahead of real time with good accuracy. More importantly, the tuned ANN works for a large range of different types of random process.
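A minimal sketch of this kind of short-term, multi-step-ahead ANN prediction might look as follows; the window length, network size, and the synthetic stand-in for the airflow signal are assumptions, not the paper's tuned setup.

```python
# Multi-step-ahead prediction of a noisy signal with an MLP (sketch only).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(3000) * 0.1
signal = np.sin(0.5 * t) + 0.3 * rng.standard_normal(t.size)  # stand-in for airflow

lag, horizon = 20, 5  # predict 5 steps ahead from the last 20 samples
X = np.array([signal[i:i + lag] for i in range(signal.size - lag - horizon + 1)])
y = signal[lag + horizon - 1:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))
```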
Abstract:
Shape-based registration methods are frequently encountered in the domains of computer vision, image processing and medical imaging. The registration problem is to find an optimal transformation/mapping between sets of rigid or non-rigid objects and to automatically solve for correspondences. In this paper we present a comparison of two different probabilistic methods, entropy and the growing neural gas network (GNG), as general feature-based registration algorithms. Using entropy, shape modelling is performed by connecting the point sets with the highest probability of curvature information, while with GNG the point sets are connected using nearest-neighbour relationships derived from competitive Hebbian learning. In order to compare performances we use different levels of shape deformation, starting with a simple shape, 2D MRI brain ventricles, and moving to more complicated shapes like hands. Quantitative and qualitative results are given for both sets.
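As a hedged illustration of the competitive Hebbian rule mentioned above (not the paper's implementation), the sketch below links the two nodes nearest to each sample point; the node set and point cloud are placeholder data.

```python
import numpy as np

def competitive_hebbian_edges(nodes, samples):
    """Return the set of edges (i, j) linking the two nodes nearest to
    each sample point (the competitive Hebbian learning rule)."""
    edges = set()
    for x in samples:
        d = np.linalg.norm(nodes - x, axis=1)
        i, j = np.argsort(d)[:2]
        edges.add((int(min(i, j)), int(max(i, j))))
    return edges

rng = np.random.default_rng(1)
nodes = rng.random((10, 2))       # candidate model points
samples = rng.random((200, 2))    # shape/contour samples
print(sorted(competitive_hebbian_edges(nodes, samples)))
```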
Abstract:
This paper presents a flow regime identification methodology for multiphase systems in annular, stratified and homogeneous oil-water-gas regimes. The principle is based on recognition of the pulse height distributions (PHDs) from gamma rays with supervised artificial neural network (ANN) systems. The simulated detection geometry comprises two NaI(Tl) detectors and a dual-energy gamma-ray source. The measurement of scattered radiation enables the dual modality densitometry (DMD) measurement principle to be explored. Its basic principle is to combine the measurement of scattered and transmitted radiation in order to acquire information about the different flow regimes. The PHDs obtained by the detectors were used as input to the ANN. The data sets required for training and testing the ANN were generated by the MCNP-X code from static and ideal theoretical models of multiphase systems. The ANN correctly identified the three different flow regimes for all data sets evaluated. The results presented show that PHDs examined by an ANN may be applied successfully to flow regime identification.
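A minimal sketch of the classification step alone, an MLP mapping a binned pulse height distribution to one of the three regimes, is given below; the synthetic PHDs are placeholders, not MCNP-X output.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_bins, n_per_class = 128, 200
regimes = ["annular", "stratified", "homogeneous"]

X, y = [], []
for label, peak in zip(regimes, (30, 64, 100)):   # arbitrary peak positions
    shape = np.exp(-0.5 * ((np.arange(n_bins) - peak) / 8.0) ** 2)
    X.append(shape + 0.05 * rng.random((n_per_class, n_bins)))  # noisy PHDs
    y += [label] * n_per_class
X = np.vstack(X)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```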
Abstract:
Multiphase flows of the oil-water-gas type are very common in different industrial activities, such as chemical industries and petroleum extraction, and their measurement presents some difficulties. Precisely determining the volume fraction of each of the elements that compose a multiphase flow is very important in chemical plants and petroleum industries. This work presents a methodology able to determine volume fractions in annular and stratified multiphase flow systems with the use of neutrons and artificial intelligence, using the principles of transmission/scattering of fast neutrons from a 241Am-Be source and measurements of point flux that are influenced by variations of volume fractions. The proposed geometries used in the mathematical model were used to obtain a data set in which the thicknesses of each material were changed in order to obtain the volume fraction of each phase, providing 119 compositions that were used in the simulation with MCNP-X, a computer code based on the Monte Carlo method that simulates radiation transport. An artificial neural network (ANN) was trained with data obtained using MCNP-X and used to correlate such measurements with the respective real fractions. The ANN was able to correlate the data obtained in the simulation with MCNP-X with the volume fractions of the multiphase flows (oil-water-gas), in both the annular and the stratified flow patterns, resulting in an average relative error (%) for each production set of: annular (air = 3.85; water = 4.31; oil = 1.08); stratified (air = 3.10; water = 2.01; oil = 1.45). The method demonstrated good efficiency in determining each material that composes the phases, thus demonstrating the feasibility of the technique.
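The regression step can be sketched as follows, assuming a multi-output MLP that maps simulated detector responses to (gas, water, oil) fractions and reporting the average relative error used in the abstract; the placeholder data below stand in for the 119 MCNP-X compositions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
fractions = rng.dirichlet(np.ones(3), size=119)          # gas, water, oil sum to 1
readings = fractions @ rng.random((3, 6)) + 0.01 * rng.random((119, 6))

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(readings, fractions)

pred = model.predict(readings)
rel_err = 100 * np.abs(pred - fractions) / np.clip(fractions, 1e-3, None)
print("average relative error (%) per phase:", rel_err.mean(axis=0))
```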
Abstract:
The difficulties that students face, and their struggle to master the topics, may increase as a consequence of the inadequate use of assessment materials. In the classroom there are usually some students who make good use of the course material, and do so quickly, while others have difficulty learning it. This situation is easily seen in exam results: one group of students may obtain good grades, which encourages them, while others develop the mistaken perception that the topics are difficult, in some cases leading them to abandon the course or to change degree programmes. We believe that by using machine learning techniques, in our case neural networks, it would be feasible to create an assessment environment that adjusts to the needs of each student. This would reduce students' sense of dissatisfaction and the abandonment of courses.
Abstract:
In this Letter we introduce a continuum model of neural tissue that includes the effects of so-called spike frequency adaptation (SFA). The basic model is an integral equation for synaptic activity that depends upon the non-local network connectivity, synaptic response, and firing rate of a single neuron. A phenomenological model of SFA is examined whereby the firing rate is taken to be a simple state-dependent threshold function. As in the case without SFA, classical Mexican-hat connectivity is shown to allow for the existence of spatially localized states (bumps). Importantly, an analysis of bump stability using recent Evans function techniques shows that bumps may undergo instabilities leading to the emergence of both breathers and traveling waves. Moreover, a similar analysis for traveling pulses leads to the conditions necessary to observe a stable traveling breather. Direct numerical simulations both confirm our theoretical predictions and illustrate the rich dynamic behavior of this model, including the appearance of self-replicating bumps.
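For readers unfamiliar with this class of models, a generic form consistent with the description above (the notation below is an assumption, not the Letter's exact formulation) couples an integral equation for the synaptic activity u with a state-dependent firing threshold h:

```latex
u(x,t) = \int_{-\infty}^{\infty} w(x-y) \int_{0}^{\infty} \eta(s)\,
         f\!\bigl(u(y,t-s) - h(y,t-s)\bigr)\, ds\, dy ,
\qquad
\tau \frac{\partial h}{\partial t} = -(h - h_0) + \kappa\, f\!\bigl(u - h\bigr),
```

where w is the Mexican-hat connectivity kernel, \eta the synaptic response, f a Heaviside firing-rate function, and the adaptation strength \kappa and time constant \tau control the spike frequency adaptation.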
Abstract:
We study spatially localized states of a spiking neuronal network populated by a pulse-coupled phase oscillator known as the lighthouse model. We show that, in the limit of slow synaptic interactions in the continuum regime, the dynamics reduce to those of the standard Amari model. For non-slow synaptic connections we are able to go beyond the standard firing rate analysis of localized solutions, allowing us to explicitly construct a family of co-existing one-bump solutions and then track bump width and firing pattern as a function of system parameters. We also present an analysis of the model on a discrete lattice. We show that bump states of multiple widths can co-exist, and we uncover a mechanism for bump wandering linked to the speed of synaptic processing. Moreover, beyond a wandering transition point we show that the bump undergoes an effective random walk with a diffusion coefficient that scales exponentially with the rate of synaptic processing and linearly with the lattice spacing.
Abstract:
Foreknowledge about upcoming events may be exploited to optimize behavioural responses. In previous work, using an eye movement paradigm, we showed that different types of partial foreknowledge have different effects on saccadic efficiency. In the current study, we investigated the neural circuitry involved in the processing of partial foreknowledge using functional magnetic resonance imaging. Fourteen subjects performed a mixed antisaccade/prosaccade paradigm with blocks of no foreknowledge, complete foreknowledge, or partial foreknowledge about stimulus location, response direction or task. We found that saccadic foreknowledge is processed primarily within the well-known oculomotor network for saccades and antisaccades. Moreover, we found a consistent decrease in BOLD activity in the primary and secondary visual cortex in all foreknowledge conditions compared to the no-foreknowledge condition. Furthermore, we found that the different types of partial foreknowledge are processed in distinct brain areas: response foreknowledge is processed in the frontal eye field, while stimulus foreknowledge is processed in the frontal and parietal eye fields. Task foreknowledge, however, revealed no positive BOLD correlate. Our results show different patterns of engagement in the saccade-related neural network depending upon precisely what type of information is known in advance.
Abstract:
This study aims to model and forecast the tourism demand for Mozambique for the period from January 2004 to December 2013 using artificial neural network models. The number of overnight stays in hotels was used as representative of tourism demand. A set of independent variables was tested as inputs to the model, namely the Consumer Price Index, Gross Domestic Product and Exchange Rates of the outbound tourism markets: South Africa, the United States of America, Mozambique, Portugal and the United Kingdom. The best model achieved a Mean Absolute Percentage Error of 6.5% and a Pearson correlation coefficient of 0.696. A model like this, with high forecast accuracy, is important for economic agents to anticipate the future growth of this activity sector, for stakeholders to provide products, services and infrastructure, and for hotel establishments to adjust their capacity to the tourism demand.
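A minimal sketch of such a forecasting setup, an MLP fed with lagged overnight stays plus exogenous inputs and evaluated with MAPE, is shown below; all data, lags and network sizes are illustrative assumptions, not the study's model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
months = 120                                    # Jan 2004 - Dec 2013
exog = rng.random((months, 3))                  # stand-ins for CPI, GDP, exchange rate
stays = 1000 + 200 * np.sin(np.arange(months) * 2 * np.pi / 12) + exog @ [50, 80, 30]

lag = 12                                        # one year of lagged demand
X = np.hstack([np.array([stays[i:i + lag] for i in range(months - lag)]),
               exog[lag:]])
y = stays[lag:]

split = 96                                      # train on the first 8 years
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = 100 * np.mean(np.abs((y[split:] - pred) / y[split:]))
print(f"MAPE: {mape:.1f}%")
```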
Abstract:
In this thesis, the problem of controlling a quadrotor UAV is considered. This is done by presenting an original control system, designed as a combination of Neural Networks and a Disturbance Observer using a composite learning approach for a second-order system, which is a novel methodology in the literature. After a brief introduction to quadrotors, the concepts needed to understand the controller are presented, such as the main notions of advanced control, the basic structure and design of a Neural Network, and the modeling of a quadrotor and its dynamics. The full simulator, developed in the MATLAB Simulink environment and used throughout the whole thesis, is also shown. For guidance and control purposes, a Sliding Mode Controller, used as a reference, is first introduced, and its theory and implementation in the simulator are illustrated. Finally, the original controller is introduced through its novel formulation and its implementation in the model. The effectiveness and robustness of the two controllers are then proven by extensive simulations under different conditions of external disturbances and faults.
Abstract:
Resolution of multisensory deficits has been observed in teenagers with Autism Spectrum Disorders (ASD) for complex, social speech stimuli; this resolution extends to more basic multisensory processing involving low-level stimuli. In particular, a delayed transition of multisensory integration (MSI) from a default state of competition to one of facilitation has been observed in children with ASD. In other words, the complete maturation of MSI is achieved later in ASD. In the present study a neuro-computational model is used to reproduce some patterns of behavior observed experimentally, modeling a bisensory reaction time task in which auditory and visual stimuli are presented in random sequence, alone (A or V) or together (AV). The model explains how the default competitive state can be implemented via mutual inhibition between primary sensory areas, and how the shift toward the classical multisensory facilitation observed in adults results from inhibitory cross-modal connections becoming excitatory during development. Model results are consistent with stronger cross-modal inhibition in children with ASD than in normotypical (NT) children, suggesting that the transition toward a cooperative interaction between sensory modalities takes longer to occur. Interestingly, the model also predicts the difference between unisensory switch trials (in which the sensory modality switches) and unisensory repeat trials (in which the sensory modality repeats). This is due to an inhibitory mechanism with slow dynamics, driven by the preceding stimulus and inhibiting the processing of the incoming one when it is of the opposite sensory modality. These findings link the cognitive framework delineated by the empirical results to a plausible neural implementation.
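The mechanism described above can be caricatured with two leaky units whose cross-modal coupling is inhibitory early in development (more strongly so in ASD) and excitatory in adults; the weights, time constant and threshold below are assumptions, not the model's fitted parameters.

```python
import numpy as np

def time_to_threshold(cross_w, inputs=(1.0, 1.0), tau=20.0, thr=0.6, dt=1.0):
    """Integrate two leaky units (auditory, visual) with cross-modal
    coupling `cross_w` and return the first step at which either unit
    crosses the detection threshold; inputs=(1, 1) is an AV trial."""
    a = np.zeros(2)
    for step in range(1, 1000):
        drive = np.array(inputs) + cross_w * a[::-1]   # cross-modal coupling
        a += dt / tau * (-a + drive)
        if a.max() >= thr:
            return step
    return None

for label, w in [("adult (excitatory coupling)", +0.3),
                 ("NT child (inhibitory coupling)", -0.3),
                 ("ASD child (stronger inhibition)", -0.6)]:
    print(label, "-> AV detection time:", time_to_threshold(w))
```

With these toy numbers the detection time grows as the cross-modal inhibition strengthens, mirroring the delayed facilitation described in the abstract.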
Abstract:
Pervasive and distributed Internet of Things (IoT) devices demand ubiquitous coverage beyond No-man's land. To satisfy the plethora of IoT devices with resilient connectivity, Non-Terrestrial Networks (NTN) will be pivotal in assisting and complementing terrestrial systems. In a massive MTC scenario over NTN, characterized by sporadic uplink data reports, all the terminals within a satellite beam must be served during the short visibility window of the flying platform, thus generating congestion due to simultaneous access attempts by IoT devices on the same radio resource. The more terminals collide, the longer the average time needed to complete an access, owing to the decreased number of successful attempts caused by the back-off commands of legacy methods. A possible countermeasure is the Non-Orthogonal Multiple Access scheme, which requires knowledge of the number of superimposed NPRACH preambles. This work addresses this problem by proposing a Neural Network (NN) algorithm to cope with the uncoordinated random access performed by a prodigious number of Narrowband-IoT devices. Our proposed method classifies the number of colliding users and estimates the Time of Arrival (ToA) of each. The performance assessment, under Line of Sight (LoS) and Non-LoS conditions in sub-urban environments with two different satellite configurations, shows significant benefits of the proposed NN algorithm with respect to traditional methods for ToA estimation.
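A heavily simplified sketch of the classification part of such an approach (estimating how many preambles are superimposed) is given below; the tone-based signal model, feature choice and network size are assumptions, and the ToA estimation head is omitted.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_fft, max_users = 2000, 64, 4

X, y = [], []
for _ in range(n_samples):
    k = rng.integers(1, max_users + 1)                 # number of colliding users
    t = np.arange(n_fft)
    sig = sum(np.exp(2j * np.pi * (rng.random() * t + rng.random()))
              for _ in range(k))                       # superimposed random tones
    sig += 0.3 * (rng.standard_normal(n_fft) + 1j * rng.standard_normal(n_fft))
    X.append(np.abs(np.fft.fft(sig)))                  # spectrum-magnitude features
    y.append(int(k))

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(np.array(X[:1600]), y[:1600])
print("held-out accuracy:", clf.score(np.array(X[1600:]), y[1600:]))
```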
Abstract:
Dirt counting and dirt particle characterisation of pulp samples is an important part of quality control in pulp and paper production, and the need for an automatic image analysis system for dirt particle characterisation in various pulp samples is critical. However, existing image analysis systems utilise a single threshold to segment the dirt particles in different pulp samples, which limits their precision. Designing an automatic image analysis system that overcomes this deficiency is therefore very useful. In this study, a further developed Niblack thresholding method is proposed, which defines the threshold based on the number of segmented particles. In addition, Kittler thresholding is utilised. Both of these thresholding methods determine the dirt count of the different pulp samples accurately compared to visual inspection and the Digital Optical Measuring and Analysis System (DOMAS). In addition, the minimum resolution needed for acquiring a scanner image is defined. Considering the variation in dirt particle features, curl shows a sufficient difference to discriminate between bark and fibre bundles in different pulp samples. Three classifiers, k-Nearest Neighbour, Linear Discriminant Analysis and Multi-layer Perceptron, are utilised to categorise the dirt particles. Linear Discriminant Analysis and Multi-layer Perceptron are the most accurate in classifying the dirt particles segmented by Kittler thresholding with morphological processing. The results show that the dirt particles are successfully categorised as bark or fibre bundles.
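For reference, standard Niblack thresholding, the starting point of the method described above, computes a local threshold T = m + k·s from the windowed mean and standard deviation; the sketch below uses an assumed window size and k, and does not reproduce the particle-count-based parameter selection.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(image, window=25, k=-0.2):
    """Local threshold T = m + k*s from the mean and standard deviation
    computed over a sliding window."""
    img = image.astype(float)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return mean + k * std

# Usage: dirt particles are the pixels darker than the local threshold.
image = np.random.default_rng(0).random((200, 200))
dirt_mask = image < niblack_threshold(image)
print("segmented pixels:", int(dirt_mask.sum()))
```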
Abstract:
The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand the problem is very challenging, because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterative values and to establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained applying SVMs to the problem of detecting frontal human faces in real images.
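For reference, the dense quadratic program referred to above is the standard SVM dual (notation assumed here):

```latex
\max_{\alpha}\;\; \sum_{i=1}^{\ell}\alpha_i
  - \frac{1}{2}\sum_{i=1}^{\ell}\sum_{j=1}^{\ell}
    \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\quad\text{subject to}\quad
\sum_{i=1}^{\ell}\alpha_i y_i = 0, \qquad 0 \le \alpha_i \le C,
```

with one variable per data point, which is why the dense \ell x \ell quadratic form outgrows memory for large \ell; the decomposition strategy optimizes over a small working set of the \alpha_i at a time while keeping the remaining variables fixed.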