13 results for Hopfield Neural Network

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

The distortion in the perception of the distance between two punctate stimuli applied to the skin surface of different body regions is known as Weber's Illusion. This illusion has been observed, and verified, in many experiments in which subjects were asked to judge the distance between two stimuli applied to the skin of different body parts. From these experiments it was deduced that the same distance between stimuli is judged differently for different body regions. The idea that distance on the skin is often perceived in a distorted way is widely shared, but the neural mechanisms driving this illusion are still largely unknown. In particular, it is not yet clear how the distance between two simultaneous punctate stimuli is interpreted, and which brain areas are involved in this processing. Weber's Illusion can be explained, in part, by the different mechanoreceptor densities of the various body regions and by the distorted image of our body residing in the Primary Somatosensory Cortex (the homunculus). However, these mechanisms seem insufficient to explain the observed phenomenon: according to the results of a century of experiments, the actual distortions in distance judgments are much smaller than the distortions that the Primary Cortex would suggest. In other words, the illusion observed in tactile experiments is much smaller than the effect that would be produced by the different receptor densities of the body parts, or by their cortical extension. This has led to the hypothesis that tactile distance perception requires an additional brain area, and additional mechanisms, that operate to rescale - at least partially - the information coming from the primary cortex, so as to maintain a certain constancy of tactile distance perception across the body surface. The existence of a sort of "Rescaling Process" has thus been proposed, which operates to reduce this illusion toward a more veridical perception. The occurrence of this process is supported by many researchers in neuroscience; in particular by Dr. Matthew Longo, neuroscientist at the Department of Psychological Sciences (Birkbeck, University of London), whose research on tactile distance perception and body representation seems to confirm this hypothesis. However, the neural mechanisms and circuits underlying this potential "Rescaling Process" are still largely unknown. The aim of this thesis was to clarify the possible network organization and the neural mechanisms giving rise to Weber's Illusion and to the "Rescaling Process", using a neural network model. Most of the work was carried out at the Department of Psychological Sciences of Birkbeck, University of London, under the supervision of Dr. M. Longo, who contributed mainly to the interpretation of the model results, suggesting how to process them to obtain clearer information; he also provided useful guidance for the validation of the results through statistical tests.
To replicate Weber's Illusion and the "Rescaling Process", the neural network was organized into two main layers of neurons corresponding to two different cortical functional areas:
• A first layer of neurons (performing an initial processing of the external stimuli): this layer can be thought of as the part of the Primary Somatosensory Cortex affected by Cortical Magnification (homunculus).
• A second layer of neurons (further processing of the information coming from the first layer): this layer can represent a higher cortical area involved in implementing the "Rescaling Process".
The networks were built including synaptic connections within each layer (Lateral Synapses) and synaptic connections between the two layers (Feed-Forward Synapses), further assuming that the activity of each neuron depends on its input through a static sigmoidal relationship, as well as through first-order dynamics. In particular, using the structure just described, two different neural networks were implemented for two different body regions (for example, hand and arm), characterized by different tactile resolution and different Cortical Magnification, so as to replicate Weber's Illusion and the "Rescaling Process". These models can help to understand the mechanism of Weber's Illusion and thus provide a possible explanation of the "Rescaling Process". Moreover, the implemented networks offer a valuable contribution to understanding the strategy adopted by the brain in interpreting distance on the skin surface. Beyond this explanatory purpose, such models could also be used to formulate predictions that can later be verified in vivo, on real subjects, through tactile perception experiments. It is important to stress that the implemented models should be regarded purely as functional models and are not intended to replicate physiological and anatomical details. The main results obtained with these models are the reproduction of Weber's Illusion for two different body regions, hand and arm, as reported in many articles on tactile illusions (for example, "The perception of distance and location for dual tactile pressures" by Barry G. Green). Weber's Illusion was recorded through the output of the networks and then plotted, while seeking to explain the reasons for these results.
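As a purely illustrative companion to the abstract above, the sketch below simulates a two-layer network with lateral and feed-forward synapses, a static sigmoidal activation and first-order dynamics; all sizes, weights and time constants are made-up placeholders, not the parameters of the thesis model.

```python
import numpy as np

def sigmoid(x, gain=1.0, theta=0.0):
    """Static sigmoidal input-output relationship of each neuron."""
    return 1.0 / (1.0 + np.exp(-gain * (x - theta)))

# Illustrative sizes and parameters (not the thesis values)
N1, N2 = 60, 60          # neurons in layer 1 (S1, homunculus) and layer 2 (higher area)
tau = 10.0               # time constant of the first-order dynamics [ms]
dt = 0.1                 # integration step [ms]

rng = np.random.default_rng(0)
W_lat1 = rng.normal(0.0, 0.05, (N1, N1))   # lateral synapses within layer 1
W_lat2 = rng.normal(0.0, 0.05, (N2, N2))   # lateral synapses within layer 2
W_ff   = rng.normal(0.0, 0.10, (N2, N1))   # feed-forward synapses, layer 1 -> layer 2

def simulate(external_input, steps=2000):
    """Integrate the first-order dynamics dz/dt = (-z + sigmoid(u)) / tau for both layers."""
    z1 = np.zeros(N1)    # activity of layer 1
    z2 = np.zeros(N2)    # activity of layer 2
    for _ in range(steps):
        u1 = external_input + W_lat1 @ z1   # input to layer 1
        u2 = W_ff @ z1 + W_lat2 @ z2        # input to layer 2
        z1 += dt / tau * (-z1 + sigmoid(u1))
        z2 += dt / tau * (-z2 + sigmoid(u2))
    return z1, z2

# Two point-like stimuli on the skin, encoded as Gaussian bumps over layer 1
x = np.arange(N1)
stimulus = np.exp(-(x - 20) ** 2 / 8.0) + np.exp(-(x - 40) ** 2 / 8.0)
act1, act2 = simulate(stimulus)
```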

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, the main Executive Control theories are presented. Methods typical of Cognitive and Computational Neuroscience are introduced, and the role of behavioural tasks involving conflict resolution during the elaboration of a response, after a stimulus is presented to the subject, is highlighted. In particular, the Eriksen Flanker Task and its variants are discussed. Behavioural data from the scientific literature are illustrated in terms of response times and error rates. During experimental behavioural tasks, EEG is recorded simultaneously; thanks to this, event-related potentials associated with the current task can be studied. Different theories regarding the event-related potentials relevant in this field, such as the N2, the fERN (feedback Error Related Negativity) and the ERN (Error Related Negativity), are introduced. The aim of this thesis is to understand and simulate the processes underlying Executive Control, including performance improvement, error detection mechanisms, post-error adjustments and the role of selective attention, with the help of an original neural network model. The network described here has been built to simulate the behavioural results of a four-choice Eriksen Flanker Task. Model results show that the neural network can simulate response times, error rates and event-related potentials quite well. Finally, the results are compared with behavioural data and discussed in light of the aforementioned Executive Control theories. Future perspectives for this new model are outlined.
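The thesis model itself is not described here in enough detail to reproduce it; the following toy sketch only illustrates the general idea of simulating response times and errors in a four-choice conflict task with leaky, mutually inhibiting accumulators. The accumulator formulation and all parameters are assumptions, not the thesis architecture.

```python
import numpy as np

def flanker_trial(target, flanker, rng, threshold=1.0, dt=0.001,
                  leak=2.0, inhibition=1.5, noise=0.4, max_t=1.5):
    """Simulate one 4-choice flanker trial with leaky, mutually inhibiting
    response accumulators; returns (response, reaction time)."""
    a = np.zeros(4)                               # one accumulator per response
    drift = np.zeros(4)
    drift[target] += 1.2                          # evidence from the central target
    drift[flanker] += 0.6                         # conflicting evidence from the flankers
    t = 0.0
    while t < max_t:
        inhib = inhibition * (a.sum() - a)        # mutual inhibition between responses
        a += dt * (drift - leak * a - inhib) + noise * np.sqrt(dt) * rng.normal(size=4)
        a = np.clip(a, 0.0, None)
        t += dt
        if a.max() >= threshold:                  # first accumulator to threshold responds
            return int(a.argmax()), t
    return int(a.argmax()), max_t

rng = np.random.default_rng(1)
# Incongruent trial: correct response 0, flankers point to response 2
resp, rt = flanker_trial(target=0, flanker=2, rng=rng)
print(resp, rt)
```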

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, the problem of controlling a quadrotor UAV is considered. An original control system is presented, designed as a combination of Neural Networks and a Disturbance Observer using a composite learning approach for a second-order system, which is a novel methodology in the literature. After a brief introduction to quadrotors, the concepts needed to understand the controller are presented, such as the main notions of advanced control, the basic structure and design of a Neural Network, and the modeling of a quadrotor and its dynamics. The full simulator, developed in the MATLAB Simulink environment and used throughout the whole thesis, is also shown. For guidance and control purposes, a Sliding Mode Controller, used as a reference, is first introduced, and its theory and implementation in the simulator are illustrated. Finally, the original controller is introduced, through its novel formulation and its implementation in the model. The effectiveness and robustness of the two controllers are then proven by extensive simulations under different conditions of external disturbances and faults.
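As a hedged illustration of the reference controller mentioned above, the sketch below implements a textbook sliding mode control law on a single translational channel (altitude) of a simplified quadrotor; the plant, gains and disturbance are illustrative and do not come from the thesis simulator.

```python
import numpy as np

# Minimal 1-DoF sliding mode control sketch (e.g. a quadrotor altitude channel).
m, g = 1.0, 9.81          # mass [kg], gravity [m/s^2]
lam, k = 4.0, 8.0         # sliding-surface slope and switching gain
dt, T = 0.001, 5.0

z, z_dot = 0.0, 0.0                              # altitude state
z_ref, z_ref_dot, z_ref_ddot = 1.0, 0.0, 0.0     # constant altitude set-point

def sat(s, phi=0.05):
    """Boundary-layer saturation, used instead of sign() to limit chattering."""
    return np.clip(s / phi, -1.0, 1.0)

for step in range(int(T / dt)):
    t = step * dt
    d = 0.5 * np.sin(2.0 * t)                    # unknown external disturbance
    e, e_dot = z - z_ref, z_dot - z_ref_dot      # tracking errors
    s = e_dot + lam * e                          # sliding surface
    # Equivalent control + switching term (thrust command)
    u = m * (g + z_ref_ddot - lam * e_dot - k * sat(s))
    z_ddot = u / m - g + d                       # second-order plant dynamics
    z_dot += z_ddot * dt
    z += z_dot * dt
```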

Relevance:

100.00%

Publisher:

Abstract:

Resolution of multisensory deficits has been observed in teenagers with Autism Spectrum Disorders (ASD) for complex, social speech stimuli, and this resolution extends to more basic multisensory processing involving low-level stimuli. In particular, a delayed transition of multisensory integration (MSI) from a default state of competition to one of facilitation has been observed in children with ASD; in other words, the full maturation of MSI is achieved later in ASD. In the present study a neuro-computational model is used to reproduce some patterns of behavior observed experimentally, modeling a bisensory reaction-time task in which auditory and visual stimuli are presented in random sequence, either alone (A or V) or together (AV). The model explains how the default competitive state can be implemented via mutual inhibition between primary sensory areas, and how the shift toward the classical multisensory facilitation observed in adults results from inhibitory cross-modal connections becoming excitatory during development. Model results are consistent with a stronger cross-modal inhibition in ASD children, compared to neurotypical (NT) ones, suggesting that the transition toward a cooperative interaction between sensory modalities takes longer to occur. Interestingly, the model also predicts the difference between unisensory switch trials (in which the sensory modality switches) and unisensory repeat trials (in which the sensory modality repeats). This is due to an inhibitory mechanism with slow dynamics, driven by the preceding stimulus, which inhibits the processing of the incoming one when it belongs to the opposite sensory modality. These findings link the cognitive framework delineated by the empirical results to a plausible neural implementation.
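A minimal sketch of the kind of mechanism described above is given below: two unisensory units with first-order dynamics are coupled by cross-modal connections whose sign is either inhibitory (competition) or excitatory (facilitation), and the reaction time is read out as a race to threshold. Parameter values are illustrative, not those of the thesis model.

```python
import numpy as np

def reaction_time(audio_on, visual_on, cross_w, rng,
                  tau=0.02, dt=0.001, gain=6.0, noise=0.05,
                  threshold=0.6, max_t=1.0):
    """First-order dynamics of two unisensory units coupled by cross-modal
    connections; cross_w < 0 reproduces the default competitive (inhibitory)
    state, cross_w > 0 the mature facilitatory one. Values are illustrative."""
    def f(x):                                      # static sigmoidal activation
        return 1.0 / (1.0 + np.exp(-gain * (x - 0.5)))
    a, v, t = 0.0, 0.0, 0.0
    while t < max_t:
        ua = 1.0 * audio_on + cross_w * v + noise * rng.normal()
        uv = 1.0 * visual_on + cross_w * a + noise * rng.normal()
        a += dt / tau * (-a + f(ua))
        v += dt / tau * (-v + f(uv))
        t += dt
        if max(a, v) >= threshold:                 # race between the two detectors
            return t
    return max_t

rng = np.random.default_rng(2)
rt_av_inhibitory = reaction_time(1, 1, cross_w=-0.4, rng=rng)    # child/ASD-like coupling
rt_av_facilitatory = reaction_time(1, 1, cross_w=+0.4, rng=rng)  # adult-like coupling
```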

Relevance:

100.00%

Publisher:

Abstract:

The neural networks customized and tested in this thesis (WaldoNet, FlowNet and PatchNet) are a first exploration of, and approach to, the Template Matching task; the possibilities for extension are therefore many, and some are proposed below. During my thesis I analyzed how the classical algorithms work and adapted them using deep learning algorithms. The features extracted from both the template and the query images resemble the keypoints of the SIFT algorithm. Then, instead of a similarity function or keypoint matching, WaldoNet and PatchNet use a convolutional layer to compare the features, while FlowNet uses a correlation layer. In addition, I identified the major challenges of the Template Matching task (affine/non-affine transformations, intensity changes, ...) and addressed them with a careful design of the dataset.
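The snippet below is a hypothetical sketch of the feature-comparison step described above: template features are used as a correlation kernel over the query features via a convolution. The random backbone is only a stand-in for the learned feature extractors of WaldoNet/PatchNet/FlowNet, and all shapes are placeholders.

```python
import torch
import torch.nn.functional as F

# Stand-in feature extractor (random weights), shared by template and query.
backbone = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 16, 3, padding=1), torch.nn.ReLU(),
)

query = torch.rand(1, 3, 256, 256)       # full image to search in
template = torch.rand(1, 3, 32, 32)      # template crop to be located

with torch.no_grad():
    fq = backbone(query)                  # (1, 16, 256, 256)
    ft = backbone(template)               # (1, 16, 32, 32)
    # Normalize and use the template features as a correlation kernel
    ft = F.normalize(ft.flatten(1), dim=1).view_as(ft)
    score = F.conv2d(fq, ft)              # similarity map over query locations
    peak = torch.nonzero(score[0, 0] == score.max())[0]
    print("best match (y, x):", peak.tolist())
```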

Relevance:

100.00%

Publisher:

Abstract:

Pervasive and distributed Internet of Things (IoT) devices demand ubiquitous coverage, even beyond no-man's land. To provide the plethora of IoT devices with resilient connectivity, Non-Terrestrial Networks (NTN) will be pivotal in assisting and complementing terrestrial systems. In a massive MTC scenario over NTN, characterized by sporadic uplink data reports, all the terminals within a satellite beam must be served during the short visibility window of the flying platform, generating congestion due to simultaneous access attempts by IoT devices on the same radio resource. The more terminals collide, the longer the average time needed to complete an access, because the back-off commands of legacy methods reduce the number of successful attempts. A possible countermeasure is a Non-Orthogonal Multiple Access scheme, which requires knowledge of the number of superimposed NPRACH preambles. This work addresses this problem by proposing a Neural Network (NN) algorithm to cope with the uncoordinated random access performed by a very large number of Narrowband-IoT devices. The proposed method classifies the number of colliding users and, for each of them, estimates the Time of Arrival (ToA). The performance assessment, under Line-of-Sight (LoS) and Non-LoS conditions in sub-urban environments with two different satellite configurations, shows significant benefits of the proposed NN algorithm with respect to traditional methods for ToA estimation.
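The following is only a schematic sketch of a multi-task network of the kind described: one head classifies the number of superimposed preambles, another regresses a ToA per user slot. The input size, layer widths and maximum number of colliding users are assumptions, not the thesis configuration.

```python
import torch
import torch.nn as nn

MAX_USERS = 4
N_FEATURES = 192          # e.g. flattened samples of the received NPRACH preamble (assumed)

class CollisionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(N_FEATURES, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.n_users_head = nn.Linear(128, MAX_USERS)   # classes: 1..MAX_USERS colliding users
        self.toa_head = nn.Linear(128, MAX_USERS)       # one ToA estimate per user slot

    def forward(self, x):
        h = self.trunk(x)
        return self.n_users_head(h), self.toa_head(h)

net = CollisionNet()
x = torch.randn(8, N_FEATURES)                          # a batch of received signals
n_users_logits, toa = net(x)
# Joint loss: classification of the collision multiplicity + ToA regression
loss = nn.CrossEntropyLoss()(n_users_logits, torch.randint(0, MAX_USERS, (8,))) \
       + nn.MSELoss()(toa, torch.rand(8, MAX_USERS))
loss.backward()
```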

Relevance:

90.00%

Publisher:

Abstract:

Breast cancer ranks first in mortality among the tumor pathologies affecting the female population worldwide. Several clinical studies have shown that the radiologist's diagnosis can be assisted and improved by Computer Aided Detection (CAD) systems. Because of the great variability in the shape and size of tumor masses, and their similarity to the surrounding tissue, their automated detection is an extremely difficult problem. A CAD system generally consists of two classification stages: detection, responsible for identifying the suspicious regions of interest (ROIs) on the mammogram and thus for preemptively discarding the non-risky areas; and the actual classification of the ROIs into masses and healthy tissue. The main purpose of this thesis is the study of new detection methodologies that can improve the performance obtained with traditional techniques. Detection is treated as a supervised learning problem and tackled with Convolutional Neural Networks (CNNs), an algorithm belonging to deep learning, a new branch of machine learning. CNNs are inspired by the discoveries of Hubel and Wiesel concerning two basic cell types identified in the visual cortex of cats: simple cells (S), which respond to edge-like stimuli, and complex cells (C), which are locally invariant to the exact position of the stimulus. By analogy with the visual cortex, CNNs use a deep architecture made of layers that alternately perform convolution and subsampling operations on the images. CNNs, whose input is two-dimensional, are usually employed for classification and automatic recognition of images such as objects, faces and logos, or for document analysis.
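Below is a minimal sketch of the alternating convolution/subsampling architecture described above, applied to mammogram patches (mass vs. healthy tissue). The patch size, number of filters and layer count are illustrative and not those used in the thesis.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(),   # convolution (S-like cells)
            nn.MaxPool2d(2),                             # subsampling (C-like cells)
            nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 9 * 9, 2)       # mass vs. healthy tissue

    def forward(self, x):                                # x: (batch, 1, 48, 48) grayscale patches
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = PatchCNN()
logits = model(torch.rand(4, 1, 48, 48))                 # scores for 4 candidate ROIs
```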

Relevance:

90.00%

Publisher:

Abstract:

In recent years, deep learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in a High Performance Computing context with massive amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view within the broad family of deep learning and are the best choices to understand and point out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations such as Google and Facebook for solving face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts such as time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a lower quantity of data, HTM can outperform CNN.

Relevance:

90.00%

Publisher:

Abstract:

Combinatorial optimization problems are typically tackled by the branch-and-bound paradigm. We propose to learn a variable selection policy for branch-and-bound in mixed-integer linear programming, by imitation learning on a diversified variant of the strong branching expert rule. We encode states as bipartite graphs and parameterize the policy as a graph convolutional neural network. Experiments on a series of synthetic problems demonstrate that our approach produces policies that can improve upon expert-designed branching rules on large problems, and generalize to instances significantly larger than seen during training.
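A toy sketch of the bipartite message-passing idea is given below: constraint nodes pool features from the variables they contain, variable nodes pool back from their constraints, and a linear scorer ranks the candidate branching variables. Dimensions and weights are random placeholders, not the trained policy described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_cons, d = 6, 4, 8

V = rng.normal(size=(n_vars, d))          # variable-node features
C = rng.normal(size=(n_cons, d))          # constraint-node features
A = (rng.random((n_cons, n_vars)) < 0.5).astype(float)   # bipartite adjacency (constraint x variable)

W_vc = rng.normal(size=(d, d))            # variable -> constraint weights
W_cv = rng.normal(size=(d, d))            # constraint -> variable weights
w_out = rng.normal(size=d)                # final scoring vector

def relu(x):
    return np.maximum(x, 0.0)

# Constraint update: each constraint pools the variables it contains
C_new = relu(C + A @ V @ W_vc)
# Variable update: each variable pools the constraints it appears in
V_new = relu(V + A.T @ C_new @ W_cv)
# Policy: score every candidate variable and branch on the argmax
scores = V_new @ w_out
branch_var = int(np.argmax(scores))
```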

Relevance:

90.00%

Publisher:

Abstract:

The amplitude of motor evoked potentials (MEPs) elicited by transcranial magnetic stimulation (TMS) of the primary motor cortex (M1) shows a large variability from trial to trial, even though the MEPs are evoked by the same repeated stimulus. A multitude of factors is believed to influence MEP amplitudes, such as the cortical, spinal and motor excitability state. The goal of this work is to explore to which degree the variation in MEP amplitudes can be explained by the cortical state right before the stimulation. Specifically, we analyzed a dataset acquired on eleven healthy subjects comprising, for each subject, 840 single TMS pulses applied to the left M1 during acquisition of electroencephalography (EEG) and electromyography (EMG). An interpretable convolutional neural network, named SincEEGNet, was used to discriminate between low- and high-corticospinal-excitability trials, defined according to the MEP amplitude, using the pre-TMS EEG as input. This data-driven approach made it possible to consider multiple brain locations and frequency bands without any a priori selection. Post-hoc interpretation techniques were adopted to identify the EEG features most relevant for the classification. Results show that individualized classifiers successfully discriminated between low and high M1 excitability states in all participants. The outcomes of the interpretation methods suggest the importance of the electrodes situated over the TMS stimulation site, as well as the relevance of the temporal samples of the input EEG closest to the stimulation time. This novel decoding method allows causal investigation of the cortical excitability state, which may be relevant for personalizing and increasing the efficacy of therapeutic brain-state-dependent brain stimulation (for example in patients affected by Parkinson's disease).
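As a hedged illustration of the "Sinc" ingredient used in sinc-parameterized convolutional layers, the snippet below builds a band-pass FIR kernel defined only by its low and high cut-off frequencies and applies it to one EEG channel. Cut-offs, filter length and sampling rate are assumptions, not the SincEEGNet configuration.

```python
import numpy as np

def sinc_bandpass(f_low, f_high, fs=512.0, length=65):
    """Return an FIR band-pass kernel between f_low and f_high (Hz)."""
    t = np.arange(length) - (length - 1) / 2.0          # centered time axis (samples)
    f1, f2 = f_low / fs, f_high / fs                     # normalized cut-off frequencies
    # Difference of two low-pass sinc filters gives a band-pass response
    h = 2 * f2 * np.sinc(2 * f2 * t) - 2 * f1 * np.sinc(2 * f1 * t)
    h *= np.hamming(length)                              # window to reduce ripple
    return h / np.abs(h).max()

# Example: an 8-30 Hz kernel convolved with one second of pre-TMS EEG at 512 Hz
kernel = sinc_bandpass(8.0, 30.0)
eeg_channel = np.random.default_rng(0).normal(size=512)
filtered = np.convolve(eeg_channel, kernel, mode="same")
```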

Relevance:

90.00%

Publisher:

Abstract:

Depth estimation from images has long been regarded as a preferable alternative to expensive and intrusive active sensors, such as LiDAR and ToF. The topic has attracted the attention of an increasingly wide audience thanks to the large number of application domains, such as autonomous driving, robotic navigation and 3D reconstruction. Among the various techniques employed for depth estimation, stereo matching is one of the most widespread, owing to its robustness, speed and simplicity of setup. Recent developments have been aided by the abundance of annotated stereo images, which has given deep learning the opportunity to thrive in a research area where deep networks can reach state-of-the-art sub-pixel precision in most cases. Despite these recent findings, stereo matching still presents many open challenges, two of them being finding pixel correspondences in the presence of objects that exhibit non-Lambertian behaviour, and processing high-resolution images. Recently, a novel dataset named Booster, which contains high-resolution stereo pairs featuring a large collection of labeled non-Lambertian objects, has been released. That work showed that training state-of-the-art deep neural networks on such data improves their generalization capabilities, also in the presence of non-Lambertian surfaces. Although this is a further step toward tackling the aforementioned challenge, Booster includes a rather small number of annotated images and thus cannot satisfy the intensive training requirements of deep learning. This thesis investigates novel view synthesis techniques to augment the Booster dataset, with the ultimate goal of improving stereo matching reliability on high-resolution images that display non-Lambertian surfaces.
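For readers unfamiliar with stereo matching, the sketch below shows the classical block-matching baseline (pixel correspondences searched along the same scan line with an SSD cost); it only illustrates the principle and is unrelated to the deep networks and view-synthesis pipeline studied in the thesis. Window size and disparity range are illustrative.

```python
import numpy as np

def block_matching(left, right, max_disp=32, win=5):
    """Return an integer disparity map computed with SSD block matching."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [
                np.sum((ref - right[y - half:y + half + 1,
                                    x - d - half:x - d + half + 1]) ** 2)
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))   # best-matching horizontal shift
    return disp

rng = np.random.default_rng(0)
left = rng.random((64, 96))
right = np.roll(left, -4, axis=1)                # synthetic pair with 4-pixel disparity
print(block_matching(left, right)[32, 70])       # expected ~4 away from the borders
```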

Relevance:

90.00%

Publisher:

Abstract:

This thesis contributes to the ArgMining 2021 shared task on Key Point Analysis. Key Point Analysis entails extracting a concise list of the most prominent talking points from an input corpus and calculating their prevalence. These talking points are usually referred to as key points. The task is divided into two subtasks: Key Point Matching, which involves assigning a matching score to each key point/argument pair, and Key Point Generation, which consists of generating key points. Key Point Matching was approached with different models: a pretrained Sentence Transformers model and a tree-constrained Graph Neural Network were tested. The best model was the fine-tuned Sentence Transformers, which achieved a mean Average Precision of 0.75, ranking 12th among the participating teams. This model was then used for the Key Point Generation subtask, with an extractive method for selecting key point candidates and the model developed for the previous subtask to score them.
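The Key Point Matching step with a Sentence Transformers bi-encoder can be sketched as below: each argument/key point pair receives a cosine-similarity matching score. The checkpoint name and the example sentences are placeholders, not the fine-tuned model of the thesis.

```python
from sentence_transformers import SentenceTransformer, util

# Off-the-shelf bi-encoder used as a stand-in for the fine-tuned model
model = SentenceTransformer("all-MiniLM-L6-v2")

arguments = [
    "School uniforms suppress students' individuality.",
    "Uniforms reduce the pressure of choosing what to wear every day.",
]
key_points = [
    "School uniforms harm self-expression.",
    "School uniforms make mornings simpler.",
]

arg_emb = model.encode(arguments, convert_to_tensor=True)
kp_emb = model.encode(key_points, convert_to_tensor=True)
scores = util.cos_sim(arg_emb, kp_emb)        # matching score for every argument/key point pair
best_kp = scores.argmax(dim=1)                # key point assigned to each argument
print(scores)
print(best_kp.tolist())
```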

Relevance:

90.00%

Publisher:

Abstract:

Neural scene representation and neural rendering are new computer vision techniques that enable the reconstruction and implicit representation of real 3D scenes from a set of 2D captured images, by fitting a deep neural network. The trained network can then be used to render novel views of the scene. A recent work in this field, Neural Radiance Fields (NeRF), presented a state-of-the-art approach, which uses a simple Multilayer Perceptron (MLP) to generate photo-realistic RGB images of a scene from arbitrary viewpoints. However, NeRF does not model any light interaction with the fitted scene; therefore, despite producing compelling results for the view synthesis task, it does not provide a solution for relighting. In this work, we propose a new architecture to enable relighting capabilities in NeRF-based representations and we introduce a new real-world dataset to train and evaluate such a model. Our method demonstrates the ability to perform realistic rendering of novel views under arbitrary lighting conditions.
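As a small illustrative detail of the NeRF pipeline mentioned above, the snippet below implements the standard positional encoding that maps a 3D sample along a camera ray to the higher-dimensional input fed to the MLP; the number of frequency bands is illustrative.

```python
import numpy as np

def positional_encoding(x, n_freqs=10):
    """gamma(x) = (sin(2^k * pi * x), cos(2^k * pi * x)) for k = 0..n_freqs-1."""
    out = []
    for k in range(n_freqs):
        out.append(np.sin((2.0 ** k) * np.pi * x))
        out.append(np.cos((2.0 ** k) * np.pi * x))
    return np.concatenate(out, axis=-1)

point = np.array([[0.1, -0.4, 0.7]])          # one 3D sample along a camera ray
encoded = positional_encoding(point)          # shape (1, 3 * 2 * n_freqs) = (1, 60)
print(encoded.shape)
```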