5 results for Inspection tasks

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

20.00%

Publisher:

Abstract:

In this thesis, the main Executive Control theories are presented. Methods typical of Cognitive and Computational Neuroscience are introduced, and the role of behavioural tasks that involve conflict resolution during response elaboration, after a stimulus is presented to the subject, is highlighted. In particular, the Eriksen Flanker Task and its variants are discussed. Behavioural data from the scientific literature are illustrated in terms of response times and error rates. During the experimental behavioural tasks, EEG is recorded simultaneously, so that the event-related potentials associated with the current task can be studied. Different theories regarding the event-related potentials relevant in this field, such as the N2, the fERN (feedback Error-Related Negativity) and the ERN (Error-Related Negativity), are introduced. The aim of this thesis is to understand and simulate processes regarding Executive Control, including performance improvement, error detection mechanisms, post-error adjustments and the role of selective attention, with the help of an original neural network model. The network described here was built to simulate the behavioural results of a four-choice Eriksen Flanker Task. Model results show that the neural network can simulate response times, error rates and event-related potentials quite well. Finally, the results are compared with behavioural data and discussed in light of the Executive Control theories mentioned above. Future perspectives for this new model are outlined.
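A minimal sketch of the qualitative effect described above, not the author's neural network (whose architecture is not given here): a toy evidence-accumulation model of a single Flanker trial in Python, where incongruent flankers lower the effective drift toward the correct response and therefore raise response times and error rates. All names and parameter values are illustrative assumptions.

import random

def simulate_trial(congruent, attention_gain=0.8, threshold=10.0, noise=2.0):
    """Accumulate noisy evidence until the correct (+) or wrong (-) bound is hit."""
    target_drift = 1.0
    # Flankers help on congruent trials and interfere on incongruent ones,
    # scaled down by how strongly attention is focused on the target.
    flanker_drift = (1.0 if congruent else -1.0) * (1.0 - attention_gain)
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += target_drift + flanker_drift + random.gauss(0.0, noise)
        steps += 1
    return steps, evidence > 0  # (response time in steps, correct?)

def run_condition(congruent, n=2000):
    trials = [simulate_trial(congruent) for _ in range(n)]
    mean_rt = sum(t for t, _ in trials) / n
    error_rate = sum(1 for _, ok in trials if not ok) / n
    return mean_rt, error_rate

print("congruent   (RT, errors):", run_condition(True))
print("incongruent (RT, errors):", run_condition(False))

Run repeatedly, the incongruent condition shows longer mean response times and a higher error rate, which is the behavioural signature the thesis model is meant to reproduce in a richer, four-choice setting.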

Relevance:

20.00%

Publisher:

Abstract:

This thesis covers the study of the QNX system and the development of a simulator of hard/soft real-time tasks, based on a meta-scheduler. At the end of the development, the performance of the QNX Neutrino operating system is evaluated.
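A minimal sketch of the scheduling idea mentioned above, assuming nothing about the thesis code or the QNX APIs: a discrete-time Python simulation in which a meta-scheduler serves hard real-time tasks by earliest deadline first and gives any remaining CPU time to soft tasks. Task names and parameters are illustrative.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period: int     # release period in ticks
    wcet: int       # worst-case execution time in ticks
    hard: bool      # hard deadline at the end of each period?
    remaining: int = 0
    deadline: int = 0

def simulate(tasks, horizon=40):
    timeline = []
    for t in range(horizon):
        for task in tasks:                      # release new jobs
            if t % task.period == 0:
                task.remaining = task.wcet
                task.deadline = t + task.period
        hard_ready = [x for x in tasks if x.hard and x.remaining > 0]
        soft_ready = [x for x in tasks if not x.hard and x.remaining > 0]
        if hard_ready:                          # hard tasks first, EDF order
            current = min(hard_ready, key=lambda x: x.deadline)
        elif soft_ready:                        # soft tasks fill the idle time
            current = soft_ready[0]
        else:
            timeline.append("idle")
            continue
        current.remaining -= 1
        timeline.append(current.name)
    return timeline

tasks = [Task("H1", period=5, wcet=2, hard=True),
         Task("H2", period=10, wcet=3, hard=True),
         Task("S1", period=4, wcet=2, hard=False)]
print(simulate(tasks))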

Relevance:

20.00%

Publisher:

Abstract:

The frequency of occurrence of random accidental releases from lines and equipment is generally estimated on the basis of the information contained in specialised databases. The data in these databases describe accidental events that occurred in various types of installations, ranging from chemical to petrochemical plants. Some of these databases are also rather dated, since they refer to accidents that happened many years ago. As a result, the loss-of-containment frequencies they provide are very conservative. To overcome this limitation and take technical progress into account, the API Recommended Practice 581 guideline, published in 2000 and subsequently updated in 2008, introduced a criterion for determining release frequencies tailored to the specific plant, by means of correction factors that account for the failure mechanism of the component, the safety management system and the effectiveness of the inspection activity. The aim of this thesis is to highlight the evolution of the approach to evaluating the frequency of releases from piping. It is organised as follows. Chapter 1 is introductory. Chapter 2 studies the release frequencies available in general-purpose databases. Chapter 3 illustrates two approaches, one qualitative and one quantitative, for identifying the lines with the highest priority for inspection. Chapter 4 is devoted to the description of the API Recommended Practice 581 guideline. Chapter 5 presents the application to a case study of the line-selection criteria illustrated in Chapter 3 and the definition of the inspection activity according to the API Recommended Practice 581 guideline. Finally, Chapter 6 presents the conclusions of the study.
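A minimal sketch of the frequency-correction idea described above, assuming the commonly cited form in which a generic failure frequency is multiplied by a damage factor and a management-systems factor; the numerical values below are purely illustrative and are not taken from API RP 581.

def adjusted_leak_frequency(generic_frequency, damage_factor, management_factor):
    """Specialise a generic database leak frequency to a specific plant item."""
    return generic_frequency * damage_factor * management_factor

# Example: a generic frequency of 1e-5 leaks per year, a damage factor of 2.0
# (active degradation mechanism, limited inspection effectiveness) and a
# management-systems factor of 0.8 give 1.6e-5 leaks per year.
print(adjusted_leak_frequency(1e-5, 2.0, 0.8))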

Relevance:

20.00%

Publisher:

Abstract:

In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with huge amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been deeply transformed by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: Convolutional Neural Networks (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view under the broad umbrella of deep learning and are well suited to understanding and pointing out the strengths and weaknesses of each. CNNs are considered among the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. They are well received and accepted by the scientific community and are already deployed by large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging, mainly unsupervised paradigm that is more biologically inspired. It draws on insights from the computational neuroscience community in order to incorporate into the learning process concepts such as time, context and attention, which are typical of the human brain. In the end, the thesis aims to show that, in certain cases with a smaller quantity of data, HTM can outperform CNN.
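A minimal sketch of the CNN side of the comparison only (HTM implementations such as Numenta's are not reproduced here), assuming PyTorch and a hypothetical 10-class object-recognition dataset of 32x32 RGB images; the architecture is illustrative, not the one used in the thesis.

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3-channel input -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 32, 32))   # a random batch standing in for real images
print(logits.shape)                         # torch.Size([4, 10])

Training such a network end-to-end on labelled images is what makes it a supervised method, in contrast with the mainly unsupervised, temporally structured learning that HTM performs.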