789 results for Artificial neural network models


Relevance:

100.00%

Publisher:

Abstract:

Lung cancer is the most common malignant tumor, with 1.59 million new cases worldwide in 2012. Early detection is the main factor determining the survival of patients affected by this disease. Furthermore, correct classification is important to define the most appropriate therapeutic approach, as well as to suggest the prognosis and the clinical evolution of the disease. Among the exams used to detect lung cancer, computed tomography (CT) has been the most indicated. However, CT images are naturally complex, and even medical experts are subject to errors in detection or classification. To assist the detection of malignant tumors, computer-aided diagnosis systems have been developed to help reduce the number of false-positive biopsies. In this work, an automatic classification system for pulmonary nodules in CT images was developed using artificial neural networks. Morphological, texture, and intensity attributes were extracted from tomographic image cuts of lung nodules using elliptical regions of interest, which were subsequently segmented by Otsu's method. These features were selected through statistical tests that compare populations (Student's t-test and the Mann-Whitney U test), from which a ranking was derived. The selected features were then fed into backpropagation artificial neural networks to compose two classifiers: one to classify nodules as malignant or benign (network 1), and another to classify two types of malignancy (network 2), forming a cascade classifier. The best networks were combined, and their performance was measured by the area under the ROC curve: networks 1 and 2 achieved 0.901 and 0.892, respectively.
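The segmentation step above relies on Otsu's method, which picks the threshold maximizing the between-class variance of the intensity histogram. A minimal NumPy sketch on a synthetic "nodule" image follows; the image, its intensity levels, and the elliptical blob are illustrative, not the paper's data:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Pick the threshold that maximizes between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # weight of the "background" class
    w1 = 1.0 - w0                     # weight of the "nodule" class
    mu = np.cumsum(p * centers)       # cumulative class mean
    mu_t = mu[-1]                     # global mean
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# Synthetic CT-like cut: dark background plus a bright elliptical "nodule".
rng = np.random.default_rng(0)
img = rng.normal(40.0, 8.0, (64, 64))
yy, xx = np.mgrid[:64, :64]
blob = (yy - 32) ** 2 / 20 ** 2 + (xx - 32) ** 2 / 12 ** 2 <= 1.0
img[blob] += rng.normal(120.0, 8.0, int(blob.sum()))

t = otsu_threshold(img)
mask = img > t                        # segmented nodule pixels
```

In the paper's pipeline, morphological, texture, and intensity features would then be computed over `mask`.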

Relevance:

100.00%

Publisher:

Abstract:

Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy-dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables full-spectrum CT, in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted due to undesirable physical effects in the detector and are very noisy due to photon starvation. In this work, we propose two methods based on machine learning to address the spectral distortion issue and to improve material decomposition. The first approach is to model distortions using an artificial neural network (ANN) and compensate for the distortion in a statistical reconstruction. The second approach is to directly correct for the distortion in the projections. Both techniques can be carried out as a calibration process in which the neural network is trained on data from 3D-printed phantoms to learn the distortion model or the correction model of the spectral distortion. This replaces the need for the synchrotron measurements required by conventional techniques to derive the distortion model parametrically, which can be costly and time consuming. The results demonstrate the experimental feasibility and potential advantages of ANN-based distortion modeling and correction for more accurate K-edge imaging with a PCXD. Given the computational efficiency with which the ANN can be applied to projection data, the proposed scheme can be readily integrated into existing CT reconstruction pipelines.
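The calibration idea, learning a mapping from distorted detector readings back to true values using phantom data, can be sketched on a toy 1-D distortion. A cubic least-squares fit stands in for the paper's ANN here; the distortion function and noise level are invented for illustration:

```python
import numpy as np

# Calibration pairs: true values x_true (known from the calibration
# phantoms) and distorted detector readings x_meas (toy distortion; real
# PCXD distortions come from effects such as pulse pile-up).
rng = np.random.default_rng(1)
x_true = np.linspace(0.0, 1.0, 200)
x_meas = x_true + 0.15 * np.sin(3.0 * x_true) + rng.normal(0.0, 0.005, 200)

# Learn the inverse mapping x_meas -> x_true. A cubic least-squares fit
# stands in for the ANN; both learn the correction from calibration data.
coeffs = np.polyfit(x_meas, x_true, deg=3)
correct = np.poly1d(coeffs)

residual = np.abs(correct(x_meas) - x_true)
```

Once fitted, `correct` would be applied to every projection measurement before reconstruction.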

Relevance:

100.00%

Publisher:

Abstract:

BARBOSA, André F.; SOUZA, Bryan C.; PEREIRA JUNIOR, Antônio; MEDEIROS, Adelardo A. D. de. Implementação de Classificador de Tarefas Mentais Baseado em EEG. In: CONGRESSO BRASILEIRO DE REDES NEURAIS, 9., 2009, Ouro Preto, MG. Anais... Ouro Preto, MG, 2009.

Relevance:

100.00%

Publisher:

Abstract:

This work deals with non-revenue water losses in urban supply networks in developing countries. It aims to extend knowledge of these losses and to show whether they can be reduced to an economically acceptable level. The thesis investigates such unaccounted-for water losses and, beyond quantifying leakage, seeks to develop decision-support tools for improved management of supply networks in developing countries. Harare, the capital of Zimbabwe, serves as the case study. Water losses in distribution networks are unavoidable, but they should be reduced to an economically sustainable level if sustainable operation is to be achieved. Losses can arise from illegal and unauthorized connections, from leaks in the distribution network, and from deficient metering and billing systems. Many approaches to reducing losses in water distribution systems are known, and numerous methods and tools exist accordingly, ranging from computer-based procedures through legal and political requirements and economic calculations to infrastructure modernization measures. The success of these efforts depends on their feasibility and on the environment in which they are implemented. The performance of any water utility is assessed by the effectiveness of its distribution system. Performance indicators are the most widely used approach to rating water distribution systems and their efficiency; worldwide, financial and technical performance have become the established indicators. The present study shows that in many water supply systems in developing countries these indicators have not led to the introduction of loss-reducing management strategies.
Many studies on the introduction of loss-reduction measures consider only total non-revenue water, without determining the share of leakage in the total, so no statement about the actual attribution of losses is possible. An assessment tool is therefore needed with which losses can be assigned to their different causes. One such tool is the South African Night Flow Analysis Model (SANFLOW) of the South African Water Research Commission, which enables the analysis of water flow and system pressure in individual district metered areas. This thesis demonstrates that the SANFLOW model is well suited to determining the leakage share, from which it can be concluded that it is a suitable and adaptable analysis tool for developing countries. Such computer-based approaches can be used to determine leakage in water distribution networks. A further option is the use of artificial neural networks (ANNs), which can be trained and then used to predict the dynamic behavior of water supply systems; the predicted values can be compared with the water demand of a defined district. For this study, a multilayer ANN with backpropagation was used to model the water flow in a monitored section, and a MATLAB algorithm was developed to determine water demand. The leakage rate of the supply system was derived from the difference between the actual and the simulated water demand. It was shown that the trained neural network can predict water flow with 99% accuracy.
From this, the suitability of ANNs as a flexible and effective approach to leak detection in water supply can be inferred. The study further showed that 36% of the water fed into Harare's supply network is lost, of which 33% is attributable to leakage. In monetary terms this is a loss of one million dollars per month, corresponding to 20% of the city's total revenue. The Harare city administration is therefore advised to work actively on eliminating leaks, since these high losses adversely affect utility operations. Finally, the thesis proposes an integrated leakage management system intended to support water utilities in deciding on measures for maintaining the distribution network.
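The leakage estimate described above is the difference between the metered inflow to a district and the demand predicted by the trained model. A toy sketch (all flow values are illustrative; the thesis used a backpropagation ANN and a MATLAB algorithm for this step):

```python
import numpy as np

# Hourly metered inflow into one district metered area, and the demand a
# trained model predicts for the same hours (all values are illustrative).
metered   = np.array([120.0, 118.0, 95.0, 60.0, 58.0, 57.0, 80.0, 130.0])
predicted = np.array([ 96.0,  92.0, 70.0, 35.0, 34.0, 31.0, 55.0, 105.0])

# Persistent excess of inflow over predicted demand is attributed to leakage.
leakage = metered - predicted
leakage_share = leakage.sum() / metered.sum()   # fraction of input water lost
```

The night-time hours (low demand, stable pressure) are the most informative ones, which is why night-flow analysis tools such as SANFLOW focus on them.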

Relevance:

100.00%

Publisher:

Abstract:

This paper provides an overview of IDS types and how they work, as well as configuration considerations and issues that affect them. Advanced methods of increasing the performance of an IDS are explored, such as specification-based IDS for protecting Supervisory Control And Data Acquisition (SCADA) and cloud networks. By reviewing varied studies, ranging from configuration issues and specific problems to custom techniques and cutting-edge research, it also provides a reference for others interested in learning about and developing IDS solutions. Intrusion detection is an area requiring much study in order to provide solutions that satisfy evolving services and the networks and systems that support them. This paper aims to be a reference on IDS technologies for other researchers and developers interested in the field of intrusion detection.


Relevance:

100.00%

Publisher:

Abstract:

Recent years have seen an astronomical rise in SQL Injection Attacks (SQLIAs) used to compromise the confidentiality, authentication and integrity of organisations’ databases. Intruders becoming smarter at obfuscating web requests to evade detection, combined with increasing volumes of web traffic from the Internet of Things (IoT), cloud-hosted and on-premise business applications, have made it evident that the existing approaches, mostly based on static signatures, lack the ability to cope with novel signatures. A SQLIA detection and prevention solution can be achieved by exploring an alternative bio-inspired supervised learning approach that takes as input a labelled dataset of numerical attributes for classifying true positives and negatives. We present in this paper Numerical Encoding to Tame SQLIA (NETSQLIA), which implements a proof of concept for scalable numerical encoding of features into dataset attributes with labelled classes obtained from deep web traffic analysis. For the numerical attribute encoding, the model leverages a proxy to intercept and decrypt web traffic. The intercepted web requests are then assembled for front-end SQL parsing and pattern matching by applying a traditional Non-Deterministic Finite Automaton (NFA). This paper presents a technique for extracting numerical attributes of any size, primed as an input dataset to an Artificial Neural Network (ANN) and statistical Machine Learning (ML) algorithms, implemented using a Two-Class Averaged Perceptron (TCAP) and Two-Class Logistic Regression (TCLR) respectively. This methodology then forms the subject of an empirical evaluation of the suitability of the model for the accurate classification of both legitimate web requests and SQLIA payloads.
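The two-class perceptron stage can be sketched on toy numerically encoded requests. The features below (SQL-keyword, quote, and comment-token counts) are invented stand-ins for NETSQLIA's attributes, and the plain perceptron loop is a simplification of the Two-Class Averaged Perceptron:

```python
import numpy as np

# Toy numerical encoding of web requests, one row per request:
# columns = [sql_keyword_count, quote_count, comment_token_count]
X = np.array([
    [0, 0, 0], [1, 0, 0], [0, 1, 0],        # legitimate requests
    [3, 2, 1], [4, 1, 2], [2, 3, 1],        # SQLIA payloads
], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1])          # -1 = legitimate, +1 = SQLIA

# Classic perceptron updates: nudge the weights on every misclassified row.
w, b = np.zeros(3), 0.0
for _ in range(20):                          # epochs over the training set
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:           # wrong side of the hyperplane
            w += yi * xi
            b += yi

pred = np.sign(X @ w + b)
```

On this linearly separable toy set the loop converges and `pred` matches `y`; the averaged variant used in the paper additionally averages the weight vectors over all updates for better generalization.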

Relevance:

100.00%

Publisher:

Abstract:

Gap junction coupling is ubiquitous in the brain, particularly between the dendritic trees of inhibitory interneurons. Such direct non-synaptic interaction allows for direct electrical communication between cells. Unlike spike-time driven synaptic neural network models, which are event based, any model with gap junctions must necessarily involve a single neuron model that can represent the shape of an action potential. Indeed, not only do neurons communicating via gaps feel super-threshold spikes, but they also experience, and respond to, sub-threshold voltage signals. In this chapter we show that the so-called absolute integrate-and-fire model is ideally suited to such studies. At the single neuron level voltage traces for the model may be obtained in closed form, and are shown to mimic those of fast-spiking inhibitory neurons. Interestingly in the presence of a slow spike adaptation current the model is shown to support periodic bursting oscillations. For both tonic and bursting modes the phase response curve can be calculated in closed form. At the network level we focus on global gap junction coupling and show how to analyze the asynchronous firing state in large networks. Importantly, we are able to determine the emergence of non-trivial network rhythms due to strong coupling instabilities. To illustrate the use of our theoretical techniques (particularly the phase-density formalism used to determine stability) we focus on a spike adaptation induced transition from asynchronous tonic activity to synchronous bursting in a gap-junction coupled network.
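A minimal sketch of the absolute integrate-and-fire model with gap-junction coupling between two cells, dv_i/dt = |v_i| + I + g(v_j − v_i) with a threshold-and-reset rule. Parameter values and initial conditions are illustrative; the chapter's analysis covers large networks and closed-form results, not this toy simulation:

```python
import numpy as np

def simulate(g, I=0.1, v_th=1.0, v_reset=-1.0, dt=1e-3, T=20.0):
    """Euler-integrate two coupled absolute integrate-and-fire cells."""
    v = np.array([0.0, -0.8])            # desynchronized initial voltages
    spikes = [[], []]
    for k in range(int(T / dt)):
        # dv_i/dt = |v_i| + I + g*(v_j - v_i): the |v| term gives the
        # spike-like upstroke, g couples the cells through the gap junction.
        v = v + dt * (np.abs(v) + I + g * (v[::-1] - v))
        for i in range(2):
            if v[i] >= v_th:
                v[i] = v_reset
                spikes[i].append(k * dt)
    return spikes

weak   = simulate(g=0.0)                 # uncoupled cells fire periodically
strong = simulate(g=0.5)                 # gap-junction coupled cells
```

Because the sub-threshold voltage enters the coupling term directly, cells respond to each other's full trajectories, not just spike times, which is exactly what distinguishes gap-junction models from event-based synaptic ones.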

Relevance:

100.00%

Publisher:

Abstract:

Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2016.

Relevance:

100.00%

Publisher:

Abstract:

This work aims to obtain a low-cost virtual sensor to estimate the quality of LPG. For the acquisition of data from a distillation tower, the HYSYS® chemical process simulator was used. These data are used for the training and validation of an Artificial Neural Network (ANN). The network aims to estimate, from available simulated variables such as the temperature, pressure and discharge flow of a distillation tower, the mole fraction of pentane present in the LPG, thus allowing better control of product quality.
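The virtual-sensor idea, regressing the pentane mole fraction on simulated tower variables, can be sketched with synthetic data. Ordinary least squares stands in for the ANN here, and the assumed ground-truth relation is invented for the example:

```python
import numpy as np

# Simulated tower snapshots standing in for simulator output: temperature
# [deg C], pressure [kPa], discharge flow [m3/h], and pentane mole fraction.
rng = np.random.default_rng(2)
T = rng.uniform(40.0, 60.0, 300)
P = rng.uniform(800.0, 1200.0, 300)
F = rng.uniform(10.0, 30.0, 300)
# Assumed ground-truth relation, invented for this toy example only.
y = 0.002 * T - 5e-5 * P + 0.001 * F + 0.05 + rng.normal(0.0, 0.002, 300)

# Ordinary least squares as a stand-in for the trained ANN virtual sensor.
X = np.column_stack([T, P, F, np.ones_like(T)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ w                                  # estimated mole fraction
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
```

An ANN earns its keep over this linear baseline when the simulator's variable-to-composition relation is nonlinear, which is the usual case for distillation columns.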

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents a methodological proposal for the development of an intelligent system able to automatically obtain the effective porosity of sedimentary layers from a database built with information from Ground Penetrating Radar (GPR). The intelligent system was built to model the relation between porosity (the response variable) and electromagnetic attributes from the GPR (the explanatory variables). Using it, porosity was estimated with an artificial neural network (a Multilayer Perceptron, MLP) and with multiple linear regression. The data for the response and explanatory variables were acquired in the laboratory and in GPR surveys carried out both at controlled field sites and in the laboratory. The proposed intelligent system can estimate porosity from any available database containing the same variables used in this thesis, and the architecture of the neural network can be modified as needed to fit the available data. The multiple linear regression model allowed the identification and quantification of the influence (level of effect) of each explanatory variable on the porosity estimate. The proposed methodology could transform the use of GPR, not only for imaging sedimentary geometry and facies, but mainly for the automatic determination of porosity, one of the most important parameters for the characterization of reservoir rocks (for petroleum or water).
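The multiple-linear-regression step, which quantifies each attribute's level of effect, can be sketched as follows. The GPR attribute names and the underlying relation are invented for illustration:

```python
import numpy as np

# Toy GPR-derived attributes (standardized amplitude, instantaneous
# frequency, attenuation) and porosity; relation and values are invented.
rng = np.random.default_rng(3)
amp, freq, atten = rng.normal(0.0, 1.0, (3, 200))
porosity = 0.25 + 0.04 * amp - 0.01 * freq + rng.normal(0.0, 0.005, 200)

# Multiple linear regression; with standardized predictors, the absolute
# coefficients quantify each attribute's "level of effect" on porosity.
X = np.column_stack([amp, freq, atten, np.ones(200)])
coef, *_ = np.linalg.lstsq(X, porosity, rcond=None)
ranking = np.argsort(np.abs(coef[:3]))[::-1]   # most to least influential
```

Here the amplitude dominates and the attenuation contributes essentially nothing, which is the kind of attribute ranking the regression model provides alongside the MLP's estimates.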

Relevance:

100.00%

Publisher:

Abstract:

Ill-conditioned inverse problems frequently arise in life sciences, particularly in the context of image deblurring and medical image reconstruction. These problems have been addressed through iterative variational algorithms, which regularize the reconstruction by adding prior knowledge about the problem's solution. Despite the theoretical reliability of these methods, their practical utility is constrained by the time required to converge. Recently, the advent of neural networks allowed the development of reconstruction algorithms that can compute highly accurate solutions with minimal time demands. Regrettably, it is well-known that neural networks are sensitive to unexpected noise, and the quality of their reconstructions quickly deteriorates when the input is slightly perturbed. Modern efforts to address this challenge have led to the creation of massive neural network architectures, but this approach is unsustainable from both ecological and economic standpoints. The recently introduced GreenAI paradigm argues that developing sustainable neural network models is essential for practical applications. In this thesis, we aim to bridge the gap between theory and practice by introducing a novel framework that combines the reliability of model-based iterative algorithms with the speed and accuracy of end-to-end neural networks. Additionally, we demonstrate that our framework yields results comparable to state-of-the-art methods while using relatively small, sustainable models. In the first part of this thesis, we discuss the proposed framework from a theoretical perspective. We provide an extension of classical regularization theory, applicable in scenarios where neural networks are employed to solve inverse problems, and we show there exists a trade-off between accuracy and stability. Furthermore, we demonstrate the effectiveness of our methods in common life science-related scenarios. 
In the second part of the thesis, we begin extending the proposed method into the probabilistic domain. We analyze some properties of deep generative models, revealing their potential applicability to ill-posed inverse problems.
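The accuracy-stability trade-off can be illustrated with classical Tikhonov regularization on a 1-D deblurring problem: a very small regularization weight fits the data tightly but amplifies noise, while a moderate weight is stable. The operator and signal below are toy examples, not the thesis's framework:

```python
import numpy as np

# 1-D deblurring y = A x: A is an ill-conditioned Gaussian blur (a toy
# stand-in for the imaging operators discussed above).
rng = np.random.default_rng(4)
n = 50
i = np.arange(n)
A = np.exp(-(i[:, None] - i[None, :]) ** 2 / (2 * 3.0 ** 2))
A /= A.sum(axis=1, keepdims=True)
x_true = (np.abs(i - 25) < 8).astype(float)     # box-shaped signal
y = A @ x_true + rng.normal(0.0, 0.01, n)       # blurred + noisy data

def tikhonov(y, lam):
    """Regularized reconstruction: argmin ||Ax - y||^2 + lam * ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Tiny lam: maximal data fidelity, but noise is wildly amplified.
err_tiny = np.linalg.norm(tikhonov(y, 1e-8) - x_true)
# Moderate lam: slightly biased but stable -- the trade-off in question.
err_mod = np.linalg.norm(tikhonov(y, 1e-3) - x_true)
```

End-to-end networks sit at the "accurate but potentially unstable" end of this spectrum; the hybrid framework proposed in the thesis aims to keep their speed while restoring the stability that explicit regularization provides.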

Relevance:

100.00%

Publisher:

Abstract:

The transformer is one of the most important elements of a transmission network; as the link between high- and medium-voltage grids, its correct operation guarantees the supply of all devices and loads connected to the line. Moreover, the transformer is also the most expensive element of the entire electrical line; its maintenance is vitally important to avoid high replacement costs and disruptions along the line. This is where diagnostics comes into play: through periodic, targeted measurements on the transformer it is possible to act promptly and avoid all the phenomena listed above. This work analyzes the three-phase power transformer during operation, highlighting its subcomponents and their respective criticalities; it also presents the various transformer diagnostic techniques, so as to extract an index tied to its state of life, the Health Index. Several approaches to computing the Health Index exist today; the one presented here is entirely novel, namely developing an artificial neural network (ANN) able to predict the state of the transformer based on measurements taken on it. The foundations for developing a neural network are therefore presented, from data analysis and formatting through performance optimization. Finally, all the intermediate phases of the project from which this work takes its title are covered, following the evolution of a neural network from a program written in Python into a ready-to-use application for operators during diagnostics.
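As a baseline for the ANN approach, a classical weighted-score Health Index combines scored diagnostic measurements into a single number; the diagnostics, scores, and weights below are illustrative only:

```python
# Classical weighted-score Health Index as a baseline for the ANN approach
# described in the abstract (diagnostics, scores, and weights are illustrative).
measurements = {
    # diagnostic: (condition score 0-4, weight) -- 0 = good, 4 = critical
    "dissolved_gas_analysis":  (1, 10),
    "oil_quality":             (2, 8),
    "winding_resistance":      (0, 6),
    "insulation_power_factor": (1, 8),
}

num = sum(score * weight for score, weight in measurements.values())
den = sum(4 * weight for _, weight in measurements.values())
health_index = 100 * (1 - num / den)   # 100 = healthy, 0 = end of life
```

The ANN approach replaces the hand-chosen scores and weights with a mapping learned from historical measurement/condition pairs.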

Relevance:

100.00%

Publisher:

Abstract:

The comfort level of the seat has a major effect on the usage of a vehicle; thus, car manufacturers have been working to elevate car seat comfort as much as possible. However, the testing and evaluation of comfort are still done through exhaustive trial-and-error testing and evaluation of data. In this thesis, we resort to machine learning and Artificial Neural Networks (ANN) to develop a fully automated approach. Even though this approach has advantages in minimizing time and in using a large set of data, it takes away the engineer's freedom to make decisions. The focus of this study is on filling a gap in a two-step comfort level evaluation that used pressure mapping with body regions to evaluate the average pressure supported by specific body parts, and Self-Assessment Exam (SAE) questions to evaluate the person's impressions. This study created a machine learning algorithm that gives the engineer a degree of freedom in decision making when mapping pressure values to body regions using an ANN. The mapping is done with 92% accuracy, aided by a Graphical User Interface (GUI) that facilitates the comfort level evaluation of the car seat at testing time and decreases the duration of the test analysis from days to hours.
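The first step of the two-step evaluation, averaging the pressure map over body regions, can be sketched as follows. A fixed region mask stands in for the learned pressure-to-region mapping, and the mat size, region layout, and pressure values are illustrative:

```python
import numpy as np

# Toy 8x4 pressure map from a seat mat; a fixed region mask stands in for
# the ANN's learned pressure-to-body-region mapping.
rng = np.random.default_rng(5)
pressure = rng.uniform(0.0, 50.0, (8, 4))       # pressure per mat cell
regions = np.zeros((8, 4), dtype=int)           # 0 = thighs
regions[3:6] = 1                                # 1 = buttocks
regions[6:] = 2                                 # 2 = lower back

# Average pressure supported by each body region (step one of the
# two-step comfort evaluation; step two correlates these with SAE answers).
region_means = np.array([pressure[regions == r].mean() for r in range(3)])
```

In the thesis, the region assignment itself is predicted by the ANN with 92% accuracy, and the GUI lets the engineer inspect and override it.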