831 results for network-based intrusion detection system
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Making diagnoses in oral pathology is often difficult and confusing in dental practice, especially for the less experienced dental student. One of the most promising areas in bioinformatics is computer-aided diagnosis, in which a computer system imitates human reasoning and provides diagnoses with an accuracy approaching that of expert professionals. Such a system could be an alternative tool for helping dental students overcome the difficulties of learning oral pathology, allowing them to define the variables and information that are important to improving decision-making performance. However, no current open data management system has been integrated with an artificial intelligence system in a user-friendly environment. Such a system could also be used as an educational tool to help students perform diagnoses. The aim of the present study was to develop and test an open case-based decision-support system. Methods: An open decision-support system based on Bayes' theorem and connected to a relational database was developed in the C++ programming language. The software was tested in the computerisation of a surgical pathology service and in simulating the diagnosis of 43 known cases of oral bone disease. The simulation was performed after the system had been loaded with data from 401 cases of oral bone disease. Results: The system allowed the authors to construct and manage a pathology database, and to simulate diagnoses using the variables from the database. Conclusion: Combining a relational database and an open decision-support system in the same user-friendly environment proved effective in simulating diagnoses based on information from an updated database.
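The abstract does not give implementation details of the Bayesian engine; as a rough illustration of how case-based diagnosis via Bayes' theorem can work, the sketch below scores candidate diagnoses from case counts in a database, with Laplace smoothing. All identifiers (diseases, findings, the `case_db` structure) are hypothetical, and Python stands in for the original C++ implementation.

```python
# Illustrative naive-Bayes scoring of candidate diagnoses from a case
# database. All identifiers here are invented; the original system was
# written in C++ on top of a relational database.
from collections import Counter, defaultdict

# Hypothetical case database: each case is (diagnosis, set of findings).
case_db = [
    ("fibrous dysplasia", {"ground-glass radiopacity", "painless swelling"}),
    ("ossifying fibroma", {"well-defined border", "painless swelling"}),
    ("fibrous dysplasia", {"ground-glass radiopacity", "ill-defined border"}),
]

def posterior_scores(findings, cases, alpha=1.0):
    """Score P(dx | findings) ~ P(dx) * prod P(finding | dx),
    estimated from case counts with Laplace smoothing alpha."""
    prior = Counter(dx for dx, _ in cases)
    by_dx = defaultdict(list)
    for dx, fs in cases:
        by_dx[dx].append(fs)
    scores = {}
    for dx, dx_cases in by_dx.items():
        score = prior[dx] / len(cases)
        for f in findings:
            hits = sum(1 for fs in dx_cases if f in fs)
            score *= (hits + alpha) / (len(dx_cases) + 2 * alpha)
        scores[dx] = score
    total = sum(scores.values())
    return {dx: s / total for dx, s in scores.items()}  # normalize to sum to 1

print(posterior_scores({"ground-glass radiopacity", "painless swelling"}, case_db))
```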
Abstract:
A new methodology for determining soluble oxalic acid in grass samples was developed using a two-enzyme reactor in an FIA system. The reactor consisted of 3 U of oxalate oxidase and 100 U of peroxidase immobilized on Sorghum vulgare seeds activated with glutaraldehyde. The carbon dioxide produced was monitored spectrophotometrically, after permeating through a PTFE membrane and reacting with an acid-base indicator (Bromocresol Purple). A linear response was observed between 0.25 and 1.00 mmol l⁻¹ of oxalic acid; the data were fitted by the equation A = -0.8(±1.5) + 57.2(±2.5)[oxalate], with a correlation coefficient of 0.9971 and a relative standard deviation of 2% for n = 5. The variance for a 0.25 mmol l⁻¹ oxalic acid standard solution was lower than 4% for 11 measurements. The FIA system allows the analysis of 20 samples per hour without prior treatment. The proposed method showed good agreement with the Sigma Kit method.
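A calibration line of the form A = a + b[oxalate] is typically obtained by ordinary least squares over the standard solutions. A minimal sketch, using invented absorbance readings rather than the paper's data:

```python
# Least-squares calibration line for an FIA method; the detector
# responses below are invented for illustration only.
import numpy as np

conc = np.array([0.25, 0.50, 0.75, 1.00])    # mmol/l oxalic acid standards
absorb = np.array([13.5, 27.8, 42.3, 56.5])  # hypothetical responses

slope, intercept = np.polyfit(conc, absorb, 1)
r = np.corrcoef(conc, absorb)[0, 1]
print(f"A = {intercept:.1f} + {slope:.1f}[oxalate], r = {r:.4f}")

# Inverting the fitted line turns a sample reading into a concentration:
sample_A = 30.0
print("sample conc:", (sample_A - intercept) / slope, "mmol/l")
```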
Abstract:
Scientific research plays a fundamental role in the health and development of any society, since all technological advances depend ultimately on scientific discovery and the generation of wealth is intricately dependent on technological advance. Due to their importance, science and technology generally occupy important places in the hierarchical structure of developed societies, and they receive considerable public and private investment. Publicly funded science is almost entirely devoted to discovery, and it is administered and structured in a very similar way throughout the world. Particularly in the biological sciences, this structure, which is very much centered on the individual scientist and his own hypothesis-based investigations, may not be the best suited for either discovery in the context of complex biological systems, or for the efficient advancement of fundamental knowledge into practical utility. The adoption of other organizational paradigms, which permit a more coordinated and interactive research structure, may provide important opportunities to accelerate the scientific process and further enhance its relevance and contribution to society. The key alternative is a structure that incorporates larger organizational units to tackle larger and more complex problems. One example of such a unit is the research network. Brazil has utilized such networks to great effect in genome sequencing projects, demonstrating their relevance to the Brazilian research community and opening the possibility of their wider utility in the future.
Abstract:
This article presents a software architecture for a web-based system to aid project management, conceptually founded on the guidelines of the Project Management Body of Knowledge (PMBoK) and on ISO/IEC 9126, as well as on the results of an empirical study carried out in Brazil. Based on these guidelines, the study focused on two different points of view on project management: that of those who develop software systems to aid management, and that of those who use such systems. The designed software architecture is capable of guiding the incremental development of a quality system that satisfies today's market needs, particularly those of small and medium-sized enterprises.
Abstract:
In this paper we propose a nature-inspired approach that can boost the Optimum-Path Forest (OPF) clustering algorithm by optimizing its parameters over a discrete lattice. Experiments on two public datasets have shown that the proposed algorithm achieves parameter values similar to those found by exhaustive search, while being faster than the traditional approach, which makes it attractive for intrusion detection in large-scale traffic networks. © 2012 IEEE.
Abstract:
Nowadays, organizations face the problem of keeping their information protected, available and trustworthy. In this context, machine learning techniques have been extensively applied to intrusion detection. Since manual labeling is very expensive, several works handle the task with traditional clustering algorithms. In this paper, we introduce a new pattern recognition technique, Optimum-Path Forest (OPF) clustering, for this task. Experiments on three public datasets have shown that the OPF classifier may be a suitable tool to detect intrusions on computer networks, since it outperformed some state-of-the-art unsupervised techniques. © 2012 IEEE.
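OPF clustering is not available in common Python libraries, so purely to illustrate the unsupervised workflow these two abstracts describe (cluster unlabeled traffic records, then flag small or outlying clusters as suspicious), the sketch below substitutes k-means for OPF. The flow features, cluster count, and rarity threshold are all invented.

```python
# Unsupervised intrusion-detection workflow, with k-means standing in
# for Optimum-Path Forest clustering (OPF itself is not in sklearn).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical flow features: duration, bytes, packets (mostly normal
# traffic plus a small anomalous blob).
normal = rng.normal([1.0, 500, 10], [0.3, 100, 3], size=(980, 3))
attack = rng.normal([8.0, 50, 200], [1.0, 20, 30], size=(20, 3))
X = StandardScaler().fit_transform(np.vstack([normal, attack]))

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Flag records falling in rare clusters as suspicious (threshold arbitrary).
counts = np.bincount(labels)
suspicious = np.isin(labels, np.where(counts < 0.05 * len(X))[0])
print("flagged:", suspicious.sum(), "of", len(X))
```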
Abstract:
This work presents a no-break (UPS) system for microcomputers that uses ultracapacitors in place of conventional chemical batteries. We analyzed the most relevant data on the average power consumption of microcomputers, the electrical and mechanical characteristics of ultracapacitors, and the operation of no-break power circuits, in order to propose a configuration capable of working properly with a microcomputer switch-mode power supply. Our solution is a sixteen-component ultracapacitor bank, with a total capacitance of 350 F at 10.8 V, adequate for a low-capacity no-break system capable of feeding a 180 W load for 75 s. The proposed no-break increases the reliability of microcomputers by reducing the probability of user data loss in case of a power grid failure, thus offering a high benefit-cost ratio. Replacing the battery with ultracapacitors allows quick recharging and low maintenance costs, since these modern components have a longer lifetime than batteries. Moreover, this solution reduces the environmental impact and eliminates the constant recharging of the energy storage device.
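A quick back-of-the-envelope check of the reported figures (reading the 180 figure as a load power in watts, which the original "180 Wh" most plausibly intends) shows that the bank's stored energy covers the 75 s hold-up time:

```latex
E_{\text{bank}} = \tfrac{1}{2}CV^{2} = \tfrac{1}{2}\cdot 350\,\mathrm{F}\cdot(10.8\,\mathrm{V})^{2} \approx 20.4\,\mathrm{kJ},
\qquad
E_{\text{load}} = 180\,\mathrm{W}\times 75\,\mathrm{s} = 13.5\,\mathrm{kJ} < E_{\text{bank}}.
```

The margin between the two figures is consistent with the fact that not all stored energy is extractable, since the converter stops working below some minimum input voltage.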
Abstract:
Glasses in the system [Na₂S]₂/₃[(B₂S₃)ₓ(P₂S₅)₁₋ₓ]₁/₃ (0.0 ≤ x ≤ 1.0) were prepared by the melt-quenching technique, and their properties were characterized by thermal analysis and impedance spectroscopy. Their atomic-level structures were comprehensively characterized by Raman spectroscopy and by ¹¹B, ³¹P, and ²³Na high-resolution solid-state magic-angle spinning (MAS) NMR techniques. ³¹P MAS NMR peak assignments were made by the presence or absence of homonuclear indirect ³¹P-³¹P spin-spin interactions, as detected using homonuclear J-resolved and refocused INADEQUATE techniques. The extent of B-S-P connectivity in the glassy network was quantified by ³¹P{¹¹B} and ¹¹B{³¹P} rotational echo double resonance spectroscopy. The results clearly illustrate that the network modifier alkali sulfide, Na₂S, is not proportionally shared between the two network-former components, B and P. Rather, the thiophosphate (P) component tends to attract a larger concentration of network modifier species than predicted by the bulk composition, and this results in the conversion of pyrothiophosphate units (P₂S₇⁴⁻, Na/P = 2:1) into orthothiophosphate groups (PS₄³⁻, Na/P = 3:1). Charge balance is maintained by increasing the net degree of polymerization of the thioborate (B) units through the formation of covalent bridging-sulfur (BS) units, B-S-B. Detailed inspection of the ¹¹B MAS NMR spectra reveals that multiple thioborate units are formed, ranging from neutral BS₃/₂ groups all the way to the fully depolymerized orthothioborate (BS₃³⁻) species. On the basis of these results, a comprehensive and quantitative structural model is developed for these glasses, from which the compositional trends in the glass transition temperatures (Tg) and ionic conductivities can be rationalized. Up to x = 0.4, the dominant process can be described in a simplified way by the net reaction equation P⁽¹⁾ + B⁽¹⁾ ⇌ P⁽⁰⁾ + B⁽⁴⁾, where the superscripts denote the number of bridging-sulfur atoms on the respective network-former species. Above x = 0.4, all of the thiophosphate units are of the P⁽⁰⁾ type, and both pyrothioborate (B⁽¹⁾) and orthothioborate (B⁽⁰⁾) species make increasing contributions to the network structure with increasing x. In sharp contrast to the situation in sodium borophosphate glasses, four-coordinated thioborate species are generally less abundant, and heteroatomic B-S-P linkages appear not to exist. On the basis of this structural information, compositional trends in the ionic conductivities are discussed in relation to the nature of the charge-compensating anionic species and the spatial distribution of the charge carriers.
Abstract:
Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data; here, this type of learning is called low-level classification. The human (animal) brain, on the other hand, performs both low and high orders of learning, and readily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also pattern formation is here referred to as high-level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low-level term can be implemented by any classification technique, while the high-level term is realized by extracting features of the underlying network constructed from the input data. Thus, the former classifies test instances by their physical features or class topologies, while the latter measures the compliance of test instances with the pattern formation of the data. Our study shows that the proposed technique not only realizes classification according to pattern formation, but is also able to improve the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, for example through greater mixture among different classes, a larger weight on the high-level term is required for correct classification. This confirms that high-level classification has special importance in complex classification scenarios. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images, yielding an improvement in the overall pattern recognition rate.
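A common way to express such a hybrid is a convex combination of the two scores per class. The toy sketch below uses k-NN for the low-level term and a deliberately crude stand-in for the high-level term (how naturally the test point attaches to each class's neighborhood structure); the weighting λ, the conformity measure, and the data are illustrative, not the paper's formulation, which uses richer complex-network measures.

```python
# Toy hybrid classifier: convex combination of a low-level k-NN score
# and a crude "high-level" structural-conformity score per class.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

X, y = make_moons(n_samples=300, noise=0.25, random_state=1)
Xtr, ytr, xtest = X[:-1], y[:-1], X[-1]

knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
low = knn.predict_proba([xtest])[0]          # low-level (physical) term

def conformity(x, Xc, k=5):
    """Compare x's mean distance to its k nearest class members against
    the class's own typical k-NN spacing; higher means x conforms."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(Xc)
    d_class = nn.kneighbors(Xc)[0][:, 1:].mean()       # typical in-class spacing
    d_x = nn.kneighbors([x], n_neighbors=k)[0].mean()  # x's spacing to the class
    return d_class / (d_class + d_x)

high = np.array([conformity(xtest, Xtr[ytr == c]) for c in (0, 1)])
high = high / high.sum()

lam = 0.3                                    # weight of the high-level term
combined = (1 - lam) * low + lam * high
print("low:", low, "high:", high, "-> class", combined.argmax())
```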
Abstract:
Background: In epidemiological surveys, good reliability among examiners in the caries detection method is essential. However, training and calibrating those examiners is an arduous task, because it involves several patients who are examined many times. To facilitate this step, we propose a laboratory methodology to simulate the examinations performed to detect caries lesions using the International Caries Detection and Assessment System (ICDAS) in epidemiological surveys. Methods: A benchmark examiner conducted all training sessions. A total of 67 exfoliated primary teeth, ranging from sound to extensively cavitated, were set in seven arch models to simulate complete mouths in the primary dentition. Sixteen examiners (graduate students) evaluated all surfaces of the teeth under illumination, using buccal mirrors and a ball-ended probe, on two occasions, using only the coronal primary caries scores of the ICDAS. As the reference standard, two different examiners assessed the proximal surfaces by direct visual inspection, classifying them as sound, with non-cavitated lesions, or with cavitated lesions. Afterwards, the teeth were sectioned in the bucco-lingual direction, and the examiners assessed the sections under a stereomicroscope, classifying the occlusal and smooth surfaces according to lesion depth. Inter-examiner reproducibility was evaluated using weighted kappa. Sensitivities and specificities were calculated at two thresholds: all lesions, and advanced lesions (cavitated lesions on proximal surfaces and lesions reaching the dentine on occlusal and smooth surfaces). Conclusion: The methodology proposed for the training and calibration of several examiners designated for epidemiological surveys of dental caries in preschool children using the ICDAS is feasible, permitting the assessment of the examiners' reliability and accuracy prior to the survey.
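Inter-examiner agreement on ordinal ICDAS scores is conventionally quantified with a weighted kappa. A minimal sketch with invented score vectors (the choice of linear weights is an assumption; the abstract does not state the weighting scheme):

```python
# Weighted kappa between two examiners' ICDAS scores (ordinal 0-6 scale).
# The score vectors are invented for illustration.
from sklearn.metrics import cohen_kappa_score

examiner_a = [0, 2, 2, 3, 5, 6, 1, 0, 4, 2]
examiner_b = [0, 2, 3, 3, 5, 5, 1, 0, 4, 1]

# Linear weights penalize disagreements in proportion to their distance
# on the ordinal scale.
print(cohen_kappa_score(examiner_a, examiner_b, weights="linear"))
```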
Abstract:
A security layer that provides user authentication and authorization and keeps track of every operation performed does not exempt a network from computer security incidents, which can stem from attempts to access hosts through illicit privilege escalation or from classic malicious programs such as viruses, trojans and worms. One remedy for identifying potential threats is to deploy an IDS (Intrusion Detection System) device, tasked with analysing the traffic and comparing it against a set of signatures referring to known intrusion scenarios. Even with high hardware processing capacity, the available resources may not be sufficient to guarantee correct operation of the service over all the traffic crossing a network. The goal of this thesis is to build an application that performs a preliminary analysis, so as to lighten the volume of data to be submitted to the IDS in the actual traffic-scanning phase. To do this, it exploits statistics computed on data supplied directly by the network devices, trying to identify traffic that uses known protocols and can therefore, with good probability, be judged harmless.
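The thesis itself works from device-supplied statistics; as a rough illustration of the pre-filtering idea, the sketch below passes through flow records whose protocol and port match well-known services and forwards only the remainder to the IDS. The record format, whitelist, and byte threshold are all invented.

```python
# Toy pre-filter: let flows that look like well-known, benign protocols
# bypass deep inspection, and hand everything else to the IDS.
from dataclasses import dataclass

@dataclass
class Flow:
    proto: str       # "tcp" / "udp"
    dst_port: int
    bytes: int

# Hypothetical whitelist of (protocol, destination port) pairs.
WELL_KNOWN = {("tcp", 80), ("tcp", 443), ("tcp", 25), ("udp", 53)}

def prefilter(flows, max_bytes=10_000_000):
    """Split flows into (probably benign, to be scanned by the IDS)."""
    benign, to_ids = [], []
    for f in flows:
        if (f.proto, f.dst_port) in WELL_KNOWN and f.bytes < max_bytes:
            benign.append(f)
        else:
            to_ids.append(f)
    return benign, to_ids

flows = [Flow("tcp", 443, 12_000), Flow("udp", 53, 300), Flow("tcp", 31337, 5_000)]
benign, to_ids = prefilter(flows)
print(len(benign), "skipped;", len(to_ids), "sent to the IDS")
```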
Abstract:
This thesis work falls within the area of high-dimensional data classification, developing an algorithm based on the Discriminant Analysis method. It classifies samples through variables taken in pairs, building a network from those pairs whose performance is sufficiently high. The algorithm then exploits topological properties of networks (in particular, the search for subnetworks and centrality measures of individual nodes) to obtain various signatures (subsets of the initial variables) with optimal classification performance and low dimensionality (of the order of 10¹, at least a factor of 10³ smaller than the number of starting variables in the problems addressed). To do so, the algorithm comprises a network-definition part and a signature selection and reduction part, recomputing the classification performance at each step via cross-validation tests (k-fold or leave-one-out). Given the high number of variables involved in the problems addressed, of the order of 10⁴, the algorithm necessarily had to be implemented on a high-performance computer, with the most expensive parts of the C++ code parallelized, namely the actual computation of the discriminant and the final sorting of the results. The application studied here concerns high-throughput genetic data on gene expression at the cellular level, a field in which databases frequently consist of a large number of variables (10⁴-10⁵) against a small number of samples (10¹-10²). In the medical-clinical field, determining low-dimensional signatures for the discrimination and classification of samples (e.g. healthy/diseased, responder/non-responder, etc.) is a problem of fundamental importance, for example for devising personalized therapeutic strategies for specific subgroups of patients through diagnostic kits for expression-profile analysis applicable on a large scale. The analysis carried out in this thesis on various kinds of real data shows that the proposed method, also in comparison with other existing methods, whether network-based or not, delivers excellent performance, producing signatures with high classification performance while keeping the number of variables used for this purpose very small.
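A compact way to picture that pipeline (pairwise discriminant screening, a network over the well-performing pairs, then centrality-driven signature selection) is sketched below, with scikit-learn's LDA and synthetic data standing in for the thesis's C++/HPC implementation; the edge threshold and signature size are arbitrary.

```python
# Sketch of the pipeline: score variable pairs with a cross-validated
# discriminant, connect high-scoring pairs in a network, and rank
# variables by centrality to extract a low-dimensional signature.
from itertools import combinations
import networkx as nx
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 60, 20                     # few samples, more variables (toy scale)
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, p))
X[y == 1, :3] += 1.5              # only the first 3 variables are informative

G = nx.Graph()
for i, j in combinations(range(p), 2):
    acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, [i, j]], y, cv=5).mean()
    if acc > 0.75:                # keep only well-discriminating pairs
        G.add_edge(i, j, weight=acc)

centrality = nx.degree_centrality(G)
signature = sorted(centrality, key=centrality.get, reverse=True)[:5]
print("candidate signature:", signature)
```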
Abstract:
Tape drives have so far been the dominant technology for storing the data volumes arising in archival systems. With access patterns becoming ever more active, and with storage media such as hard disks catching up in cost, the architecture of storage systems for archiving needs to be reconsidered. Reliability, integrity and durability are the main requirements of digital archiving. However, access speed also gains importance when active archives make their entire contents available for direct access. A tape-based system cannot deliver the parallelism, latency and throughput this requires, which is usually compensated for by disk-based systems acting as an intermediate cache. In this work we investigate the challenges and possibilities of developing a disk-based storage system that targets high reliability and energy efficiency and is suitable for both active and cold archive environments. First we analyse the storage systems and access patterns of a large digital archive, thereby presenting a possible field of application for our architecture. We then introduce mechanisms to improve the reliability of a single hard disk, and present and evaluate a new, energy-efficient, two-dimensional RAID approach optimized for write-once, read-many access. Finally, we present logging and caching mechanisms that support the underlying goals, and evaluate the RAID system in a file system environment.
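The abstract does not spell out the RAID layout; as a generic illustration of two-dimensional redundancy, the sketch below computes row and column XOR parity over a grid of data blocks, so a lost block can be rebuilt from either its row or its column. The grid dimensions and block size are invented.

```python
# Generic two-dimensional XOR parity over a grid of data blocks: each
# row and each column gets a parity block, allowing reconstruction of a
# lost block from the surviving blocks of its row (or column).
from functools import reduce
import numpy as np

def xor_parity(blocks):
    return reduce(np.bitwise_xor, blocks)

rng = np.random.default_rng(0)
grid = rng.integers(0, 256, size=(3, 4, 16), dtype=np.uint8)  # 3x4 blocks of 16 B

row_par = [xor_parity(grid[r]) for r in range(3)]
col_par = [xor_parity(grid[:, c]) for c in range(4)]

# Lose block (1, 2) and rebuild it from the rest of its row + row parity.
lost = grid[1, 2].copy()
rebuilt = xor_parity([grid[1, c] for c in range(4) if c != 2] + [row_par[1]])
print("recovered:", np.array_equal(rebuilt, lost))
```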