875 results for Voltage disturbance detection and classification
Abstract:
In this paper, we present a novel coarse-to-fine visual localization approach: contextual visual localization. This approach relies on three elements: (i) a minimal-complexity classifier for performing fast coarse localization (submap classification); (ii) an optimized saliency detector which exploits the visual statistics of the submap; and (iii) a fast view-matching algorithm which filters initial matchings with a structural criterion. The latter algorithm yields fine localization. Our experiments show that these elements have been successfully integrated to solve the global localization problem. Context, that is, the awareness of being in a particular submap, is defined by a supervised classifier tuned for a minimal set of features. Visual context is exploited both for tuning (optimizing) the saliency detection process and for selecting potential matching views in the visual database that are close enough to the query view.
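As an illustration of the coarse-to-fine idea described above, the following minimal sketch first assigns a query descriptor to a submap with a lightweight classifier and then matches only against that submap's views. The descriptors, submap layout, and nearest-centroid classifier are synthetic placeholders, not the authors' implementation.

```python
# Minimal coarse-to-fine localization sketch (synthetic data, assumed pipeline).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: 3 submaps, 20 reference views each, 64-D global descriptors.
n_submaps, views_per_submap, dim = 3, 20, 64
db = rng.normal(size=(n_submaps, views_per_submap, dim))
submap_centroids = db.mean(axis=1)                      # coarse model: one centroid per submap

def localize(query):
    # Coarse step: minimal-complexity classifier (nearest centroid -> submap id).
    submap = int(np.argmin(np.linalg.norm(submap_centroids - query, axis=1)))
    # Fine step: match only against views of the selected submap (context pruning).
    view = int(np.argmin(np.linalg.norm(db[submap] - query, axis=1)))
    return submap, view

query_view = db[1, 7] + 0.05 * rng.normal(size=dim)     # a perturbed view from submap 1
print(localize(query_view))                             # expected: (1, 7)
```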
Abstract:
The growing importance and influence of new resources connected to power systems have caused many changes in their operation. Environmental policies and several well-known advantages have made renewable-based energy resources widely disseminated. These resources, including Distributed Generation (DG), are being connected at lower voltage levels, where Demand Response (DR) must also be considered. These changes increase the complexity of system operation due to both new operational constraints and the amount of data to be processed. Virtual Power Players (VPP) are entities able to manage these resources. Addressing these issues, this paper proposes a methodology to support VPP actions when the VPP acts as a Curtailment Service Provider (CSP) that provides DR capacity to a DR program declared by the Independent System Operator (ISO) or by the VPP itself. The amount of DR capacity that the CSP can assure is determined using data mining techniques applied to a database obtained for a large set of operation scenarios. The paper includes a case study based on 27,000 scenarios considering a diversity of distributed resources in a 33-bus distribution network.
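A simple sketch of the kind of data-mining step described above: a regression model learned from a scenario database predicts the DR capacity the CSP can assure. The scenario features, the target, and the choice of a decision tree are illustrative assumptions, not the paper's model.

```python
# Illustrative data-mining sketch over a synthetic scenario database (assumed setup).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n_scenarios = 27_000

# Hypothetical scenario features: total load, DG output, price signal, hour of day.
X = rng.uniform(size=(n_scenarios, 4))
# Hypothetical "assured DR capacity" target with some structure plus noise.
y = 50 * X[:, 0] - 20 * X[:, 1] + 10 * X[:, 2] + rng.normal(scale=2.0, size=n_scenarios)

model = DecisionTreeRegressor(max_depth=6, random_state=0)
model.fit(X[:25_000], y[:25_000])                       # train on most scenarios
print("held-out R^2:", model.score(X[25_000:], y[25_000:]))
```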
Abstract:
This paper presents solutions for fault detection and diagnosis of two-level, three-phase voltage-source inverter (VSI) topologies with IGBT devices. The proposed solutions combine redundant standby VSI structures and contactors (or relays) to improve the fault-tolerant capabilities of power electronics in applications with safety requirements. The suitable combination of these elements gives the inverter the ability to maintain energy processing in the event of several failure modes, including short circuits in IGBT devices, thus enhancing its reliability and availability. A survey of previously developed fault-tolerant VSI structures and several aspects of failure modes, detection, and isolation mechanisms within VSIs is first discussed. Hardware solutions for the protection of power semiconductors with fault detection and diagnosis mechanisms are then proposed to provide the conditions to isolate and replace damaged power devices (or branches) in real time. Experimental results from a prototype are included to validate the proposed solutions.
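The fault detection and isolation logic can be pictured with a purely illustrative supervisory sketch: a persistent mismatch between the commanded and measured leg voltage flags a damaged leg so a standby branch can be switched in. The threshold, persistence count, and the monitoring scheme itself are assumptions for illustration, not the paper's hardware mechanism.

```python
# Illustrative supervisory fault-detection logic (assumed scheme, synthetic values).
FAULT_THRESHOLD_V = 50.0      # assumed deviation threshold
FAULT_PERSISTENCE = 3         # assumed number of consecutive out-of-range samples

class LegMonitor:
    def __init__(self):
        self.counter = 0
        self.faulty = False

    def update(self, v_commanded, v_measured):
        # Count consecutive samples where the measured pole voltage deviates too much.
        if abs(v_commanded - v_measured) > FAULT_THRESHOLD_V:
            self.counter += 1
        else:
            self.counter = 0
        if self.counter >= FAULT_PERSISTENCE:
            self.faulty = True          # detection -> isolate leg, enable standby branch
        return self.faulty

monitor = LegMonitor()
for vc, vm in [(400, 398), (400, 120), (400, 110), (400, 100)]:   # simulated short-circuit
    if monitor.update(vc, vm):
        print("fault detected: isolate damaged leg, engage standby branch")
        break
```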
Abstract:
The present study was performed to assess the interlaboratory reproducibility of the molecular detection and identification of species of Zygomycetes from formalin-fixed paraffin-embedded kidney and brain tissues obtained from experimentally infected mice. Animals were infected with one of five species (Rhizopus oryzae, Rhizopus microsporus, Lichtheimia corymbifera, Rhizomucor pusillus, and Mucor circinelloides). Samples with 1, 10, or 30 slide cuts of the tissues were prepared from each paraffin block, the sample identities were blinded for analysis, and the samples were mailed to each of seven laboratories for the assessment of sensitivity. A protocol describing the extraction method and the PCR amplification procedure was provided. The internal transcribed spacer 1 (ITS1) region was amplified by PCR with the fungal universal primers ITS1 and ITS2 and sequenced. As negative results were obtained for 93% of the tissue specimens infected by M. circinelloides, the data for this species were excluded from the analysis. Positive PCR results were obtained for 93% (52/56), 89% (50/56), and 27% (15/56) of the samples with 30, 10, and 1 slide cuts, respectively. There were minor differences, depending on the organ tissue, fungal species, and laboratory. Correct species identification was possible for 100% (30 cuts), 98% (10 cuts), and 93% (1 cut) of the cases. With the protocol used in the present study, the interlaboratory reproducibility of ITS sequencing for the identification of major Zygomycetes species from formalin-fixed paraffin-embedded tissues can reach 100%, when enough material is available.
Abstract:
The project aims at advancing the state of the art in the use of context information for the classification of image and video data. The use of context in image classification has been shown to be of great importance for improving the performance of current object recognition systems. In this project we proposed the concept of Multi-scale Feature Labels as a general and compact method to exploit local and global context. The extraction of features from the discriminative probability or classification confidence label field is highly novel. Moreover, the use of a multi-scale representation of the feature labels leads to a compact and efficient description of the context. The goal of the project has also been to provide a general-purpose method and to prove its suitability for different image/video analysis problems. The two-year project generated 5 journal publications (plus 2 under submission), 10 conference publications (plus 2 under submission) and one patent (plus 1 pending). A relevant number of these publications make use of the main result of this project to improve results in the detection and/or segmentation of objects.
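A minimal sketch of the multi-scale idea, as I read it from the abstract: per-pixel class-confidence maps are pooled at several spatial scales and sampled around a location to form a compact context descriptor. The map size, class count, scales, and pooling scheme are placeholders, not the project's implementation.

```python
# Sketch of multi-scale pooling of a classification-confidence label field (assumed setup).
import numpy as np

def average_pool(prob_map, factor):
    """Block-average a (H, W, C) confidence map by an integer factor."""
    h, w, c = prob_map.shape
    h2, w2 = h // factor, w // factor
    trimmed = prob_map[:h2 * factor, :w2 * factor]
    return trimmed.reshape(h2, factor, w2, factor, c).mean(axis=(1, 3))

def context_descriptor(prob_map, row, col, scales=(1, 4, 16)):
    """Stack class confidences around (row, col) pooled at several scales."""
    feats = []
    for s in scales:
        pooled = average_pool(prob_map, s)
        feats.append(pooled[row // s, col // s])        # local label plus wider context
    return np.concatenate(feats)

rng = np.random.default_rng(2)
confidence = rng.uniform(size=(64, 64, 5))              # hypothetical 5-class confidence field
print(context_descriptor(confidence, row=10, col=20).shape)   # (15,) = 5 classes x 3 scales
```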
Abstract:
OBJECTIVE: To evaluate the power of various parameters of the vestibulo-ocular reflex (VOR) in detecting unilateral peripheral vestibular dysfunction and in characterizing certain inner ear pathologies. STUDY DESIGN: Prospective study of consecutive ambulatory patients presenting with acute onset of peripheral vertigo and spontaneous nystagmus. SETTING: Tertiary referral center. PATIENTS: Seventy-four patients (40 females, 34 males) and 22 normal subjects (11 females, 11 males) were included in the study. Patients were classified in three main diagnoses: vestibular neuritis: 40; viral labyrinthitis: 22; Meniere's disease: 12. METHODS: The VOR function was evaluated by standard caloric and impulse rotary tests (velocity step). A mathematical model of vestibular function was used to characterize the VOR response to rotational stimulation. The diagnostic value of the different VOR parameters was assessed by uni- and multivariable logistic regression. RESULTS: In univariable analysis, caloric asymmetry emerged as the most powerful VOR parameter in identifying unilateral vestibular deficit, with a boundary limit set at 20%. In multivariable analysis, the combination of caloric asymmetry and rotational time constant asymmetry significantly improved the discriminatory power over caloric alone (p<0.0001) and produced a detection score with a correct classification of 92.4%. In discriminating labyrinthine diseases, different combinations of the VOR parameters were obtained for each diagnosis (p<0.003) supporting that the VOR characteristics differ between the three inner ear disorders. However, the clinical usefulness of these characteristics in separating the pathologies was limited. CONCLUSION: We propose a powerful logistic model combining the indices of caloric and time constant asymmetries to detect a peripheral vestibular loss, with an accuracy of 92.4%. Based on vestibular data only, the discrimination between the different inner ear diseases is statistically possible, which supports different pathophysiologic changes in labyrinthine pathologies.
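The combined detection score described above can be pictured as a two-predictor logistic regression on caloric asymmetry and time-constant asymmetry. The sketch below uses synthetic numbers and a generic fit; it is not the study's model or its 92.4% classifier.

```python
# Illustrative two-predictor logistic model (synthetic data, assumed distributions).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 96
caloric_asym = np.r_[rng.normal(35, 12, n // 2), rng.normal(8, 6, n // 2)]   # patients, controls
tc_asym      = np.r_[rng.normal(40, 15, n // 2), rng.normal(10, 8, n // 2)]
X = np.c_[caloric_asym, tc_asym]
y = np.r_[np.ones(n // 2), np.zeros(n // 2)]            # 1 = unilateral vestibular deficit

model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", model.score(X, y))
print("P(deficit | caloric=25%, tc=30%):", model.predict_proba([[25.0, 30.0]])[0, 1])
```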
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). The simple k-nearest neighbor algorithm is considered as a benchmark model. PNN is a neural-network reformulation of well-known nonparametric principles of probability density modeling using kernel density estimation and Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNNs is that they can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVMs have been successfully applied to different environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper both simulated and real data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the applied algorithms.
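The PNN principle stated above (a kernel density estimate per class combined with a Bayes decision rule) fits in a few lines. This is a generic numpy sketch with synthetic 2-D data and an arbitrary bandwidth, not the paper's case studies.

```python
# Minimal PNN sketch: Gaussian Parzen-window class densities + maximum a posteriori rule.
import numpy as np

def pnn_predict(X_train, y_train, X_query, bandwidth=0.5):
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        # Kernel density estimate of the class-conditional density at each query point.
        d2 = ((X_query[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        density = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
        prior = len(Xc) / len(X_train)
        scores.append(prior * density)                  # posterior up to a common constant
    return classes[np.argmax(np.stack(scores), axis=0)]

rng = np.random.default_rng(4)
X_train = np.r_[rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))]
y_train = np.r_[np.zeros(50), np.ones(50)]
print(pnn_predict(X_train, y_train, np.array([[0.2, 0.1], [2.8, 3.1]])))   # expect [0. 1.]
```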
Abstract:
Raman spectroscopy combined with chemometrics has recently become a widespread technique for the analysis of pharmaceutical solid forms. The application presented in this paper is the investigation of counterfeit medicines. This increasingly serious issue involves networks that are an integral part of industrialized organized crime, and efficient analytical tools are consequently required to fight against it. Quick and reliable authentication means are needed to allow the deployment of measures by the company and the authorities. For this purpose, a two-step method has been implemented here. The first step enables the identification of pharmaceutical tablets and capsules and the detection of their counterfeits. A nonlinear classification method, the Support Vector Machine (SVM), is computed together with a correlation with the database and the detection of Active Pharmaceutical Ingredient (API) peaks in the suspect product. If a counterfeit is detected, the second step allows its chemical profiling among former counterfeits from a forensic intelligence perspective. For this second step, a classification based on Principal Component Analysis (PCA) and correlation distance measurements is applied to the Raman spectra of the counterfeits.
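The two-step structure can be sketched as: an SVM flags suspect spectra, and a PCA plus correlation-distance comparison profiles the counterfeit library. The spectra below are random placeholders and the chemometric choices are generic; this is not the authors' workflow.

```python
# Hedged two-step sketch on synthetic "spectra" (assumed setup, not the paper's data).
import numpy as np
from sklearn.svm import SVC
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
genuine = rng.normal(0.0, 1.0, (40, 300))               # 300-point placeholder spectra
counterfeit = rng.normal(0.8, 1.0, (40, 300))
X = np.r_[genuine, counterfeit]
y = np.r_[np.zeros(40), np.ones(40)]

# Step 1: identification / counterfeit detection.
detector = SVC(kernel="rbf").fit(X, y)
suspect = rng.normal(0.8, 1.0, (5, 300))
flags = detector.predict(suspect)                       # 1 = flagged as counterfeit

# Step 2: chemical profiling among former counterfeits (PCA + correlation distance).
scores = PCA(n_components=3).fit_transform(counterfeit)
profile_dist = 1.0 - np.corrcoef(scores)                # pairwise correlation distances
print(flags, profile_dist.shape)
```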
Abstract:
In machine learning, classification is the process of assigning a new observation to a given category. Classifiers, which implement classification algorithms, have been studied extensively over the past decades. Traditional classifiers are based on algorithms such as SVM and neural networks and are generally run as software on CPUs, which leaves the system short on performance and heavy on energy consumption. Although GPUs can be used to accelerate the computation of certain classifiers, their high power consumption prevents the technology from being deployed on portable devices such as embedded systems. To make the classification system lighter, classifiers should be able to run on more compact hardware instead of a cluster of CPUs or GPUs, and the classifiers themselves should be optimized for that hardware. In this thesis, we explore the implementation of a novel classifier on an FPGA-based hardware platform. The classifier, designed by Alain Tapp (Université de Montréal), is based on a large number of lookup tables that form tree-shaped circuits performing the classification tasks. The FPGA appears tailor-made to implement this classifier, with its rich lookup-table resources and highly parallel architecture. Our work shows that FPGAs can implement several classifiers and perform classification on high-definition images at very high speed.
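To give a feel for a lookup-table tree classifier, here is a small software emulation: each 4-input LUT maps 4 bits from the layer below to 1 output bit, and the root bit is the predicted class. The LUT width, depth, and random LUT contents are assumptions for illustration; this is not the classifier designed by Alain Tapp nor an FPGA implementation.

```python
# Software emulation sketch of a LUT tree classifier (assumed structure, random contents).
import random

random.seed(0)
LUT_INPUTS = 4

def random_lut():
    return [random.randint(0, 1) for _ in range(2 ** LUT_INPUTS)]

def lut_tree_classify(bits, luts_per_layer):
    """Reduce an input bit vector to one bit through successive LUT layers."""
    layer = list(bits)
    for luts in luts_per_layer:
        next_layer = []
        for i, lut in enumerate(luts):
            group = layer[i * LUT_INPUTS:(i + 1) * LUT_INPUTS]
            index = int("".join(map(str, group)), 2)    # 4 bits -> LUT address
            next_layer.append(lut[index])
        layer = next_layer
    return layer[0]                                     # root LUT output = class bit

# 16 input bits -> 4 LUTs -> 1 LUT (a two-layer tree).
layers = [[random_lut() for _ in range(4)], [random_lut()]]
print(lut_tree_classify([random.randint(0, 1) for _ in range(16)], layers))
```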
Abstract:
Thermal processing of food affects its quality and nutritional properties. In the household, monitoring the temperature inside the food is very difficult, and knowledge of the optimal temperature and time parameters for different dishes is often insufficient. Optimal control of thermal preparation depends largely on the type of food and on the external and internal temperature exposure during cooking. The aim of this work was to develop an automatic oven capable of recognizing the type of food and calculating the temperature inside the food during baking. The data required for the temperature calculation were acquired with several sensors: an infrared thermometer, an infrared distance sensor, a camera, a temperature sensor, and a lambda probe inside the oven, as well as a load cell, current and voltage sensors, and a temperature sensor outside the oven. The data sets recorded during the heat-up phase enabled the training of several artificial neural networks that assigned the different foods to the corresponding categories in order to select the optimal baking program. Several artificial neural networks were also trained to estimate the thermal diffusivity of the food, which depends on its composition (carbohydrates, fat, protein, water). With the exception of the fat fraction, all components could be estimated sufficiently accurately by ANNs with a maximum of 8 hidden neurons to compute the temperature inside the food on that basis. The work shows that, using a variety of sensors for the direct and indirect measurement of the external properties of the food, together with ANNs for categorization and estimation of the food composition, the automatic recognition and calculation of the internal temperature of a wide range of foods is possible.
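As a rough illustration of the composition-estimation step, the sketch below trains a feed-forward network with at most 8 hidden neurons to estimate one composition fraction (here, water content) from heat-up-phase sensor features. The feature set, target relationship, and network framework are assumptions, not the thesis' trained models.

```python
# Hedged sketch of a small ANN composition estimator (synthetic features and target).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
n = 500
# Hypothetical features: surface temperature, weight, IR distance, oven temperature, humidity proxy.
X = rng.uniform(size=(n, 5))
water_content = 0.3 + 0.5 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.02, size=n)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X[:400], water_content[:400])                   # train on most samples
print("held-out R^2:", net.score(X[400:], water_content[400:]))
```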
Abstract:
In this report, a face recognition system capable of detecting and recognizing frontal and rotated faces was developed. Two face recognition methods focusing on pose invariance are presented and evaluated: the whole-face approach and the component-based approach. The main challenge of this project is to develop a system able to identify faces under different viewing angles in real time. The development of such a system will enhance the capability and robustness of current face recognition technology. The whole-face approach recognizes faces by classifying a single feature vector consisting of the gray values of the whole face image. The component-based approach first locates the facial components and extracts them. These components are normalized and combined into a single feature vector for classification. The Support Vector Machine (SVM) is used as the classifier for both approaches. Extensive tests with respect to robustness against pose changes are performed on a database that includes faces rotated up to about 40 degrees in depth. The component-based approach clearly outperforms the whole-face approach on all tests. Although this approach is proven to be more reliable, it is still too slow for real-time applications. For this reason, a real-time face recognition system using the whole-face approach was implemented to recognize people in color video sequences.
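The component-based feature construction can be sketched as: crop a few component regions from a gray face image, normalize each, concatenate the pixel values, and train an SVM on the resulting vectors. The component boxes, image size, and synthetic faces below are placeholders, not the report's detector or data.

```python
# Minimal component-based feature + SVM sketch (assumed component layout, synthetic faces).
import numpy as np
from sklearn.svm import SVC

# Hypothetical component boxes in a 64x64 face image: (row, col, height, width).
COMPONENTS = [(10, 12, 12, 16), (10, 36, 12, 16), (30, 24, 14, 16), (46, 20, 10, 24)]

def component_vector(face):
    parts = []
    for r, c, h, w in COMPONENTS:
        patch = face[r:r + h, c:c + w].astype(float)
        patch = (patch - patch.mean()) / (patch.std() + 1e-8)   # per-component normalization
        parts.append(patch.ravel())
    return np.concatenate(parts)

rng = np.random.default_rng(7)
faces = rng.integers(0, 256, size=(20, 64, 64))         # 20 synthetic "faces", 2 identities
labels = np.repeat([0, 1], 10)
X = np.stack([component_vector(f) for f in faces])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X[:2]))
```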
Abstract:
Temporal misalignment is the mismatch between two signals caused by a distortion of the time axis. Fault Detection and Diagnosis (FDD) enables the detection, diagnosis, and correction of faults in a process. FDD methodologies fall into two categories: model-based and non-model-based techniques. This doctoral thesis studies the effect of temporal misalignment on FDD. Our attention focuses on the analysis and design of FDD systems in the presence of data-communication problems such as delays and losses. Two techniques are proposed to mitigate these problems: one based on dynamic programming and the other on optimization. The proposed methods have been validated on different dynamic systems: the position control of a DC motor, a laboratory plant, and a power-systems problem known as voltage sag.
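The dynamic-programming idea can be illustrated with classic dynamic time warping, which compares two signals whose time axes are misaligned. This is a generic textbook sketch on simulated signals, not the thesis' algorithm.

```python
# Illustrative dynamic-programming alignment (classic DTW) on a simulated delayed signal.
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping cost between two 1-D signals with distorted time axes."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

t = np.linspace(0, 2 * np.pi, 80)
reference = np.sin(t)
delayed = np.sin(t - 0.6)                               # simulated communication delay
print("Euclidean:", float(np.linalg.norm(reference - delayed)))
print("DTW      :", float(dtw_distance(reference, delayed)))   # much smaller after warping
```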
Abstract:
Although the oral cavity is easily accessible to inspection, patients with oral cancer most often present at a late stage, leading to high morbidity and mortality. Autofluorescence imaging has emerged as a promising technology to aid clinicians in screening for oral neoplasia and as an aid to resection, but current approaches rely on subjective interpretation. We present a new method to objectively delineate neoplastic oral mucosa using autofluorescence imaging. Autofluorescence images were obtained from 56 patients with oral lesions and 11 normal volunteers. From these images, 276 measurements from 159 unique regions of interest (ROI) sites corresponding to normal and confirmed neoplastic areas were identified. Data from ROIs in the first 46 subjects were used to develop a simple classification algorithm based on the ratio of red-to-green fluorescence; performance of this algorithm was then validated using data from the ROIs in the last 21 subjects. This algorithm was applied to patient images to create visual disease probability maps across the field of view. Histologic sections of resected tissue were used to validate the disease probability maps. The best discrimination between neoplastic and nonneoplastic areas was obtained at 405 nm excitation; normal tissue could be discriminated from dysplasia and invasive cancer with a 95.9% sensitivity and 96.2% specificity in the training set, and with a 100% sensitivity and 91.4% specificity in the validation set. Disease probability maps qualitatively agreed with both clinical impression and histology. Autofluorescence imaging coupled with objective image analysis provided a sensitive and noninvasive tool for the detection of oral neoplasia.
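The classification rule described above reduces to a per-pixel red-to-green ratio mapped to a disease-probability-like score. The channels, threshold, and sigmoid scaling below are arbitrary placeholders for illustration, not the study's trained cutoff or calibrated map.

```python
# Simple red-to-green ratio map sketch (synthetic channels, assumed threshold).
import numpy as np

rng = np.random.default_rng(8)
red = rng.uniform(0.2, 1.0, size=(128, 128))            # placeholder autofluorescence channels
green = rng.uniform(0.2, 1.0, size=(128, 128))

ratio = red / (green + 1e-8)
threshold = 1.2                                         # assumed decision boundary
disease_map = 1.0 / (1.0 + np.exp(-8.0 * (ratio - threshold)))   # soft probability-style map
print("fraction of pixels above threshold:", float((ratio > threshold).mean()))
```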
Detection and Identification of Abnormalities in Customer Consumptions in Power Distribution Systems
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Malware has become a major threat in recent years due to the ease of spreading through the Internet. Malware detection has become more difficult with the use of compression, polymorphic methods, and techniques that detect and disable security software. These and other obfuscation techniques pose a problem for detection and classification schemes that analyze malware behavior. In this paper we propose a distributed architecture to improve malware collection using different honeypot technologies to increase the variety of malware collected. We also present a daemon tool developed to capture malware distributed through spam and a pre-classification technique that uses antivirus technology to separate malware into generic classes. © 2009 SPIE.
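A pre-classification step of this kind can be pictured as mapping free-text antivirus detection labels onto generic classes with simple keyword rules. The label strings and keyword table below are made up for illustration and no real antivirus engine is called; this is not the authors' tool.

```python
# Hedged sketch of antivirus-label pre-classification into generic classes (made-up labels).
GENERIC_CLASSES = {
    "worm": ["worm"],
    "trojan": ["trojan", "troj"],
    "backdoor": ["backdoor", "bkdr"],
    "virus": ["virus"],
}

def pre_classify(av_label):
    label = av_label.lower()
    for generic, keywords in GENERIC_CLASSES.items():
        if any(k in label for k in keywords):
            return generic
    return "unknown"

samples = ["Worm.Win32.Example", "Troj/Agent-XY", "BKDR_SAMPLE.A", "Packed.Generic.1"]
print({s: pre_classify(s) for s in samples})
```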