973 results for "Extração de características" (feature extraction)
Abstract:
Graduate Program in Mechanical Engineering - FEIS
Abstract:
Information extraction is a frequent and relevant problem in digital signal processing. In recent years, different methods have been used to parameterize signals and obtain efficient descriptors. When the signals have cyclostationary statistical properties, the Cyclic Autocorrelation Function (CAF) and the Spectral Cyclic Density (SCD) can be used to extract second-order cyclostationary information. However, second-order cyclostationary information is insufficient for non-Gaussian signals, since in this case the cyclostationary analysis should also comprise higher-order statistical information. This work proposes a new mathematical tool for higher-order cyclostationary analysis based on the correntropy function. Specifically, cyclostationary analysis is revisited from an information-theoretic perspective, and the Cyclic Correntropy Function (CCF) and the Cyclic Correntropy Spectral Density (CCSD) are defined. Furthermore, it is analytically proven that the CCF contains information about second- and higher-order cyclostationary moments, making it a generalization of the CAF. The performance of these new functions in extracting higher-order cyclostationary characteristics is analyzed in a wireless communication system subject to non-Gaussian noise.
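For illustration only (not code from the work), a minimal sketch of how a sample estimate of a Cyclic Correntropy Function could be computed with a Gaussian kernel; the estimator form, signal model and parameter values below are assumptions.

```python
import numpy as np

def cyclic_correntropy(x, tau_max, alphas, sigma=1.0):
    """Sample estimate of a Cyclic Correntropy Function (illustrative sketch).

    V(alpha, tau) ~ (1/N) * sum_n k(x[n], x[n-tau]) * exp(-j*2*pi*alpha*n),
    with a Gaussian kernel k of width sigma; alpha is in cycles per sample.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(N)
    ccf = np.zeros((len(alphas), tau_max + 1), dtype=complex)
    for tau in range(tau_max + 1):
        shifted = np.roll(x, tau)                               # x[n - tau] (circular, for simplicity)
        k = np.exp(-((x - shifted) ** 2) / (2 * sigma ** 2))    # Gaussian kernel values
        for i, a in enumerate(alphas):
            ccf[i, tau] = np.mean(k * np.exp(-2j * np.pi * a * n))
    return ccf

# Toy example: a BPSK-like cyclostationary signal with impulsive (non-Gaussian) noise
fs, fc = 1000.0, 100.0
t = np.arange(0, 1, 1 / fs)
symbols = np.repeat(np.sign(np.random.randn(100)), 10)
x = symbols * np.cos(2 * np.pi * fc * t) + 0.5 * np.random.standard_t(df=2, size=len(t))
ccf = cyclic_correntropy(x, tau_max=50, alphas=[0.0, 2 * fc / fs])
```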
Abstract:
This dissertation aims to verify the contribution of different line-extraction approaches to the classification of multispectral images, with possible use in the discrimination and mapping of land-cover classes. In this context, different feature-extraction techniques for extracting transmission lines in rural areas are compared, namely enhancement techniques based on contrast variation and morphological filtering, edge detection using the Canny filter and the SUSAN detector, and, as line-extraction techniques, the Hough Transform and the Radon Transform, using different algorithms on aerial and remote-sensing images. The image analysis process with different approaches leads to varied results for different types of land cover. These results were evaluated and compared, producing efficiency tables for each procedure. These tables point to different courses of action, whose approach varies depending on the final objective of the transmission line extraction.
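A minimal sketch (assuming OpenCV, with an illustrative file name and thresholds) of one such chain: contrast enhancement, Canny edge detection, and line extraction with a probabilistic Hough transform.

```python
import cv2
import numpy as np

# Illustrative pipeline only; the input file and all thresholds are assumptions.
img = cv2.imread("aerial_scene.png", cv2.IMREAD_GRAYSCALE)

stretched = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)   # simple contrast stretching
edges = cv2.Canny(stretched, 50, 150)                           # edge map

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=60, maxLineGap=10)
output = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(output, (x1, y1), (x2, y2), (0, 0, 255), 2)    # overlay detected line segments
cv2.imwrite("lines_overlay.png", output)
```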
Abstract:
Skeletal muscle consists of muscle fiber types that have different physiological and biochemical characteristics. Basically, muscle fibers can be classified into type I and type II, which differ, among other features, in contraction speed and sensitivity to fatigue. These fibers coexist in the skeletal muscles, and their relative proportions are modulated according to the muscle's function and the stimuli to which it is submitted. To identify the different proportions of fiber types in the muscle composition, many studies use biopsy as the standard procedure. Since surface electromyography (sEMG) allows information about the recruitment of different motor units to be extracted, this study is based on the assumption that it is possible to use the EMG to identify different proportions of fiber types in a muscle. The goal of this study was to identify the characteristics of EMG signals that can distinguish, most precisely, different proportions of fiber types. The combination of characteristics through appropriate mathematical models was also investigated. To achieve the proposed objective, simulated signals were generated with different proportions of recruited motor units and with different signal-to-noise ratios. Thirteen characteristics in the time and frequency domains were extracted from the emulated signals. The results for each extracted feature were submitted to the k-means clustering algorithm to separate the different proportions of motor units recruited in the emulated signals. Mathematical techniques (confusion matrix and capability analysis) were applied to select the characteristics able to identify different proportions of muscle fiber types. As a result, the mean frequency and the median frequency were selected as being able to distinguish, with greater precision, the proportions of different muscle fiber types. Subsequently, the features considered most capable were analyzed jointly through principal component analysis. Two principal components were found for the signals emulated without noise (CP1 and CP2) and two for the noisy signals (also denoted CP1 and CP2). The first principal component of each set was identified as being able to distinguish different proportions of muscle fiber types. The selected characteristics (median frequency, mean frequency and the first principal components) were used to analyze real sEMG signals, comparing sedentary people with physically active people who practice strength training (weight training). The results obtained with the different groups of volunteers show that the physically active people obtained higher values of mean frequency, median frequency and principal components than the sedentary people. Moreover, these values decreased with increasing power level for both groups; however, the decline was more pronounced for the group of physically active people. Based on these results, it is assumed that the volunteers in the physically active group have higher proportions of type II fibers than the sedentary people. Finally, we can conclude that the selected characteristics were able to distinguish different proportions of muscle fiber types, both for the emulated signals and for the real signals. These characteristics can be used in several studies, for example, to evaluate the progress of people with myopathy or neuromyopathy undergoing physiotherapy, and also to analyze the development of athletes in order to improve their muscle capacity according to their sport.
In both cases, the extraction of these characteristics from the surface electromyography signals provides feedback to the physiotherapist and the physical trainer, who can analyze the increase in the proportion of a given fiber type, as desired in each case.
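The two spectral features selected in the study can be estimated from the power spectrum of each EMG segment. The sketch below, assuming SciPy and scikit-learn and using placeholder signals, shows one way to compute them and feed them to k-means; names and parameters are illustrative.

```python
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans

def mean_and_median_frequency(emg, fs):
    """Mean and median frequency of an EMG segment from its Welch power spectrum."""
    f, pxx = welch(emg, fs=fs, nperseg=min(1024, len(emg)))
    mean_f = np.sum(f * pxx) / np.sum(pxx)            # spectral centroid
    cum = np.cumsum(pxx)
    median_f = f[np.searchsorted(cum, cum[-1] / 2.0)]  # frequency splitting power in half
    return mean_f, median_f

# Illustrative use: cluster segments by their spectral features (two clusters standing
# in for different proportions of recruited motor units; signals here are placeholders).
fs = 2000.0
segments = [np.random.randn(4000) for _ in range(20)]
features = np.array([mean_and_median_frequency(s, fs) for s in segments])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
```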
Abstract:
Forensic speaker comparison examinations have complex characteristics and demand a long time for manual analysis. A method for the automatic recognition of vowels, providing feature extraction for acoustic analysis, is proposed, aiming to serve as a support tool in these examinations. The proposal is based on formant measurements by LPC (Linear Predictive Coding), applied selectively through fundamental frequency detection, zero-crossing rate, bandwidth and continuity, with clustering performed by the k-means method. Experiments using samples from three different databases have shown promising results, in which the regions corresponding to five of the Brazilian Portuguese vowels were successfully located, providing visualization of a speaker's vocal tract behavior, as well as detection of the segments corresponding to target vowels.
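A minimal sketch of LPC-based formant measurement of a voiced frame, assuming librosa is available; the LPC order, thresholds and synthetic frame are illustrative, not the dissertation's actual settings.

```python
import numpy as np
import librosa

def lpc_formants(frame, fs, order=12):
    """Rough formant estimates from the roots of an LPC polynomial (illustrative only)."""
    a = librosa.lpc(frame.astype(float), order=order)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]                  # keep one root per conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)         # pole angle -> frequency in Hz
    bws = -(fs / np.pi) * np.log(np.abs(roots))        # pole radius -> bandwidth in Hz
    keep = (freqs > 90) & (bws < 400)                  # discard implausible candidates
    return np.sort(freqs[keep])

# Example on a synthetic vowel-like frame with three resonances
fs = 16000
t = np.arange(0, 0.03, 1 / fs)
frame = sum(np.sin(2 * np.pi * f * t) for f in (700, 1220, 2600)) + 0.01 * np.random.randn(len(t))
print(lpc_formants(frame, fs)[:3])   # approximate F1, F2, F3
```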
Abstract:
Final Master's Project for obtaining the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Dissertation for obtaining the degree of Master in Electrical Engineering, branch of Automation and Industrial Electronics
Abstract:
Project Work for obtaining the degree of Master in Electronics and Telecommunications Engineering
Abstract:
This dissertation presents a study of automatic image processing systems in the context of a problem related to the individualization of neurons in images of the nematode C. elegans during studies related to Parkinson's disease. A brief introduction to the anatomy of the worm is given, together with an introduction to Parkinson's disease and the use of C. elegans in related studies, and an analysis of image processing papers is carried out to contextualize the current state of solutions to the problem of extracting features and specific regions. In this project, a pipeline is developed with the aid of the CellProfiler software to seek an answer to the problem in question.
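For context only, a sketch of the kind of steps such a pipeline typically chains (smoothing, thresholding, object labeling), written here with scikit-image rather than CellProfiler; the file name and parameters are assumptions, not the project's actual pipeline.

```python
from skimage import filters, measure, morphology
from skimage.io import imread

# Illustrative segmentation steps for individualizing bright neuron-like objects.
img = imread("c_elegans_neurons.tif", as_gray=True)

smoothed = filters.gaussian(img, sigma=2)
mask = smoothed > filters.threshold_otsu(smoothed)        # global Otsu threshold
mask = morphology.remove_small_objects(mask, min_size=30) # drop spurious specks
labels = measure.label(mask)                              # one integer label per candidate neuron
props = measure.regionprops(labels)
print(f"{labels.max()} candidate neurons; areas: {[int(p.area) for p in props]}")
```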
Abstract:
This work was carried out in the area of automatic speech recognition (ASR). Currently, most ASR systems are based on hidden Markov models (HMMs) [GOM 99] [GOM 99b], either using them exclusively or combining them with other techniques to form hybrid systems. The statistical approach of HMMs has proven to be one of the most powerful tools available for the acoustic and temporal modeling of the speech signal. Improving the recognition rate requires more complex algorithms [RAV 96]. Increasing the vocabulary size or the number of speakers requires additional computational processing. Certain applications, such as speaker verification or dialogue recognition, may require real-time processing [DOD 85] [MAM 96]. Other applications, such as toys or portable devices, may add requirements of portability, low power consumption and a physically compact system. Such needs call for a hardware solution. The present work proposes the implementation of an ASR system in hardware based on FPGAs (Field Programmable Gate Arrays), optimizing the algorithms used in ASR. A study was made of ASR systems and of the techniques that most systems use in each of their stages. Special emphasis was given to hidden Markov models, their algorithms for probability computation, training and state decoding, and their application in ASR systems. A comparative study of hardware systems produced by other research centers was carried out, identifying some of their most relevant characteristics. A software model, described in this work, was implemented to validate the ASR algorithms and to assist in the hardware specification. A set of digital functions implemented in FPGA, necessary for the development of ASR systems, is described. Some modifications were made to the ASR algorithms to facilitate their digital implementation. The interconnection of the designed digital functions to implement an isolated-word recognition system is presented. Finally, this work presents the FPGA implementation of the pre-processing stage, which includes pre-emphasis, windowing and feature extraction, and the implementation of the recognition stage.
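As one concrete piece of the HMM machinery mentioned above, a minimal sketch of the forward probability computation in the log domain; this is a generic textbook formulation, not the hardware-oriented version developed in the work, and all values are illustrative.

```python
import numpy as np

def forward_log_likelihood(log_A, log_B, log_pi):
    """Forward algorithm for a discrete-state HMM, in the log domain for stability.

    log_A:  (S, S) log transition matrix, log_A[i, j] = log P(s_j | s_i)
    log_B:  (T, S) log likelihood of each observation under each state
    log_pi: (S,)   log initial state distribution
    Returns log P(observations | model).
    """
    T, S = log_B.shape
    alpha = log_pi + log_B[0]                                        # initialization
    for t in range(1, T):
        # alpha_t(j) = logsumexp_i(alpha_{t-1}(i) + log a_ij) + log b_j(o_t)
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[t]
    return np.logaddexp.reduce(alpha)

# Tiny example: a 2-state model scoring a 4-frame observation sequence
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_pi = np.log([0.6, 0.4])
log_B = np.log(np.array([[0.9, 0.2], [0.8, 0.3], [0.1, 0.7], [0.2, 0.6]]))
print(forward_log_likelihood(log_A, log_B, log_pi))
```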
Abstract:
Skin cancer is the most common of all cancers, and the increase in its incidence is due, in part, to people's behavior regarding sun exposure. In Brazil, non-melanoma skin cancer is the most incident type in the majority of regions. Dermatoscopy and videodermatoscopy are the main types of examination for the diagnosis of dermatological skin diseases. The field involving the use of computational tools to support medical diagnosis of dermatological lesions is very recent. Several methods have been proposed for the automatic classification of skin pathologies using images. The present work aims to present a new intelligent methodology for the analysis and classification of skin cancer images, based on digital image processing techniques for the extraction of color, shape and texture characteristics, using the Wavelet Packet Transform (WPT) and the learning technique known as Support Vector Machine (SVM). The Wavelet Packet Transform is applied to extract texture characteristics from the images. The WPT consists of a set of basis functions that represents the image in different frequency bands, each one with a distinct resolution corresponding to each scale. Moreover, the color characteristics of the lesion are also computed; these depend on the visual context and are influenced by the colors present in its surroundings. The shape attributes are obtained through Fourier descriptors. The Support Vector Machine is used for the classification task; it is based on the structural risk minimization principle, derived from statistical learning theory. The SVM aims to construct optimal hyperplanes that represent the separation between classes. The generated hyperplane is determined by a subset of the samples, called support vectors. For the database used in this work, the results revealed good performance, achieving a global accuracy of 92.73% for melanoma and 86% for non-melanoma and benign lesions. The extracted descriptors and the SVM classifier constitute a method capable of recognizing and classifying the analyzed skin lesions.
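A minimal sketch of wavelet-packet texture energies fed to an SVM, assuming PyWavelets and scikit-learn; the random placeholder images, wavelet and parameters are assumptions, and the color and shape descriptors described above are omitted.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wpt_texture_features(gray_image, wavelet="db2", level=2):
    """Energy of each wavelet-packet subband as a simple texture descriptor."""
    wp = pywt.WaveletPacket2D(data=gray_image, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.mean(node.data ** 2) for node in nodes])

# Placeholder training data; in the actual work these would be dermoscopy images
# labeled melanoma / non-melanoma, with color and Fourier shape descriptors appended.
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)
X = np.array([wpt_texture_features(img) for img in images])
clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
```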
Abstract:
The human voice is an important communication tool, and any voice disorder can have profound implications for the social and professional life of an individual. Digital signal processing techniques have been used in the acoustic analysis of vocal disorders caused by pathologies in the larynx, due to their simplicity and noninvasive nature. This work deals with the acoustic analysis of voice signals affected by pathologies in the larynx, specifically edema and nodules on the vocal folds. The purpose of this work is to develop a voice classification system to aid the pre-diagnosis of pathologies in the larynx, as well as the monitoring of pharmacological treatments and post-surgery follow-up. Linear Prediction Coefficients (LPC), Mel Frequency Cepstral Coefficients (MFCC) and coefficients obtained through the Wavelet Packet Transform (WPT) are applied to extract relevant characteristics of the voice signal. For the classification task, the Support Vector Machine (SVM) is used, which aims to build optimal hyperplanes that maximize the margin of separation between the classes involved. The generated hyperplane is determined by the support vectors, which are subsets of points of these classes. With the database used in this work, the results showed good performance, with a hit rate of 98.46% for the classification of normal versus pathological voices in general, and 98.75% in the joint classification of the pathologies edema and nodules.
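A minimal sketch, assuming librosa and scikit-learn and using hypothetical file names, of the MFCC-plus-SVM portion of the approach described above; the LPC and WPT feature sets are omitted.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    """Mean MFCC vector of a voice recording (one of the feature sets cited above)."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical file lists; in the dissertation these would be sustained-vowel
# recordings labeled normal vs. pathological (edema, nodules).
normal_files = ["normal_01.wav", "normal_02.wav"]
pathological_files = ["edema_01.wav", "nodule_01.wav"]
X = np.array([mfcc_features(f) for f in normal_files + pathological_files])
y = np.array([0] * len(normal_files) + [1] * len(pathological_files))
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
```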
Abstract:
With the rapid growth of databases of various types (text, multimedia, etc.), there is a need for methods to order, access and retrieve data in a simple and fast way. Image databases, in addition to these needs, require a representation of the images in which the characteristics of their semantic content are considered. Accordingly, several proposals have been made, such as retrieval based on textual annotations. In the annotation approach, retrieval is based on the comparison between the textual description that a user can make of the images and the descriptions of the images stored in the database. Among its drawbacks, the textual description is highly dependent on the observer, in addition to the computational effort required to describe all the images in the database. Another approach is content-based image retrieval (CBIR), where each image is represented by low-level features such as color, shape and texture. The results in the CBIR area have been very promising; however, representing image semantics through low-level features remains an open problem. New feature extraction algorithms, as well as new indexing methods, have been proposed in the literature, but these algorithms have become increasingly complex. It is therefore natural to ask: is there a relationship between the semantics and the low-level features extracted from an image? If such a relationship exists, which descriptors best represent the semantics? This leads to a further question: how should descriptors be used to represent the content of the images? The work presented in this thesis proposes a method to analyze the relationship between low-level descriptors and semantics in an attempt to answer these questions. Furthermore, it was observed that there are three ways of indexing images: using composite feature vectors, using parallel and independent index structures (one for each descriptor or set of descriptors), and using feature vectors arranged in a sequential order. The first two forms have been widely studied and applied in the literature, but there was no record of the third having been explored. This thesis therefore also proposes indexing with a sequential structure of descriptors, in which the order of the descriptors is based on the relationship between each descriptor and the users' semantics. Finally, the index proposed in this thesis proved better than the traditional approaches, and it was shown experimentally that the order of the sequence matters and that there is a direct relationship between this order and the relationship of the low-level descriptors with the users' semantics.
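To make the sequential-indexing idea concrete, a sketch using a color histogram as the first descriptor and a placeholder second descriptor; the descriptor ordering, cutoffs and data below are assumptions for illustration, not the thesis's actual index structure.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Simple low-level color descriptor: normalized joint RGB histogram."""
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def sequential_retrieval(query_descs, db_descs, shortlist=50, top_k=10):
    """Rank with the descriptor assumed most related to user semantics,
    keep a shortlist, then re-rank the shortlist with the next descriptor."""
    d0 = np.linalg.norm(db_descs[0] - query_descs[0], axis=1)
    candidates = np.argsort(d0)[:shortlist]
    d1 = np.linalg.norm(db_descs[1][candidates] - query_descs[1], axis=1)
    return candidates[np.argsort(d1)[:top_k]]

# Toy usage with random images standing in for an image database
rng = np.random.default_rng(1)
db_images = [rng.integers(0, 256, (32, 32, 3)) for _ in range(200)]
db_color = np.array([color_histogram(im) for im in db_images])
db_texture = rng.random((200, 16))                     # placeholder second descriptor
query = (color_histogram(db_images[0]), db_texture[0])
print(sequential_retrieval(query, (db_color, db_texture)))
```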
Abstract:
The need to implement a software architecture that supports the development of a SCADA supervisory system for monitoring simulated industrial processes, with the flexibility of adding intelligent modules and devices such as PLCs according to the specifications of the problem, was the motivation for this work. In the present study, we developed an intelligent supervisory system on top of a simulation of a distillation column modeled with Unisim. OLE Automation was used for communication between the supervisory and the simulation software, which, together with the use of a database, resulted in an architecture that is both scalable and easy to maintain. Moreover, intelligent modules were developed for preprocessing, feature extraction from data, and variable inference. These modules were fundamentally based on the Encog software.
Abstract:
In this work, Markov chains are the tool used in the modeling and convergence analysis of the genetic algorithm, both in its standard version and in the other versions that the genetic algorithm admits. In addition, we intend to compare the performance of the standard version with the fuzzy version, in the belief that the latter gives the genetic algorithm a strong ability to find a global optimum, as is characteristic of global optimization algorithms. The choice of this algorithm is due to the fact that, over the past thirty years, it has become one of the most important tools used to find solutions to optimization problems. This choice is also due to its effectiveness in finding a good-quality solution, considering that a good-quality solution is acceptable given that there may not be another algorithm able to obtain the optimal solution for many of these problems. However, this algorithm can be configured in different ways, depending not only on how the problem is represented but also on how some of the operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, to achieve good performance with this algorithm, an adequate criterion is needed for choosing its parameters, especially the mutation rate and the crossover rate, or even the population size. It is important to remember that, in implementations in which the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain, whereas when the parameters are allowed to vary during execution, the Markov chain that models it becomes non-homogeneous. Therefore, in an attempt to improve the algorithm's performance, some studies have tried to adjust the parameters through strategies that capture the intrinsic characteristics of the problem. These characteristics are extracted from the current state of the execution, in order to identify and preserve a pattern related to a good-quality solution while discarding patterns of low quality. Strategies for feature extraction can use either crisp techniques or fuzzy techniques, in the latter case through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm, both in its standard version and in the others. In order to evaluate the performance of a non-homogeneous algorithm, tests will be applied to compare the standard genetic algorithm with the fuzzy genetic algorithm, in which the mutation rate is adjusted by a fuzzy controller. To do so, optimization problems are chosen whose number of solutions grows exponentially with the number of variables.
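For illustration, a tiny binary genetic algorithm in which the mutation rate is adapted from a simple population-diversity measure; this is a crude stand-in for the fuzzy controller mentioned above, and the test problem, operators and adaptation rule are all assumptions.

```python
import numpy as np

def onemax_ga(n_bits=40, pop_size=60, generations=100, seed=0):
    """Binary GA on the OneMax problem with a diversity-driven mutation rate."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, (pop_size, n_bits))
    for _ in range(generations):
        fitness = pop.sum(axis=1)                              # OneMax: count of ones
        diversity = pop.std(axis=0).mean()                     # rough diversity measure
        mutation_rate = 0.002 + 0.05 * (0.5 - min(diversity, 0.5))  # low diversity -> more mutation
        # binary tournament selection
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(fitness[idx[:, 0]] >= fitness[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # one-point crossover on consecutive parent pairs
        cut = rng.integers(1, n_bits, pop_size // 2)
        children = parents.copy()
        for i, c in enumerate(cut):
            children[2 * i, c:], children[2 * i + 1, c:] = parents[2 * i + 1, c:], parents[2 * i, c:]
        # bit-flip mutation with the adapted rate
        mask = rng.random(children.shape) < mutation_rate
        pop = np.where(mask, 1 - children, children)
    return pop[pop.sum(axis=1).argmax()]

best = onemax_ga()
print(best.sum(), "of", len(best), "bits set")
```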