80 results for Feature Extraction (Extração de características)

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

100.00%

Publisher:

Abstract:

Information extraction is a frequent and relevant problem in digital signal processing. In recent years, different methods have been used to parameterize signals and obtain efficient descriptors. When signals possess cyclostationary statistical properties, the Cyclic Autocorrelation Function (CAF) and the Spectral Cyclic Density (SCD) can be used to extract second-order cyclostationary information. However, second-order statistics are insufficient for non-Gaussian signals, whose cyclostationary analysis must also capture higher-order statistical information. This paper proposes a new mathematical tool for higher-order cyclostationary analysis based on the correntropy function. Specifically, cyclostationary analysis is revisited from an information-theoretic perspective, and the Cyclic Correntropy Function (CCF) and the Cyclic Correntropy Spectral Density (CCSD) are defined. It is also proven analytically that the CCF contains information on second- and higher-order cyclostationary moments, making it a generalization of the CAF. The performance of these new functions in extracting higher-order cyclostationary features is analyzed in a wireless communication system subject to non-Gaussian noise.
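As a toy illustration of the quantity involved, the sketch below estimates a cyclic correntropy spectrum for a synthetic amplitude-modulated noise signal; the Gaussian kernel width, the signal model, and the candidate cyclic frequencies are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cyclic_correntropy(x, tau, alphas, sigma=1.0):
    """Estimate the Cyclic Correntropy Function of a 1-D signal.

    For each cyclic frequency alpha, projects the instantaneous
    Gaussian-kernel correntropy v[n] = exp(-(x[n] - x[n+tau])^2 / (2 sigma^2))
    onto the complex exponential exp(-j 2 pi alpha n).
    """
    n = np.arange(len(x) - tau)
    v = np.exp(-(x[n] - x[n + tau]) ** 2 / (2.0 * sigma ** 2))
    return np.array([np.mean(v * np.exp(-2j * np.pi * a * n)) for a in alphas])

# Toy cyclostationary signal: noise amplitude-modulated with period 8 samples.
rng = np.random.default_rng(0)
N = 4096
carrier = 1.0 + 0.5 * np.cos(2 * np.pi * np.arange(N) / 8)
x = carrier * rng.normal(size=N)

alphas = [0.0, 1.0 / 8, 0.17]   # candidate cyclic frequencies
ccf = np.abs(cyclic_correntropy(x, tau=1, alphas=alphas, sigma=1.0))
```

The true cycle frequency 1/8 stands out against the arbitrary 0.17, while alpha = 0 recovers the mean correntropy.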

Relevance:

60.00%

Publisher:

Abstract:

Skin cancer is the most common of all cancers, and the increase in its incidence is due, in part, to people's behavior regarding sun exposure. In Brazil, non-melanoma skin cancer is the most frequent in most regions. Dermatoscopy and videodermatoscopy are the main types of examination for diagnosing dermatological diseases of the skin. The use of computational tools to support medical diagnosis of dermatological lesions is a very recent field, and several methods have been proposed for the automatic classification of skin pathologies from images. The present work presents a new intelligent methodology for the analysis and classification of skin cancer images, based on digital image processing techniques for the extraction of color, shape, and texture features, using the Wavelet Packet Transform (WPT) and a learning technique called the Support Vector Machine (SVM). The Wavelet Packet Transform is applied to extract texture features from the images. The WPT consists of a set of basis functions that represent the image in different frequency bands, each with a distinct resolution corresponding to each scale. In addition, the color features of the lesion are computed; these depend on the visual context, influenced by the colors in the surrounding area, while the shape attributes are obtained through Fourier descriptors. The Support Vector Machine, which is grounded in the structural risk minimization principle from statistical learning theory, is used for the classification task. The SVM constructs optimal hyperplanes that separate the classes; the generated hyperplane is determined by a subset of the training examples, called support vectors.
For the database used in this work, the results revealed good performance, with an overall accuracy of 92.73% for melanoma and 86% for non-melanoma and benign lesions. The extracted descriptors and the SVM classifier constitute a method capable of recognizing and classifying the analyzed skin lesions.
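The texture stage described above can be pictured with a minimal sketch: one level of a 2-D Haar decomposition (the first split of a wavelet packet tree), whose subband energies serve as texture features. The filter choice and the toy patches are simplifying assumptions, not the thesis pipeline.

```python
import numpy as np

def haar_subband_energies(img):
    """One level of a 2-D Haar decomposition; returns the energies of the
    LL, LH, HL and HH subbands, a simple texture descriptor."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-pass in both directions
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return np.array([np.sum(s ** 2) for s in (ll, lh, hl, hh)])

# Smooth patch versus high-frequency checkerboard patch.
smooth = np.ones((8, 8))
checker = np.indices((8, 8)).sum(axis=0) % 2 * 2.0 - 1.0
e_smooth = haar_subband_energies(smooth)
e_checker = haar_subband_energies(checker)
```

The smooth patch concentrates its energy in the low-pass band, the checkerboard in the diagonal detail band; a wavelet packet tree recurses this split on every subband.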

Relevance:

60.00%

Publisher:

Abstract:

The human voice is an important communication tool, and any voice disorder can have profound implications for an individual's social and professional life. Digital signal processing techniques have been used for the acoustic analysis of vocal disorders caused by laryngeal pathologies, due to their simplicity and noninvasive nature. This work deals with the acoustic analysis of voice signals affected by laryngeal pathologies, specifically edema and nodules on the vocal folds. Its purpose is to develop a voice classification system to aid the pre-diagnosis of laryngeal pathologies, as well as the monitoring of pharmacological treatments and post-surgical recovery. Linear Prediction Coefficients (LPC), Mel-Frequency Cepstral Coefficients (MFCC), and coefficients obtained through the Wavelet Packet Transform (WPT) are applied to extract relevant characteristics of the voice signal. The Support Vector Machine (SVM), which builds optimal hyperplanes that maximize the margin of separation between the classes involved, is used for the classification task; the generated hyperplane is determined by the support vectors, which are subsets of points of these classes. For the database used in this work, the results showed good performance, with an accuracy of 98.46% in the classification of normal versus pathological voices in general, and 98.75% in the classification between the two pathologies, edema and nodules.
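Of the three feature sets, the LPC stage is the easiest to sketch: the autocorrelation method with the Levinson-Durbin recursion, shown below on a synthetic second-order autoregressive "voice" signal. The signal model and the prediction order are illustrative choices, not the thesis's settings.

```python
import numpy as np

def lpc(signal, order):
    """Linear Prediction Coefficients via the autocorrelation method and
    the Levinson-Durbin recursion; returns [1, a1, ..., ap] and the
    final prediction error."""
    n = len(signal)
    r = np.array([np.dot(signal[:n - k], signal[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient from the current residual correlation.
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)
    return a, err

# Synthetic AR(2) "voice" frame: x[n] = 0.5 x[n-1] - 0.3 x[n-2] + e[n].
rng = np.random.default_rng(1)
e = rng.normal(size=20000)
x = np.zeros_like(e)
for i in range(2, len(e)):
    x[i] = 0.5 * x[i - 1] - 0.3 * x[i - 2] + e[i]

a, err = lpc(x, 2)   # should recover approximately [1, -0.5, 0.3]
```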

Relevance:

60.00%

Publisher:

Abstract:

With the rapid growth of databases of various types (text, multimedia, etc.), there is a need for methods to order, access, and retrieve data in a simple and fast way. Image databases, in addition to these needs, require a representation of the images in which the semantic content is taken into account. Accordingly, several approaches have been proposed, such as retrieval based on textual annotations. In the annotation approach, retrieval is based on the comparison between the textual description that a user gives of an image and the descriptions of the images stored in the database. Among its drawbacks, the textual description is highly dependent on the observer, and considerable effort is required to describe every image in the database. Another approach is content-based image retrieval (CBIR), where each image is represented by low-level features such as color, shape, and texture. Results in the CBIR area have been very promising; however, representing image semantics by low-level features remains an open problem. New feature-extraction algorithms and new indexing methods have been proposed in the literature, but these algorithms become increasingly complex. It is therefore natural to ask: is there a relationship between the semantics of an image and the low-level features extracted from it? If so, which descriptors best represent the semantics? And, in turn, how should descriptors be used to represent the content of the images? The work presented in this thesis proposes a method to analyze the relationship between low-level descriptors and semantics in an attempt to answer these questions.
It was further observed that there are three ways of indexing images: using composite feature vectors; using parallel and independent index structures (one for each descriptor or set of descriptors); and using feature vectors sorted in a sequential order. The first two forms have been widely studied and applied in the literature, but there was no record of the third having been explored. This thesis therefore also proposes indexing with a sequential structure of descriptors, in which the order of the descriptors is based on the relationship between each descriptor and the semantics of the users. Finally, the index proposed in this thesis proved better than the traditional approaches, and it was shown experimentally that the order in this sequence matters: there is a direct relationship between this order and the relationship of the low-level descriptors with the semantics of the users.
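A minimal sketch of the sequential (cascade) idea, under the assumption that each stage re-ranks and prunes the candidate set by one descriptor, so descriptors placed earlier in the sequence carry more weight. The pruning rule, the descriptor names, and the toy data are hypothetical, not the thesis's index.

```python
import numpy as np

def sequential_search(query, database, descriptor_order, keep=0.5, final_k=3):
    """Cascade search over per-descriptor feature matrices: each stage
    re-ranks the surviving candidates by one descriptor's distance and
    keeps only a fraction of them, so earlier descriptors dominate."""
    candidates = np.arange(len(next(iter(database.values()))))
    for name in descriptor_order:
        feats = database[name]
        d = np.linalg.norm(feats[candidates] - query[name], axis=1)
        order = np.argsort(d)
        n_keep = max(final_k, int(len(candidates) * keep))
        candidates = candidates[order[:n_keep]]
    return candidates[:final_k]

# Toy collection: 8 images, two hypothetical descriptors of 2 dimensions each.
rng = np.random.default_rng(4)
database = {"color": rng.normal(size=(8, 2)),
            "texture": rng.normal(size=(8, 2))}
# Query nearly identical to image 0 in both descriptors.
query = {"color": database["color"][0] + 0.01,
         "texture": database["texture"][0] + 0.01}

top = sequential_search(query, database, ["color", "texture"])
```

With "color" placed first in the sequence, image 0 survives every pruning stage and is returned as the best match.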

Relevance:

60.00%

Publisher:

Abstract:

The motivation for this work was the need for a software architecture that supports the development of a SCADA supervisory system for monitoring simulated industrial processes, with the flexibility to add intelligent modules and devices such as PLCs according to the specifications of the problem. In the present study, we developed an intelligent supervisory system on top of a simulation of a distillation column modeled in Unisim. OLE Automation was used for communication between the supervisory system and the simulation software, which, together with the use of a database, yielded an architecture that is both scalable and easy to maintain. In addition, intelligent modules were developed for preprocessing, feature extraction from data, and variable inference; these modules were fundamentally based on the Encog framework.

Relevance:

60.00%

Publisher:

Abstract:

In this work, Markov chains are the tool used for modeling and convergence analysis of the genetic algorithm, both in its standard version and in its variants. In addition, we compare the performance of the standard version with a fuzzy version, on the premise that the fuzzy version gives the genetic algorithm a greater ability to find a global optimum, as expected of global optimization algorithms. The choice of this algorithm is due to the fact that, over the past thirty years, it has become one of the most important tools for solving optimization problems. It is effective at finding a good-quality solution, and a good-quality solution is acceptable given that there may be no other algorithm able to obtain the optimal solution for many of these problems. The behavior of the algorithm depends not only on how the problem is represented but also on how its operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. To achieve good performance, the algorithm therefore needs an adequate criterion for choosing its parameters, especially the mutation rate, the crossover rate, and the population size. It is important to note that, in implementations where the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain yields a homogeneous chain, whereas allowing the parameters to vary during execution makes the modeling Markov chain non-homogeneous. Hence, in an attempt to improve the algorithm's performance, some studies have tried to set the parameters through strategies that capture intrinsic characteristics of the problem.
These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns related to good-quality solutions while discarding low-quality patterns. Feature-extraction strategies can use either crisp or fuzzy techniques, the latter implemented through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm, both in its standard version and in the others. To evaluate the performance of the non-homogeneous algorithm, tests are run comparing the standard genetic algorithm with the fuzzy genetic algorithm, whose mutation rate is adjusted by a fuzzy controller. For this purpose, optimization problems whose number of solutions grows exponentially with the number of variables are chosen.
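The parameter-adaptation idea can be sketched as follows: a genetic algorithm on the OneMax problem whose mutation rate is set each generation from the population diversity. The linear interpolation below is a crude stand-in for a fuzzy controller, and all rates and sizes are illustrative assumptions.

```python
import random

def adaptive_mutation_rate(diversity, low=0.01, high=0.1):
    """Crude stand-in for a fuzzy controller: interpolates the mutation
    rate between `high` (population nearly converged, diversity ~ 0)
    and `low` (diverse population, diversity ~ 1)."""
    d = max(0.0, min(1.0, diversity))
    return high + d * (low - high)

def onemax_ga(n_bits=32, pop_size=40, generations=120, seed=3):
    """Maximize the number of 1-bits; mutation rate varies per generation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Diversity: mean per-bit variance, normalized to [0, 1].
        freqs = [sum(ind[i] for ind in pop) / pop_size for i in range(n_bits)]
        diversity = sum(4 * f * (1 - f) for f in freqs) / n_bits
        pm = adaptive_mutation_rate(diversity)
        parents = sorted(pop, key=sum, reverse=True)[:pop_size // 2]
        pop = []
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < pm) for bit in child]
            pop.append(child)
    return max(sum(ind) for ind in pop)

best = onemax_ga()
```

Because the mutation rate changes with the population state, the transition probabilities change over time, which is exactly what makes the modeling Markov chain non-homogeneous.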

Relevance:

60.00%

Publisher:

Abstract:

This work proposes a method based on the theory of reflected electromagnetic waves to evaluate the behavior of these waves and the level of attenuation caused by bone tissue. For this purpose, two antennas with a microstrip structure and a resonance frequency of 2.44 GHz were built. The problem is relevant because osteometabolic diseases affect a large portion of the population, both men and women. With this method, the signal is classified into two groups: tissue with normal bone mass and tissue with low bone mass. Feature extraction (Wavelet Transform) and pattern recognition (KNN and ANN) techniques were used. The tests were performed on bovine bone and tissue treated with chemicals; the methodology and results are described in the work.
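The classification stage can be sketched with a plain k-nearest-neighbours vote over feature vectors; the two-dimensional "wavelet energy" features and class centers below are hypothetical stand-ins for the real antenna measurements.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-nearest-neighbours majority vote."""
    d = np.linalg.norm(X_train - x, axis=1)      # distances to all samples
    votes = y_train[np.argsort(d)[:k]]           # labels of the k closest
    return np.bincount(votes).argmax()

# Hypothetical 2-D feature vectors (e.g. wavelet subband energies) for
# "normal bone mass" (class 0) and "low bone mass" (class 1) signals.
rng = np.random.default_rng(7)
normal = rng.normal(loc=[5.0, 1.0], scale=0.5, size=(20, 2))
low = rng.normal(loc=[2.0, 3.0], scale=0.5, size=(20, 2))
X = np.vstack([normal, low])
y = np.array([0] * 20 + [1] * 20)

pred_normal = knn_predict(X, y, np.array([5.0, 1.0]))
pred_low = knn_predict(X, y, np.array([2.0, 3.0]))
```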


Relevance:

60.00%

Publisher:

Abstract:

Lung cancer is one of the most common types of cancer and has the highest mortality rate. Patient survival is highly correlated with early detection, and Computed Tomography greatly aids the early detection of lung cancer by offering a minimally invasive diagnostic tool. However, the large amount of data per examination makes interpretation difficult, which can lead to nodules being missed by the radiologist. This thesis presents the development of a computer-aided detection (CADe) tool for lung nodules in Computed Tomography studies. The system, called LCD-OpenPACS (Lung Cancer Detection - OpenPACS), is to be integrated into the OpenPACS system and satisfies all the requirements for use in the workflow of health facilities belonging to the SUS (the Brazilian public health system). LCD-OpenPACS uses image processing techniques (Region Growing and Watershed), feature extraction (Histogram of Oriented Gradients), dimensionality reduction (Principal Component Analysis), and a classifier (Support Vector Machine). The system was tested on 220 cases, totaling 296 pulmonary nodules, and achieved a sensitivity of 94.4% with 7.04 false positives per case. The total processing time was approximately 10 minutes per case. The system detected pulmonary nodules (solitary, juxtavascular, ground-glass opacity, and juxtapleural) between 3 mm and 30 mm.
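Among the techniques listed, region growing is simple to sketch: a breadth-first flood from a seed pixel that absorbs neighbours whose intensity stays close to the seed's. The 2-D toy image and the tolerance are illustrative; the thesis works on 3-D CT volumes with its own parameters.

```python
from collections import deque

import numpy as np

def region_grow(img, seed, tol=0.2):
    """Breadth-first region growing: starting from `seed`, absorb
    4-connected neighbours whose intensity is within `tol` of the
    seed intensity; returns a boolean mask of the grown region."""
    h, w = img.shape
    ref = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(img[nr, nc] - ref) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

# Bright 3x3 "nodule" on a dark background.
img = np.zeros((9, 9))
img[3:6, 3:6] = 1.0
mask = region_grow(img, seed=(4, 4))
```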

Relevance:

30.00%

Publisher:

Abstract:

In Simultaneous Localization and Mapping (SLAM), a robot placed at an unknown location in an arbitrary environment must be able to build a representation of that environment (a map) and localize itself within it at the same time, using only information captured by the robot's sensors and known control signals. Recently, driven by advances in computing power, work in this area has proposed using a video camera as the sensor, giving rise to Visual SLAM. Visual SLAM has several approaches, and the vast majority of them work by extracting features from the environment, computing the necessary correspondences, and from these estimating the required parameters. This work presents a monocular visual SLAM system that uses direct image registration to compute the image reprojection error, together with optimization methods that minimize this error, thus obtaining the robot pose and the map of the environment directly from the pixels of the images. The feature extraction and matching steps are therefore not needed, enabling the system to work well in environments where traditional approaches have difficulty. Moreover, by addressing the SLAM problem as proposed in this work, we avoid error propagation, a very common problem in traditional approaches. To deal with the high computational cost of this approach, several types of optimization methods were tested in order to find a good balance between estimate quality and processing time. The results presented in this work show the success of the system in different environments.
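The core of direct registration, minimising a photometric error computed straight from pixels with no feature extraction or matching, can be sketched with an exhaustive search over integer translations; this is a stand-in for the thesis's iterative optimisation, and the image and motion are synthetic.

```python
import numpy as np

def photometric_error(ref, cur, shift):
    """Mean squared pixel difference between `ref` and `cur` shifted by
    `shift` = (dy, dx), computed on the overlapping region only."""
    dy, dx = shift
    h, w = ref.shape
    a = ref[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    b = cur[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
    return np.mean((a - b) ** 2)

def register_translation(ref, cur, radius=3):
    """Exhaustive search for the integer shift minimising the error."""
    best = min(((photometric_error(ref, cur, (dy, dx)), (dy, dx))
                for dy in range(-radius, radius + 1)
                for dx in range(-radius, radius + 1)),
               key=lambda t: t[0])
    return best[1]

rng = np.random.default_rng(2)
ref = rng.random((32, 32))
cur = np.roll(ref, shift=(1, 2), axis=(0, 1))   # camera moved by (1, 2) pixels
est = register_translation(ref, cur)
```

The estimated shift maps `cur` back onto `ref`, so it is the negative of the simulated camera motion.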

Relevance:

30.00%

Publisher:

Abstract:

Extraction with pressurized fluids has become an attractive process for obtaining essential oils, mainly due to the specific characteristics of fluids near the critical region. This work presents results for the extraction of the essential oil of Cymbopogon winterianus J. with CO2 at high pressures. The effects of the following variables were evaluated: solvent flow rate (0.37 to 1.5 g CO2/min), pressure (66.7 and 75 bar), and temperature (8, 10, 15, 20, and 25 °C) on the extraction kinetics and the total yield of the process, as well as on the solubility and composition of the C. winterianus essential oil. The experimental apparatus consisted of a fixed-bed extractor, and the dynamic method was adopted for the calculation of the oil solubility. Extractions were also performed by conventional techniques (steam and organic-solvent extraction). The extract composition was determined and identified by gas chromatography coupled with mass spectrometry (GC-MS). The extract composition varied as a function of the operating conditions studied and of the extraction method used. The main components obtained in the CO2 extraction were elemol, geraniol, citronellol, and citronellal; in the steam extraction, citronellal, citronellol, and geraniol; and in the organic-solvent extraction, azulene and hexadecane. The highest yield (2.76%) and oil solubility (2.49 × 10⁻² g oil/g CO2) were obtained with the CO2 extraction under the operating conditions T = 10 °C, P = 66.7 bar, and a solvent flow rate of 0.85 g CO2/min.
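The dynamic method mentioned above reduces, in the solubility-controlled (initial, linear) part of the extraction curve, to taking the slope of cumulative oil mass against cumulative CO2 mass. The sketch below uses made-up numbers, not the thesis data.

```python
import numpy as np

# Cumulative CO2 mass passed through the bed and oil mass collected,
# illustrative points lying on the initial linear stretch of the curve.
co2_mass = np.array([0.0, 20.0, 40.0, 60.0, 80.0])   # g CO2
oil_mass = np.array([0.0, 0.50, 0.99, 1.51, 2.00])   # g oil extracted

# Dynamic method: solubility = slope of oil mass vs. solvent mass.
slope, intercept = np.polyfit(co2_mass, oil_mass, 1)
solubility = slope                                   # g oil / g CO2
```

With these numbers the fitted slope is about 0.025 g oil/g CO2, the same order as the 2.49 × 10⁻² g oil/g CO2 reported above.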

Relevance:

30.00%

Publisher:

Abstract:

With the growth and development of modern society arises the need to search for new raw materials and new technologies that are "clean", that do not harm the environment, and that can still meet the energy needs of industry and transportation. Moringa oleifera Lam, a plant originating in India and currently present in the Brazilian Northeast, is a multi-purpose plant: it can be used as a coagulant in water treatment, as a natural remedy, and as a feedstock for biodiesel production. In this work, Moringa was used as a raw material for studies on oil extraction and, subsequently, on the synthesis of biodiesel. Several Moringa oil extraction techniques were studied (solvents, mechanical pressing, and enzymatic extraction), and an experimental design was developed for aqueous extraction aided by the enzyme Neutrase© 0.8 L, with the aim of analyzing the influence of pH (5.5-7.5), temperature (45-55 °C), time (16-24 hours), and amount of catalyst (2-5%) on the extraction yield. For the synthesis of biodiesel, a conventional transesterification was initially carried out (50 °C, KOH as catalyst, methanol, 60 minutes of reaction). Next, the in situ transesterification technique was studied using an experimental design with temperature (30-60 °C), amount of catalyst (2-5%), and oil/ethanol molar ratio (1:420-1:600) as variables. The extraction technique with the highest yield (35%) was the one using hexane as solvent; extraction with ethanol yielded 32%, and mechanical pressing reached 25%. For the enzymatic extraction, the experimental design indicated that the yield was most affected by the combined effect of temperature and time, with a maximum yield of 16%.
After the oil was obtained, biodiesel was synthesized by the conventional method and by the in situ technique. The conventional transesterification yielded an ester content of 100%, and the in situ technique also reached 100% at experimental point 7, with an oil/alcohol molar ratio of 1:420, a temperature of 60 °C, 5 wt% KOH, and a reaction time of 1.5 h. The experimental design showed that the variable with the greatest influence on the ester content was the percentage of catalyst. Physico-chemical analysis showed that the biodiesel produced by the in situ method met the ANP specifications, making this technique feasible, since it does not require a preliminary oil-extraction stage and achieves high ester contents.

Relevance:

30.00%

Publisher:

Abstract:

The cultivation of microalgae biomass for biodiesel production is extremely promising, since microalgae culture features a short reproduction cycle, smaller planting areas, and a residual biomass rich in protein. This dissertation evaluates the performance and characterizes, through Fourier-transform infrared spectroscopy (FTIR) and UV-visible spectroscopy (UV-Vis), the lipid material (LM) extracted using different cell-wall disruption techniques (mechanical agitation at low and at high speed, and agitation combined with cavitation). Gas chromatography (GC) confirmed the success of the alkaline transesterification in converting the oil into methyl monoesters (MME), which were also analyzed by spectroscopic techniques (FTIR, proton magnetic resonance (1H NMR), and carbon magnetic resonance (13C NMR)). Thermogravimetric analysis (TGA) was applied to the lipid material (LM), the biodiesel, and the microalgae biomass. The method that provided the best LM extraction efficiency for Monoraphidium sp. (12.51%) was mechanical agitation at high speed (14,000 rpm), with 2 hours shown to be the ideal time by the t-test. The spectroscopic techniques (1H NMR, 13C NMR, and FTIR) confirmed the structure of the methyl monoesters, and the chromatographic data (GC) revealed a high content of saturated fatty acid esters (about 70%), the major constituent being eicosanoic acid (33.7%), which explains the high thermal stability of the microalgae biodiesel. The TGA also confirmed the conversion rate (96%) of LM into MME, with quantitative results compatible with the values obtained through GC (about 98%), and confirmed the efficiency of the extraction methods used, showing that TGA may be a good technique to verify the extraction of these materials.
The LM content obtained from the microalgae (12.51%) indicates good potential for using this material as a raw material for biodiesel production when compared with the oil content obtainable from traditional oil crops, since the productivity of microalgae per hectare is much larger and the time required to renew the cultivation is extremely short.

Relevance:

30.00%

Publisher:

Abstract:

Vascular segmentation is important for diagnosing vascular diseases such as stroke, and it is hampered by image noise and by very thin vessels that can go unnoticed. One way to accomplish the segmentation is to extract the centerline of the vessel with height ridges, using intensity as the feature for segmentation. This process can take from seconds to minutes, depending on the technology employed. In order to accelerate the segmentation method proposed by Aylward [Aylward & Bullitt 2002], we adapted it to run in parallel using the CUDA architecture. The performance of the segmentation method running on the GPU is compared both with the same method running on the CPU and with the original Aylward method running on the CPU. The improvement of the new method over the original one is twofold: first, the starting point for the segmentation process is not a single point in the blood vessel but a volume, making it easier for the user to segment a region of interest; second, the new method ran 873 times faster on the GPU, and 150 times faster on the CPU, than the original method on the CPU.
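The centerline idea can be sketched as a greedy ridge traversal: repeatedly step to the brightest unvisited neighbour until the intensity falls off the vessel. This toy, serial 2-D version only gestures at the Aylward & Bullitt height-ridge method and its CUDA port; the stopping rule and synthetic image are illustrative.

```python
import numpy as np

def trace_centerline(img, seed, max_steps=100):
    """Greedy ridge traversal: from `seed`, repeatedly move to the
    brightest unvisited 8-neighbour, stopping when the intensity drops
    well below the seed's (i.e. the walk leaves the vessel)."""
    h, w = img.shape
    path = [seed]
    visited = {seed}
    r, c = seed
    for _ in range(max_steps):
        nbrs = [(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < h and 0 <= c + dc < w
                and (r + dr, c + dc) not in visited]
        if not nbrs:
            break
        nxt = max(nbrs, key=lambda p: img[p])
        if img[nxt] < 0.5 * img[seed]:     # left the vessel: stop
            break
        visited.add(nxt)
        path.append(nxt)
        r, c = nxt
    return path

# Synthetic "vessel": a bright horizontal line on row 5.
img = np.zeros((11, 20))
img[5, :] = 1.0
path = trace_centerline(img, seed=(5, 0))
```

Because each candidate step depends only on local pixel values, this kind of traversal parallelises naturally, which is what the CUDA port above exploits.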