838 results for Wavelet transform analysis


Relevance:

30.00%

Publisher:

Abstract:

Textile industries use large amounts of water and a wide variety of synthetic dyes in their dyeing processes. Even a small concentration of these dyes in the environment can cause highly visible pollution and changes in aquatic ecosystems. Adsorption, biosorption, and biodegradation are the most advantageous dye removal processes. Biodegradation occurs when enzymes produced by certain microorganisms are capable of breaking down the dye molecule. Cell immobilization increases the efficiency of these processes: it enables the reuse of the immobilized cells and offers a high degree of mechanical strength, allowing metabolic processes to take place under adverse conditions. The aim of the present study was to investigate the use of Saccharomyces cerevisiae immobilized on activated sugarcane bagasse for the degradation of Acid Black 48 dye in aqueous solutions. To this end, sugarcane bagasse was treated with polyethyleneimine (PEI). Concentrations of a 1% S. cerevisiae suspension were evaluated to determine cell immobilization rates. Once immobilization was established, 240 h biodegradation assays with free yeast and with yeast immobilized on PEI-treated sugarcane bagasse were evaluated by Fourier-transform infrared spectrophotometry. The results indicated a probable change in the dye molecule and the possible formation of new metabolites. Thus, S. cerevisiae immobilized on sugarcane bagasse is very attractive for biodegradation processes in the treatment of textile effluents. © 2013 Springer Science+Business Media Dordrecht.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose a novel method for shape analysis called HTS (Hough Transform Statistics), which uses statistics from the Hough Transform space to characterize the shape of objects in digital images. Experimental results showed that the HTS descriptor is robust and presents better accuracy than some traditional shape description methods. Furthermore, the HTS algorithm has linear complexity, which is an important requirement for content-based image retrieval from large databases. © 2013 IEEE.
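The abstract above describes the HTS descriptor only at a high level. Below is a minimal, hedged sketch of the general idea: map a binary shape contour into the straight-line Hough space and summarize the accumulator with simple per-angle statistics that serve as a feature vector. The choice of statistics (per-angle mean and maximum), the normalization, and the use of scikit-image's hough_line are assumptions made for illustration, not the authors' exact formulation.

```python
# Illustrative sketch of "statistics from Hough Transform space" as a shape signature.
import numpy as np
from skimage.transform import hough_line

def hts_descriptor(contour_image, n_angles=180):
    """contour_image: 2D boolean array with the object's contour pixels set to True."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_angles, endpoint=False)
    accumulator, angles, dists = hough_line(contour_image, theta=theta)

    acc = accumulator.astype(float)
    acc /= acc.sum() + 1e-12                 # normalize so the signature is scale-tolerant

    # One statistic per angle (one column of the accumulator); concatenating the
    # per-angle mean and maximum yields a fixed-length feature vector.
    col_mean = acc.mean(axis=0)
    col_max = acc.max(axis=0)
    return np.concatenate([col_mean, col_max])
```

A descriptor built this way has a fixed length regardless of image size, so two shapes can be compared directly, for example with the Euclidean distance between their vectors.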

Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

30.00%

Publisher:

Abstract:

Graduate Program in Computer Science - IBILCE

Relevance:

30.00%

Publisher:

Abstract:

Graduate Program in Physics - IGCE

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

A seismic record is often represented as the convolution of a source pulse with the impulse response of the medium, which is related to the propagation path. The process of separating these two components of the convolution is called deconvolution. There is a variety of approaches to performing deconvolution. One of the most common is inverse linear filtering, that is, processing the composite signal through a linear filter whose frequency response is the reciprocal of the Fourier transform of one of the signal components. Obviously, in order to use inverse filtering, that component must be known or estimated. In this work, we address the application to seismic signals of a nonlinear deconvolution technique proposed by Oppenheim (1965), which uses the theory of a class of nonlinear systems that satisfy a generalized principle of superposition, called homomorphic systems. Such systems are particularly useful for separating signals that have been combined through the convolution operation. The homomorphic deconvolution algorithm transforms the convolution into an additive superposition of its components, with the result that the simpler parts can be separated more easily. This class of filtering techniques represents a generalization of linear filtering problems. The present method offers the considerable advantage that no prior assumption about the nature of the seismic source pulse or the impulse response of the medium is required, thus dispensing with the usual assumptions that the pulse is minimum-phase and that the distribution of the impulses is random, although the quality of the results obtained by homomorphic analysis is very sensitive to the signal-to-noise ratio, as demonstrated.
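Since the abstract describes homomorphic deconvolution only in words, the following is a minimal sketch of the technique on a 1-D trace under the convolutional model discussed above (trace = source pulse convolved with the impulse response): the complex logarithm of the spectrum turns the convolution into an addition, and liftering the resulting complex cepstrum separates the smooth, low-quefrency source pulse from the rest. The liftering cutoff n_cut and the function name are illustrative assumptions, not part of the original work.

```python
import numpy as np

def homomorphic_deconvolution(trace, n_cut=20):
    """Estimate the source pulse by low-pass liftering of the complex cepstrum."""
    n = len(trace)
    spectrum = np.fft.fft(trace)

    # The complex logarithm maps convolution (multiplication in frequency)
    # into addition; unwrapping the phase keeps the logarithm single-valued.
    log_spec = np.log(np.abs(spectrum) + 1e-12) + 1j * np.unwrap(np.angle(spectrum))
    cepstrum = np.fft.ifft(log_spec)

    # The smooth source pulse concentrates near the origin of the cepstrum
    # (low quefrency); keep only those samples with a low-pass lifter.
    lifter = np.zeros(n)
    lifter[:n_cut] = 1.0
    lifter[n - n_cut + 1:] = 1.0     # symmetric negative-quefrency samples

    pulse_cepstrum = cepstrum * lifter
    # Invert the homomorphic chain: cepstrum -> log-spectrum -> spectrum -> time.
    pulse = np.fft.ifft(np.exp(np.fft.fft(pulse_cepstrum))).real
    return pulse
```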

Relevance:

30.00%

Publisher:

Abstract:

Graduate Program in Oral Biopathology - ICT

Relevance:

30.00%

Publisher:

Abstract:

With the widespread proliferation of computers, many human activities rely on automatic image analysis. The basic features used for image analysis include color, texture, and shape. In this paper, we propose a new shape description method, called Hough Transform Statistics (HTS), which uses statistics from the Hough space to characterize the shape of objects or regions in digital images. A modified version of this method, called Hough Transform Statistics neighborhood (HTSn), is also presented. Experiments carried out on three popular public image databases showed that the HTS and HTSn descriptors are robust, since they presented much better precision-recall results than several other well-known shape description methods. When compared to the Beam Angle Statistics (BAS) method, the shape description method that inspired their development, both HTS and HTSn presented inferior results on the precision-recall criterion, but superior results on the processing time and multiscale separability criteria. The linear complexity of the HTS and HTSn algorithms, in contrast to BAS, makes them more appropriate for shape analysis in high-resolution image retrieval tasks on very large databases, which are very common nowadays. (C) 2014 Elsevier Inc. All rights reserved.
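As a complement, the sketch below shows one common way the precision-recall comparison mentioned above can be computed for a shape descriptor, assuming each shape is represented by a fixed-length feature vector and that retrieval ranks a database by Euclidean distance; both assumptions are illustrative and not taken from the paper.

```python
import numpy as np

def precision_recall(query_vec, query_class, db_vecs, db_classes):
    """Precision and recall over the ranked retrieval list.

    query_vec: 1-D feature vector; db_vecs: 2-D array (one row per shape);
    db_classes: 1-D array of class labels aligned with db_vecs.
    """
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    ranked = np.argsort(dists)                         # most similar shapes first
    relevant = (db_classes[ranked] == query_class).astype(float)

    hits = np.cumsum(relevant)
    precision = hits / np.arange(1, len(ranked) + 1)
    recall = hits / max(relevant.sum(), 1.0)
    return precision, recall
```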

Relevance:

30.00%

Publisher:

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance:

30.00%

Publisher:

Abstract:

This paper reports on the development and validation of a simple and sensitive method that uses solid phase extraction (SPE) and liquid chromatography with ultraviolet detection to analyze fluoxetine (FLX) and norfluoxetine (NFLX) in human plasma samples. A lab-made C18 SPE phase was synthesized by a sol–gel process employing a low-cost silica precursor. This sorbent was fully characterized by nuclear magnetic resonance (NMR), Fourier-transform infrared spectroscopy (FT-IR), and scanning electron microscopy (SEM) to check the particles' shape, size, and C18 functionalization. The lab-made C18 silica was used in the sample preparation step of the SPE-HPLC-UV method for human plasma. The method was validated over the 15 to 500 ng mL⁻¹ range for both FLX and NFLX using a matrix-matched calibration curve. Detection limits of 4.3 and 4.2 ng mL⁻¹ were obtained for FLX and NFLX, respectively. The repeatability and intermediate precision achieved varied from 7.6 to 15.0%, and the accuracy ranged from 14.9 to 9.1%. The synthesized C18 sorbent was compared to commercial C18 sorbents. The average recoveries were similar (85–105%); however, the lab-made C18 silica showed fewer interfering peaks in the chromatogram. After development and validation, the method using the lab-made C18 SPE phase was applied to plasma samples of patients under FLX treatment (n = 6). The concentrations of FLX and NFLX found in the samples ranged from 46.8 to 215.5 and from 48.0 to 189.9 ng mL⁻¹, respectively.
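For readers unfamiliar with the validation figures quoted above, the sketch below illustrates how a linear matrix-matched calibration curve and a detection limit of the form LOD ≈ 3.3·s/slope can be computed; the concentration levels and detector responses in the example are invented placeholders, not data from this study.

```python
import numpy as np

conc = np.array([15, 50, 100, 250, 500], dtype=float)   # ng/mL (hypothetical levels)
signal = np.array([1.1, 3.6, 7.4, 18.2, 36.9])          # detector response (hypothetical)

slope, intercept = np.polyfit(conc, signal, 1)           # linear calibration fit
residuals = signal - (slope * conc + intercept)
s_residual = residuals.std(ddof=2)                       # regression residual scatter

lod = 3.3 * s_residual / slope                           # detection limit, ng/mL
unknown_conc = (25.0 - intercept) / slope                # quantify a sample giving signal 25.0
```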

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Research on image processing has shown that combining segmentation methods may lead to a solid approach for extracting semantic information from different sorts of images. Within this context, the Normalized Cut (NCut) is usually used as a final partitioning tool for graphs built by some chosen modeling method. This work explores the Watershed Transform as a modeling tool, using different criteria of the hierarchical Watershed to convert an image into an adjacency graph. The Watershed is combined with an unsupervised distance learning step that redistributes the graph weights and redefines the similarity matrix before the final segmentation step using NCut. Using the Berkeley Segmentation Data Set and Benchmark, our goal is to compare the results obtained with this method against previous work in order to validate its performance.
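A minimal sketch of a watershed-to-NCut pipeline of the kind described above is shown below, using scikit-image; it omits the unsupervised distance learning step and uses mean-color similarity for the graph weights, so it should be read as an illustration of the overall idea rather than the authors' exact method. The RAG/NCut helpers are assumed to live in skimage.graph (scikit-image 0.20+; older releases expose them in skimage.future.graph).

```python
import numpy as np
from skimage import data, filters, color, segmentation, graph

image = data.coffee()                                     # any RGB test image
gradient = filters.sobel(color.rgb2gray(image))

# 1) Over-segment with the watershed transform; each basin becomes a graph node.
labels = segmentation.watershed(gradient, markers=400, compactness=0.001)

# 2) Build a region adjacency graph whose edge weights encode color similarity
#    (this plays the role of the similarity matrix mentioned in the abstract).
rag = graph.rag_mean_color(image, labels, mode='similarity')

# 3) Final partitioning of the region graph with the Normalized Cut.
final_labels = graph.cut_normalized(labels, rag)
out = color.label2rgb(final_labels, image, kind='avg')
```

The two-stage design (cheap over-segmentation first, expensive NCut only on the small region graph) is what makes the combination practical on full-size images.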

Relevance:

30.00%

Publisher:

Abstract:

Purpose - The purpose of this paper is to present a method to analyze the noise in aircraft cabins via the VHF Aeronautical Communication Channel, aimed at examining an environment in which communication problems may arise between the aircraft crew and the professionals responsible for control on the ground. Design/methodology/approach - The analysis uses equipment normally employed to identify and compare electromagnetic noise in the cabin and in the airport environment, as well as equipment to analyze the frequency and intensity of those signals. The analysis proceeds by elimination, discarding situations that are not common in the examined environment until the situation containing the irregularity is identified. Findings - According to the results, applying the Fourier transform to noise analysis in the cabin was effective. The results demonstrate that, through this transform, noise sources can be identified even in environments with heavy spectrum pollution. Research limitations/implications - This kind of noise analysis is important, given the need for good accuracy when analyzing airport environments. Originality/value - The paper presents the main trends in the future of aviation communications and describes new applications that aim to minimize problems with the current VHF channel.
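As an illustration of the Fourier-based analysis described above, the sketch below computes the magnitude spectrum of a recorded noise signal and flags the strongest spectral peaks as candidate interfering sources; the sampling rate and the synthetic test signal are assumptions made purely for the example.

```python
import numpy as np

fs = 48_000                                    # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic "cabin noise": two interfering tones buried in broadband noise.
signal = (0.5 * np.sin(2 * np.pi * 400 * t)
          + 0.2 * np.sin(2 * np.pi * 2_300 * t)
          + 0.1 * np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Report the frequencies with the largest spectral magnitude as likely sources.
strongest = freqs[np.argsort(spectrum)[-5:][::-1]]
print("Candidate interfering frequencies (Hz):", np.round(strongest, 1))
```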