60 results for "processamento de sinal" (signal processing)
Abstract:
Originally aimed at operational objectives, the continuous measurement of well bottomhole pressure and temperature recorded by permanent downhole gauges (PDGs) finds vast applicability in reservoir management. It contributes to the monitoring of well performance and makes it possible to estimate reservoir parameters over the long term. However, notwithstanding its unquestionable value, PDG data is characterized by a high noise content, and the presence of outliers among valid signal measurements is a major problem as well. In this work, the initial treatment of PDG signals is addressed, based on curve smoothing, self-organizing maps and the discrete wavelet transform. Additionally, a system based on the coupling of fuzzy clustering with feed-forward neural networks is proposed for transient detection. The results obtained were quite satisfactory for offshore wells and met real requirements for practical use.
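To make the wavelet-based cleaning step concrete, a minimal denoising sketch follows. It is not the thesis's actual procedure: the wavelet family (db4), the soft universal threshold, and the synthetic pressure series are all assumptions.

```python
# Minimal sketch of wavelet smoothing for a noisy PDG-like pressure series.
# Assumptions (not from the thesis): Daubechies-4 wavelet, soft universal threshold.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# Synthetic series: slow pressure decline plus noise and one outlier.
t = np.linspace(0, 1, 1024)
pressure = 250.0 - 30.0 * t + np.random.normal(0, 0.5, t.size)
pressure[500] += 20.0  # outlier of the kind the abstract mentions
smoothed = wavelet_denoise(pressure)
```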
Abstract:
With the growth of energy consumption worldwide, conventional reservoirs, the so-called "easy exploration and production" reservoirs, are no longer meeting the global energy demand. This has led many researchers to develop projects that address these needs, and companies in the oil sector have invested in techniques that help locate and drill wells. One of the techniques employed in the oil exploration process is reverse time migration (RTM), a seismic imaging method that produces excellent images of the subsurface. The algorithm is based on computing the wave equation, and RTM is considered one of the most advanced seismic imaging techniques. The economic value of the oil reserves that require RTM to be located is very high, which means that the development of these algorithms becomes a competitive differentiator for seismic processing companies. However, RTM requires great computational power, which still hinders its practical success. The objective of this work is to explore the implementation of this algorithm on unconventional architectures, specifically GPUs using CUDA, analyzing the difficulties of developing it as well as the performance of the algorithm in its sequential and parallel versions.
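The computational core that such a thesis parallelizes is the forward propagation of the wave equation. The sketch below shows that hot loop in plain NumPy (the GPU/CUDA port is where the thesis's contribution lies); the grid size, time step, and constant-velocity model are assumptions chosen only to satisfy the stability condition.

```python
# Minimal sketch of the kernel at the core of RTM: the 2-D acoustic wave
# equation stepped by finite differences. Sequential NumPy version; the
# thesis's work is porting a loop like this one to CUDA.
import numpy as np

nx, nz, nt = 200, 200, 500
dx, dt = 10.0, 1e-3            # grid spacing (m) and time step (s), assumed
v = np.full((nz, nx), 2000.0)  # assumed constant velocity model (m/s)

prev = np.zeros((nz, nx))
curr = np.zeros((nz, nx))
curr[nz // 2, nx // 2] = 1.0   # impulsive source at the grid center

for _ in range(nt):
    # Five-point Laplacian via array shifts.
    lap = (np.roll(curr, 1, 0) + np.roll(curr, -1, 0) +
           np.roll(curr, 1, 1) + np.roll(curr, -1, 1) - 4.0 * curr) / dx**2
    nxt = 2.0 * curr - prev + (v * dt)**2 * lap
    prev, curr = curr, nxt
```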
Abstract:
The existence of inequalities among the Brazilian regions is an undeniable fact throughout the country's history. Faced with this reality, the constitutional legislator inserted into the Federal Constitution of 1988, as a purpose of the Federative Republic of Brazil, the reduction of regional inequalities. Development has also been included as a purpose of the State, because it bears a direct relation to the reduction of regional inequalities; both pursue the improvement of people's living conditions. In pursuit of this achievement, the State must implement public policy, and, for this to happen, it needs revenue flowing into the public coffers and the support of economic agents, hence the importance of the constitutionalization of economic policy. The 1988 Constitution adopted a rational capitalism regime consonant with current legal and social conceptions, which is why it enabled State intervention in the economy to correct so-called market failures or to fulfill the established objectives. Regarding the latter, intervention may happen by induction, through the adoption of regulatory standards that incentivize or disincentivize economic activity. Among the possible inductive means are tax incentives that aim to steer the behavior of economic agents, given the finding that development does not occur with the same intensity in all regions of the country. Within this context are the Export Processing Zones (EPZs), special areas under a differentiated customs regime that grants benefits to the companies installed there. EPZs have been used by several countries to develop certain regions, and economic indicators show that they promoted economic and social changes in the places where they were installed, especially because, by attracting companies, they provide job creation, industrialization and increased exports. In Brazil, they can contribute decisively to overcoming major obstacles to the attraction of economic agents and to the economic development of the country. Since this is an instrument known to be effective in achieving the goals established by the Constitution, it is the duty of the Executive to ensure that the law governing this customs regime is effectively applied. If the Executive does not fulfill this duty, it incurs an unjustifiable omission, subject to correction by the Judiciary, whose mission is to prevent acts or omissions contrary to the constitutional order.
Abstract:
This work deals with a mathematical foundation for digital signal processing from the point of view of interval mathematics. It intends to treat the open problem of precision and representation of data in digital systems through an interval version of signal representation. Signal processing is a rich and complex area; therefore, this work narrows its focus to linear time-invariant systems. A vast literature exists in the area, but some concepts of interval mathematics need to be redefined or elaborated for the construction of a solid theory of interval signal processing. We construct the basic foundations for signal processing in the interval version, such as the basic properties of linearity, stability and causality, and an interval version of linear systems and its properties. Interval versions of the convolution and of the Z-transform are presented. Convergence analyses of systems are carried out using the interval Z-transform, an essentially interval distance, and interval complex numbers, with an application to an interval filter.
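As a hedged illustration of what an interval convolution can look like, the sketch below treats each sample as a pair [lo, hi] and multiplies intervals by the standard four-corner rule. These are textbook interval-arithmetic definitions, not necessarily the ones the thesis redefines.

```python
# Minimal sketch of an interval convolution: every sample is an interval
# [lo, hi]; products take the min/max over the four endpoint products.
from itertools import product

def imul(a, b):
    ps = [x * y for x, y in product(a, b)]
    return (min(ps), max(ps))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def iconv(x, h):
    """Convolution of two finite interval-valued sequences."""
    y = [(0.0, 0.0)] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] = iadd(y[n + k], imul(xn, hk))
    return y

x = [(0.9, 1.1), (1.9, 2.1)]     # uncertain input samples
h = [(0.5, 0.5), (-1.0, -1.0)]   # exactly known impulse response
print(iconv(x, h))
```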
Abstract:
The human voice is an important communication tool, and any voice disorder can have profound implications for the social and professional life of an individual. Digital signal processing techniques have been used in the acoustic analysis of vocal disorders caused by laryngeal pathologies, due to their simplicity and noninvasive nature. This work deals with the acoustic analysis of voice signals affected by laryngeal pathologies, specifically edema and nodules on the vocal folds. The purpose of this work is to develop a voice classification system to support the pre-diagnosis of laryngeal pathologies, as well as the monitoring of pharmacological and post-surgical treatments. Linear Prediction Coefficients (LPC), Mel Frequency Cepstral Coefficients (MFCC) and coefficients obtained through the Wavelet Packet Transform (WPT) are applied to extract relevant characteristics from the voice signal. The Support Vector Machine (SVM) is used for the classification task; it builds optimal hyperplanes that maximize the margin of separation between the classes involved, the hyperplane being determined by the support vectors, which are subsets of points of these classes. On the database used in this work, the results showed good performance, with a hit rate of 98.46% for the classification of normal versus pathological voices in general, and 98.75% for the classification between the pathologies themselves: edema and nodules.
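A minimal sketch of the MFCC-plus-SVM pipeline described above follows, assuming librosa and scikit-learn. The file names, labels, number of coefficients, and SVM parameters are placeholders, not the thesis's settings.

```python
# Minimal sketch of an MFCC + SVM voice classifier.
# File paths and hyperparameters below are hypothetical placeholders.
import librosa
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # one fixed-size vector per recording

# Hypothetical dataset: 0 = normal voice, 1 = pathological voice.
files = ["normal01.wav", "normal02.wav", "edema01.wav", "edema02.wav"]
labels = [0, 0, 1, 1]
X = np.array([mfcc_features(f) for f in files])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.5, stratify=labels)

clf = SVC(kernel="rbf", C=10.0)  # maximum-margin classifier
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```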
Abstract:
In this work we use interval mathematics to establish interval counterparts for the main tools used in digital signal processing. More specifically, the approach developed here is oriented to signals, systems, sampling, quantization, coding and Fourier transforms. A detailed study of some interval arithmetics that handle complex numbers is provided, namely: complex interval (or rectangular) arithmetic, circular complex arithmetic, and interval arithmetic for polar sectors. This leads us to investigate some properties that are relevant for the development of a theory of interval digital signal processing. It is shown that the sets IR and R(C) endowed with any correct arithmetic are not algebraic fields, meaning that those sets do not behave like the real and complex numbers. An alternative to the notion of interval complex width is also provided, and the Kulisch-Miranker order is used to write complex numbers in interval form, enabling operations on endpoints. The use of interval signals and systems is made possible by the representation of values in floating point systems: if a number x ∈ R is not representable in a floating point system F, then it is mapped to an interval [x̲; x̄] such that x̲ is the largest number in F which is smaller than x and x̄ is the smallest number in F which is greater than x. This interval representation is the starting point for definitions such as interval signals and systems taking real or complex values, and it provides the extension of notions such as causality, stability, time invariance, homogeneity, additivity and linearity to interval systems. The process of quantization is extended to its interval counterpart, and interval versions of quantization levels, quantization error and encoded signal are then provided. It is shown that the interval quantization levels represent the complex quantization levels and that the classical quantization error ranges over the interval quantization error. An estimate for the interval quantization error and an interval version of the Z-transform (and hence of the Fourier transform) are provided. Finally, the results of a Matlab implementation are given.
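The floating-point enclosure [x̲; x̄] described above can be computed directly. The sketch below assumes the system F is IEEE-754 double precision and uses Python's exact rationals to stand in for a real number that F cannot represent.

```python
# Minimal sketch of the interval representation: enclose a real x between
# its two neighboring floating-point numbers. F is assumed to be IEEE-754
# doubles; math.nextafter (Python 3.9+) steps to the adjacent double.
import math
from fractions import Fraction

def enclose(x: Fraction):
    """Smallest double interval [lo, hi] containing the exact rational x."""
    f = float(x)              # rounds x to the nearest double
    if Fraction(f) == x:      # x was exactly representable in F
        return f, f
    if Fraction(f) > x:       # rounded up: step down for the lower endpoint
        return math.nextafter(f, -math.inf), f
    return f, math.nextafter(f, math.inf)

print(enclose(Fraction(1, 10)))  # 1/10 is not a double; prints its neighbors
```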
Abstract:
This work proposes a computational environment applied to the teaching of control systems, called ModSym. The software implements a graphical interface for the modeling of linear physical systems and shows, step by step, the processing required to obtain mathematical models for those systems. A physical system can be represented in the software in three different ways: as a graphical diagram built from elements of the electrical, translational mechanical, rotational mechanical and hydraulic domains; as a bond graph; or as a signal-flow diagram. Once the system is represented, ModSym can compute its transfer functions in symbolic form using Mason's rule. The software also computes transfer functions in numerical form, as well as parametric sensitivity functions. The work further proposes an algorithm to obtain the signal-flow diagram of a physical system from its bond graph. This algorithm and the system analysis methodology known as the Network Method allow the use of Mason's rule to compute transfer functions of the systems modeled in the software.
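To illustrate the kind of symbolic transfer function Mason's rule produces, the sketch below (not ModSym's code) solves the node equations of a one-loop signal-flow graph with SymPy; for this graph, Mason's gain formula gives the same closed form. The forward gain G and feedback gain H are assumed symbols.

```python
# Minimal sketch (not ModSym itself): symbolic transfer function of a
# single-loop signal-flow graph, solved from its node equations with SymPy.
import sympy as sp

G, H, U, Y, E = sp.symbols("G H U Y E")

# Node equations of the flow graph: E = U - H*Y, Y = G*E.
sol = sp.solve([sp.Eq(E, U - H * Y), sp.Eq(Y, G * E)], [Y, E], dict=True)[0]
tf = sp.simplify(sol[Y] / U)
print(tf)  # G/(G*H + 1), matching Mason's gain formula for one loop
```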
Abstract:
This work presents a cooperative navigation system for a humanoid robot and a wheeled robot using visual information, aiming to navigate the non-instrumented humanoid using information obtained by the instrumented wheeled robot. Although the humanoid has no sensors for its own navigation, it can be remotely controlled by infrared signals. Thus, the wheeled robot can control the humanoid by positioning itself behind it and, through visual information, finding and guiding it. The location of the wheeled robot is obtained by merging information from odometry and from landmark detection using the Extended Kalman Filter. The landmarks are detected visually, and their features are extracted by image processing; the parameters obtained are used directly in the Extended Kalman Filter. Thus, while the wheeled robot locates and navigates the humanoid, it simultaneously calculates its own location and maps the environment (SLAM). Navigation is done through heuristic algorithms based on the errors between the actual and desired poses of each robot. The main contribution of this work is the implementation of a cooperative navigation system for two robots based on visual information, which can be extended to other robotic applications, such as the ability to control robots without interfering with their hardware or attaching communication devices.
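The localization loop described above follows the standard EKF predict/update cycle. The generic sketch below shows that cycle; the motion model f, measurement model h, their Jacobians, and the noise matrices are not given in the abstract and are left as assumed callables.

```python
# Minimal generic sketch of one EKF iteration, as used for the wheeled
# robot's localization: predict with odometry u, correct with landmark z.
# f/h are the (assumed) motion and measurement models, F/H their Jacobians,
# Q/R the process and measurement noise covariances.
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    # Predict: propagate state and covariance through the motion model.
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update: correct with the landmark measurement.
    H_k = H(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```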
Abstract:
The exponential growth in radio frequency (RF) applications is accompanied by great challenges, such as more efficient use of the spectrum and the design of new architectures for multi-standard receivers or software-defined radio (SDR). The key challenge in designing an SDR architecture is the implementation of a wide-band, reconfigurable receiver with low cost, low power consumption, and a high level of integration and flexibility. As a new SDR design solution, a direct demodulator architecture based on five-port technology, or multi-port demodulator, has been proposed. However, the use of the five-port as a direct-conversion receiver requires an I/Q calibration (or regeneration) procedure in order to generate the in-phase (I) and quadrature (Q) components of the transmitted baseband signal. In this work, we evaluate the performance of a blind calibration technique, requiring no training or pilot sequences of the transmitted signal, based on independent component analysis for the I/Q regeneration of the five-port downconverter, exploiting the statistical properties of its three output signals.
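The blind-regeneration idea can be caricatured in a few lines: recover two independent baseband components from three observed port outputs with FastICA. The mixing below is a synthetic linear stand-in (the real five-port mapping is more involved), and the BPSK-like sources and port gains are assumptions.

```python
# Minimal sketch of blind I/Q regeneration via independent component
# analysis: three five-port outputs in, two baseband components out.
import numpy as np
from sklearn.decomposition import FastICA

n = 5000
i_comp = np.sign(np.random.randn(n))   # assumed BPSK-like in-phase symbols
q_comp = np.sign(np.random.randn(n))   # assumed quadrature symbols
S = np.c_[i_comp, q_comp]

A = np.array([[1.0, 0.3],              # assumed (hypothetical) port gains
              [0.4, 1.0],
              [0.7, 0.7]])
X = S @ A.T + 0.01 * np.random.randn(n, 3)  # three observed port outputs

ica = FastICA(n_components=2)
S_hat = ica.fit_transform(X)  # estimated I/Q, up to scale and ordering
```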
Abstract:
In this work, a parallel cooperative genetic algorithm with different evolution behaviors was developed to train and to define architectures for Multilayer Perceptron neural networks. Multilayer Perceptron neural networks are very powerful tools whose use has spread vastly due to their ability to provide great results for a broad range of applications. The combination of genetic algorithms and parallel processing can be very powerful when applied to the learning process of the neural network, as well as to the definition of its architecture, since this procedure can be very slow, usually requiring a lot of computational time. Research combining evolutionary computation with the design of neural networks is also very useful, since most of the learning algorithms developed to train neural networks only adjust the synaptic weights, without considering the design of the network architecture. Furthermore, the use of cooperation in the genetic algorithm allows the interaction of different populations, avoiding local minima and helping the search for a promising solution, thus accelerating the evolutionary process. Finally, the individuals and the evolution behavior can be exclusive to each copy of the genetic algorithm running in each task, enhancing the diversity of the populations.
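A single-population caricature of the underlying idea, evolving MLP weight vectors with a genetic algorithm, is sketched below. The parallel, cooperative, multi-behavior machinery of the thesis is not reproduced, and the fitness function, rates, and sizes are assumptions (a real fitness would be the negated training error of the network).

```python
# Minimal single-population GA sketch over MLP weight vectors.
# Fitness below is a placeholder standing in for "lower network error".
import numpy as np

rng = np.random.default_rng(0)
POP, DIM, GENS = 30, 20, 100   # population size, weight count, generations

def fitness(w):
    return -np.sum(w**2)       # placeholder for the negated training error

pop = rng.normal(0, 1, (POP, DIM))
for _ in range(GENS):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]        # truncation selection
    pairs = rng.integers(0, len(parents), (POP, 2))
    mask = rng.random((POP, DIM)) < 0.5                  # uniform crossover
    pop = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
    pop += rng.normal(0, 0.1, pop.shape) * (rng.random(pop.shape) < 0.1)  # mutation

best = pop[np.argmax([fitness(w) for w in pop])]
```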
Abstract:
In this work the SOM (Self-Organizing Map) algorithm, or Kohonen neural network, is implemented in the form of hierarchical structures and applied to image compression. The main objective of this approach is to develop a hierarchical SOM algorithm with a static structure, and another with a dynamic structure, to generate codebooks in the process of image Vector Quantization (VQ), reducing the processing time and obtaining a good image compression rate with minimum degradation of quality in relation to the original image. The two self-organizing neural networks developed here were named HSOM, for the static case, and DHSOM, for the dynamic case. In the first, the hierarchical structure is defined beforehand; in the latter, the structure grows automatically according to heuristic rules that explore the data of the training set without the use of external parameters. For this network, the heuristic rules determine the growth dynamics, the branch-pruning criteria, the flexibility and the size of the child maps. The LBG (Linde-Buzo-Gray) algorithm, or K-means, one of the algorithms most used to build codebooks for Vector Quantization, was used together with the Kohonen algorithm in its basic, non-hierarchical form as a reference to compare the performance of the algorithms proposed here. A performance analysis between the two hierarchical structures is also carried out in this work. The efficiency of the proposed processing is verified by the reduction in computational complexity compared with the traditional algorithms, as well as through quantitative analysis of the reconstructed images in terms of the peak signal-to-noise ratio (PSNR) and the mean squared error (MSE).
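For reference, block-based vector quantization with a K-means codebook (the LBG baseline mentioned above) and the PSNR/MSE figures of merit can be sketched as follows; the block size, codebook size, and random stand-in image are assumptions, and the HSOM/DHSOM structures themselves are not reproduced.

```python
# Minimal sketch of VQ image compression with a K-means (LBG-style)
# codebook, plus the MSE/PSNR quality measures used in the abstract.
import numpy as np
from sklearn.cluster import KMeans

def blocks(img, b=4):
    """Split a grayscale image into flattened b x b blocks."""
    h, w = img.shape
    return (img[:h - h % b, :w - w % b]
            .reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b * b))

img = np.random.rand(64, 64)                 # stand-in for a real image
vecs = blocks(img)
codebook = KMeans(n_clusters=32, n_init=4).fit(vecs)   # 32-word codebook
recon = codebook.cluster_centers_[codebook.predict(vecs)]

mse = np.mean((vecs - recon) ** 2)
psnr = 10 * np.log10(1.0 / mse)              # peak = 1.0 for [0, 1] images
print(f"MSE={mse:.5f}  PSNR={psnr:.2f} dB")
```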
Abstract:
Nowadays, when market competition requires products with better quality and a constant search for cost savings and better use of raw materials, research into more efficient control strategies becomes vital. In Natural Gas Processing Units (NGPUs), as in most chemical processes, quality control is accomplished through the composition of the products. However, chemical composition analysis has a long measurement time, even when performed by instruments such as gas chromatographs. This fact hinders the development of control strategies that could provide a better process yield. Natural gas processing is one of the most important activities in the petroleum industry, and the main economic product of an NGPU is liquefied petroleum gas (LPG). LPG is ideally composed of propane and butane; in practice, however, its composition includes contaminants such as ethane and pentane. In this work, an inferential system using neural networks is proposed to estimate the ethane and pentane mole fractions in the LPG and the propane mole fraction in the residual gas. The goal is to provide the values of these estimated variables every minute using a single multilayer neural network, making it possible to apply inferential control techniques in order to monitor the LPG quality and to reduce the propane loss in the process. To develop this work, an NGPU composed of two distillation columns, a deethanizer and a debutanizer, was simulated in HYSYS software. The inference is performed using the process variables of the PID controllers present in the instrumentation of these columns. To reduce the complexity of the inferential neural network, the statistical technique of principal component analysis is used to decrease the number of network inputs, thus forming a hybrid inferential system. A simple strategy is also proposed to correct the inferential system in real time, based on the measurements of the chromatographs that may exist in the process under study.
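The hybrid PCA-plus-network soft sensor described above maps controller variables to composition estimates. A minimal sketch with scikit-learn follows; the input/output dimensions, component count, and random data are assumptions (real data would come from the HYSYS simulation).

```python
# Minimal sketch of a hybrid inferential system: PCA compresses the
# controller variables, a single MLP estimates the three compositions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# Hypothetical data: 40 PID-loop variables -> 3 targets
# (ethane and pentane fractions in LPG, propane fraction in residual gas).
X = np.random.rand(1000, 40)
Y = np.random.rand(1000, 3)

model = make_pipeline(
    PCA(n_components=8),                        # reduce the 40 inputs
    MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000),
)
model.fit(X, Y)
estimates = model.predict(X[:5])                # per-minute soft-sensor output
```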
Abstract:
The limited information about the physical, chemical, mineralogical and mechanical characteristics of the raw materials used in the manufacture of ceramic products in the Cariri economic region, specifically in the city of Crato, Ceará state, motivated the development of this work, since in the existing economic context of this region these products appear as important in the production chains. Twenty-five soil test specimens were collected, and the study was performed to differentiate the raw materials and the processing variables of the raw materials in factories where the ceramic products are formed by extrusion and pressing. The results were obtained after the following analyses: grain size distribution, plasticity index, X-ray fluorescence, X-ray diffraction, thermal analyses and technological properties. Through gresification curves, a comparison was made between linear shrinkage, water absorption, porosity and bulk density. The results show a grain size distribution and characteristics acceptable for processing, with a dark red fired color, requiring, however, the addition of a less plastic clay of coarse granulation to act as a plasticity reducer. In spite of the different forming methods, pressing and extrusion, the water absorption and flexural rupture strength values were within ABNT standards.
Abstract:
The cashew, a fruit from the Brazilian Northeast, is used to produce juice because of its flavor and richness in vitamin C. However, its acceptance is limited by its astringency. Cajuína is a derived product appreciated for its characteristic flavor, freshness and lack of astringency, due to tannin removal. It is a light yellow beverage made from clarified cashew juice and sterilized after bottling, and it differs from the integral and concentrated juices by the clarification and thermal treatment steps. Problems such as haze and excessive browning can appear if these steps are not controlled. The objective of this work, divided into two stages, was to supply process information in order to obtain a good quality product with uniform sensory and nutritional characteristics. The polyphenol-protein interaction was studied at the clarification step, which is an empirical process, to provide values for the amount of clarifying solution (gelatin) that must be added to achieve complete juice clarification. Clarification assays were performed with juice dilutions of 1:2 and 1:10, and the effect of metabisulfite and tannic acid addition was evaluated. It was not possible to establish a clarification point. Metabisulfite did not influence the clarification process; tannic acid addition, however, displaced the clarification point, showing the difficulty of visually monitoring the process. Thermal treatment of the clarified juice was studied at 88, 100, 111 and 121 °C. To evaluate the non-enzymatic browning, the variations of vitamin C, 5-hydroxymethylfurfural (5-HMF) and sugars were correlated with color parameters (reflectance spectra, color difference and CIELAB). Kinetic models were obtained for the reflectance spectra, ascorbic acid and 5-HMF. It was observed that 5-HMF formation followed first-order kinetics at the beginning of the thermal treatment and zero-order kinetics at later process stages. An inverse correlation was observed between the absorbance at 420 nm and ascorbic acid degradation, which indicates that ascorbic acid might be the principal factor in the non-enzymatic browning of cajuína. The constant sugar concentration showed that this parameter did not contribute directly to the non-enzymatic browning. Optimization techniques showed that, to obtain a high vitamin C content and a low 5-HMF content, the process must be carried out at 120 °C. With the water-bath thermal treatment, the 90 °C temperature promoted lower ascorbic acid degradation at the expense of a higher 5-HMF level.
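For reference, the kinetic orders mentioned above correspond to the standard integrated rate laws below; the rate constants k, initial concentrations C_0, and the crossover time between regimes are not reported in the abstract.

```latex
% Standard kinetic forms assumed for the models mentioned in the abstract:
\[
\frac{dC}{dt} = -kC \;\Rightarrow\; C(t) = C_0\, e^{-kt}
\qquad \text{(first order, e.g. ascorbic acid degradation)}
\]
\[
\frac{dC}{dt} = k \;\Rightarrow\; C(t) = C_0 + kt
\qquad \text{(zero order, e.g. later-stage 5-HMF build-up)}
\]
```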