645 results for Seismic noise


Relevance: 10.00%

Abstract:

The Brazilian Environmental Data Collecting System (SBCDA) collects and broadcasts meteorological and environmental data, which are handled by dozens of institutions and organizations. The system's space segment, composed of the data-collecting satellites, plays an important role in the system's operation. To ensure the continuity and quality of these services, efforts are being made toward the development of new satellite architectures. Aiming at a reduction in size and power consumption, the design of an integrated circuit containing a receiver front-end is proposed, to be embedded in the next generations of SBCDA satellites. The circuit will also operate under the requirements of the international data-collecting standard ARGOS. This work focuses on the design of a UHF low-noise amplifier and mixers in a standard CMOS technology. The specifications are first described and the circuit topologies presented. Then the circuit conception is discussed and the design variables are derived. Finally, the layout is designed and the final results are discussed. The chip will be fabricated in a 130 nm technology from STMicroelectronics.

Relevance: 10.00%

Abstract:

Within the Borborema Province, Northwestern Ceará (NC) is one of the most seismically active regions. There are reports of an earthquake that occurred in 1810 in the town of Granja. In January 2008 the seismic activity in NC increased, and a seismographic network with 11 digital stations was deployed. In 2009 another earthquake sequence began, and a second seismographic network with 6 stations was deployed in the town of Santana do Acaraú. This thesis presents the results obtained by analyzing the data recorded by these two networks. The epicentral areas are located near the northeastern part of the Transbrasiliano Lineament, a NE-SW-trending shear zone that cuts the study area. The hypocenters are located between 1 km and 8 km depth. Strike-slip focal mechanisms, which are predominant in the Borborema Province, were found. An integration of seismological, geological, and geophysical data was performed, showing that the seismogenic faults are oriented in the same direction as the local brittle structures observed in the field and as the magnetic lineaments. The SHmax (maximum horizontal compressional stress) direction in NC was estimated through an inversion of seven focal mechanisms, yielding a maximum horizontal compression (σ1 = 300°) oriented NW-SE, an extension (σ3 = 210°) oriented NE-SW, and a vertical σ2. These results are consistent with those of previous studies. So far, the seismic activity recorded in NC is not related to a possible reactivation of the Transbrasiliano Lineament.

Relevance: 10.00%

Abstract:

The reverse time migration (RTM) algorithm has been widely used in the seismic industry to generate images of the subsurface and thus reduce the risk of oil and gas exploration. Its widespread use is due to the high quality of its subsurface imaging. RTM is also known for its high computational cost; therefore, parallel computing techniques have been used in its implementations. In general, parallel approaches to RTM use coarse granularity, distributing the processing of subsets of seismic shots among the nodes of a distributed system. Coarse-grained parallel approaches to RTM have been shown to be very efficient, since each seismic shot can be processed independently. For this reason, RTM performance can be further improved by using a finer-grained parallel approach for the processing assigned to each node. This work presents an efficient fine-grained parallel algorithm for 3D reverse time migration using OpenMP. The propagation of the 3D acoustic wave makes up much of the RTM computation, so different load-balancing strategies were analyzed in order to minimize possible parallel performance losses at this stage. The results served as a basis for the implementation of the other RTM phases: backpropagation and the imaging condition. The proposed algorithm was tested with synthetic data representing some typical subsurface structures. Metrics such as speedup and efficiency were used to analyze its parallel performance. The migrated sections show that the algorithm performed satisfactorily in identifying subsurface structures. As for parallel performance, the analysis clearly demonstrates the scalability of the algorithm, which achieved a speedup of 22.46 for the wave propagation and 16.95 for the full RTM, both with 24 threads.
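The wave-propagation kernel that dominates the RTM cost can be illustrated with a minimal serial sketch: a 2D second-order finite-difference acoustic update on a hypothetical toy grid (grid size, velocity, and source are made-up assumptions; the thesis itself parallelizes the 3D version with OpenMP):

```python
import numpy as np

def acoustic_step(p_prev, p_curr, c, dt, dx):
    """One explicit time step of the 2D acoustic wave equation
    using a second-order finite-difference Laplacian."""
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) -
           4.0 * p_curr) / dx**2
    # p(t+dt) = 2 p(t) - p(t-dt) + c^2 dt^2 * Laplacian(p)
    return 2.0 * p_curr - p_prev + (c * dt)**2 * lap

# Hypothetical toy grid: 100x100 points, constant velocity, point source.
n, dx, dt, c = 100, 10.0, 1e-3, 2000.0   # CFL number = 0.2, stable
p_prev = np.zeros((n, n))
p_curr = np.zeros((n, n))
p_curr[n // 2, n // 2] = 1.0             # impulsive source at the center
for _ in range(50):
    p_prev, p_curr = p_curr, acoustic_step(p_prev, p_curr, c, dt, dx)
```

In a fine-grained OpenMP implementation, it is the loop over grid points inside each such time step that is distributed among threads, which is where the load-balancing choices discussed above come into play.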

Relevance: 10.00%

Abstract:

Beamforming is a technique widely used in various fields. With the aid of an antenna array, beamforming aims to minimize the contribution of interferers arriving from unknown directions while capturing the desired signal from a given direction. This thesis proposes beamforming techniques for antenna arrays using Reinforcement Learning (RL) through the Q-Learning algorithm. One proposal is to use RL to find the optimal policy for switching between beamforming (BF) and power control (PC), in order to better leverage the individual characteristics of each for a given Signal to Interference plus Noise Ratio (SINR). Another proposal is to use RL to determine the optimal policy for switching between the blind beamforming algorithms CMA (Constant Modulus Algorithm) and DD (Decision Directed) in multipath environments. Simulation results showed that the RL technique can be effective in achieving optimal switching between the different techniques.
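The tabular Q-Learning update behind this kind of policy search can be sketched as follows. This is a toy two-action problem standing in for the BF/PC switching decision; the states, reward shape, and learning parameters are illustrative assumptions, not the thesis's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2          # e.g. quantized SINR levels; actions: 0=BF, 1=PC
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def reward(s, a):
    # Toy reward: pretend BF pays off at low SINR, PC at high SINR.
    return 1.0 if (a == 0) == (s < n_states // 2) else 0.0

s = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    r = reward(s, a)
    s_next = int(rng.integers(n_states))     # toy random state transition
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

policy = Q.argmax(axis=1)  # learned switching policy, one action per SINR level
```

Under these toy rewards the learned policy selects BF (action 0) in the low-SINR states and PC (action 1) in the high-SINR states.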

Relevance: 10.00%

Abstract:

The constant need for new sources of renewable energy is increasingly driving investment in this area. Among other sources, wind power has become prominent. It is important to pursue improvements in the technologies involved in wind turbine topologies, seeking alternatives that enhance performance despite the irregularity of the wind speed. This study presents a new speed-control system, here applied to wind turbines: the Electromagnetic Frequency Regulator (EFR). One of the most common devices in some topologies is the mechanical gearbox, which, along with a short service life, is often a source of noise and defects. The EFR does not need such transmission boxes, representing a technological advancement; it uses an adapted induction machine in which the stator becomes mobile, attached to the turbine shaft. In the topology used in this study, the EFR also makes it possible to dispense with the electronic converters that would otherwise couple the generator to the electrical grid. For the same reason, it offers the possibility of generation in alternating current, with constant voltage and frequency, where no electrical grid is available. Responsible for controlling the mechanical speed of the generator, the EFR can also be useful in other transmission systems where mechanical speed control is the objective. In addition, the EFR operates through the combination of two inputs, one mechanical and one electrical. This multiplies the possibilities of application, because it enables the synergistic coupling of different energy sources; for this reason, the various energy sources involved can remain decoupled from the network, with the synchronous generator responsible for connecting the system to the electrical grid, simplifying the control strategies for the power injected into it.
Experimental and simulation results for a wind turbine are presented throughout this study, validating the proposal with respect to the efficiency of the system's speed control under different wind conditions.

Relevance: 10.00%

Abstract:

In Brazil, the National Agency of Electric Energy (ANEEL) is the energy regulator. Tariff reviews have been one of its main tasks; they establish prices at a level that covers efficient operating costs while providing an appropriate return on distributors' investments. The changes in the procedures for redefining efficient costs, and the several studies on the methodologies employed to regulate this segment, illustrate the challenge regulators face in choosing the best methodological strategy. In this context, this research proposes a benchmarking evaluation applied to the national regulatory system for establishing the efficient operating costs of electricity distribution utilities. The model is formulated to promote the development of the electricity market, in partnership with government policies and to the benefit of society. To conduct this research, an integration of Data Envelopment Analysis (DEA) with Stochastic Frontier Analysis (SFA) is adopted in a three-stage procedure to correct the efficiency estimates for environmental effects: (i) a DEA evaluation to measure the operating-cost slacks of the utilities, with environmental variables omitted; (ii) the slacks calculated in the first stage are regressed on a set of environmental variables by means of SFA, and operating costs are adjusted to account for environmental impact and statistical noise; and (iii) the performance of the electric power distribution utilities is reassessed by means of DEA. Based on this methodology, it is possible to obtain a performance evaluation expressed exclusively in terms of management efficiency, in which the effects of the operating environment and of statistical noise are controlled.
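For the special case of a single input and a single output under constant returns to scale, the first-stage DEA efficiency reduces to each utility's output/input ratio normalized by the best ratio in the sample. A simplified sketch with made-up cost and output figures (the actual study uses multi-variable DEA, not this shortcut):

```python
import numpy as np

# Hypothetical utilities: operating cost (input) and energy delivered (output).
cost   = np.array([100.0, 150.0, 120.0, 200.0])
output = np.array([ 80.0, 150.0,  90.0, 120.0])

# CRS, single input/output: efficiency = productivity / best productivity.
productivity = output / cost
efficiency = productivity / productivity.max()

# Input-oriented cost slack: how much cost could be cut at full efficiency.
target_cost = cost * efficiency
slack = cost - target_cost
```

It is these slacks (here, excess operating costs) that stage (ii) of the procedure regresses on environmental variables via SFA.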

Relevance: 10.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 10.00%

Abstract:

Information extraction is a frequent and relevant problem in digital signal processing. In the past few years, different methods have been used to parameterize signals and obtain efficient descriptors. When the signals possess cyclostationary statistical properties, the Cyclic Autocorrelation Function (CAF) and the Spectral Cyclic Density (SCD) can be used to extract second-order cyclostationary information. However, second-order cyclostationary information is poor for non-Gaussian signals, whose cyclostationary analysis should comprise higher-order statistical information. This paper proposes a new mathematical tool for higher-order cyclostationary analysis based on the correntropy function. Specifically, cyclostationary analysis is revisited from an information-theoretic point of view, and the Cyclic Correntropy Function (CCF) and the Cyclic Correntropy Spectral Density (CCSD) are defined. Furthermore, it is analytically proven that the CCF contains information on second- and higher-order cyclostationary moments, being a generalization of the CAF. The performance of these new functions in extracting higher-order cyclostationary characteristics is analyzed in a wireless communication system subject to non-Gaussian noise.
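The correntropy underlying the CCF is commonly estimated with a Gaussian kernel, V(τ) = E[κ(x(t), x(t+τ))]. A minimal sample estimator of the lag-dependent correntropy (the kernel bandwidth and test signal are illustrative assumptions):

```python
import numpy as np

def correntropy(x, lag, sigma=1.0):
    """Sample correntropy V(lag): mean of the Gaussian kernel
    evaluated on the pairs (x[n], x[n+lag])."""
    d = x[:-lag] - x[lag:]
    return np.mean(np.exp(-d**2 / (2.0 * sigma**2)))

rng = np.random.default_rng(1)
n = np.arange(2048)
# Toy signal: a tone in additive noise (Gaussian here, just for illustration).
x = np.cos(2 * np.pi * 0.05 * n) + 0.3 * rng.standard_normal(2048)

v = np.array([correntropy(x, k) for k in range(1, 64)])
```

Because the Gaussian kernel implicitly sums weighted even-order moments of the difference x(t) − x(t+τ), this single statistic carries second- and higher-order information, which is the property the CCF generalizes to cyclic frequencies.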

Relevance: 10.00%

Abstract:

Trace gases are important to our environment even though they are present only in 'traces', but their concentrations must be monitored so that any necessary interventions can be made at the right time. There are lower and upper bounds that produce favorable conditions for life, so monitoring trace gases is nowadays an essential task, accomplished by many techniques. One of them is differential optical absorption spectroscopy (DOAS), which mathematically consists of a regression (the classical method uses least squares) to retrieve the trace gas concentrations. In order to achieve better results, many works have tried out different techniques instead of the classical approach. Some have preprocessed the signals to be analyzed with a denoising procedure, e.g. the discrete wavelet transform (DWT). This work presents a semi-empirical study to find the DWT family most suitable for this denoising. The search seeks, among many well-known families, the one that best removes the noise while keeping the original signal's main features; by decreasing the noise, the residual left after the regression decreases too. The analysis takes into account the wavelet decomposition level, the threshold to be applied to the detail coefficients, and how to apply it (hard or soft thresholding). The signals used come from an open online database containing characteristic signals of commonly studied trace gases.
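The thresholding step discussed above can be sketched with a single-level Haar DWT, hand-rolled for self-containment (a study comparing many wavelet families would typically use a dedicated library such as PyWavelets; the test signal and threshold are made up):

```python
import numpy as np

def haar_dwt(x):
    """Single-level orthonormal Haar decomposition (even-length x)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(d, t):
    # Soft thresholding: shrink detail coefficients toward zero by t.
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

rng = np.random.default_rng(2)
n = np.arange(512)
clean = np.sin(2 * np.pi * 4 * n / 512)      # smooth stand-in for a spectrum
noisy = clean + 0.2 * rng.standard_normal(512)

a, d = haar_dwt(noisy)
denoised = haar_idwt(a, soft_threshold(d, 0.2))
```

Hard thresholding would instead zero the coefficients below t while leaving the rest untouched; the study's search varies exactly these choices (family, level, threshold, hard/soft).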

Relevance: 10.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 10.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 10.00%

Abstract:

Skeletal muscle consists of muscle fiber types with different physiological and biochemical characteristics. Basically, muscle fibers can be classified into type I and type II, which differ, among other features, in contraction speed and sensitivity to fatigue. These fibers coexist in skeletal muscles, and their relative proportions are modulated according to the muscle's function and the stimulus to which it is submitted. To identify the different proportions of fiber types in muscle composition, many studies use biopsy as the standard procedure. Since surface electromyography (sEMG) makes it possible to extract information about the recruitment of different motor units, this study is based on the assumption that the EMG can be used to identify different proportions of fiber types in a muscle. The goal of this study was to identify the characteristics of EMG signals that most precisely distinguish different proportions of fiber types. The combination of characteristics using appropriate mathematical models was also investigated. To achieve this objective, signals were simulated with different proportions of recruited motor units and different signal-to-noise ratios. Thirteen characteristics in the time and frequency domains were extracted from the emulated signals. The results for each extracted feature were submitted to the k-means clustering algorithm to separate the different proportions of motor units recruited in the emulated signals. Mathematical techniques (confusion matrix and capability analysis) were applied to select the characteristics able to identify different proportions of muscle fiber types. As a result, the mean frequency and the median frequency were selected as able to distinguish, with greater precision, the proportions of the different muscle fiber types.
Subsequently, the features considered most capable were analyzed jointly through principal component analysis. Two principal components were found for the signals emulated without noise and two for the noisy signals, and the first principal component of each set was identified as able to distinguish different proportions of muscle fiber types. The selected characteristics (median frequency, mean frequency, and the two first principal components) were used to analyze real sEMG signals, comparing sedentary people with physically active people who practice strength training (weight training). The results show that the physically active people presented higher values of mean frequency, median frequency, and principal components than the sedentary people. Moreover, these values decreased with increasing power level for both groups; however, the decline was more accentuated for the physically active group. Based on these results, it is assumed that the volunteers in the physically active group have higher proportions of type II fibers than the sedentary people. Finally, we can conclude that the selected characteristics were able to distinguish different proportions of muscle fiber types, both for the emulated signals and for the real ones. These characteristics can be used in several studies, for example, to evaluate the progress of people with myopathies and neuromyopathies undergoing physiotherapy, and also to analyze the development of athletes seeking to improve their muscle capacity according to their sport. In both cases, extracting these characteristics from surface electromyography signals provides feedback to the physiotherapist or the physical coach, who can monitor the increase in the proportion of a given fiber type, as desired in each case.
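The two selected spectral features can be computed directly from a signal's power spectrum. A minimal sketch on a synthetic signal (the sampling rate and the 80 Hz test tone are illustrative assumptions, not EMG data):

```python
import numpy as np

def mean_median_frequency(x, fs):
    """Mean frequency (spectral centroid of the power spectrum) and
    median frequency (frequency splitting the spectral power in half)."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    mean_f = np.sum(freqs * psd) / np.sum(psd)
    cum = np.cumsum(psd)
    median_f = freqs[np.searchsorted(cum, cum[-1] / 2.0)]
    return mean_f, median_f

fs = 1000.0                        # hypothetical sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 80.0 * t)   # pure 80 Hz tone: both features equal 80 Hz

mf, mdf = mean_median_frequency(x, fs)
```

For real sEMG, the spectrum is broadband, and a shift of these two statistics toward higher frequencies is what the study associates with a larger proportion of type II fibers.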

Relevance: 10.00%

Abstract:

The objective of this work is to use the algorithms known as Boltzmann Machines to reconstruct and classify patterns such as images. These algorithms have a structure similar to that of an Artificial Neural Network, but the network nodes make stochastic, probabilistic decisions. This work presents the theoretical framework of the main Artificial Neural Networks, the General Boltzmann Machine algorithm, and a variation of it known as the Restricted Boltzmann Machine. Computer simulations are performed comparing the Backpropagation Artificial Neural Network with the General Boltzmann Machine and the Restricted Boltzmann Machine. Through these simulations, the execution times of the different algorithms are analyzed, along with the bit hit percentage of trained patterns that are later reconstructed. Finally, binary images with and without noise were used to train the Restricted Boltzmann Machine algorithm; these images are reconstructed and classified according to the bit hit percentage of the reconstruction. The Boltzmann Machine algorithms were able to classify the trained patterns and showed excellent results in reconstructing them, with fast runtimes, and can thus be used in applications such as image recognition.
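A Restricted Boltzmann Machine trained by one-step contrastive divergence (CD-1) can be sketched as follows. This is a toy binary-pattern example; the layer sizes, learning rate, and patterns are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_vis, n_hid, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v = np.zeros(n_vis)   # visible biases
b_h = np.zeros(n_hid)   # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0):
    """One CD-1 gradient estimate for a single binary visible vector v0."""
    ph0 = sigmoid(v0 @ W + b_h)                   # P(h=1 | v0)
    h0 = (rng.random(n_hid) < ph0).astype(float)  # sample hidden states
    pv1 = sigmoid(h0 @ W.T + b_v)                 # reconstruction probabilities
    v1 = (rng.random(n_vis) < pv1).astype(float)  # sample reconstruction
    ph1 = sigmoid(v1 @ W + b_h)
    # Positive phase minus negative phase.
    return (np.outer(v0, ph0) - np.outer(v1, ph1), v0 - v1, ph0 - ph1)

patterns = np.array([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]], dtype=float)
for _ in range(2000):
    for v in patterns:
        dW, db_v, db_h = cd1_step(v)
        W += lr * dW
        b_v += lr * db_v
        b_h += lr * db_h

# Reconstruct a trained pattern via a deterministic mean-field pass.
recon = sigmoid(sigmoid(patterns[0] @ W + b_h) @ W.T + b_v)
```

The bit hit percentage used in the work corresponds to thresholding such a reconstruction at 0.5 and counting the bits that match the original pattern.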

Relevance: 10.00%

Abstract:

A number of studies in the areas of Biomedical Engineering and Health Sciences have employed machine learning tools to develop methods capable of identifying patterns in different data sets. Despite its eradication in many countries of the developed world, Hansen's disease still affects a large part of the population in countries such as India and Brazil. In this context, this research proposes to develop a method that may in the future make it possible to understand how Hansen's disease affects the facial muscles. Using surface electromyography, a system was adapted to capture the signals from the largest possible number of facial muscles. We first surveyed the literature to learn how researchers around the globe have been working with diseases that affect the peripheral nervous system and how electromyography has contributed to the understanding of these diseases. From these data, a protocol was proposed for collecting facial surface electromyographic (sEMG) signals with a high signal-to-noise ratio. After collecting the signals, we sought a method that would enable the visualization of this information and guarantee satisfactory results. After verifying the method's efficiency, we investigated which information could be extracted from the electromyographic signal representing the collected data. Since no studies demonstrating which information could contribute to a better understanding of this pathology were found in the literature, amplitude, frequency, and entropy parameters were extracted from the signal, and a feature selection was performed to look for the features that best distinguish a healthy individual from a pathological one.
Afterwards, we sought to identify the classifier that best discriminates individuals from the different groups, as well as the set of parameters of this classifier that would yield the best outcome. The protocol proposed in this study, together with the adaptation using disposable electrodes available on the market, proved effective and suitable for use in other studies intended to collect facial electromyography data. The feature selection algorithm also showed that not all of the features extracted from the signal are significant for data classification, some being more relevant than others. The Support Vector Machine (SVM) classifier proved efficient when the kernel function adequate to the muscle under analysis was used. Each investigated muscle presented different results when the classifier used linear, radial, and polynomial kernel functions. Even though we have focused on Hansen's disease, the method applied here can be used to study facial electromyography in other pathologies.
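The three kernel families compared above differ only in how they measure similarity between feature vectors. A minimal sketch of the corresponding Gram-matrix computations (the feature values and kernel parameters are made up for illustration; a full SVM would be fitted on top of these kernels, e.g. with scikit-learn):

```python
import numpy as np

def linear_kernel(X, Y):
    return X @ Y.T

def polynomial_kernel(X, Y, degree=3, c=1.0):
    return (X @ Y.T + c) ** degree

def rbf_kernel(X, Y, gamma=0.5):
    # Radial (Gaussian) kernel: exp(-gamma * ||x - y||^2).
    sq = (np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq)

# Hypothetical 2-feature sEMG descriptors for three samples:
# the first two are similar, the third is an outlier.
X = np.array([[0.2, 1.0],
              [0.3, 1.1],
              [2.0, 0.1]])

K_lin = linear_kernel(X, X)
K_pol = polynomial_kernel(X, X)
K_rbf = rbf_kernel(X, X)
```

The observation that each muscle favored a different kernel is plausible in this light: the best notion of similarity depends on the geometry of that muscle's feature cloud.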