74 results for "Ruído sísmico" (seismic noise)
Abstract:
This work presents a wideband ring VCO for cognitive-radio five-port based receivers. A three-stage differential topology using transmission gates was adopted in order to maintain a wide, linear tuning range and low phase noise. Monte Carlo analyses were performed for the phase-shift response of the individual stages, an important figure of merit in five-port work. A fairly linear correlation between control voltage and oscillation frequency was observed in the range between 200 MHz and 1800 MHz. The VCO was preliminarily designed for IBM 130 nm CMOS technology.
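As a rough numeric companion to the tuning-range figures above, the sketch below uses the textbook ring-oscillator relation f = 1/(2·N·t_d) and a straight-line fit of a hypothetical control-voltage/frequency table. The sample points are invented only to span the 200-1800 MHz range quoted in the abstract; they are not taken from the thesis.

```python
import numpy as np

def ring_oscillator_frequency(n_stages: int, stage_delay_s: float) -> float:
    """Ideal oscillation frequency of an N-stage ring oscillator: f = 1 / (2 * N * t_d)."""
    return 1.0 / (2.0 * n_stages * stage_delay_s)

# Hypothetical tuning-curve samples (control voltage in V, frequency in MHz),
# chosen only to span the 200-1800 MHz range mentioned in the abstract.
v_ctrl = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
f_mhz = np.array([200.0, 520.0, 840.0, 1160.0, 1480.0, 1800.0])

# Linear fit of the tuning curve; the slope approximates the VCO gain K_VCO (MHz/V).
k_vco, f0 = np.polyfit(v_ctrl, f_mhz, deg=1)
print(f"K_VCO ~ {k_vco:.1f} MHz/V, intercept ~ {f0:.1f} MHz")

# Example: a 3-stage ring with ~93 ps per-stage delay oscillates near 1.8 GHz.
print(f"{ring_oscillator_frequency(3, 93e-12) / 1e9:.2f} GHz")
```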
Abstract:
Modern wireless systems employ adaptive techniques to provide high throughput while meeting desired coverage, Quality of Service (QoS) and capacity requirements. An alternative to further enhance data rates is to apply cognitive radio concepts, in which a system exploits unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing unused portions. Techniques such as Automatic Modulation Classification (AMC) can be helpful, or even vital, in such scenarios. AMC implementations usually rely on some form of signal pre-processing, which may introduce a high computational cost or make assumptions about the received signal that may not hold (e.g. Gaussianity of the noise). This work proposes a new AMC method that uses a similarity measure from the Information Theoretic Learning (ITL) framework known as the correntropy coefficient. It extracts similarity measurements over a pair of random processes using higher-order statistics, yielding better similarity estimates than, for example, the correlation coefficient. Experiments carried out by means of computer simulation show that the proposed technique achieves a high success rate in the classification of digital modulations, even in the presence of additive white Gaussian noise (AWGN).
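The correntropy coefficient mentioned above can be estimated directly from samples. The sketch below follows the usual ITL definitions (Gaussian kernel, centered correntropy normalized by the auto-correntropies); the kernel width, toy signal model and sample sizes are illustrative assumptions, not values from the work.

```python
import numpy as np

def gaussian_kernel(u: np.ndarray, sigma: float) -> np.ndarray:
    """Gaussian kernel used in ITL: G_sigma(u) = exp(-u^2 / (2 sigma^2))."""
    return np.exp(-u**2 / (2.0 * sigma**2))

def centered_correntropy(x: np.ndarray, y: np.ndarray, sigma: float) -> float:
    """U(X,Y) = E_xy[G(X - Y)] - E_x E_y[G(X - Y)], estimated from samples."""
    joint = gaussian_kernel(x - y, sigma).mean()                        # E_xy[G(x - y)]
    marginal = gaussian_kernel(x[:, None] - y[None, :], sigma).mean()   # E_x E_y[G(x - y)]
    return joint - marginal

def correntropy_coefficient(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Normalized similarity in [-1, 1], analogous to a correlation coefficient."""
    uxy = centered_correntropy(x, y, sigma)
    uxx = centered_correntropy(x, x, sigma)
    uyy = centered_correntropy(y, y, sigma)
    return uxy / np.sqrt(uxx * uyy)

# Toy usage: compare a BPSK-like sequence with a noisy copy of itself
# and with an independent sequence (illustrative values only).
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=2000)
noisy = s + 0.5 * rng.standard_normal(s.size)
other = rng.choice([-1.0, 1.0], size=2000)
print(correntropy_coefficient(s, noisy), correntropy_coefficient(s, other))
```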
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. Acquisition, processing and interpretation of seismic data are the stages of a seismic study. Seismic processing in particular is focused on producing an image that represents the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by technological advances in hardware that provided greater storage and digital processing capacity, enabling the development of more sophisticated processing algorithms, such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest, such as potential hydrocarbon reservoirs. However, performing a migration with quality and accuracy can be very time consuming, due to the heuristics of the mathematical algorithms and the large amount of input and output data involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed and, finally, the degree of algorithmic scalability with respect to the technological advances expected in future processors was identified.
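Since the abstract reports speedup and efficiency analyses, the snippet below shows the standard definitions S(p) = T(1)/T(p) and E(p) = S(p)/p applied to hypothetical wall-clock times; the timings are invented, and the actual RTM kernel (parallelized with OpenMP in the work) is not reproduced here.

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, n_threads: int) -> float:
    """Parallel efficiency E(p) = S(p) / p."""
    return speedup(t_serial, t_parallel) / n_threads

# Hypothetical wall-clock times (seconds) for a migration kernel on 1, 2, 4 and 8 threads.
timings = {1: 1200.0, 2: 640.0, 4: 350.0, 8: 205.0}
for p, t in timings.items():
    print(f"p={p}: S={speedup(timings[1], t):.2f}, E={efficiency(timings[1], t, p):.2f}")
```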
Abstract:
The increasing demand for high-performance wireless communication systems has exposed the inefficiency of the current model of fixed radio-spectrum allocation. In this context, cognitive radio appears as a more efficient alternative, providing opportunistic spectrum access with the maximum possible bandwidth. To meet these requirements, the transmitter must identify transmission opportunities and the receiver must recognize the parameters defined for the communication signal. Techniques based on cyclostationary analysis can be applied to both spectrum sensing and modulation classification problems, even in low signal-to-noise ratio (SNR) environments. However, despite this robustness, one of the main disadvantages of cyclostationarity is the high computational cost of calculating its functions. This work proposes efficient architectures for obtaining cyclostationary features to be employed in both spectrum sensing and automatic modulation classification (AMC). In the context of spectrum sensing, a parallelized algorithm for extracting cyclostationary features of communication signals is presented. The performance of this parallelized feature extractor is evaluated by speedup and parallel efficiency metrics. The spectrum sensing architecture is analyzed for several configurations of false alarm probability, SNR level and observation time for BPSK and QPSK modulations. In the context of AMC, the reduced alpha-profile is proposed as a cyclostationary signature calculated over a reduced set of cyclic frequencies. This signature is validated by a modulation classification architecture based on pattern matching. The AMC architecture is investigated in terms of correct classification rates for AM, BPSK, QPSK, MSK and FSK modulations, considering several scenarios of observation length and SNR level. The numerical performance results obtained in this work show the efficiency of the proposed architectures.
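A minimal sketch of one common cyclostationary feature, the cyclic autocorrelation, and of an alpha-profile built from it over a reduced set of candidate cyclic frequencies. The estimator form, the toy BPSK signal and the chosen alphas are assumptions for illustration; the work's actual architecture, and its exact definition of the reduced alpha-profile, may differ.

```python
import numpy as np

def cyclic_autocorrelation(x: np.ndarray, alpha: float, lag: int) -> complex:
    """Estimate R_x^alpha(tau) = (1/N) * sum_n x[n+tau] * conj(x[n]) * exp(-j*2*pi*alpha*n)."""
    n = np.arange(len(x) - lag)
    return np.mean(x[n + lag] * np.conj(x[n]) * np.exp(-2j * np.pi * alpha * n))

def alpha_profile(x: np.ndarray, alphas, max_lag: int = 32) -> np.ndarray:
    """For each candidate cyclic frequency, keep the peak |R_x^alpha(tau)| over the lags."""
    lags = np.arange(max_lag)
    return np.array([max(abs(cyclic_autocorrelation(x, a, l)) for l in lags) for a in alphas])

# Toy BPSK signal: symbols held for 8 samples each, so cyclostationary features are
# expected at multiples of the normalized symbol rate 1/8.
rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=512)
x = np.repeat(symbols, 8) + 0.3 * rng.standard_normal(512 * 8)

alphas = [0.0, 1 / 8, 2 / 8, 0.17]   # a "reduced" set of candidate cyclic frequencies
print(alpha_profile(x, alphas))       # peaks expected at 0 and at multiples of 1/8
```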
Intelligent system for the detection of oil slicks on the sea surface using SAR images
Abstract:
Oil spills at sea, accidental or not, generate enormous negative consequences for the affected area. The damage is environmental and economic, especially when these slicks are close to preservation areas and/or coastal zones. The development of automatic techniques for identifying oil slicks on the sea surface, captured through radar images, assists in the comprehensive monitoring of oceans and seas. However, slicks of different origins can appear in this type of imaging, which makes the identification a very difficult task. The system proposed in this work, based on digital image processing techniques and artificial neural networks, aims to identify the analyzed slick and to discern between oil and other slick-generating phenomena. Tests on the functional blocks that compose the proposed system allow the implementation of different algorithms, as well as their detailed and individual analysis. The digital image processing algorithms (speckle filtering and gradient), as well as the classifier algorithms (Multilayer Perceptron, Radial Basis Function, Support Vector Machine and Committee Machine), are presented and discussed. The final performance of the system, with different kinds of classifiers, is presented by means of ROC curves. The true positive rates obtained are in agreement with those reported in the literature on oil slick detection through SAR images.
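To make the classification-plus-ROC pipeline concrete, the sketch below trains a Multilayer Perceptron on synthetic feature vectors and evaluates it with an ROC curve using scikit-learn. The features, labels and network size are placeholders, not the SAR-derived features used in the work.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic stand-in for per-slick feature vectors (e.g. geometric and backscatter
# descriptors extracted after speckle filtering); label 1 = oil, 0 = look-alike.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Multilayer Perceptron, one of the classifier families compared in the work.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

# ROC analysis: true-positive rate vs. false-positive rate as the score threshold varies.
scores = clf.predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, scores)
print(f"AUC = {roc_auc_score(y_te, scores):.3f}")
```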
Abstract:
In this work we investigate the stochastic behavior of a large class of systems with variable damping described by a time-dependent Lagrangian. Our stochastic approach is based on the Langevin treatment describing the motion of a classical Brownian particle of mass m. Two situations of physical interest are considered. In the first, we discuss in detail an application of the standard Langevin treatment (white noise) to the variable-damping system. In the second, a more general viewpoint is adopted by assuming a given expression for the so-called colored noise. For both cases, the basic differential equations are solved analytically and all physically relevant quantities are explicitly determined. The results depend on an arbitrary parameter q measuring how far the behavior of the system departs from that of the standard Brownian particle with constant viscosity. Several types of stochastic behavior (superdiffusive and subdiffusive) are obtained as the free parameter varies continuously. However, all the results of the conventional Langevin approach with constant damping are recovered in the limit q = 1.
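A minimal numerical sketch of a Langevin equation with time-dependent damping, integrated with the Euler-Maruyama scheme, plus a diffusion-exponent estimate from the ensemble mean-squared displacement. The damping function, noise strength and step sizes used here are arbitrary illustrative choices, not the thesis' q-parametrized forms or analytical results.

```python
import numpy as np

def simulate_langevin(gamma, d_coeff, dt, n_steps, n_paths, seed=0):
    """Euler-Maruyama integration of dv = -gamma(t) * v * dt + sqrt(2*D) * dW (unit mass),
    for an ensemble of independent paths; returns the mean-squared displacement in time."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n_paths)
    x = np.zeros(n_paths)
    msd = np.empty(n_steps)
    for i in range(n_steps):
        t = i * dt
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        v += -gamma(t) * v * dt + np.sqrt(2.0 * d_coeff) * dw
        x += v * dt
        msd[i] = np.mean(x**2)
    return msd

# Illustrative time-dependent damping (not the thesis' specific q-dependent expression).
gamma = lambda t: 1.0 / (1.0 + t)

msd = simulate_langevin(gamma, d_coeff=1.0, dt=1e-2, n_steps=5000, n_paths=2000)

# Diffusion exponent from <x^2(t)> ~ t^alpha: alpha = 1 is normal diffusion,
# alpha > 1 superdiffusive, alpha < 1 subdiffusive.
t = np.arange(1, 5001) * 1e-2
alpha = np.polyfit(np.log(t[500:]), np.log(msd[500:]), 1)[0]
print(f"estimated diffusion exponent alpha ~ {alpha:.2f}")
```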
Abstract:
One of the main goals of the CoRoT Natal Team is the determination of rotation periods for thousands of stars, a fundamental parameter for the study of stellar evolutionary histories. In order to estimate stellar rotation periods and to understand the associated uncertainties resulting, for example, from discontinuities in the curves and/or low signal-to-noise ratio, we compared three different methods for light-curve treatment. These methods were applied to many light curves with different characteristics. First, a visual analysis was undertaken for each light curve, giving a general perspective on the different phenomena reflected in the curves. The results obtained by this method regarding the rotation period of the star, the presence of spots, or the nature of the star (binary system or other) were then compared with those obtained by two precise methods: the CLEANest method, based on the DCDFT (Date Compensated Discrete Fourier Transform), and the wavelet method, based on the Wavelet Transform. Our results show that all three methods have similar levels of accuracy and can complement each other. Nevertheless, the wavelet method gives more information about the star through the wavelet map, showing how the frequencies present in the signal vary over time. Finally, we discuss the limitations of these methods, their efficiency in providing information about the star, and the development of tools to integrate different methods into a single analysis.
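As a stand-in for the period-search step, the sketch below recovers the rotation period of a synthetic, irregularly sampled light curve with a Lomb-Scargle periodogram (scipy). This is a related frequency-analysis tool, not the CLEANest/DCDFT or wavelet implementations used in the work, and the light-curve parameters are invented.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic light curve: a spotted star rotating with a 5.3-day period, irregular
# sampling and noise, standing in for a CoRoT light curve (illustrative only).
rng = np.random.default_rng(0)
t_days = np.sort(rng.uniform(0.0, 60.0, size=800))
true_period = 5.3
flux = (1.0 + 0.02 * np.sin(2 * np.pi * t_days / true_period)
        + 0.005 * rng.standard_normal(t_days.size))

# Lomb-Scargle periodogram (scipy expects angular frequencies and a zero-mean signal).
periods = np.linspace(0.5, 20.0, 4000)
ang_freqs = 2 * np.pi / periods
power = lombscargle(t_days, flux - flux.mean(), ang_freqs)

print(f"recovered rotation period ~ {periods[np.argmax(power)]:.2f} days")
```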
Abstract:
In this thesis we study the application of spectral representations to the solution of problems in seismic exploration, the synthesis of fractal surfaces and the identification of correlations between one-dimensional signals. We apply a new approach, called wavelet coherency, to the study of stratigraphic correlation in well-log signals, as an attempt to identify layers from the same geological formation, showing that the representation in wavelet space, with the introduction of the scale domain, can facilitate the comparison of patterns in geophysical signals. We introduce a new model for the generation of anisotropic fractional Brownian surfaces based on the curvelet transform, a new multiscale tool which can be seen as a generalization of the wavelet transform that includes the direction component in multidimensional spaces. We tested our model with a modified version of the Directional Average Method (DAM) to evaluate the anisotropy of fractional Brownian surfaces. We also used the directional behavior of curvelets to attack an important problem in seismic exploration: the attenuation of ground roll, present in seismograms as a result of surface Rayleigh waves. The techniques employed are effective, leading to sparse representations of the signals and, consequently, to good resolution.
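A minimal sketch of isotropic fractional Brownian surface synthesis by Fourier filtering of white noise with the fBm power-law spectrum S(k) ∝ k^-(2H+2). The curvelet-based anisotropic model proposed in the thesis is not reproduced here, and the grid size and Hurst exponent are illustrative.

```python
import numpy as np

def fbm_surface(n: int, hurst: float, seed: int = 0) -> np.ndarray:
    """Spectral synthesis of an n x n fractional Brownian surface: filter complex white
    noise with amplitude |k|^-(H+1), i.e. power spectrum S(k) ~ |k|^-(2H+2)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                                   # avoid division by zero at the DC term
    amplitude = k ** (-(hurst + 1.0))               # square root of the power spectrum
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    spectrum = amplitude * noise
    spectrum[0, 0] = 0.0                            # zero-mean surface
    surface = np.real(np.fft.ifft2(spectrum))
    return (surface - surface.mean()) / surface.std()

# Example: a 256 x 256 isotropic surface with Hurst exponent H = 0.7. Anisotropy, as in
# the thesis' curvelet-based model, could be mimicked by making the spectral exponent
# depend on the direction of the wave vector (kx, ky).
z = fbm_surface(256, hurst=0.7)
print(z.shape, z.std())
```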
Abstract:
This work presents a brief discussion of methods for estimating the parameters of the Generalized Pareto Distribution (GPD). The following techniques are addressed: Moments, Maximum Likelihood (MLE), Biased Probability Weighted Moments (PWMB), Unbiased Probability Weighted Moments (PWMU), Minimum Density Power Divergence (MDPD), Median (MED), Pickands (PICKANDS), Maximum Penalized Likelihood (MPLE), Maximum Goodness-of-Fit (MGF) and Maximum Entropy (POME), the last being the focus of this manuscript. By way of illustration, the Generalized Pareto distribution was fitted to a sequence of intraplate earthquakes which occurred in the city of João Câmara, in the northeastern region of Brazil, monitored continuously for two years (1987 and 1988). It was found that MLE and POME were the most efficient methods, yielding essentially the same mean squared errors. Based on a threshold of magnitude 1.5, the seismic risk for the city was estimated, together with the return levels for earthquakes of magnitude 1.5, 2.0, 2.5, 3.0 and for the most intense earthquake ever registered in the city, which occurred in November 1986 with a magnitude of about 5.2.
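To illustrate the kind of fit described above, the sketch below draws synthetic exceedances, fits a GPD by maximum likelihood with scipy, and evaluates a return level using the standard peaks-over-threshold formula. The threshold, sample and exceedance rate are invented and do not correspond to the João Câmara catalogue.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic magnitude excesses over a threshold u = 1.5 (illustrative only; not the
# Joao Camara catalogue analyzed in the work).
rng = np.random.default_rng(0)
u = 1.5
excesses = genpareto.rvs(c=0.1, scale=0.4, size=500, random_state=rng)

# Maximum-likelihood fit of the GPD to the excesses (location fixed at 0).
shape, loc, scale = genpareto.fit(excesses, floc=0)
print(f"MLE estimates: shape xi = {shape:.3f}, scale sigma = {scale:.3f}")

# m-observation return level (peaks-over-threshold): x_m = u + (sigma/xi) * ((m * zeta_u)**xi - 1),
# where zeta_u is the empirical probability of exceeding the threshold u.
def return_level(m: float, zeta_u: float, xi: float, sigma: float, u: float) -> float:
    return u + (sigma / xi) * ((m * zeta_u) ** xi - 1.0)

print(return_level(m=1000, zeta_u=0.05, xi=shape, sigma=scale, u=u))
```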
Abstract:
Recently, genetically encoded optical indicators have emerged as noninvasive tools of high spatial and temporal resolution used to monitor the activity of individual neurons and of specific neuronal populations. The increasing number of new optogenetic indicators, together with the absence of comparisons under identical conditions, makes it difficult to choose the most appropriate protein for a given experimental design. Therefore, the purpose of our study was to compare three recently developed reporter proteins: the calcium indicators GCaMP3 and R-GECO1, and the voltage indicator VSFP Butterfly 1.2. These probes were expressed in cultured hippocampal neurons, which were subjected to patch-clamp recordings and optical imaging. The three groups (each expressing one protein) exhibited similar values of membrane potential (in mV, GCaMP3: -56 ±8.0; R-GECO1: -57 ±2.5; VSFP: -60 ±3.9; p = 0.86); however, the group of neurons expressing VSFP showed a lower average input resistance than the other groups (in Mohm, GCaMP3: 161 ±18.3; R-GECO1: 128 ±15.3; VSFP: 94 ±14.0; p = 0.02). Each neuron was subjected to current injections at different frequencies (10 Hz, 5 Hz, 3 Hz, 1.5 Hz and 0.7 Hz) and its fluorescence responses were recorded over time. In our study, only 26.7% (4/15) of the neurons expressing VSFP showed a detectable fluorescence signal in response to action potentials (APs). The average signal-to-noise ratio (SNR) obtained in response to five spikes (at 10 Hz) was small (1.3 ±0.21); however, the fast kinetics of the VSFP allowed the discrimination of APs as individual peaks, with detection of 53% of the evoked APs. Frequencies below 5 Hz and subthreshold signals were undetectable due to high noise. On the other hand, the calcium indicators showed the greatest change in fluorescence under the same protocol (five APs at 10 Hz). Among the GCaMP3-expressing neurons, 80% (8/10) exhibited a signal, with an average SNR of 21 ±6.69 (soma), while for the R-GECO1 neurons, 50% (2/4) had a signal, with a mean SNR of 52 ±19.7 (soma). For the 10 Hz protocols, 54% of the evoked APs were detected with GCaMP3 and 85% with R-GECO1. APs were detectable at all analyzed frequencies, and fluorescence signals were also detected from subthreshold depolarizations. Because GCaMP3 was the most likely to yield a fluorescence signal, and with high SNR, some experiments were performed only with this probe. We demonstrate that GCaMP3 is effective in detecting synaptic inputs (involving Ca2+ influx) with high spatial and temporal resolution. Differences were also observed between the SNR values resulting from evoked APs compared to spontaneous APs. In recordings of groups of cells, GCaMP3 showed clear discrimination between activated and silent cells, revealing itself as a potential tool in studies of neuronal synchronization. Thus, our results indicate that the currently available calcium indicators allow detailed studies of neuronal communication, ranging from individual dendritic spines to the investigation of synchrony events in genetically defined neuronal networks. In contrast, VSFPs represent a promising technology for monitoring neural activity and, although still in need of improvement, may become more appropriate than calcium indicators, since neurons work on a time scale faster than calcium events can capture.
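A small sketch of one common way to quantify SNR figures like those quoted above: convert a raw fluorescence trace to ΔF/F against a pre-stimulus baseline and divide the peak response by the baseline noise. The trace is synthetic, and the exact SNR convention used in the study may differ.

```python
import numpy as np

def delta_f_over_f(trace: np.ndarray, baseline_frames: int) -> np.ndarray:
    """Convert a raw fluorescence trace to dF/F using the pre-stimulus baseline F0."""
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

def snr(dff: np.ndarray, baseline_frames: int) -> float:
    """SNR as peak dF/F divided by the standard deviation of the baseline dF/F."""
    noise = dff[:baseline_frames].std()
    return dff.max() / noise

# Toy trace: 100 baseline frames followed by a calcium-like transient plus noise.
rng = np.random.default_rng(0)
t = np.arange(400)
trace = 100.0 + 30.0 * np.exp(-(t - 100) / 80.0) * (t >= 100) + rng.normal(0, 1.0, t.size)

dff = delta_f_over_f(trace, baseline_frames=100)
print(f"SNR ~ {snr(dff, baseline_frames=100):.1f}")
```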
Abstract:
In the discriminative avoidance task paradigm, both short- and long-term memories have been extensively investigated with behavioral and pharmacological approaches. The aim of the present study was to evaluate, using this model, the hippocampal expression of zif-268 - a calcium-dependent immediate early gene involved in synaptic plasticity - throughout several stages of memory formation, such as acquisition, retrieval and extinction. The behavioral apparatus consisted of a modified elevated plus-maze with its enclosed arms arranged in an "L". The animals were pre-exposed to the maze with all arms enclosed for 30 minutes, followed by training and test sessions of 10 minutes each. The interval between sessions was 24 h. During training, aversive stimuli (bright light and loud noise) were activated whenever the animals entered one of the enclosed arms (the aversive arm). Memory acquisition, retention and extinction were evaluated by the percentage of the total time spent exploring the aversive arm. The parameters evaluated (time spent in the arms and total distance traveled) were estimated with animal-tracking software (ANY-maze, Stoelting, USA). Learning during training was indicated by the decrease in the time spent exploring the aversive arm. One hour after the beginning of each session, animals were anaesthetized with sodium thiopental (i.p.) and perfused with 0.9% heparinized saline solution followed by 4% paraformaldehyde. Brains were cryoprotected with 20% sucrose, separated into three blocks and frozen. The middle block, containing the hippocampus, was sectioned at 20 micrometers in the coronal plane and the resulting sections were submitted to zif-268 immunohistochemistry. Our results show an increased expression of zif-268 in the dentate gyrus (DG) during the retrieval and extinction stages. There is a distinct participation of the DG during memory retrieval, but not during acquisition. In addition, all hippocampal regions (CA1, CA3 and DG) presented increased zif-268 expression during extinction.
Abstract:
The aim of this study is to investigate eco-environmental vulnerability, its changes and its causes, in order to develop a management system for eco-environmental vulnerability and risk assessment in the Apodi-Mossoró estuary, Northeast Brazil. This analysis focuses on the interference of landscape conditions, and their changes, due to the following factors: the oil and natural gas industry, the tropical fruit industry, shrimp farms, the marine salt industry, occupation of sensitive areas, demand for land, vegetation degradation, siltation of rivers, severe flooding, sea level rise (SLR), coastal dynamics, low and flat topography, the high ecological and tourism value of the region, and the rapid growth of urbanization. Conventional and remote sensing data were analyzed using modeling techniques based on ArcGIS, ER Mapper, ERDAS Imagine and ENVI software. Digital images were initially processed by Principal Component Analysis and the maximum noise fraction transformation, and all bands were then normalized to reduce errors caused by bands of different dynamic ranges. They were integrated into a Geographic Information System analysis to detect changes and to generate digital elevation models, geomorphic indices and other variables of the study area. A three-band color combination of multispectral bands was used to monitor changes in land and vegetation cover from 1986 to 2009. This task also included the analysis of various secondary data, such as field data, socioeconomic data, environmental data and growth prospects. The main objective of this study was to improve our understanding of eco-environmental vulnerability and risk assessment; its causes basically show the intensity and distribution of the human-environment effect on the ecosystem, and allow the identification of high- and low-sensitivity areas, of areas of inundation due to future SLR, and of the loss of land due to coastal erosion in the Apodi-Mossoró estuary, in order to establish a strategy for sustainable land use. The developed model includes basic factors such as geology, geomorphology, soils, land use/land cover, vegetation cover, slope, topography and hydrology. The numerical results indicate that 9.86% of the total study area was under very high vulnerability, 29.12% under high vulnerability, 52.90% under moderate vulnerability and 2.23% in the category of very low vulnerability. The analysis indicates that areas of 216.1 km² and 362.8 km² would be flooded under sea level rises of 1 m and 10 m, respectively. The sectors most affected were residential, industrial and recreational areas, agricultural land, and ecosystems of high environmental sensitivity. The results showed that changes in eco-environmental vulnerability have a significant impact on the sustainable development of the state of Rio Grande do Norte, since the indicator is a function of sensitivity, exposure and status in relation to a level of damage. The model is presented as a tool to assist in indexing vulnerability, in order to optimize actions and assess the implications of decisions and policies regarding the management of coastal and estuarine areas. In this context, aspects such as population growth, degradation of vegetation, land use/land cover, amount and type of industrialization, SLR and government policies for environmental protection were considered the main factors affecting the eco-environmental changes over the last three decades in the Apodi-Mossoró estuary.
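A minimal sketch of a weighted-overlay vulnerability index of the kind the model above suggests: thematic layers reclassified to a common score are combined with weights and then binned into vulnerability classes. The layer names, weights, scores and class break points are all illustrative assumptions, not the values used in the study.

```python
import numpy as np

# Hypothetical thematic layers on a common grid, each already reclassified to a 1-10
# vulnerability score (1 = least, 10 = most vulnerable); weights are illustrative only.
rng = np.random.default_rng(0)
shape = (200, 200)
layers = {
    "geology": rng.integers(1, 11, shape),
    "geomorphology": rng.integers(1, 11, shape),
    "soils": rng.integers(1, 11, shape),
    "land_use": rng.integers(1, 11, shape),
    "slope": rng.integers(1, 11, shape),
}
weights = {"geology": 0.15, "geomorphology": 0.25, "soils": 0.15, "land_use": 0.30, "slope": 0.15}

# Weighted linear overlay: vulnerability index = sum_i w_i * score_i.
index = sum(weights[name] * layers[name].astype(float) for name in layers)

# Classify into four categories and tabulate area fractions (break points are arbitrary).
bins = [0, 3, 5, 7, 10]
labels = ["very low", "moderate", "high", "very high"]
classes = np.digitize(index, bins[1:-1])
for i, label in enumerate(labels):
    print(f"{label}: {np.mean(classes == i) * 100:.1f}% of the area")
```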
Abstract:
Galactic open clusters display a wide variety of physical properties that make them valuable laboratories for studies of the stellar and chemical evolution of the Galaxy. In order to better constrain these properties, we investigate the abundances of a large number of chemical elements in a sample of 27 evolved stars of the open cluster M67 at different evolutionary stages (turn-off, subgiant and giant stars). For this study we used high-resolution (R ~ 47 000), high-S/N spectra obtained with UVES+FLAMES at VLT/UT2, covering the wavelength interval 4200-10 600 Å. Our spectral analysis is based on MARCS model atmospheres and the Turbospectrum spectroscopic tool. The oxygen abundances were determined from the [O I] line at 6300 Å. In addition, we also computed abundances of Na I, Mg I, Al I, Si I, Ca I, Ti I, Cr I, Co I, Ni I, Zr I and La II. The abundances investigated in this work, combined with the stellar parameters, offer an opportunity to determine the level of mixing and convective dilution in evolved stars of M67. Based on the obtained parameters, the abundances of these stars seem to follow a trend similar to the solar abundance curve. Additionally, following the strategies of other studies, we investigated the relative abundances as a function of effective temperature and metallicity, where it was possible to observe an enhanced abundance of Na, Al and Si for the stars in the giant branch. A large star-to-star dispersion is observed in the [X/Fe] ratios for Co, Zr and La, as well as the absence of Zr and La in the turn-off stars. Comparisons between our results and other studies in the literature show that the abundance values are in agreement within the error limits.
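For readers unfamiliar with the bracket notation used above, the snippet spells out the standard definitions [X/H] = log10(N_X/N_H)_star - log10(N_X/N_H)_sun and [X/Fe] = [X/H] - [Fe/H]; the numbers in the example are arbitrary.

```python
import numpy as np

def bracket(n_x_star: float, n_h_star: float, n_x_sun: float, n_h_sun: float) -> float:
    """[X/H] = log10(N_X / N_H)_star - log10(N_X / N_H)_sun."""
    return np.log10(n_x_star / n_h_star) - np.log10(n_x_sun / n_h_sun)

def x_over_fe(x_over_h: float, fe_over_h: float) -> float:
    """[X/Fe] = [X/H] - [Fe/H]."""
    return x_over_h - fe_over_h

# Toy example: a star with [Na/H] = +0.10 and [Fe/H] = 0.00 (roughly solar metallicity,
# as expected for M67) has [Na/Fe] = +0.10.
print(x_over_fe(0.10, 0.00))
```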
Abstract:
In this work, the study of some complex systems is carried out using two distinct procedures. In the first part, we study the use of the wavelet transform in the analysis and characterization of (multi)fractal time series. We tested the reliability of the Wavelet Transform Modulus Maxima (WTMM) method with respect to the multifractal formalism through the calculation of the singularity spectrum of time series whose fractality is well known a priori. Next, we used the WTMM method to study the fractality of lung crackle sounds, a biological time series. Since crackle sounds are due to the opening of pulmonary airways (bronchi, bronchioles and alveoli) that were initially closed, we can obtain information on the cascade of airway openings across the whole lung. Since this phenomenon is associated with the architecture of the pulmonary tree, which displays fractal geometry, the analysis and fractal characterization of this noise may provide important parameters for comparison between healthy lungs and those affected by disorders that alter the geometry of the lung tree, such as the obstructive and degenerative parenchymal diseases that occur, for example, in pulmonary emphysema. In the second part, we study a site percolation model on square lattices, where the percolating cluster grows governed by a control rule corresponding to an automatic search method. In this percolation model, which has characteristics of self-organized criticality, the automatic search method is not based on Leath's algorithm; it uses the following control rule: p_{t+1} = p_t + k(R_c - R_t), where p is the percolation probability, k is a kinetic parameter with 0 < k < 1, and R is the fraction of percolating finite square lattices of side L (L x L). This rule provides a time series corresponding to the dynamical evolution of the system, in particular of the percolation probability p. We then performed a scaling analysis of the signal obtained in this way. The model enables the study of the automatic search method used for site percolation on square lattices, evaluating the dynamics of its parameters as the system approaches the critical point. We show that the time elapsed until the system reaches the critical point and tcor, the time required for the system to lose its correlations, both scale inversely with k, the kinetic parameter of the control rule. We also verify that, afterwards, the system has two different time scales: one in which it shows 1/f noise, indicating that it is strongly correlated, and another in which it shows white noise, indicating that the correlations have been lost. For large time intervals, the dynamics of the system is ergodic.
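A minimal sketch of the automatic-search control rule quoted above, applied to site percolation on an L x L square lattice: each step generates a lattice at the current probability p_t, measures whether it spans, and updates p via p_{t+1} = p_t + k(R_c - R_t). Using a single lattice realization per step (so R_t is 0 or 1), and the particular L, k and R_c values, are simplifying assumptions rather than the thesis' exact setup.

```python
import numpy as np
from scipy.ndimage import label

def spans(lattice: np.ndarray) -> bool:
    """True if an occupied cluster connects the top row to the bottom row (site percolation)."""
    labeled, _ = label(lattice)
    top = set(labeled[0][labeled[0] > 0])
    bottom = set(labeled[-1][labeled[-1] > 0])
    return len(top & bottom) > 0

def automatic_search(l_size=64, k=0.05, r_c=0.5, n_steps=2000, seed=0):
    """Self-organized search for the percolation threshold using the control rule
    p_{t+1} = p_t + k * (R_c - R_t), where R_t is the spanning indicator (0 or 1)
    measured on a fresh L x L lattice generated with occupation probability p_t."""
    rng = np.random.default_rng(seed)
    p = 0.5
    history = []
    for _ in range(n_steps):
        lattice = rng.random((l_size, l_size)) < p
        r_t = 1.0 if spans(lattice) else 0.0
        p = p + k * (r_c - r_t)
        history.append(p)
    return np.array(history)

p_series = automatic_search()
# The series should fluctuate around the square-lattice site-percolation threshold
# (~0.593); its correlations and 1/f-like regime are what the thesis analyzes.
print(p_series[-500:].mean())
```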