645 results for Ruído sísmico (seismic noise)
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. The acquisition, processing and interpretation of seismic data are the stages that constitute a seismic study. Seismic processing in particular is focused on imaging the geological structures in the subsurface. Seismic processing has evolved significantly in recent decades, driven by the demands of the oil industry and by technological advances in hardware that provided greater storage and digital processing capabilities, enabling the development of more sophisticated processing algorithms, such as those that make use of parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest, such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be a very time-consuming process, due to the heuristics of the mathematical algorithm and the large volume of input and output data involved, and may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make the use of these methods unfeasible. Aiming at performance improvement, this work carried out the parallelization of the core of a Reverse Time Migration (RTM) algorithm, using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed and, ultimately, the degree of algorithmic scalability was identified with respect to the technological advances expected from future processors.
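The abstract does not include the migration kernel itself; as a rough, hypothetical illustration of the loop-level OpenMP parallelization it describes, the sketch below applies a parallel-for directive to one explicit finite-difference time step of the 2D acoustic wave equation, the operation repeated in the forward and backward propagations of RTM (the function name, grid layout and second-order stencil are assumptions, not the author's code):

    #include <stddef.h>
    #include <omp.h>

    /* One finite-difference time step of the 2D acoustic wave equation,
     * u_next = 2*u_curr - u_prev + (v*dt/h)^2 * laplacian(u_curr),
     * with the outer grid loop shared among OpenMP threads. */
    void rtm_time_step(size_t nz, size_t nx, const float *vel,
                       const float *u_prev, const float *u_curr, float *u_next,
                       float dt, float h)
    {
        #pragma omp parallel for schedule(static)
        for (size_t i = 1; i < nz - 1; i++) {
            for (size_t j = 1; j < nx - 1; j++) {
                size_t k = i * nx + j;
                float lap = u_curr[k - nx] + u_curr[k + nx]
                          + u_curr[k - 1]  + u_curr[k + 1]
                          - 4.0f * u_curr[k];              /* 5-point Laplacian */
                float c = vel[k] * dt / h;                 /* local Courant number */
                u_next[k] = 2.0f * u_curr[k] - u_prev[k] + c * c * lap;
            }
        }
    }

Each time step is independent over grid points, which is why speedup and efficiency measurements on this kind of kernel are meaningful indicators of its scalability.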
Abstract:
The increasing demand for high-performance wireless communication systems has shown the inefficiency of the current model of fixed allocation of the radio spectrum. In this context, cognitive radio appears as a more efficient alternative by providing opportunistic spectrum access with the maximum possible bandwidth. To meet these requirements, the transmitter must identify opportunities for transmission and the receiver must recognize the parameters defined for the communication signal. Techniques based on cyclostationary analysis can be applied to both spectrum sensing and modulation classification, even in low signal-to-noise ratio (SNR) environments. However, despite their robustness, one of the main disadvantages of cyclostationarity is the high computational cost of calculating its functions. This work proposes efficient architectures for obtaining cyclostationary features to be employed in both spectrum sensing and automatic modulation classification (AMC). In the context of spectrum sensing, a parallelized algorithm for extracting cyclostationary features of communication signals is presented. The performance of this parallelized feature extractor is evaluated through speedup and parallel efficiency metrics. The architecture for spectrum sensing is analyzed for several configurations of false alarm probability, SNR levels and observation times for BPSK and QPSK modulations. In the context of AMC, the reduced alpha-profile is proposed as a cyclostationary signature calculated over a reduced set of cyclic frequencies. This signature is validated by a modulation classification architecture based on pattern matching. The architecture for AMC is investigated through the correct classification rates of AM, BPSK, QPSK, MSK and FSK modulations, considering several scenarios of observation length and SNR levels. The numerical performance results obtained in this work show the efficiency of the proposed architectures.
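The cyclostationary features and parallel-performance metrics mentioned above are usually built on the following standard definitions (quoted here as reference formulas, not necessarily the exact estimators implemented in the thesis): the cyclic autocorrelation of a signal x(t) at cyclic frequency alpha, its Fourier transform (the spectral correlation function), and the speedup and efficiency of a run on p cores,

    R_x^{\alpha}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2}
        x\!\left(t + \tfrac{\tau}{2}\right) x^{*}\!\left(t - \tfrac{\tau}{2}\right)
        e^{-j 2\pi \alpha t}\, dt ,
    \qquad
    S_x^{\alpha}(f) = \int_{-\infty}^{\infty} R_x^{\alpha}(\tau)\, e^{-j 2\pi f \tau}\, d\tau ,

    S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p},

where T_1 and T_p are the execution times of the serial and p-core versions of the feature extractor.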
Intelligent system for detecting oil slicks on the sea surface using SAR images
Abstract:
Oil spills on the sea, accidental or not, generate enormous negative consequences for the affected area. The damages are environmental and economic, especially when the slicks are close to preservation areas and/or coastal zones. The development of automatic techniques for identifying oil slicks on the sea surface, captured in radar images, assists in the comprehensive monitoring of oceans and seas. However, slicks of different origins can be seen in this type of imagery, which makes discrimination a very difficult task. The system proposed in this work, based on digital image processing and artificial neural network techniques, aims to identify the analyzed slick and to distinguish oil from other slick-generating phenomena. The functional blocks that compose the proposed system allow different algorithms to be implemented, as well as their detailed and prompt analysis. The digital image processing algorithms (speckle filtering and gradient) and the classifier algorithms (Multilayer Perceptron, Radial Basis Function, Support Vector Machine and Committee Machine) are presented and discussed. The final performance of the system, with the different kinds of classifiers, is presented by means of ROC curves. The true positive rates obtained are consistent with those reported in the literature on oil slick detection from SAR images.
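For reference, the ROC curves used to summarize classifier performance plot the true positive rate against the false positive rate, defined in the usual way (standard definitions, not specific to this work):

    \mathrm{TPR} = \frac{TP}{TP + FN}, \qquad \mathrm{FPR} = \frac{FP}{FP + TN},

where TP and FN count oil slicks correctly detected and missed, and FP and TN count non-oil slicks wrongly flagged as oil and correctly rejected.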
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Abstract:
In this work we investigate the stochastic behavior of a large class of systems with variable damping, which are described by a time-dependent Lagrangian. Our stochastic approach is based on the Langevin treatment describing the motion of a classical Brownian particle of mass m. Two situations of physical interest are considered. In the first one, we discuss in detail an application of the standard Langevin treatment (white noise) to the variable-damping system. In the second one, a more general viewpoint is adopted by assuming a given expression for the so-called colored noise. For both cases, the basic differential equations are analytically solved and all the physically relevant quantities are explicitly determined. The results depend on an arbitrary parameter q measuring how far the behavior of the system departs from that of the standard Brownian particle with constant viscosity. Several types of stochastic behavior (superdiffusive and subdiffusive) are obtained as the free parameter varies continuously. However, all the results of the conventional Langevin approach with constant damping are recovered in the limit q = 1.
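The specific time-dependent damping and its q-parametrization are defined in the thesis itself; as a reference point, the standard Langevin equation recovered in the limit q = 1, for a Brownian particle of mass m with constant damping coefficient gamma and Gaussian white noise xi(t), reads

    m \frac{dv}{dt} = -\gamma\, v + \xi(t), \qquad
    \langle \xi(t) \rangle = 0, \qquad
    \langle \xi(t)\, \xi(t') \rangle = 2 D\, \delta(t - t'),

with the variable-damping case replacing the constant gamma by a time-dependent gamma(t), and the colored-noise case replacing the delta-correlated covariance by a correlation function with finite memory.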
Abstract:
One of the main goals of the CoRoT Natal Team is the determination of the rotation period of thousands of stars, a fundamental parameter for the study of stellar evolutionary histories. In order to estimate the rotation period of stars and to understand the associated uncertainties, resulting, for example, from discontinuities in the curves and/or low signal-to-noise ratio, we have compared three different methods for light curve treatment. These methods were applied to many light curves with different characteristics. First, a Visual Analysis was undertaken for each light curve, giving a general perspective on the different phenomena reflected in the curves. The results obtained by this method regarding the rotation period of the star, the presence of spots, or the nature of the star (binary system or other) were then compared with those obtained by two accurate methods: the CLEANest method, based on the DCDFT (Date Compensated Discrete Fourier Transform), and the Wavelet method, based on the Wavelet Transform. Our results show that all three methods have similar levels of accuracy and can complement each other. Nevertheless, the Wavelet method gives more information about the star through the wavelet map, which shows how the frequencies present in the signal vary over time. Finally, we discuss the limitations of these methods, their efficiency in providing information about the star, and the development of tools to integrate the different methods into a single analysis.
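The abstract only names the period-search methods; for orientation, the DCDFT underlying CLEANest is a least-squares spectral analysis in which, for each trial frequency f, the unevenly sampled light curve x(t_k) is fitted by a constant plus a sinusoid (this is the general form of the date-compensated transform; the exact normalization used in the work is not reproduced here),

    x(t_k) \approx a_0 + a_1 \cos(2\pi f t_k) + a_2 \sin(2\pi f t_k),

the spectral power at f being taken from how much this fit reduces the sum of squared residuals; the rotation period is then estimated as P_rot = 1/f_max, where f_max is the frequency of maximum power.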
Abstract:
In this thesis, we study the application of spectral representations to the solution of problems in seismic exploration, the synthesis of fractal surfaces and the identification of correlations between one-dimensional signals. We apply a new approach, called Wavelet Coherency, to the study of stratigraphic correlation in well log signals, as an attempt to identify layers from the same geological formation, showing that the representation in wavelet space, with the introduction of the scale domain, can facilitate the process of comparing patterns in geophysical signals. We have introduced a new model for the generation of anisotropic fractional Brownian surfaces based on the curvelet transform, a new multiscale tool that can be seen as a generalization of the wavelet transform incorporating the direction component in multidimensional spaces. We have tested our model with a modified version of the Directional Average Method (DAM) to evaluate the anisotropy of fractional Brownian surfaces. We also used the directional behavior of the curvelets to attack an important problem in seismic exploration: the attenuation of ground roll, present in seismograms as a result of surface Rayleigh waves. The techniques employed are effective, leading to sparse representations of the signals and, consequently, to good resolution.
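Wavelet coherency, as applied above to well-log correlation, is commonly defined from the continuous wavelet transforms W_x(a,b) and W_y(a,b) of the two signals as (a standard formulation, quoted for reference; the smoothing operator and normalization adopted in the thesis may differ)

    C_{xy}(a,b) =
        \frac{\left| S\!\left( a^{-1} W_x(a,b)\, W_y^{*}(a,b) \right) \right|^{2}}
             {S\!\left( a^{-1} \left| W_x(a,b) \right|^{2} \right)\,
              S\!\left( a^{-1} \left| W_y(a,b) \right|^{2} \right)} ,

where a is the scale, b the position along the signal (depth, for well logs), and S(.) a smoothing operator in scale and position; values near 1 indicate patterns that covary at that scale and depth, which is what the stratigraphic correlation looks for.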
Abstract:
This work presents a brief discussion of methods for estimating the parameters of the Generalized Pareto distribution (GPD). The following techniques are addressed: Moments (moments), Maximum Likelihood (MLE), Biased Probability Weighted Moments (PWMB), Unbiased Probability Weighted Moments (PWMU), Minimum Density Power Divergence (MDPD), Median (MED), Pickands (PICKANDS), Maximum Penalized Likelihood (MPLE), Maximum Goodness-of-Fit (MGF) and Maximum Entropy (POME), the latter being the focus of this manuscript. By way of illustration, the Generalized Pareto distribution was fitted to a sequence of intraplate earthquakes that occurred in the city of João Câmara, in northeastern Brazil, which was monitored continuously for two years (1987 and 1988). It was found that MLE and POME were the most efficient methods, yielding essentially the same mean squared errors. Based on a threshold of magnitude 1.5, the seismic risk for the city was estimated, together with the return levels for earthquakes of magnitude 1.5, 2.0, 2.5 and 3.0 and for the most intense earthquake ever recorded in the city, which occurred in November 1986 with a magnitude of about 5.2.
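For reference, the GPD to which these estimators apply has, in a standard parametrization with shape xi and scale sigma for exceedances x over a threshold u (the thesis may adopt a different sign convention), the distribution function and m-observation return level

    F(x) = 1 - \left( 1 + \frac{\xi x}{\sigma} \right)^{-1/\xi},
        \qquad x \ge 0, \; 1 + \xi x / \sigma > 0,

    x_m = u + \frac{\sigma}{\xi} \left[ (m\, \zeta_u)^{\xi} - 1 \right],

where zeta_u is the probability that an observation exceeds the threshold u; return levels of this form are what allow the risk associated with given magnitudes to be quoted.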
Abstract:
Recently, genetically encoded optical indicators have emerged as noninvasive tools of high spatial and temporal resolution used to monitor the activity of individual neurons and specific neuronal populations. The increasing number of new optogenetic indicators, together with the absence of comparisons under identical conditions, has made it difficult to choose the most appropriate protein for a given experimental design. Therefore, the purpose of our study was to compare three recently developed reporter proteins: the calcium indicators GCaMP3 and R-GECO1, and the voltage indicator VSFP Butterfly 1.2. These probes were expressed in cultured hippocampal neurons, which were subjected to patch-clamp recordings and optical imaging. The three groups (each one expressing a protein) exhibited similar values of membrane potential (in mV, GCaMP3: -56 ±8.0; R-GECO1: -57 ±2.5; VSFP: -60 ±3.9; p = 0.86); however, the group of neurons expressing VSFP showed a lower average input resistance than the other groups (in MΩ, GCaMP3: 161 ±18.3; R-GECO1: 128 ±15.3; VSFP: 94 ±14.0; p = 0.02). Each neuron was submitted to current injections at different frequencies (10 Hz, 5 Hz, 3 Hz, 1.5 Hz, and 0.7 Hz) and its fluorescence responses were recorded over time. In our study, only 26.7% (4/15) of the neurons expressing VSFP showed a detectable fluorescence signal in response to action potentials (APs). The average signal-to-noise ratio (SNR) obtained in response to five spikes (at 10 Hz) was small (1.3 ±0.21); however, the rapid kinetics of the VSFP allowed APs to be discriminated as individual peaks, with detection of 53% of the evoked APs. Frequencies below 5 Hz and subthreshold signals were undetectable due to high noise. On the other hand, the calcium indicators showed the greatest change in fluorescence under the same protocol (five APs at 10 Hz). Among the GCaMP3-expressing neurons, 80% (8/10) exhibited a signal, with an average SNR of 21 ±6.69 (soma), while for the R-GECO1 neurons, 50% (2/4) exhibited a signal, with a mean SNR of 52 ±19.7 (soma). For protocols at 10 Hz, 54% of the evoked APs were detected with GCaMP3 and 85% with R-GECO1. APs were detectable at all the analyzed frequencies, and fluorescence signals were also detected from subthreshold depolarizations. Because GCaMP3 was the most likely to yield a fluorescence signal, and with high SNR, some experiments were performed only with this probe. We demonstrate that GCaMP3 is effective in detecting synaptic inputs (involving Ca2+ influx) with high spatial and temporal resolution. Differences were also observed between the SNR values resulting from evoked APs and those from spontaneous APs. In recordings of groups of cells, GCaMP3 showed clear discrimination between activated and silent cells, revealing itself as a potential tool for studies of neuronal synchronization. Thus, our results indicate that the currently available calcium indicators allow detailed studies of neuronal communication, ranging from individual dendritic spines to the investigation of synchrony events in genetically defined neuronal networks. In contrast, VSFPs represent a promising technology for monitoring neural activity and, although still in need of improvement, may become more appropriate than calcium indicators, since neurons work on a time scale faster than calcium events can capture.
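The SNR figures quoted above refer to fluorescence transients; one common way of defining this quantity (an assumed general form, not necessarily the exact estimator used in the study) is the peak relative change in fluorescence divided by the standard deviation of the baseline noise,

    \frac{\Delta F}{F_0} = \frac{F(t) - F_0}{F_0}, \qquad
    \mathrm{SNR} = \frac{\left. \Delta F / F_0 \right|_{\mathrm{peak}}}{\sigma_{\mathrm{baseline}}},

where F_0 is the mean fluorescence over a baseline window before stimulation and sigma_baseline is the standard deviation of Delta F / F_0 over that same window.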
Abstract:
In the behavioral paradigm of the discriminative avoidance task, both short- and long-term memories have been extensively investigated with behavioral and pharmacological approaches. The aim of the present study was to evaluate, using the abovementioned model, the hippocampal expression of zif-268 - a calcium-dependent immediate early gene involved in synaptic plasticity processes - throughout several steps of memory formation, such as acquisition, evocation and extinction. The behavioral apparatus consisted of a modified elevated plus-maze, with its enclosed arms arranged in an "L". A pre-exposure to the maze was carried out with all arms enclosed, for 30 minutes, followed by training and test sessions of 10 minutes each. The interval between sessions was 24 h. During training, aversive stimuli (bright light and loud noise) were activated whenever the animals entered one of the enclosed arms (aversive arm). Memory acquisition, retention and extinction were evaluated by the percentage of the total time spent exploring the aversive arm. The parameters evaluated (time spent in the arms and total distance traveled) were estimated with animal tracking software (Anymaze, Stoelting, USA). Learning during training was estimated by the decrease in the time spent exploring the aversive arm. One hour after the beginning of each session, animals were anaesthetized with sodium thiopental (i.p.) and perfused with 0.9% heparinized saline solution followed by 4% paraformaldehyde. Brains were cryoprotected with 20% sucrose, separated into three blocks and frozen. The middle block, containing the hippocampus, was sectioned at 20 micrometers in the coronal plane and the resulting sections were submitted to zif-268 immunohistochemistry. Our results show an increased expression of zif-268 in the dentate gyrus (DG) during the evocation and extinction stages. There is a distinct participation of the DG during memory evocation, but not during its acquisition. In addition, all hippocampal regions (CA1, CA3 and DG) presented an increased zif-268 expression during the process of extinction.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Police officers are exposed to impact noise from firearms, which may cause irreversible injuries to the hearing system. Aim: To evaluate the noise exposure in shooting stands during gunfire exercises, to analyze the acoustic impact of the noise produced by the firearms and to associate it with tonal audiometry results. Study design: Cross-sectional. Materials and methods: Noise intensity was measured with a digital sound level meter, and the acoustic analysis was carried out by means of the oscillation and cochlear response curves provided by the Praat software. Thirty police officers were selected (27 males and 3 females). Results: The peak level measured was 113.1 dB(C) for a .40 pistol and 116.8 dB(C) for a .38 revolver. The values obtained from the oscillation and cochlear response analysis in Praat were 17.9 ± 0.3 Bark, corresponding to frequencies of 4,120 to 4,580 Hz. Audiometry indicated greater hearing loss at 4,000 Hz in 86.7% of the cases. Conclusion: The acoustic analysis made it possible to show a cause-and-effect relationship between the main areas of energy excitation of the cochlea (Praat cochlear response curve) and the frequencies of reduced hearing acuity.
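The Bark value reported above maps onto the audiometric frequencies through a critical-band scale; one widely used Hz-to-Bark approximation (Traunmüller's formula, quoted only as a reference point; the conversion implemented in Praat may differ slightly) is

    z\,[\mathrm{Bark}] = \frac{26.81\, f}{1960 + f} - 0.53,

which gives roughly 17.6 to 18.3 Bark for f between 4,120 and 4,580 Hz, consistent with the 17.9 ± 0.3 Bark reported and with the audiometric notch observed around 4,000 Hz.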
Abstract:
Auditory processing disorder is a clinical entity that may be associated with several human communication disorders, among them learning disability. AIM: To characterize and compare the performance of schoolchildren with and without learning disability in the Speech-in-Noise and Dichotic Listening (Digits and Verbal) tests. MATERIALS AND METHODS: Forty schoolchildren of both genders, aged 8 to 12 years, participated, divided into two groups: GI, composed of 20 schoolchildren diagnosed with learning disability, and GII, composed of 20 schoolchildren with good school performance, matched to GI by gender, age range and schooling. Basic audiological assessments and the Dichotic Digits, Staggered Spondaic Word (SSW) and Speech-in-Noise tests were carried out. STUDY DESIGN: Cross-sectional study with historical cohort. RESULTS: The schoolchildren in GI performed worse than those in GII in the Dichotic Digits and Staggered Spondaic Word tests, with no statistically significant difference in the Speech-in-Noise test. CONCLUSION: The findings suggest that the group of schoolchildren with learning disability performs worse than the group without difficulties, reflecting difficulties in the processing of auditory information.
Abstract:
The aim of this study is to investigate eco-environmental vulnerability, its changes and its causes, in order to develop a management system for eco-environmental vulnerability and risk assessment in the Apodi-Mossoró estuary, Northeast Brazil. The analysis is focused on the interference of landscape conditions, and their changes, due to the following factors: the oil and natural gas industry, the tropical fruit industry, shrimp farms, the marine salt industry, occupation of sensitive areas, demand for land, vegetation degradation, siltation of rivers, severe flooding, sea level rise (SLR), coastal dynamics, low and flat topography, the high ecological and tourism value of the region, and the rapid growth of urbanization. Conventional and remote sensing data were analyzed using modeling techniques based on ArcGIS, ER-Mapper, ERDAS Imagine and ENVI software. Digital images were initially processed by Principal Component Analysis and the Maximum Noise Fraction transformation, and all bands were then normalized to reduce errors caused by bands of different ranges. They were integrated in a Geographic Information System analysis to detect changes and to generate digital elevation models, geomorphic indices and other variables of the study area. A three-band color combination of multispectral bands was used to monitor changes in land and vegetation cover from 1986 to 2009. This task also included the analysis of various secondary data, such as field data, socioeconomic data, environmental data and growth prospects. The main objective of this study was to improve our understanding of eco-environmental vulnerability and risk assessment, showing its intensity, its distribution and the human-environment effects on the ecosystem, and to identify the areas of high and low sensitivity, the areas subject to inundation under future SLR, and the land lost to coastal erosion in the Apodi-Mossoró estuary, in order to establish a strategy for sustainable land use. The developed model includes basic factors such as geology, geomorphology, soils, land use/land cover, vegetation cover, slope, topography and hydrology. The numerical results indicate that 9.86% of the total study area was under very high vulnerability, 29.12% under high vulnerability, 52.90% under moderate vulnerability and 2.23% in the category of very low vulnerability. The analysis indicates that areas of 216.1 km² and 362.8 km² would be flooded by sea level rises of 1 m and 10 m, respectively. The sectors most affected are residential, industrial and recreational areas, agricultural land, and ecosystems of high environmental sensitivity. The results showed that changes in eco-environmental vulnerability have a significant impact on the sustainable development of the RN state, since the indicator is a function of sensitivity, exposure and status in relation to a level of damage. The model was presented as a tool to assist in indexing vulnerability, in order to optimize actions and assess the implications of decision makers and policies regarding the management of coastal and estuarine areas. In this context, aspects such as population growth, vegetation degradation, land use/land cover, the amount and type of industrialization, SLR and government policies for environmental protection were considered the main factors affecting the eco-environmental changes of the last three decades in the Apodi-Mossoró estuary.
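The abstract does not state the aggregation rule of the vulnerability model; purely as an assumed illustration of the usual form of such GIS-based indices, the normalized factor layers (geology, geomorphology, soils, land use/land cover, vegetation cover, slope, topography, hydrology) can be combined as a weighted sum per grid cell,

    V = \sum_{i=1}^{n} w_i\, x_i, \qquad \sum_{i=1}^{n} w_i = 1,

where x_i is the normalized score of factor i and w_i its weight, and the resulting V is sliced into the very low to very high vulnerability classes whose area percentages are reported above.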
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)