902 results for "Processamento sísmico" (seismic processing)
Abstract:
The reverse time migration (RTM) algorithm has been widely used in the seismic industry to generate images of the subsurface and thus reduce the risk in oil and gas exploration. Its widespread use is due to the high quality of its subsurface imaging. RTM is also known for its high computational cost, so parallel computing techniques have been used in its implementations. In general, parallel approaches to RTM use coarse granularity, distributing the processing of subsets of seismic shots among the nodes of a distributed system. Coarse-grained parallel approaches to RTM have been shown to be very efficient, since each seismic shot can be processed independently; they do not, however, exploit the parallelism available within each node. RTM performance can therefore be considerably improved by also applying a finer-grained parallel approach to the processing assigned to each node. This work presents an efficient fine-grained parallel algorithm for 3D reverse time migration using OpenMP. The 3D acoustic wave propagation algorithm accounts for much of the RTM runtime, so different load-balancing strategies were analyzed in order to minimize parallel performance losses at this stage. The results served as a basis for the implementation of the other RTM phases: backpropagation and the imaging condition. The proposed algorithm was tested with synthetic data representing some of the possible subsurface structures. Metrics such as speedup and efficiency were used to analyze its parallel performance. The migrated sections show that the algorithm performed satisfactorily in identifying subsurface structures. As for parallel performance, the analysis clearly demonstrates the scalability of the algorithm, which achieved a speedup of 22.46 for wave propagation and 16.95 for the full RTM, both with 24 threads.
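The abstract stops short of the kernel itself, but the fine-grained stage it describes, parallelizing the spatial loops of one 3D acoustic time step, can be sketched compactly. Below is a minimal sketch in Python with Numba's parallel range standing in for the OpenMP parallel-for presumably used in the thesis; the stencil order, grid layout and all names are illustrative assumptions, not the thesis code.

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def propagate_step(p_prev, p_cur, vel2_dt2, inv_h2):
    # One time step of the 3D acoustic wave equation,
    # p_next = 2*p_cur - p_prev + (v*dt)^2 * Laplacian(p_cur),
    # with a second-order central-difference Laplacian (illustrative choice).
    nx, ny, nz = p_cur.shape
    p_next = np.zeros_like(p_cur)
    for i in prange(1, nx - 1):  # fine-grained parallelism over x-planes
        for j in range(1, ny - 1):
            for k in range(1, nz - 1):
                lap = (p_cur[i + 1, j, k] + p_cur[i - 1, j, k]
                       + p_cur[i, j + 1, k] + p_cur[i, j - 1, k]
                       + p_cur[i, j, k + 1] + p_cur[i, j, k - 1]
                       - 6.0 * p_cur[i, j, k]) * inv_h2
                p_next[i, j, k] = (2.0 * p_cur[i, j, k] - p_prev[i, j, k]
                                   + vel2_dt2[i, j, k] * lap)
    return p_next

In OpenMP terms, the load-balancing analysis mentioned above corresponds to choices such as schedule(static) versus schedule(dynamic) for this outer loop and how many loop levels to collapse.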
Abstract:
Ambient seismic noise has traditionally been considered an unwanted perturbation in seismic data acquisition that "contaminates" the clean recording of earthquakes. Over the last decade, however, it has been demonstrated that consistent information about subsurface structure can be extracted from cross-correlations of ambient seismic noise. In this context, the roles are reversed: the ambient seismic noise becomes the desired seismic signal, while earthquakes become the unwanted perturbation that needs to be removed. At periods below 30 s, the spectrum of ambient seismic noise is dominated by microseisms, which originate from distant atmospheric perturbations over the oceans. The microseism is the most continuous seismic signal and can be classified as primary, when observed in the 10-20 s range, or secondary, when observed in the 5-10 s range. The Green's function of the propagating medium between two receivers (seismic stations) can be reconstructed by cross-correlating seismic noise recorded simultaneously at the two receivers. The reconstructed Green's function is generally proportional to the surface-wave portion of the seismic wavefield, as microseismic energy travels mostly as surface waves. In this work, 194 Green's functions are presented, obtained by stacking one month of daily cross-correlations of ambient seismic noise recorded on the vertical component of several pairs of broadband seismic stations in Northeast Brazil. The daily cross-correlations were stacked using a time-frequency, phase-weighted scheme that enhances weak coherent signals by reducing incoherent noise. The cross-correlations show that, as expected, the emerging signal is dominated by Rayleigh waves, with dispersion velocities reliably measured for periods between 5 and 20 s. Both permanent stations from a monitoring seismic network and temporary stations from past passive experiments in the region are considered, resulting in a combined network of 33 stations separated by distances between approximately 60 and 1311 km. The Rayleigh-wave dispersion velocity measurements are then used to develop tomographic images of group velocity variation for the Borborema Province of Northeast Brazil. The tomographic maps make it possible to satisfactorily map buried structural features in the region. At short periods (~5 s) the images reflect shallow crustal structure, clearly delineating intra-continental and marginal sedimentary basins, as well as portions of important shear zones traversing the Borborema Province. At longer periods (10-20 s) the images are sensitive to deeper structure in the upper crust, and most of the shallower anomalies fade away; interestingly, some of them do persist. The deep anomalies correlate neither with the location of the Cenozoic volcanism and uplift that marked the evolution of the Borborema Province nor with available maps of surface heat flow, and their origin remains enigmatic.
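The two processing steps named in the abstract, daily cross-correlation and coherence-weighted stacking, admit a compact sketch. The version below implements the plain time-domain phase-weighted stack; the work itself uses the time-frequency variant, which applies the same phase-coherence weighting in a time-frequency (S-transform) domain. Function names and the sharpness exponent nu are illustrative assumptions.

import numpy as np
from scipy.signal import hilbert

def daily_cross_correlation(tr_a, tr_b):
    # Two-sided cross-correlation of two same-length daily noise records;
    # lags run from -(n-1) to n-1 samples.
    n = len(tr_a)
    return np.correlate(tr_a, tr_b, mode="full") / n

def phase_weighted_stack(daily_ccs, nu=2.0):
    # Stack daily correlations, down-weighting lags where the instantaneous
    # phases of the individual days are incoherent.
    ccs = np.asarray(daily_ccs)                           # (n_days, n_lags)
    linear = ccs.mean(axis=0)                             # ordinary stack
    phases = np.exp(1j * np.angle(hilbert(ccs, axis=1)))  # unit phasors
    coherence = np.abs(phases.mean(axis=0))               # 1 where aligned
    return linear * coherence**nu

The emergent Green's function is then read off the stacked correlation, with the causal and acausal lags corresponding to propagation in the two directions between the station pair.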
Abstract:
In recent decades, changes in the surface properties of materials have been used to improve their tribological characteristics. This improvement, however, depends on the process, the treatment time and, primarily, the thickness of the surface film layer. Physical vapor deposition (PVD) of titanium nitride (TiN) has been used to increase the surface hardness of metallic materials. The aim of the present study was thus to propose a numerical-experimental method to assess the thickness (l) of TiN films deposited by PVD. To reach this objective, experimental results of hardness (H) assays were combined with numerical simulation to study the behavior of this property as a function of the maximum penetration depth of the indenter (hmax) into the film/substrate conjugate. Two methodologies were adopted to determine film thickness. The first consists of comparing the numerical H x hmax curve with the experimental curve obtained by the instrumented indentation test; this methodology was used successfully on a TiN-coated titanium (Ti) conjugate. The second strategy combined the numerical Hv x hmax curve with experimental Vickers hardness data (Hv); this methodology was applied to a TiN-coated M2 tool steel conjugate. The mechanical properties of the materials studied were also determined in the present study. The thickness results obtained for the two conjugates were compatible with the experimental data.
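As a rough illustration of the curve-matching step, the sketch below picks, among simulated H x hmax curves computed for candidate film thicknesses, the one closest to the experimental indentation data in a least-squares sense. How the simulated curves are generated belongs to the numerical model; every name and data structure here is an illustrative assumption.

import numpy as np

def best_thickness(h_exp, H_exp, candidates):
    # candidates: dict mapping a candidate thickness l to a pair of arrays
    # (h_sim, H_sim) simulated for that thickness; h_sim must be increasing.
    errors = {}
    for l, (h_sim, H_sim) in candidates.items():
        H_on_exp = np.interp(h_exp, h_sim, H_sim)  # sample simulation at h_exp
        errors[l] = float(np.mean((H_on_exp - H_exp) ** 2))
    return min(errors, key=errors.get), errors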
Abstract:
In oil exploration, seismic data are usually irregularly and sparsely sampled along the spatial coordinates due to obstacles to the placement of geophones. Fourier methods provide a way to regularize seismic data and are efficient when the input data are sampled on a regular grid. However, when these methods are applied to a set of irregularly sampled data, the orthogonality among the Fourier components is broken and the energy of one Fourier component may "leak" into others, a phenomenon called "spectral leakage". The objective of this research is to study methods for the spectral representation of irregularly sampled data. In particular, we present the basic structure of the NDFT (nonuniform discrete Fourier transform), study its properties, and demonstrate its potential in seismic signal processing. Along the way we study the FFT (fast Fourier transform) and the NFFT (nonuniform fast Fourier transform), which rapidly compute the DFT (discrete Fourier transform) and the NDFT, respectively, and we compare signal recovery using the FFT, DFT and NFFT. We then address the interpolation of seismic traces using the ALFT (antileakage Fourier transform) to overcome the spectral leakage caused by uneven sampling. Applications to synthetic and real data showed that the ALFT method works well on seismic data from complex geology, suffers little from irregular spatial sampling and edge effects, and is robust and stable with noisy data. However, it is not as efficient as the FFT, and its reconstruction is not as good when the irregular coverage has large gaps in the acquisition.
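The ALFT lends itself to a compact sketch: on the irregular grid, repeatedly estimate the Fourier coefficients of the current residual, keep only the strongest component, subtract its contribution from the data, and accumulate it in the spectrum. The 1-D setting, the fixed iteration count and the names below are illustrative assumptions.

import numpy as np

def alft_1d(x, d, ks, n_iter=50):
    # x: irregular sample positions; d: data at those positions;
    # ks: candidate wavenumbers on the desired regular spectral grid.
    residual = d.astype(complex).copy()
    spectrum = np.zeros(len(ks), dtype=complex)
    basis = np.exp(-2j * np.pi * np.outer(ks, x))  # NDFT kernel, one row per k
    for _ in range(n_iter):
        coeffs = basis @ residual / len(x)         # NDFT of the residual
        m = np.argmax(np.abs(coeffs))              # strongest component
        spectrum[m] += coeffs[m]
        residual -= coeffs[m] * np.conj(basis[m])  # remove it from the data
    return spectrum

def reconstruct(spectrum, ks, xr):
    # Evaluate the estimated spectrum on a regular grid xr (interpolation).
    return (spectrum[:, None] * np.exp(2j * np.pi * np.outer(ks, xr))).sum(axis=0)

Because each subtraction removes the selected component exactly at the irregular positions, the leakage it would otherwise spread into the remaining coefficients is suppressed, which is the antileakage property the abstract refers to.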
Abstract:
Digital images are used in many areas to solve day-to-day problems. In medicine, the use of computer systems has improved diagnoses and medical interpretations. Dentistry is no different: procedures assisted by computers increasingly support dentists in their tasks. In this context, the area of dentistry known as public oral health is responsible for diagnosing and treating the oral health of a population. To this end, visual oral inspections are carried out to obtain information on the oral health status of a given population. From this collection of information, also known as an epidemiological survey, the dentist can plan and evaluate actions taken to address the different problems identified. This procedure has limiting factors, such as the limited number of qualified professionals available to perform these tasks and differing interpretations of diagnoses, among others. In this context arose the idea of using intelligent systems techniques to support these tasks. This work therefore proposes an intelligent system able to segment, count and classify teeth in occlusal intraoral digital photographic images. The proposed system makes combined use of machine learning and digital image processing techniques. We first carried out a color-based segmentation of the images into regions of interest, tooth and non-tooth, using a Support Vector Machine (SVM). After these regions were identified, techniques based on morphological operators, such as erosion and the watershed transform, were used for counting the teeth and detecting their boundaries, respectively. With the tooth boundaries detected, it was possible to calculate Fourier descriptors of their shape as well as position descriptors. The teeth were then classified according to type using an SVM with the one-against-all method for multiclass problems. The multiclass classification problem was approached in two different ways: in the first approach, three classes were considered (molar, premolar and non-tooth), while in the second, five classes were considered (molar, premolar, canine, incisor and non-tooth). The system presented satisfactory performance in segmenting, counting and classifying the teeth present in the images.
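The classification stage named in the abstract, a one-against-all SVM over shape and position descriptors, can be sketched directly with scikit-learn. Feature extraction and the earlier color-based segmentation are omitted; the descriptor dimensions and stand-in data below are illustrative assumptions, with the class names taken from the five-class setup.

import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

CLASSES = ["molar", "premolar", "canine", "incisor", "non-tooth"]

def train_tooth_classifier(X_train, y_train):
    # X_train: (n_samples, n_descriptors) array of Fourier shape descriptors
    # plus position descriptors; y_train: class label per sample.
    clf = OneVsRestClassifier(
        make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    )
    return clf.fit(X_train, y_train)

# Usage with random stand-in descriptors (e.g. 10 Fourier + 2 position):
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))
y = rng.choice(CLASSES, size=100)
model = train_tooth_classifier(X, y)
print(model.predict(X[:5]))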
Abstract:
In the environment we live in, we constantly deal with a huge amount of dynamic information; attention is thus an indispensable cognitive resource that allows the effective selection of stimuli for our survival. It is therefore important to investigate how we process moving stimuli and how attention spreads across space to serve more than one stimulus simultaneously. The behavioral urgency hypothesis suggests that approaching stimuli are given a certain processing priority over objects that are moving away, but there is research indicating that this may occur not at an attentional stage but rather as a prioritization of the motor response. There are also many controversies in the research on attentional focus: some studies suggest that the focus of attention works much like a zoom lens, while others indicate that it can be divided to attend to stimuli in non-contiguous regions. Through two experiments, this study investigated the attentional prioritization effect of moving stimuli and how attention is spread in the presence of distractor stimuli. The first experiment investigated whether the number of motion flows influences information processing. The results indicate a prioritization of oriented flows over random ones, and of a single flow over dual flows. The second experiment investigated how attention is distributed across space when flows are used as an exogenous cue. The results indicate that the focus of attention behaves as suggested by the zoom-lens model.
Abstract:
This dissertation presents the development and use of radiofrequency pulses simultaneously modulated in frequency, amplitude and phase (Strongly Modulated Pulses, SMP) to create initial states and execute unitary operations that serve as building blocks for quantum information processing with Nuclear Magnetic Resonance (NMR). The experimental implementations were carried out in a 3-qubit system consisting of the nuclear spins of cesium-133 (nuclear spin 7/2) in a liquid-crystal sample in the nematic phase. The SMPs were constructed theoretically using a program specially developed for this purpose, based on the Nelder-Mead simplex numerical optimization method. Through this program, the SMPs were optimized to execute the desired logic operations with durations considerably shorter than those obtained with the usual NMR procedure, i.e., sequences of pulses and free evolutions. This has the advantage of reducing the decoherence effects arising from the relaxation of the system. The theoretical concepts involved in creating SMPs are presented, and the main difficulties (experimental and theoretical) that can arise from the use of these procedures are discussed. As application examples, the pseudo-pure states used as initial states for logic operations in NMR were produced, as well as logic operations that were subsequently applied to them. Using SMPs, it was also possible to experimentally implement the Grover and Deutsch-Jozsa quantum algorithms for 3 qubits. The fidelity of the experimental implementations was determined from experimental density matrices obtained with a previously developed density-matrix tomography method.
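The optimization loop described, a Nelder-Mead simplex search over pulse parameters, can be sketched on a single qubit. Below, a pulse is split into segments with free amplitude and phase, the propagator is a product of matrix exponentials, and 1 minus the gate fidelity is minimized; the dissertation applies the same idea to a 3-qubit system built on the spin-7/2 133Cs nucleus. The segment count, durations and target gate here are illustrative assumptions.

import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

SX = np.array([[0, 1], [1, 0]], dtype=complex) / 2    # spin-1/2 operators
SY = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
TARGET = np.array([[0, 1], [1, 0]], dtype=complex)    # NOT gate as the goal
N_SEG, DT = 4, 0.25                                   # segments and duration

def propagator(params):
    # params: interleaved (amplitude, phase) for each pulse segment.
    U = np.eye(2, dtype=complex)
    for amp, phase in params.reshape(N_SEG, 2):
        H = amp * (np.cos(phase) * SX + np.sin(phase) * SY)  # RF Hamiltonian
        U = expm(-1j * H * DT) @ U
    return U

def infidelity(params):
    # Gate fidelity up to a global phase: |Tr(U_target^dag U)| / dim.
    return 1.0 - abs(np.trace(TARGET.conj().T @ propagator(params))) / 2.0

x0 = np.ones(2 * N_SEG)
res = minimize(infidelity, x0, method="Nelder-Mead",
               options={"maxiter": 5000, "fatol": 1e-10})
print("optimized gate infidelity:", res.fun)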
Abstract:
Information processing (IP) theories seek answers about how human beings process information mentally (Cid, 2005). The main concern centers on understanding the "phenomena that take place inside the black box" (Alves, 1995, p. 32, cited in Cid, 2005). Around the world, roughly 6 to 15% of the population (Nathan, 1979) has dyslexia, presenting learning difficulties with general information-processing problems (Fonseca, 1999). In these cases the "black box" phenomena behave in a particular way, and these differences are present 24 hours a day, 7 days a week, constituting a lifelong learning difficulty (Frank & Livingston, 2004). In this study we examined differences in information processing, through reaction time, attention and memory, between dyslexic (D) and non-dyslexic (ND) subjects. The overall sample consisted of 22 subjects of both sexes, divided into two groups, 10 D and 12 ND, aged between 18 and 46 years (mean ± standard deviation of 25.40 ± 2.71). The instruments used for this purpose were: the Toulouse-Piéron cancellation test (concentrated attention); the Menvis-A visual memory test (MV2); several reaction time tasks (simple and choice); and the Schultz table test (distributed attention). The main results indicate that the D group is slower in information processing than the ND group. Concentrated attention is similar between the groups, but for distributed attention the ND group performs much better than the D group. Regarding the short-term storage of visual information, performance is similar for both groups. However, ND subjects react faster to appearing stimuli than D subjects, which is reflected in decision making: as the number of presented stimuli increases, D subjects take longer to decide than ND subjects.
Abstract:
Myocardial perfusion scintigraphy (MPS) is a technique used for diagnosis and risk stratification in patients with suspected or known coronary artery disease. MPS processing is performed mostly semi-automatically and retains manual steps that involve delimiting the reconstruction area and reorienting and adjusting the limits of the myocardium (vertical long axis, VLA; horizontal long axis, HLA; short axis). The performance of Nuclear Medicine technologists (NMT) can be affected by environmental factors and by individual factors (professional experience and visual characteristics). Visual perception during MPS processing is believed to be related to the state of binocular vision, so different NMTs processing the same data may obtain different estimates of the quantitative parameters (QP). Research question: do the operator's professional experience and visual characteristics interfere with the determination of the QP in MPS processing? Objectives of the study: to evaluate the influence of the professional experience and visual characteristics of NMTs on the determination of the QP obtained in MPS, and to analyze intra- and inter-operator variability in the determination of the QP obtained in MPS.