974 results for Multiple attenuation. Deconvolution. Seismic processing
Abstract:
In order to evaluate the use of the shallow seismic reflection technique to delineate geological and geotechnical features down to 40 meters depth in noisy urban areas covered with asphalt pavement, five survey lines were acquired in the metropolitan area of São Paulo City. The data were acquired with a 24-bit, 24-channel seismograph, 30 and 100 Hz geophones and a sledgehammer-plate system as the seismic source. Seismic reflection data were recorded using the CMP (common midpoint) acquisition method. The processing routine consisted of: prestack band-pass filtering (90-250 Hz); automatic gain control (AGC); muting (digital zeroing) of dead/noisy traces, ground roll, air wave and refracted waves; CMP sorting; velocity analysis; normal-moveout corrections; residual static corrections; f-k filtering; and CMP stacking. The near surface is geologically characterized by unconsolidated fill materials and Quaternary sediments with organic material overlying Tertiary sediments, with the water table 2 to 5 m below the surface. The basement is composed of granite and gneiss. Reflections were observed from 40 to 65 ms two-way traveltime and were related to the contact between the silty clay and fine sand layers of the Tertiary sediments and to the weathered basement. The CMP seismic-reflection technique has been shown to be useful for mapping the sedimentary layers and the bedrock of the São Paulo sedimentary basin for shallow investigations related to engineering problems. In spite of the strong cultural noise observed in these urban areas and problems with planting geophones, we verified that, with the proper equipment and field parameters and particularly great care in data collection and processing, the adverse field conditions can be overcome and reflections imaged from layers as shallow as 20 meters.
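The prestack chain above can be illustrated with a minimal sketch of its first two steps (band-pass filtering and AGC), assuming a hypothetical `traces` array of shape (n_traces, n_samples) sampled at `fs` Hz; muting, NMO, statics and stacking are omitted, and all parameter values are illustrative only.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import butter, sosfiltfilt

def bandpass(traces, fs, low=90.0, high=250.0, order=4):
    """Zero-phase Butterworth band-pass, e.g. the 90-250 Hz band quoted above."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, traces, axis=-1)

def agc(traces, fs, window_s=0.05, eps=1e-10):
    """Automatic gain control: divide each sample by the RMS in a sliding window."""
    n_win = max(1, int(window_s * fs))
    rms = np.sqrt(uniform_filter1d(traces ** 2, size=n_win, axis=-1))
    return traces / (rms + eps)

# Hypothetical example: 24 channels, 0.5 s of data at 1 kHz sampling.
fs = 1000.0
traces = np.random.randn(24, 500)
processed = agc(bandpass(traces, fs), fs)
```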
Abstract:
A seismic record is often represented as the convolution of a source pulse with the impulse response of the medium, which is related to the propagation path. The process of separating these two components of the convolution is called deconvolution. There is a variety of approaches to performing deconvolution. One of the most common is inverse linear filtering, that is, processing the composite signal through a linear filter whose frequency response is the reciprocal of the Fourier transform of one of the signal components. Obviously, in order to use inverse filtering, these components must be known or estimated. In this work we deal with the application to seismic signals of a nonlinear deconvolution technique proposed by Oppenheim (1965), which uses the theory of a class of nonlinear systems that satisfy a generalized principle of superposition, called homomorphic systems. Such systems are particularly useful for separating signals that have been combined through the operation of convolution. The homomorphic deconvolution algorithm transforms the convolution into an additive superposition of its components, with the result that simple parts can be separated more easily. This class of filtering techniques represents a generalization of linear filtering problems. The present method offers the considerable advantage that no prior assumption needs to be made about the nature of the seismic source pulse or of the impulse response of the medium; it therefore does not require the usual assumptions that the pulse is minimum phase and that the distribution of the impulses is random, although the quality of the results obtained by homomorphic analysis is very sensitive to the signal-to-noise ratio, as demonstrated.
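A minimal sketch of the cepstral separation that underlies homomorphic deconvolution, under simplifying assumptions: a single hypothetical trace, no removal of the linear phase trend, and a liftering cutoff chosen by hand. It illustrates the convolution-to-addition idea described above, not the exact algorithm of the work.

```python
import numpy as np

def complex_cepstrum(x):
    """Complex cepstrum: inverse FFT of the complex log spectrum (phase unwrapped).
    A production implementation would also remove the linear phase trend first."""
    X = np.fft.fft(x)
    log_X = np.log(np.abs(X) + 1e-12) + 1j * np.unwrap(np.angle(X))
    return np.fft.ifft(log_X).real

def inverse_complex_cepstrum(c):
    """Back to the time domain: FFT, complex exponential, inverse FFT."""
    return np.fft.ifft(np.exp(np.fft.fft(c))).real

def homomorphic_split(x, cutoff):
    """Convolution becomes addition in the cepstral domain, so a simple low/high
    quefrency 'lifter' at `cutoff` (samples) separates wavelet and reflectivity."""
    c = complex_cepstrum(x)
    low = np.zeros_like(c)
    low[:cutoff] = c[:cutoff]
    low[-cutoff:] = c[-cutoff:]            # keep the negative quefrencies as well
    wavelet_est = inverse_complex_cepstrum(low)
    reflectivity_est = inverse_complex_cepstrum(c - low)
    return wavelet_est, reflectivity_est

# Synthetic example: a decaying wavelet convolved with a sparse reflectivity.
wavelet = np.exp(-np.arange(64) / 8.0) * np.sin(2 * np.pi * np.arange(64) / 16.0)
reflectivity = np.zeros(256)
reflectivity[[30, 90, 170]] = [1.0, -0.6, 0.4]
trace = np.convolve(reflectivity, wavelet)
w_est, r_est = homomorphic_split(trace, cutoff=20)
```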
Abstract:
The geological motivation of this work is the imaging of structures in sedimentary basins of the Amazon region, where hydrocarbon generation and accumulation are related to the presence of diabase sills. The seismic motivation lies in the fact that these intrusive rocks have large impedance contrasts with the host rock, which results in external and internal multiples with amplitudes similar to those of the primaries. The seismic signal of the multiples can dominate that of primary reflections from deeper interfaces, which can hinder the processing, interpretation and imaging of the seismic time section. In this work we study multiple attenuation in synthetic common-source (CS) sections by comparing two methods. The first method results from combining the Wiener-Hopf-Levinson prediction technique (WHLP) with the common-reflection-surface (CRS) stack, and is named WHLP-CRS; its operator is designed exclusively in the time-space domain. The second method is a velocity (ω-k) filter applied after the common-reflection-surface (CRS) stack, whose operator is designed exclusively in the two-dimensional temporal-spatial frequency domain. The multiples are identified in the zero-offset (ZO) section simulated with the CRS stack, using the criterion of periodicity between a primary and its multiples. The wavefront attributes obtained from the CRS stack are used to define moving windows in the time-space domain, which are used to compute the WHLP-CRS operator. The ω-k filter is computed in the temporal-spatial frequency domain, where events are selected for rejection or passage. The (ω-k) filter is classified as a cut filter, altering amplitude but not phase, and practical limits are imposed by the time-space sampling. In practical terms, we conclude that, in the case of multiples, events that are separated in the x-t domain do not necessarily separate in the ω-k domain, which makes it difficult to design an ω-k operator with performance similar to that of the x-t operator.
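A minimal sketch of the Wiener-Hopf-Levinson prediction step on a single trace, assuming the periodicity criterion has already supplied a prediction gap in samples; the CRS-derived moving windows of the actual WHLP-CRS operator are not reproduced, and the function and parameter names are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, filter_len, gap, white_noise=0.01):
    """Gapped predictive deconvolution: design a Wiener prediction filter from the
    trace autocorrelation and subtract the predictable (multiple) part."""
    n = len(trace)
    acorr = np.correlate(trace, trace, mode="full")[n - 1:]
    r = acorr[:filter_len].copy()
    r[0] *= 1.0 + white_noise                    # pre-whitening for stability
    g = acorr[gap:gap + filter_len]              # right-hand side lagged by the gap
    f = solve_toeplitz(r, g)                     # Wiener-Hopf-Levinson normal equations
    prediction = np.convolve(trace, f)[:n]       # prediction of trace[t + gap]
    out = trace.copy()
    out[gap:] -= prediction[:n - gap]
    return out

# Example: a primary at sample 50 followed by multiples every 60 samples.
trace = np.zeros(400)
for k, amp in enumerate([1.0, -0.7, 0.5, -0.35]):
    trace[50 + 60 * k] = amp
deconvolved = predictive_decon(trace, filter_len=80, gap=60)
```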
Abstract:
Measuring the physical parameters of reservoirs is of great importance for hydrocarbon detection. These parameters are obtained through amplitude analysis, with the determination of reflection coefficients. This requires special processing techniques capable of correcting for spherical divergence effects. One problem can be stated through the following question: which effect is relatively more important as a cause of amplitude attenuation, geometrical spreading or transmission loss? The justification for this question is that the theoretical dynamic correction applied to real data addresses geometrical spreading exclusively. However, a physical analysis of the problem from different directions leaves the answer in doubt, which is interesting and at odds with practice. A more physically grounded answer can better support other ongoing work. The present work aims to compute spherical divergence according to the Newman-Gutenberg theory and to correct synthetic seismograms computed by the reflectivity method. The test model is crustal in scale, so that critically refracted events are present in addition to the reflections, thereby providing better guidance on the time window within which to apply the spherical divergence correction and thus obtain the so-called "true amplitudes". The simulated medium consists of plane-horizontal, homogeneous and isotropic layers. The reflectivity method is a form of solution of the wave equation for such a model, which makes it possible to understand the problem under study. To obtain the results, synthetic seismograms were computed with the P-SV-SH program developed by Sandmeier (1998), and geometrical spreading curves as a function of time were computed for the model studied, as described by Newman (1973). We show, as one of the conclusions, that an equation for the geometrical spreading correction aimed at "true amplitudes" is not easy to obtain from the model data (velocities, thicknesses, densities and depths). The main objective should therefore be to obtain a panel of the spherical divergence function with which to correct to true amplitudes.
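A minimal sketch of a Newman-style geometrical spreading gain for a horizontally layered medium, applied to a hypothetical trace. It uses the commonly quoted zero-offset factor g(t) = t * v_rms(t)^2 / v1 as an illustration of the correction discussed above, not the exact divergence panel advocated in the text; the model values are illustrative.

```python
import numpy as np

def vrms_from_intervals(v_int, dz):
    """Two-way times and RMS velocities at layer bottoms, from interval velocities."""
    dt = 2.0 * dz / v_int
    t = np.cumsum(dt)
    vrms = np.sqrt(np.cumsum(v_int ** 2 * dt) / t)
    return t, vrms

def divergence_gain(t_samples, t_layers, vrms_layers, v1):
    """Newman-type zero-offset gain g(t) = t * v_rms(t)^2 / v1."""
    vrms_t = np.interp(t_samples, t_layers, vrms_layers)
    return t_samples * vrms_t ** 2 / v1

# Illustrative crustal-scale model (velocities in m/s, thicknesses in m).
v_int = np.array([2000.0, 3500.0, 5500.0])
dz = np.array([500.0, 1500.0, 3000.0])
t_layers, vrms_layers = vrms_from_intervals(v_int, dz)

dt = 0.004
t_samples = np.arange(1, 2001) * dt              # avoid t = 0
trace = np.random.randn(2000)                    # stand-in for a synthetic seismogram
corrected = trace * divergence_gain(t_samples, t_layers, vrms_layers, v_int[0])
```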
Abstract:
The multiple reflections present in seismograms conceal important information about subsurface reflectors and can even render the primary reflections completely invisible, as in the case of marine seismograms, which often exhibit a ringy appearance with strong superposition of multiple reflections on the primaries. This problem has been the target of important research aimed at identifying, attenuating and/or eliminating multiples through several popular methods. The main objective of this work is the identification of multiple reflections. For this purpose, a zero-offset (ZO) seismic section containing primary reflections and symmetric first-order multiples was generated by forward modeling. A Kirchhoff-type kinematic migration was then applied to obtain the depth model, which showed good recovery of the reflectors, as well as the presence of a fictitious reflector when compared with the previously specified section. A ZO seismic section of the migrated model was obtained in which the second reflector is not observed, owing to the absence of an impedance contrast between the second and third layers; this is the first indication that the fictitious reflector of this model is a multiple. Another indication of the existence of the multiple was the symmetry found between the curvatures of the first and third reflectors. Finally, the parameters of the hypothetical Normal-Incidence-Point (NIP) and Normal (N) wavefronts, as well as the normal-moveout (NMO) velocity, were computed for both the primary and the multiple reflection events, for the forward model and for the migrated model. These parameters were then compared, which confirmed the previous indications for the identification of the multiple reflections.
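A minimal sketch of the zero-offset periodicity criterion and the moveout contrast used above to flag multiples, for a hypothetical single horizontal layer over a half-space: the first-order surface multiple arrives at twice the primary zero-offset time, and a deeper primary with the same arrival time separates from it through its higher NMO velocity. All velocities and depths are illustrative.

```python
import numpy as np

def hyperbolic_traveltime(x, t0, v_nmo):
    """Hyperbolic moveout t(x) = sqrt(t0^2 + x^2 / v_nmo^2)."""
    return np.sqrt(t0 ** 2 + (x / v_nmo) ** 2)

v1, z1 = 1500.0, 750.0                   # illustrative layer velocity and depth
t0_primary = 2.0 * z1 / v1               # zero-offset primary time (1.0 s)
t0_multiple = 2.0 * t0_primary           # first-order multiple: periodic at 2 * t0

offsets = np.linspace(0.0, 2000.0, 9)
t_primary = hyperbolic_traveltime(offsets, t0_primary, v1)
t_multiple = hyperbolic_traveltime(offsets, t0_multiple, v1)   # keeps the layer velocity

# A deeper primary arriving at the same zero-offset time has a higher NMO velocity,
# so it separates from the multiple with offset; this is the basis for discrimination.
t_deep_primary = hyperbolic_traveltime(offsets, t0_multiple, 2200.0)
print(np.round(t_multiple - t_deep_primary, 3))
```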
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. They also introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, which are processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, including message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
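A minimal sketch of the differential-encoding idea using Python's standard difflib on two hypothetical, highly similar SOAP envelopes: only the delta against a shared reference message needs to travel, and the receiver restores the full message. Production systems use XML-aware diff/merge encodings rather than line-oriented text diffs; the envelope content here is invented for illustration.

```python
import difflib

reference = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getQuote><symbol>ACME</symbol></getQuote>
  </soap:Body>
</soap:Envelope>"""

# A new message with the same structure but a different payload value.
message = reference.replace("ACME", "XYZ")

def encode_delta(ref, msg):
    """Sender side: compute the line-level delta against the shared reference.
    A real encoder would transmit only the differing lines plus their positions."""
    return list(difflib.ndiff(ref.splitlines(keepends=True),
                              msg.splitlines(keepends=True)))

def decode_delta(delta):
    """Receiver side: rebuild the full message (sequence 2) from the delta."""
    return "".join(difflib.restore(delta, 2))

delta = encode_delta(reference, message)
assert decode_delta(delta) == message
changed = [line for line in delta if line[0] in "+-?"]
print(f"{len(changed)} delta lines for a {len(reference.splitlines())}-line message")
```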
Abstract:
Metalinguistic skill is the ability to reflect upon language as an object of thought. Among metalinguistic skills, two seem to be associated with reading and spelling: morphological awareness and phonological awareness. Phonological awareness is the ability to reflect upon the phonemes that compose words, and morphological awareness is the ability to reflect upon the morphemes that compose words. The latter seems to be particularly important for reading comprehension and contextual reading, since, beyond phonological information, syntactic and semantic information is required. This study investigates, with a longitudinal design, the relation between these abilities and contextual reading as measured by the Cloze test. The first part of the study explores the relationship between morphological awareness tasks and Cloze scores through simple correlations and, in the second part, the specificity of this relationship is examined using multiple regressions. The results give some support to the hypothesis that morphological awareness makes a contribution to contextual reading in Brazilian Portuguese that is independent of phonological awareness.
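A minimal sketch, on simulated scores, of the hierarchical-regression logic described above: the gain in R-squared when morphological awareness is added to a model that already contains phonological awareness estimates its independent contribution to Cloze performance. Variable names and data are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
phonological = rng.normal(size=n)
morphological = 0.5 * phonological + rng.normal(size=n)        # the two skills correlate
cloze = 0.4 * phonological + 0.3 * morphological + rng.normal(size=n)

# Step 1: phonological awareness only.
m1 = sm.OLS(cloze, sm.add_constant(phonological)).fit()
# Step 2: add morphological awareness.
X2 = sm.add_constant(np.column_stack([phonological, morphological]))
m2 = sm.OLS(cloze, X2).fit()

print(f"R2 step 1: {m1.rsquared:.3f}, R2 step 2: {m2.rsquared:.3f}, "
      f"independent contribution: {m2.rsquared - m1.rsquared:.3f}")
```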
Abstract:
Background: Proteinaceous toxins are observed across all levels of inter-organismal and intra-genomic conflicts. These include recently discovered prokaryotic polymorphic toxin systems implicated in intra-specific conflicts. They are characterized by a remarkable diversity of C-terminal toxin domains generated by recombination with standalone toxin-coding cassettes. Prior analysis revealed a striking diversity of nuclease and deaminase domains among the toxin modules. We systematically investigated polymorphic toxin systems using comparative genomics and sequence and structure analysis. Results: Polymorphic toxin systems are distributed across all major bacterial lineages and are delivered by at least eight distinct secretory systems. In addition to type-II, these include the type-V, VI and VII (ESX) systems, as well as the poorly characterized "Photorhabdus virulence cassettes (PVC)", PrsW-dependent and MuF phage-capsid-like systems. We present evidence that trafficking of these toxins is often accompanied by autoproteolytic processing catalyzed by HINT, ZU5, PrsW, caspase-like, papain-like, and a novel metallopeptidase associated with the PVC system. We identified over 150 distinct toxin domains in these systems. These span an extraordinary catalytic spectrum, including 23 distinct clades of peptidases, numerous previously unrecognized versions of nucleases and deaminases, ADP-ribosyltransferases, ADP-ribosyl cyclases, RelA/SpoT-like nucleotidyltransferases, glycosyltransferases and other enzymes predicted to modify lipids and carbohydrates, and a pore-forming toxin domain. Several of these toxin domains are shared with host-directed effectors of pathogenic bacteria. Over 90 families of immunity proteins might neutralize between one and at least 27 distinct types of toxin domains. In some organisms multiple tandem immunity genes or immunity protein domains are organized into polyimmunity loci or polyimmunity proteins. Gene-neighborhood analysis of polymorphic toxin systems predicts the presence of novel trafficking-related components, and also the organizational logic that allows toxin diversification through recombination. Domain architecture and protein-length analysis revealed that these toxins might be deployed as secreted factors, through directed injection, or via inter-cellular contact facilitated by filamentous structures formed by RHS/YD, filamentous hemagglutinin and other repeats. Phyletic pattern and life-style analysis indicate that polymorphic toxins and polyimmunity loci participate in cooperative behavior and facultative 'cheating' in several ecosystems, such as the human oral cavity and soil. Multiple domains from these systems have also been repeatedly transferred to eukaryotes and their viruses, such as the nucleo-cytoplasmic large DNA viruses. Conclusions: Along with a comprehensive inventory of toxins and immunity proteins, we present several testable predictions regarding the active sites and catalytic mechanisms of toxins, their processing and trafficking, and their role in intra-specific and inter-specific interactions between bacteria. These systems provide insights regarding the emergence of key systems at different points in eukaryotic evolution, such as ADP-ribosylation, interaction of myosin VI with cargo proteins, mediation of apoptosis, hyphal heteroincompatibility, hedgehog signaling, arthropod toxins, cell-cell interaction molecules like teneurins, and different signaling messengers.
Abstract:
A polarimetric X-band radar was deployed for one month (April 2011) during a field campaign in Fortaleza, Brazil, together with three additional laser disdrometers. The disdrometers are capable of measuring raindrop size distributions (DSDs), hence making it possible to forward-model theoretical polarimetric X-band radar observables at the points where the instruments are located. This setup allows the accuracy of the X-band radar measurements, as well as the algorithms used to correct the radar data for radome and rain attenuation, to be thoroughly tested. For the campaign in Fortaleza it was found that radome attenuation dominantly affects the measurements. With an algorithm based on the self-consistency of the polarimetric observables, the radome-induced reflectivity offset was estimated. Offset-corrected measurements were then further corrected for rain attenuation with two different schemes. The performance of the post-processing steps was analyzed by comparing the data with disdrometer-inferred polarimetric variables measured at a distance of 20 km from the radar. Radome attenuation reached values of up to 14 dB, which was found to be consistent with an empirical radome attenuation vs. rain intensity relation previously developed for the same radar type. In contrast to previous work, our results suggest that radome attenuation should be estimated individually for every view direction of the radar in order to obtain homogeneous reflectivity fields.
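A minimal sketch of the forward-modeling step from a disdrometer DSD to radar observables, restricted to the non-polarimetric Rayleigh-regime reflectivity factor and rain rate on hypothetical drop-size bins; the campaign's actual processing relies on full polarimetric scattering calculations at X band, which are not reproduced here.

```python
import numpy as np

def reflectivity_dbz(diam_mm, n_d, d_diam_mm):
    """Rayleigh-regime reflectivity factor Z = sum N(D) D^6 dD (mm^6 m^-3), in dBZ.
    diam_mm: bin centres [mm]; n_d: N(D) [mm^-1 m^-3]; d_diam_mm: bin widths [mm]."""
    z_lin = np.sum(n_d * diam_mm ** 6 * d_diam_mm)
    return 10.0 * np.log10(z_lin)

def rain_rate_mm_h(diam_mm, n_d, d_diam_mm, fall_speed_m_s):
    """Rain rate R = 6e-4 * pi * sum N(D) D^3 v(D) dD, in mm/h."""
    return 6.0e-4 * np.pi * np.sum(n_d * diam_mm ** 3 * fall_speed_m_s * d_diam_mm)

# Illustrative exponential DSD on hypothetical disdrometer bins.
diam = np.arange(0.25, 6.0, 0.25)                # mm
d_diam = np.full_like(diam, 0.25)
n0, lam = 8000.0, 2.3                            # mm^-1 m^-3, mm^-1
n_d = n0 * np.exp(-lam * diam)
v = 3.78 * diam ** 0.67                          # Atlas-Ulbrich-type fall speed [m/s]

print(f"Z = {reflectivity_dbz(diam, n_d, d_diam):.1f} dBZ, "
      f"R = {rain_rate_mm_h(diam, n_d, d_diam, v):.1f} mm/h")
```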
Abstract:
Biological processes are very complex mechanisms, most of them being accompanied by or manifested as signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly regarded as one of the driving factors behind the recent advances in medicine and the biosciences. The instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities which, to varying degrees, negatively impact the accuracy of the recording. This work discusses how these effects can be attenuated, and ideally removed, with particular attention to ultrasound imaging and extracellular recordings. Original algorithms developed during the Ph.D. research activity are examined and compared to those in the literature that tackle the same problems; results are drawn on the basis of comparative tests on both synthetic and in-vivo acquisitions, evaluating the standard metrics of the respective fields of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior depends not only on design choices but is also driven by the characteristics of the input signal. Performance comparisons following the state of the art in image quality assessment, contrast gain estimation and resolution gain quantification, as well as visual inspection, highlighted very good results for the proposed ultrasound image deconvolution and restoration algorithms: axial resolution up to 5 times better than that of algorithms in the literature is achievable. Concerning extracellular recordings, the proposed denoising technique, compared to other signal processing algorithms, showed an improvement over the state of the art of almost 4 dB.
Abstract:
Statistical modelling and statistical learning theory are two powerful analytical frameworks for analyzing signals and developing efficient processing and classification algorithms. In this thesis, these frameworks are applied to modelling and processing biomedical signals in two different contexts: ultrasound medical imaging systems and primate neural activity analysis and modelling. In the context of ultrasound medical imaging, two main applications are explored: deconvolution of signals measured from an ultrasonic transducer, and automatic image segmentation and classification of prostate ultrasound scans. In the former application, a stochastic model of the radio-frequency signal measured from an ultrasonic transducer is derived. This model is then employed, within a statistical framework, to develop a regularized deconvolution procedure for enhancing signal resolution. In the latter application, different statistical models are used to characterize images of prostate tissue, extracting different features. These features are then used to segment the images into regions of interest by means of an automatic procedure based on a statistical model of the extracted features. Finally, machine learning techniques are used for automatic classification of the different regions of interest. In the context of neural activity signals, a bio-inspired dynamical network was developed as an example, to help in studies of motor-related processes in the brain of primate monkeys. The presented model aims to mimic the abstract functionality of a cell population in parietal area 7a of primate monkeys during the execution of learned behavioural tasks.
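A minimal sketch of a frequency-domain regularized (Wiener-type) deconvolution of a single RF line, assuming the transducer pulse is known or has been estimated; the scalar regularization constant stands in for the statistical prior of the thesis, and the pulse and data are synthetic.

```python
import numpy as np

def wiener_deconvolve(rf_line, psf, noise_to_signal=1e-2):
    """Regularized inverse filter H* / (|H|^2 + k), applied in the frequency domain."""
    n = len(rf_line)
    H = np.fft.rfft(psf, n)
    Y = np.fft.rfft(rf_line, n)
    X_hat = np.conj(H) * Y / (np.abs(H) ** 2 + noise_to_signal)
    return np.fft.irfft(X_hat, n)

# Synthetic example: sparse reflectivity blurred by a hypothetical transducer pulse.
rng = np.random.default_rng(1)
n = 1024
reflectivity = np.zeros(n)
reflectivity[rng.integers(0, n - 64, 12)] = rng.normal(size=12)

t = np.arange(-32, 32)
psf = np.exp(-(t / 8.0) ** 2) * np.cos(2 * np.pi * t / 6.0)
rf_line = np.convolve(reflectivity, psf)[:n] + 0.05 * rng.normal(size=n)

estimate = wiener_deconvolve(rf_line, psf)
```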
Abstract:
In this thesis two major topics in medical ultrasound imaging are addressed: deconvolution and segmentation. For the first, a deconvolution algorithm is described that allows statistically consistent maximum a posteriori estimates of the tissue reflectivity to be restored. These estimates are proven to provide a reliable source of information for achieving an accurate characterization of biological tissues through the ultrasound echo. The second topic involves the definition of a semi-automatic algorithm for myocardium segmentation in 2D echocardiographic images. The results show that the proposed method can reduce inter- and intra-observer variability in myocardial contour delineation and is feasible and accurate even on clinical data.
Abstract:
The Southern Tyrrhenian subduction system shows a complex interaction among asthenospheric flow, the subducting slab and the overriding plate. To shed light on the deformation and mechanical properties of the slab and the surrounding mantle, I investigated seismic anisotropy and attenuation properties throughout the subduction region. I used both teleseisms and slab earthquakes, analyzing shear-wave splitting on SKS and S phases, respectively. The fast polarization directions φ and the delay times δt were retrieved using the method of Silver and Chan [1991]. The SKS and S measurements reveal a complex anisotropy pattern across the subduction zone. SKS rays sample primarily the sub-slab region, showing rotation of the fast directions following the curved shape of the slab and very strong anisotropy. S rays sample mainly the slab, showing variable φ and smaller δt. SKS and S splitting reveals a well-developed toroidal flow at the SW edge of the slab, while at its NE edge the pattern is less clear. This suggests that the anisotropy is controlled by slab rollback, responsible for slab-parallel φ over about 100 km in the sub-slab mantle. The slab is weakly anisotropic, suggesting the asthenosphere as the main source of anisotropy. To investigate the physical properties of the slab and surrounding regions, I analyzed the seismic P- and S-wave attenuation. By inverting high-quality S-wave t* measurements from slab earthquakes, 3D attenuation models down to 300 km depth were obtained. The attenuation results image the slab as a low-attenuation body, but with a heterogeneous QS and QP structure showing spots of high attenuation between 100 and 200 km depth, which could be due to dehydration associated with slab metamorphism. A low-QS anomaly is present in the mantle wedge beneath the Aeolian volcanic arc and could indicate mantle melting and slab dehydration.
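A minimal sketch of the grid search behind Silver-and-Chan-type splitting measurements, assuming two hypothetical horizontal components sampled at fs: for each trial fast azimuth and delay, the slow component is advanced and the pair is scored by the smaller eigenvalue of its covariance (near-linear particle motion once the splitting is removed). Windowing and error analysis from the original method are omitted.

```python
import numpy as np

def measure_splitting(north, east, fs, max_delay_s=4.0, n_angles=90):
    """Grid search over fast azimuth phi and delay dt, minimizing the second
    eigenvalue of the covariance of the splitting-corrected horizontals."""
    best = (np.inf, None, None)
    max_shift = int(max_delay_s * fs)
    for phi in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        a = np.deg2rad(phi)
        fast = np.cos(a) * north + np.sin(a) * east
        slow = -np.sin(a) * north + np.cos(a) * east
        for shift in range(1, max_shift + 1):
            corrected_slow = np.roll(slow, -shift)        # undo the trial delay
            lam = np.linalg.eigvalsh(np.cov(fast, corrected_slow))
            if lam[0] < best[0]:
                best = (lam[0], phi, shift / fs)
    return best[1], best[2]                               # fast direction [deg], delay [s]

# Synthetic example: fast direction 40 deg, delay time 1.2 s.
fs = 20.0
t = np.arange(0.0, 30.0, 1.0 / fs)
pulse = np.exp(-((t - 10.0) ** 2) / 2.0) * np.sin(2 * np.pi * 0.2 * (t - 10.0))
a0 = np.deg2rad(40.0)
fast0, slow0 = pulse, np.roll(pulse, int(1.2 * fs))
north = np.cos(a0) * fast0 - np.sin(a0) * slow0
east = np.sin(a0) * fast0 + np.cos(a0) * slow0
phi_est, dt_est = measure_splitting(north, east, fs)
print(f"phi ~ {phi_est:.0f} deg, dt ~ {dt_est:.2f} s")
```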
Abstract:
This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU-accelerated desk-side computers are an appealing alternative to other high performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing "supercomputing to the masses", this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made. Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups versus an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires frequent solution of complex systems with millions of unknowns, a task that this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference but also related GPU-based work. Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to this acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
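A minimal sketch of the idea behind compressed triangular storage, assuming a row-major packed layout for a lower-triangular matrix: only n(n+1)/2 entries are kept and addressed through a closed-form index. This illustrates the memory saving such schemes exploit, not the specific GPU-friendly layout developed in the thesis.

```python
import numpy as np

def pack_lower(L):
    """Keep only the n*(n+1)/2 entries of a lower-triangular matrix."""
    n = L.shape[0]
    return np.concatenate([L[i, : i + 1] for i in range(n)])

def packed_index(i, j):
    """Flat index of entry (i, j), with j <= i, in the row-major packed layout."""
    return i * (i + 1) // 2 + j

def unpack_lower(packed, n):
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            L[i, j] = packed[packed_index(i, j)]
    return L

n = 4
L = np.tril(np.arange(1.0, n * n + 1).reshape(n, n))
packed = pack_lower(L)                       # 10 stored values instead of 16
assert np.allclose(unpack_lower(packed, n), L)
assert packed[packed_index(2, 1)] == L[2, 1]
```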