999 results for: Multiple attenuation. Deconvolution. Seismic processing
Abstract:
Ignoring the contribution of masonry walls to resisting the lateral forces acting on structures in seismic zones is a topic that has generated much discussion in structural engineering. In Portugal, designers prefer reinforced concrete solutions because of their wide application and good results over time. In view of the above, and aiming to open possibilities for new alternatives that can likewise deliver good structural behaviour, with economical and safe construction in mind, this study proposes the application of confined masonry in the seismic zones of Portugal. The dissertation comprises six chapters. Chapter I introduces the problem to frame the topic for the reader and states the specific objectives that had to be met to reach the intended general objective: a proposal for a design methodology for housing structures using confined masonry in seismic-risk zones of Portugal, limited to buildings of one or two storeys. The general connection requirements considered are based on the document "Seismic Design Guide for Low-Rise Confined Masonry Buildings", complemented with recommendations from EC 6 and EC 8 as adopted in the Portuguese standard. Chapter II presents national and international background studies on confined masonry, together with the theoretical foundations, definitions and requirements to be considered at the design stage; it also sets out the steps for applying the simplified method for computing the resisting forces of confined masonry walls. This method was applied and validated on finite-element models built with materials and seismic characteristics in common use in Portugal, as presented in Chapter III.
Chapter IV analyses the results obtained, leading to Chapter V, which describes a proposal for applying confined masonry in low- and high-seismicity zones of Portugal. Finally, in Chapter VI, the results lead to the conclusion that in low-seismicity zones the wall-density design of one- and two-storey buildings is governed by gravity loads, whereas in high-seismicity zones the seismic forces govern. Moreover, for two-storey buildings in these zones the minimum wall thickness is 0.20 m.
Abstract:
Dissertation submitted for the degree of Master at the Instituto Superior de Ciências da Saúde Egas Moniz
Abstract:
This work aims to provide a mobile application and an image-acquisition protocol for counting Aedes aegypti eggs with the following characteristics: ease of use, high accuracy and low cost. The mosquito Ae. aegypti, popularly known as the dengue mosquito, is an important vector of arboviruses such as dengue itself, chikungunya, zika and, in its urban cycle, yellow fever. Entomological monitoring is one way to improve the prediction and early detection of epidemics of these diseases. This monitoring is mostly based on the larval index, which records the number of infested houses, or on the number of Aedes eggs collected on paddles placed in ovitraps. These paddles are usually made of eucatex hardboard, though current research is testing cotton. Counting the eggs collected in ovitraps is done manually, which demands time, professionals trained to handle laboratory equipment (magnifiers and microscopes) and entomological knowledge. We therefore sought a method to speed up the work of entomological-control professionals. The methodology comprised the creation of an application and an egg-counting process with four steps: a) photograph the ovitrap paddles with a mobile-phone camera; b) convert the photos into a binarized image, removing all elements that are not eggs; c) measure the area of each remaining element; d) using a purpose-built classifier, estimate the number of eggs from the area of each element. The results showed a disparity between the counts on cotton paddles, whose mean error was close to zero, and those on eucatex paddles, whose mean error exceeded 5%.
Among the main conclusions, we highlight the possibility of continuous improvement of the application, since it runs in the cloud and can incorporate advances as new findings emerge, as well as its excellent cost-effectiveness, operating at low monetary cost.
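The four-step counting process above can be sketched as follows. This is a minimal NumPy-only illustration: the threshold and the mean single-egg area are hypothetical values, and the simple area division stands in for the trained classifier of step (d).

```python
import numpy as np

def estimate_egg_count(gray, threshold=80, mean_egg_area=120.0):
    """Rough sketch of steps (b)-(d): binarize a grayscale photo of a
    paddle, sum the area of the dark elements and estimate the egg
    count from an assumed mean single-egg area (in pixels)."""
    # (b) Binarize: eggs are dark against the lighter paddle background.
    binary = (gray < threshold).astype(np.uint8)
    # (c) Total area occupied by candidate elements, in pixels.
    total_area = int(binary.sum())
    # (d) Area-based estimate standing in for the trained classifier.
    return round(total_area / mean_egg_area)

# Synthetic 100x100 "paddle": light background with two dark blobs.
img = np.full((100, 100), 200, dtype=np.uint8)
img[10:20, 10:22] = 30   # blob of 10*12 = 120 px  -> ~1 egg
img[50:62, 40:60] = 30   # blob of 12*20 = 240 px  -> ~2 eggs
print(estimate_egg_count(img))  # -> 3
```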
Abstract:
Society is digital and experiences the benefits, challenges and paradigms of this era. Change is accelerating, and the time to adapt to it grows shorter every day. Communicative relations between humans and machines are changing profoundly, notably through the multiplication of audiovisual screens that permeate people's lives, now assisted by countless Personal Digital Assistants (PDAs) and other ubiquitous displays that converge radically with one another. The technological profusion and sensory appropriation of contemporary devices are tangible examples of this. Against this backdrop, our research sets out to contextualize, describe and analyse the new faces and interfaces of communication materializing on today's digital audiovisual platforms, which are ever more mobile, connected and fast. To that end, an exploratory study was undertaken, relying primarily on a targeted literature review and the analysis of statistical data. The research indicated that multiple screens are indeed changing the dynamics of communicative processes, which need to be understood anew.
Abstract:
Despite advances in the cure rate for acute myeloid leukemia (AML), a considerable number of patients die from their disease due to multidrug resistance (MDR). Overexpression of the transporter proteins P-glycoprotein (Pgp) and multidrug resistance-associated protein (MRP) confers resistance to the treatment of these leukemias. OBJECTIVE: To analyze the expression of Pgp and MRP1 in patients with AML by flow cytometry (FC) and to determine the correlation between this expression and demographic, clinical and laboratory variables. METHODS: Bone marrow and peripheral blood samples from 346 patients diagnosed with AML were assessed for Pgp and MRP1 expression by FC. RESULTS: Pgp and MRP1 expression was found in 111 (32.1%) and 133 (38.4%) patients, respectively, with greater prevalence in older patients and lower prevalence in adolescents; a high incidence was also observed in patients with refractory, relapsed and secondary disease in comparison with de novo AML cases. Regarding the laboratory findings, we observed a statistically significant correlation between Pgp and MRP1 expression and CD34+ AML and the FAB subtypes M7, M5A and M2, and a lower correlation with the M3 subtype; no statistically significant correlation was observed between the MDR phenotype and other laboratory data such as hemoglobin, leukocyte count, platelet count, aberrant expression of lymphoid antigens (CD2, CD7 and CD19) or disease-related clinical signs. CONCLUSIONS: The results showed that detection of the MDR phenotype by flow cytometry can serve as an independent prognostic marker in patients diagnosed with AML.
Abstract:
Shadows and illumination play an important role in generating a realistic scene in computer graphics. Most Augmented Reality (AR) systems track markers placed in a real scene and retrieve their position and orientation to serve as a frame of reference for added computer-generated content, thereby producing an augmented scene. Realistic depiction of augmented content with coherent visual cues is a desired goal in many AR applications. However, rendering an augmented scene with realistic illumination is a complex task. Many existing approaches rely on a non-automated pre-processing phase to retrieve illumination parameters from the scene; other techniques rely on specific markers containing light probes to estimate the environment lighting. This study aims at designing a method to create AR applications with coherent illumination and shadows, using a textured cuboid marker that does not require a training phase to provide lighting information. Such markers are easily found in common environments: most product packaging satisfies these characteristics. We thus propose a way to estimate a directional light configuration using multiple texture tracking to render AR scenes in a realistic fashion. We also propose a novel feature descriptor used to perform the multiple texture tracking. Our descriptor, named the discrete descriptor, extends binary descriptors and outperforms current state-of-the-art methods in speed while maintaining their accuracy.
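The abstract does not detail the discrete descriptor, but the operation underlying this kind of texture tracking, matching binary descriptors by Hamming distance, can be sketched as follows (descriptor length and data are illustrative):

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors (bit arrays)."""
    return int(np.count_nonzero(a != b))

def match(descs_a, descs_b):
    """Brute-force nearest-neighbour matching of binary descriptors,
    as used when tracking multiple textures across frames."""
    return [int(np.argmin([hamming(d, e) for e in descs_b])) for d in descs_a]

rng = np.random.default_rng(0)
descs = rng.integers(0, 2, size=(4, 256), dtype=np.uint8)  # 256-bit descriptors
noisy = descs.copy()
noisy[:, :5] ^= 1                 # flip a few bits (viewpoint/noise changes)
print(match(descs, noisy))        # -> [0, 1, 2, 3]
```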
Abstract:
The reverse time migration (RTM) algorithm has been widely used in the seismic industry to generate images of the subsurface and thus reduce the risk of oil and gas exploration. Its widespread use is due to the high quality of its subsurface imaging. RTM is also known for its high computational cost, so parallel computing techniques have been used in its implementations. In general, parallel approaches to RTM use coarse granularity, distributing the processing of subsets of seismic shots among the nodes of a distributed system. Coarse-grained parallel approaches to RTM have proven very efficient, since each seismic shot can be processed independently. For this reason, RTM performance can be further improved by using a finer-grained parallel approach for the processing assigned to each node. This work presents an efficient parallel algorithm for 3D reverse time migration with fine granularity using OpenMP. The propagation of the 3D acoustic wave makes up much of the RTM runtime, so different load-balancing strategies were analyzed to minimize parallel-performance losses at this stage. The results served as a basis for implementing the other RTM phases: backpropagation and the imaging condition. The proposed algorithm was tested with synthetic data representing some possible subsurface structures. Metrics such as speedup and efficiency were used to analyze its parallel performance. The migrated sections show that the algorithm performed satisfactorily in identifying subsurface structures. As for the parallel performance, the analysis clearly demonstrates the scalability of the algorithm, which achieved a speedup of 22.46 for the wave propagation and 16.95 for the full RTM, both with 24 threads.
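The thesis parallelizes the 3D acoustic propagation kernel with OpenMP; as a language-neutral sketch, the stencil at the heart of that kernel can be illustrated with a 2D second-order finite-difference time step (grid size, velocity and time step are illustrative, and the vectorized update stands in for the fine-grained loop parallelism):

```python
import numpy as np

def step(p_prev, p_curr, vel, dt, dx):
    """One second-order finite-difference time step of the 2D acoustic
    wave equation (periodic boundaries via np.roll); this stencil is the
    kernel whose grid loops the thesis parallelizes with OpenMP."""
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0)
           + np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1)
           - 4.0 * p_curr) / dx**2
    return 2.0 * p_curr - p_prev + (vel * dt)**2 * lap

n, dx, dt = 101, 10.0, 1e-3                # CFL number 0.2: stable
vel = np.full((n, n), 2000.0)              # homogeneous 2 km/s medium
p0 = np.zeros((n, n)); p1 = np.zeros((n, n))
p1[n // 2, n // 2] = 1.0                   # point source at the centre
for _ in range(50):
    p0, p1 = p1, step(p0, p1, vel, dt, dx)
print(np.isfinite(p1).all(), p1.shape)
```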
Abstract:
Ambient seismic noise has traditionally been considered an unwanted perturbation in seismic data acquisition that "contaminates" the clean recording of earthquakes. Over the last decade, however, it has been demonstrated that consistent information about subsurface structure can be extracted from the cross-correlation of ambient seismic noise. In this context, the rules are reversed: the ambient seismic noise becomes the desired signal, while earthquakes become the unwanted perturbation that needs to be removed. At periods shorter than 30 s, the spectrum of ambient seismic noise is dominated by microseisms, which originate from distant atmospheric perturbations over the oceans. The microseism is the most continuous seismic signal and can be classified as primary, when observed in the range 10-20 s, and secondary, when observed in the range 5-10 s. The Green's function of the propagating medium between two receivers (seismic stations) can be reconstructed by cross-correlating seismic noise recorded simultaneously at the receivers. The reconstructed Green's function is generally proportional to the surface-wave portion of the seismic wavefield, as microseismic energy travels mostly as surface waves. In this work, 194 Green's functions obtained by stacking one month of daily cross-correlations of ambient seismic noise, recorded on the vertical component of several pairs of broadband seismic stations in Northeast Brazil, are presented. The daily cross-correlations were stacked using a time-frequency, phase-weighted scheme that enhances weak coherent signals by reducing incoherent noise. The cross-correlations show that, as expected, the emerging signal is dominated by Rayleigh waves, with dispersion velocities reliably measured for periods between 5 and 20 s.
Both permanent stations from a monitoring seismic network and temporary stations from past passive experiments in the region are considered, resulting in a combined network of 33 stations separated by distances between approximately 60 and 1311 km. The Rayleigh-wave dispersion velocity measurements are then used to develop tomographic images of group-velocity variation for the Borborema Province of Northeast Brazil. The tomographic maps allow buried structural features in the region to be satisfactorily mapped. At short periods (~5 s) the images reflect shallow crustal structure, clearly delineating intra-continental and marginal sedimentary basins, as well as portions of important shear zones traversing the Borborema Province. At longer periods (10-20 s) the images are sensitive to deeper structure in the upper crust, and most of the shallower anomalies fade away; interestingly, some of them do persist. The deep anomalies correlate neither with the location of the Cenozoic volcanism and uplift that marked the evolution of the Borborema Province nor with available maps of surface heat flow, and their origin remains enigmatic.
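The core idea, that stacking daily cross-correlations of noise recorded at two stations recovers a signal peaking at the inter-station travel time, can be sketched with synthetic data (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
lag = 25                  # inter-station travel time: 25 samples (2.5 s at 10 Hz)
stacked = np.zeros(2 * 200 - 1)
for _ in range(30):       # 30 "daily" noise records, linearly stacked
    noise = rng.standard_normal(400)
    sta_a = noise[:200]              # station A records the noise field
    sta_b = noise[lag:lag + 200]     # station B records it `lag` samples later
    stacked += np.correlate(sta_a, sta_b, mode="full")
# The stack peaks at the inter-station travel time, recovering the
# empirical Green's function between the station pair.
peak = np.argmax(stacked) - (200 - 1)
print(peak)  # -> 25
```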
Abstract:
In recent decades, changes in the surface properties of materials have been used to improve their tribological characteristics. This improvement depends, however, on the process, the treatment time and, primarily, the thickness of the surface film layer. Physical vapor deposition (PVD) of titanium nitride (TiN) has been used to increase the surface hardness of metallic materials. The aim of the present study was thus to propose a numerical-experimental method to assess the thickness (l) of TiN films deposited by PVD. To this end, experimental hardness (H) results were combined with numerical simulation to study the behavior of this property as a function of the maximum penetration depth of the indenter (hmax) into the film/substrate conjugate. Two methodologies were adopted to determine the film thickness. The first combines the numerical H x hmax curve with the experimental curve obtained by the instrumented indentation test; this methodology was used successfully on a TiN-coated titanium (Ti) conjugate. The second combines the numerical Hv x hmax curve with experimental Vickers hardness data (Hv); this methodology was applied to a TiN-coated M2 tool steel conjugate. The mechanical properties of the materials studied were also determined. The thickness results obtained for the two conjugates were consistent with the experimental data.
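The matching of simulated and measured H x hmax curves can be illustrated with a toy fit; the closed-form hardness model and all values below are hypothetical stand-ins for the finite-element results used in the work:

```python
import numpy as np

def hardness(h, Hf, Hs, l):
    """Illustrative film/substrate hardness model: the composite hardness
    decays from the film value Hf to the substrate value Hs as the
    indentation depth h grows relative to the film thickness l.
    (The thesis uses FEM simulations; this closed form is only a stand-in.)"""
    return Hs + (Hf - Hs) * np.exp(-h / l)

# Synthetic "experimental" H x hmax data for a 2.0 um film.
h = np.linspace(0.1, 5.0, 40)
H_exp = hardness(h, Hf=20.0, Hs=5.0, l=2.0)

# Recover l by matching model and data, just as the numerical-experimental
# method matches simulated and measured H x hmax curves.
cands = np.linspace(0.5, 4.0, 351)
errs = [np.sum((hardness(h, 20.0, 5.0, l) - H_exp) ** 2) for l in cands]
best = cands[int(np.argmin(errs))]
print(round(best, 2))  # -> 2.0
```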
Abstract:
In the context of current capitalist society, marked by a logic that reduces the human person to their status as workforce in order to generate profit, old age is often treated as an underprivileged stage of life. This reality becomes more acute given that the sharp aging process affecting Brazilian society is accompanied by the country's entry into a globalized world under the pressures of capital. Thus, despite the growing development of policies to strengthen the guarantee of elderly rights, effective strategies are needed if these measures are to ensure a higher quality of life for these subjects. It is therefore necessary to develop studies that problematize the situation of the elderly, who represent a growing share of the population and hence have increasingly visible demands, including in health. With the increase in the elderly population in Brazil, it is possible to see that the country is undergoing a demographic transition and epidemiological changes that reshape the landscape of elderly health care, especially hospitalization. This study therefore aimed to analyze the multiple aspects of guaranteeing the rights of elderly patients admitted to the State Hospital Dr. Ruy Pereira dos Santos (HRPS), located in Natal/RN, where most patients are elderly. Specifically, it sought to understand the aging process, its social consequences and the vulnerability to which the elderly are exposed, especially during illness; to understand the construction of Brazilian public health and its actions for older people; to learn the expressions of citizenship formation in Brazil with regard to policies for older people; and to investigate health professionals' views on the guarantee of the rights of hospitalized elderly patients. Starting from an articulation of theoretical and practical possibilities, qualitative research of bibliographic, documentary and field character was carried out.
To this end, four semi-structured interviews were conducted with health professionals at the hospital serving as the research locus, namely two social workers, a doctor and a nurse, as well as life-history interviews with hospitalized elderly patients, one in each ward of the hospital, totaling three. The results pointed to the difficulty of making health policy effective as law and highlighted a historical scenario of violation of the rights of hospitalized elderly patients, which persists due to precariousness and the difficulty of effectively implementing the Unified Health System (SUS) and other public policies to that end.
Abstract:
In oil prospecting research, seismic data are usually irregularly and sparsely sampled along the spatial coordinates due to obstacles in the placement of geophones. Fourier methods provide a way to regularize seismic data and are efficient when the input data are sampled on a regular grid. However, when these methods are applied to irregularly sampled data, the orthogonality among the Fourier components is broken and the energy of one Fourier component may "leak" into others, a phenomenon called "spectral leakage". The objective of this research is to study methods for the spectral representation of irregularly sampled data. In particular, we present the basic structure of the NDFT (nonuniform discrete Fourier transform) representation, study its properties and demonstrate its potential in seismic signal processing. Along the way we study the FFT (fast Fourier transform) and the NFFT (nonuniform fast Fourier transform), which rapidly compute the DFT (discrete Fourier transform) and the NDFT, respectively, and we compare signal recovery using the FFT, DFT and NFFT. We then approach the interpolation of seismic traces using the ALFT (antileakage Fourier transform) to overcome the spectral leakage caused by uneven sampling. Applications to synthetic and real data showed that the ALFT method works well on seismic data from complex geology, suffers little from irregular spatial sampling and edge effects, and is robust and stable with noisy data. However, it is not as efficient as the FFT, and its reconstruction is not as good in the case of irregular sampling with large gaps in the acquisition.
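The spectral leakage described above is easy to demonstrate with a direct (O(N^2)) NDFT; the sketch below evaluates the same single-component signal on a regular and on an irregular sampling grid:

```python
import numpy as np

def ndft(t, x, freqs):
    """Direct nonuniform DFT: evaluate Fourier coefficients of samples x
    taken at arbitrary times t, at the requested frequencies."""
    return np.array([np.sum(x * np.exp(-2j * np.pi * f * t)) for f in freqs]) / len(t)

N = 64
freqs = np.arange(N) / N                          # analysis frequencies (cycles/sample)
sig = lambda t: np.cos(2 * np.pi * (8 / N) * t)   # single component at bin 8

# Regular sampling: all energy sits in bins 8 and N-8.
t_reg = np.arange(N, dtype=float)
S_reg = np.abs(ndft(t_reg, sig(t_reg), freqs))

# Irregular sampling: orthogonality breaks and energy "leaks".
rng = np.random.default_rng(2)
t_irr = np.sort(rng.uniform(0, N, N))
S_irr = np.abs(ndft(t_irr, sig(t_irr), freqs))

leak_reg = np.delete(S_reg, [8, N - 8]).max()
leak_irr = np.delete(S_irr, [8, N - 8]).max()
print(leak_reg < 1e-10, leak_irr > 1e-3)  # regular: no leakage; irregular: leakage
```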
Abstract:
Digital images are used to solve day-to-day problems in many areas. In medicine, computer systems have improved diagnosis and medical interpretation. Dentistry is no different: increasingly, computer-assisted procedures support dentists in their tasks. Within this context, the area of dentistry known as public oral health is responsible for the diagnosis and treatment of a population's oral health. To this end, visual oral inspections are carried out to obtain information on the oral health status of a given population. From this collection of information, also known as an epidemiological survey, the dentist can plan and evaluate actions for the different problems identified. This procedure has limiting factors, such as the limited number of qualified professionals to perform these tasks and differing interpretations of diagnoses, among others. From this context came the idea of using intelligent-system techniques to support these tasks. Thus, this work proposes the development of an intelligent system able to segment, count and classify teeth in occlusal intraoral digital photographic images. The proposed system combines machine learning and digital image processing techniques. We first carried out a color-based segmentation of the regions of interest, teeth and non-teeth, using a Support Vector Machine (SVM). After identifying these regions, techniques based on morphological operators, such as erosion and the watershed transform, were used for counting and for detecting the boundaries of the teeth, respectively. With the tooth borders detected, it was possible to compute Fourier descriptors of their shape together with position descriptors. The teeth were then classified by type using the SVM with the one-against-all method for the multiclass problem. The multiclass classification problem was approached in two different ways.
In the first approach we considered three classes: molar, premolar and non-tooth; in the second, five classes: molar, premolar, canine, incisor and non-tooth. The system performed satisfactorily in segmenting, counting and classifying the teeth present in the images.
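The one-against-all scheme can be sketched as follows; a least-squares linear scorer stands in for the SVMs used in the work, and the 2-D "descriptors" are toy data:

```python
import numpy as np

def train_one_vs_all(X, y, n_classes):
    """One-against-all scheme as used for the tooth-type classification:
    one linear scorer per class (least squares here, standing in for the
    SVMs of the work), each trained to separate its class from the rest."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # add a bias term
    W = []
    for c in range(n_classes):
        target = np.where(y == c, 1.0, -1.0)    # class c vs. all others
        w, *_ = np.linalg.lstsq(Xb, target, rcond=None)
        W.append(w)
    return np.array(W)

def predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W.T, axis=1)          # highest scorer wins

# Toy 2-D "descriptor" clusters for three classes (e.g. molar,
# premolar, non-tooth in the first approach).
rng = np.random.default_rng(3)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((20, 2)) for c in centers])
y = np.repeat(np.arange(3), 20)
W = train_one_vs_all(X, y, 3)
print((predict(W, X) == y).mean())  # training accuracy on well-separated clusters
```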
Abstract:
The Traveling Salesman Problem with Multiple Ridesharing (TSP-MR) is a variant of the Capacitated Traveling Salesman Problem in which the salesman may share seats with passengers, taking advantage of the paths traveled along his cycle. The salesman shares the cost of each path with the passengers on board. This model can portray a real situation in which, for example, drivers are willing to share parts of a trip with tourists who wish to move between two locations on the driver's route, and these tourists accept sharing the vehicle with other individuals visiting other locations within the cycle. This work proposes a mathematical formulation for the problem, together with an exact algorithm and metaheuristics for its solution, and compares them.
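One simplified reading of the cost-sharing rule can be sketched with a brute-force solver on a toy instance (the distances, ride requests and capacity are hypothetical, and the formulation in the dissertation is richer than this):

```python
from itertools import permutations

# Hypothetical toy instance: symmetric distances between 4 locations,
# two ride requests (board city, alight city), and 2 passenger seats.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
requests = [(0, 2), (1, 3)]
capacity = 2

def driver_cost(cycle):
    """Driver's share of the cycle cost under a simplified sharing rule:
    each leg's cost is split equally among the driver and the passengers
    on board during that leg.  A request is served only if its origin
    precedes its destination in the cycle."""
    pos = {c: i for i, c in enumerate(cycle[:-1])}
    served = [(pos[u], pos[v]) for u, v in requests if pos[u] < pos[v]]
    total = 0.0
    for i in range(len(cycle) - 1):
        a, b = cycle[i], cycle[i + 1]
        on_board = sum(1 for s, t in served if s <= i < t)
        if on_board > capacity:
            return float("inf")           # seat capacity violated
        total += dist[a][b] / (1 + on_board)
    return total

best = min(((0,) + p + (0,) for p in permutations([1, 2, 3])),
           key=driver_cost)
print(best, round(driver_cost(best), 2))  # -> (0, 1, 3, 2, 0) 43.33
```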